diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.07.10-beta.1.md b/content/docs/v2024.1.31/CHANGELOG-v2020.07.10-beta.1.md new file mode 100644 index 0000000000..b09dced0b6 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.07.10-beta.1.md @@ -0,0 +1,1170 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.07.10-beta.1 + name: Changelog-v2020.07.10-beta.1 + parent: welcome + weight: 20200710 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.07.10-beta.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.07.10-beta.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.07.10-beta.1 (2020-07-10) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-beta.1](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-beta.1) + +- [157a8724](https://github.com/kubedb/apimachinery/commit/157a8724) Update for release Stash@v2020.07.09-beta.0 (#541) +- [0e86bdbd](https://github.com/kubedb/apimachinery/commit/0e86bdbd) Update for release Stash@v2020.07.08-beta.0 (#540) +- [f4a22d0c](https://github.com/kubedb/apimachinery/commit/f4a22d0c) Update License notice (#539) +- [3c598500](https://github.com/kubedb/apimachinery/commit/3c598500) Use Allowlist and Denylist in MySQLVersion (#537) +- [3c58c062](https://github.com/kubedb/apimachinery/commit/3c58c062) Update to Kubernetes v1.18.3 (#536) +- [e1f3d603](https://github.com/kubedb/apimachinery/commit/e1f3d603) Update update-release-tracker.sh +- [0cf4a01f](https://github.com/kubedb/apimachinery/commit/0cf4a01f) Update update-release-tracker.sh +- [bfbd1f8d](https://github.com/kubedb/apimachinery/commit/bfbd1f8d) Add script to update release tracker on pr merge (#533) +- [b817d87c](https://github.com/kubedb/apimachinery/commit/b817d87c) Update .kodiak.toml +- [772e8d2f](https://github.com/kubedb/apimachinery/commit/772e8d2f) Add Ops Request const (#529) +- [453d67ca](https://github.com/kubedb/apimachinery/commit/453d67ca) Add constants for mutator & validator group names (#532) +- [69f997b5](https://github.com/kubedb/apimachinery/commit/69f997b5) Unwrap top level api folder (#531) +- [a8ccec51](https://github.com/kubedb/apimachinery/commit/a8ccec51) Make RedisOpsRequest Namespaced (#530) +- [8a076bfb](https://github.com/kubedb/apimachinery/commit/8a076bfb) Update .kodiak.toml +- [6a8e51b9](https://github.com/kubedb/apimachinery/commit/6a8e51b9) Update to Kubernetes v1.18.3 (#527) +- [2ef41962](https://github.com/kubedb/apimachinery/commit/2ef41962) Create .kodiak.toml +- [8e596d4e](https://github.com/kubedb/apimachinery/commit/8e596d4e) Update to Kubernetes v1.18.3 +- [31f72200](https://github.com/kubedb/apimachinery/commit/31f72200) Update comments +- [27bc9265](https://github.com/kubedb/apimachinery/commit/27bc9265) Use CRD v1 for Kubernetes >= 1.16 (#525) +- [d1be7d1d](https://github.com/kubedb/apimachinery/commit/d1be7d1d) Remove defaults from CRD v1beta1 +- [5c73d507](https://github.com/kubedb/apimachinery/commit/5c73d507) Use crd.Interface in Controller (#524) +- [27763544](https://github.com/kubedb/apimachinery/commit/27763544) Generate both v1beta1 and v1 CRD YAML (#523) +- [5a0f0a93](https://github.com/kubedb/apimachinery/commit/5a0f0a93) Update to Kubernetes v1.18.3 
(#520) +- [25008c1a](https://github.com/kubedb/apimachinery/commit/25008c1a) Change MySQL `[]ContainerResources` to `core.ResourceRequirements` (#522) +- [abc99620](https://github.com/kubedb/apimachinery/commit/abc99620) Merge pull request #521 from kubedb/mongo-vertical +- [f38a109c](https://github.com/kubedb/apimachinery/commit/f38a109c) Change `[]ContainerResources` to `core.ResourceRequirements`    ## [kubedb/cli](https://github.com/kubedb/cli)  ### [v0.14.0-beta.1](https://github.com/kubedb/cli/releases/tag/v0.14.0-beta.1)  - [80e77588](https://github.com/kubedb/cli/commit/80e77588) Prepare for release v0.14.0-beta.1 (#468) +- [6925c726](https://github.com/kubedb/cli/commit/6925c726) Update for release Stash@v2020.07.09-beta.0 (#466) +- [6036e14f](https://github.com/kubedb/cli/commit/6036e14f) Update for release Stash@v2020.07.08-beta.0 (#465) +- [03de8e3f](https://github.com/kubedb/cli/commit/03de8e3f) Disable autogen tags in docs (#464) +- [3bcfa7ef](https://github.com/kubedb/cli/commit/3bcfa7ef) Update License (#463) +- [0aa91f93](https://github.com/kubedb/cli/commit/0aa91f93) Update to Kubernetes v1.18.3 (#462) +- [023555ef](https://github.com/kubedb/cli/commit/023555ef) Add workflow to update docs (#461) +- [abd9d054](https://github.com/kubedb/cli/commit/abd9d054) Update update-release-tracker.sh +- [0a9527d4](https://github.com/kubedb/cli/commit/0a9527d4) Update update-release-tracker.sh +- [69c644a2](https://github.com/kubedb/cli/commit/69c644a2) Add script to update release tracker on pr merge (#460) +- [595679ba](https://github.com/kubedb/cli/commit/595679ba) Make release non-draft +- [880d3492](https://github.com/kubedb/cli/commit/880d3492) Update .kodiak.toml +- [a7607798](https://github.com/kubedb/cli/commit/a7607798) Update to Kubernetes v1.18.3 (#459) +- [3197b4b7](https://github.com/kubedb/cli/commit/3197b4b7) Update to Kubernetes v1.18.3 +- [8ed52c84](https://github.com/kubedb/cli/commit/8ed52c84) Create .kodiak.toml +- [cfda68d4](https://github.com/kubedb/cli/commit/cfda68d4) Update to Kubernetes v1.18.3 (#458) +- [7395c039](https://github.com/kubedb/cli/commit/7395c039) Update dependencies +- [542e6709](https://github.com/kubedb/cli/commit/542e6709) Update crazy-max/ghaction-docker-buildx flag +- [972d8119](https://github.com/kubedb/cli/commit/972d8119) Revendor kubedb.dev/apimachinery@master +- [540e5a7d](https://github.com/kubedb/cli/commit/540e5a7d) Cleanup cli commands (#454) +- [98649b0a](https://github.com/kubedb/cli/commit/98649b0a) Trigger the workflow on push or pull request +- [a0dbdab5](https://github.com/kubedb/cli/commit/a0dbdab5) Update readme (#457) +- [a52927ed](https://github.com/kubedb/cli/commit/a52927ed) Create draft GitHub release when tagged (#456) +- [42838aec](https://github.com/kubedb/cli/commit/42838aec) Convert kubedb cli into a `kubectl dba` plugin (#455) +- [aec37df2](https://github.com/kubedb/cli/commit/aec37df2) Revendor dependencies +- [2c120d1a](https://github.com/kubedb/cli/commit/2c120d1a) Update client-go to kubernetes-1.16.3 (#453) +- [ce221024](https://github.com/kubedb/cli/commit/ce221024) Add add-license make target +- [84a6a1e8](https://github.com/kubedb/cli/commit/84a6a1e8) Add license header to files (#452) +- [1ced65ea](https://github.com/kubedb/cli/commit/1ced65ea) Split imports into 3 parts (#451) +- [8e533f69](https://github.com/kubedb/cli/commit/8e533f69) Add release workflow script (#450) +- [0735ce0c](https://github.com/kubedb/cli/commit/0735ce0c) Enable GitHub actions +- 
[8522ec74](https://github.com/kubedb/cli/commit/8522ec74) Update changelog + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-beta.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-beta.1) + +- [9aae4782](https://github.com/kubedb/elasticsearch/commit/9aae4782) Prepare for release v0.14.0-beta.1 (#319) +- [312e5682](https://github.com/kubedb/elasticsearch/commit/312e5682) Update for release Stash@v2020.07.09-beta.0 (#317) +- [681f3e87](https://github.com/kubedb/elasticsearch/commit/681f3e87) Include Makefile.env +- [e460af51](https://github.com/kubedb/elasticsearch/commit/e460af51) Allow customizing chart registry (#316) +- [64e15a33](https://github.com/kubedb/elasticsearch/commit/64e15a33) Update for release Stash@v2020.07.08-beta.0 (#315) +- [1f2ef7a6](https://github.com/kubedb/elasticsearch/commit/1f2ef7a6) Update License (#314) +- [16ce6c90](https://github.com/kubedb/elasticsearch/commit/16ce6c90) Update to Kubernetes v1.18.3 (#313) +- [3357faa3](https://github.com/kubedb/elasticsearch/commit/3357faa3) Update ci.yml +- [cb44a1eb](https://github.com/kubedb/elasticsearch/commit/cb44a1eb) Load stash version from .env file for make (#312) +- [cf212019](https://github.com/kubedb/elasticsearch/commit/cf212019) Update update-release-tracker.sh +- [5127428e](https://github.com/kubedb/elasticsearch/commit/5127428e) Update update-release-tracker.sh +- [7f790940](https://github.com/kubedb/elasticsearch/commit/7f790940) Add script to update release tracker on pr merge (#311) +- [340b6112](https://github.com/kubedb/elasticsearch/commit/340b6112) Update .kodiak.toml +- [e01c4eec](https://github.com/kubedb/elasticsearch/commit/e01c4eec) Various fixes (#310) +- [11517f71](https://github.com/kubedb/elasticsearch/commit/11517f71) Update to Kubernetes v1.18.3 (#309) +- [53d7b117](https://github.com/kubedb/elasticsearch/commit/53d7b117) Update to Kubernetes v1.18.3 +- [7eacc7dd](https://github.com/kubedb/elasticsearch/commit/7eacc7dd) Create .kodiak.toml +- [b91b23d9](https://github.com/kubedb/elasticsearch/commit/b91b23d9) Use CRD v1 for Kubernetes >= 1.16 (#308) +- [08c1d2a8](https://github.com/kubedb/elasticsearch/commit/08c1d2a8) Update to Kubernetes v1.18.3 (#307) +- [32cdb8a4](https://github.com/kubedb/elasticsearch/commit/32cdb8a4) Fix e2e tests (#306) +- [0bca1a04](https://github.com/kubedb/elasticsearch/commit/0bca1a04) Merge pull request #302 from kubedb/multi-region +- [bf0c26ee](https://github.com/kubedb/elasticsearch/commit/bf0c26ee) Revendor kubedb.dev/apimachinery@v0.14.0-beta.0 +- [7c00c63c](https://github.com/kubedb/elasticsearch/commit/7c00c63c) Add support for multi-regional cluster +- [363322df](https://github.com/kubedb/elasticsearch/commit/363322df) Update stash install commands +- [a0138a36](https://github.com/kubedb/elasticsearch/commit/a0138a36) Update crazy-max/ghaction-docker-buildx flag +- [3076eb46](https://github.com/kubedb/elasticsearch/commit/3076eb46) Use updated operator labels in e2e tests (#304) +- [d537b91b](https://github.com/kubedb/elasticsearch/commit/d537b91b) Pass annotations from CRD to AppBinding (#305) +- [48f9399c](https://github.com/kubedb/elasticsearch/commit/48f9399c) Trigger the workflow on push or pull request +- [7b8d56cb](https://github.com/kubedb/elasticsearch/commit/7b8d56cb) Update CHANGELOG.md +- [939f6882](https://github.com/kubedb/elasticsearch/commit/939f6882) Update labelSelector for statefulsets (#300) +- [ed1c0553](https://github.com/kubedb/elasticsearch/commit/ed1c0553) Make master service 
headless & add rest-port to all db nodes (#299) +- [b7e7c8d7](https://github.com/kubedb/elasticsearch/commit/b7e7c8d7) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#301) +- [e51555d5](https://github.com/kubedb/elasticsearch/commit/e51555d5) Introduce spec.halted and removed dormant and snapshot crd (#296) +- [8255276f](https://github.com/kubedb/elasticsearch/commit/8255276f) Add spec.selector fields to the governing service (#297) +- [13bc760f](https://github.com/kubedb/elasticsearch/commit/13bc760f) Use stash@v0.9.0-rc.4 release (#298) +- [6a21fb86](https://github.com/kubedb/elasticsearch/commit/6a21fb86) Add `Pause` feature (#295) +- [1b25070c](https://github.com/kubedb/elasticsearch/commit/1b25070c) Refactor CI pipeline to build once (#294) +- [ace3d779](https://github.com/kubedb/elasticsearch/commit/ace3d779) Fix e2e tests on GitHub actions (#292) +- [7a7eb8d1](https://github.com/kubedb/elasticsearch/commit/7a7eb8d1) fix bug (#293) +- [0641649e](https://github.com/kubedb/elasticsearch/commit/0641649e) Use Go 1.13 in CI (#291) +- [97790e1e](https://github.com/kubedb/elasticsearch/commit/97790e1e) Take out elasticsearch docker images and Matrix test (#289) +- [3a20c1db](https://github.com/kubedb/elasticsearch/commit/3a20c1db) Fix default make command +- [ece073a2](https://github.com/kubedb/elasticsearch/commit/ece073a2) Update catalog values for make install command +- [8df4697b](https://github.com/kubedb/elasticsearch/commit/8df4697b) Use charts to install operator (#290) +- [5cbde391](https://github.com/kubedb/elasticsearch/commit/5cbde391) Add add-license make target +- [b7012bc5](https://github.com/kubedb/elasticsearch/commit/b7012bc5) Skip libbuild folder from checking license +- [d56db3a0](https://github.com/kubedb/elasticsearch/commit/d56db3a0) Add license header to files (#288) +- [1d0c368a](https://github.com/kubedb/elasticsearch/commit/1d0c368a) Enable make ci (#287) +- [2e835dff](https://github.com/kubedb/elasticsearch/commit/2e835dff) Remove EnableStatusSubresource (#286) +- [bcd0ebd9](https://github.com/kubedb/elasticsearch/commit/bcd0ebd9) Fix E2E tests in github action (#285) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-beta.1](https://github.com/kubedb/installer/releases/tag/v0.14.0-beta.1) + +- [a081a36](https://github.com/kubedb/installer/commit/a081a36) Prepare for release v0.14.0-beta.1 (#107) +- [9c3fd4a](https://github.com/kubedb/installer/commit/9c3fd4a) Make chart registry configurable (#106) +- [a3da9a1](https://github.com/kubedb/installer/commit/a3da9a1) Publish to testing dir for alpha/beta releases +- [33685ee](https://github.com/kubedb/installer/commit/33685ee) Update License (#105) +- [f06fa20](https://github.com/kubedb/installer/commit/f06fa20) Update MySQL version catalog (#104) +- [674d129](https://github.com/kubedb/installer/commit/674d129) Update to Kubernetes v1.18.3 (#101) +- [fc16306](https://github.com/kubedb/installer/commit/fc16306) Update ci.yml +- [f65dd16](https://github.com/kubedb/installer/commit/f65dd16) Tag chart and app version as string for yq +- [ac21db4](https://github.com/kubedb/installer/commit/ac21db4) Update links (#100) +- [4a71c15](https://github.com/kubedb/installer/commit/4a71c15) Update update-release-tracker.sh +- [e7f14e9](https://github.com/kubedb/installer/commit/e7f14e9) Update update-release-tracker.sh +- [b26d3b8](https://github.com/kubedb/installer/commit/b26d3b8) Update release.yml +- [4f4985d](https://github.com/kubedb/installer/commit/4f4985d) Add script to update release 
tracker on pr merge (#98) +- [94baab8](https://github.com/kubedb/installer/commit/94baab8) Update ci.yml +- [2ffe241](https://github.com/kubedb/installer/commit/2ffe241) Rename TEST_NAMESPACE -> KUBE_NAMESPACE +- [34ba017](https://github.com/kubedb/installer/commit/34ba017) Change Enterprise operator image name to kubedb-enterprise (#97) +- [bc83b11](https://github.com/kubedb/installer/commit/bc83b11) Add commands to update chart (#96) +- [a0ddc4b](https://github.com/kubedb/installer/commit/a0ddc4b) Bring back postgres 9.6 (#95) +- [59c1cee](https://github.com/kubedb/installer/commit/59c1cee) Fix chart release process (#94) +- [40072c6](https://github.com/kubedb/installer/commit/40072c6) Deprecate non-patched versions (#93) +- [16f09ed](https://github.com/kubedb/installer/commit/16f09ed) Update .kodiak.toml +- [bb902e3](https://github.com/kubedb/installer/commit/bb902e3) Release kubedb-enterprise chart to stable charts +- [7c94dfc](https://github.com/kubedb/installer/commit/7c94dfc) Remove default deprecated: false fields (#92) +- [07b162d](https://github.com/kubedb/installer/commit/07b162d) Update chart versions (#91) +- [dd156da](https://github.com/kubedb/installer/commit/dd156da) Add rbac for configmaps +- [a175cc9](https://github.com/kubedb/installer/commit/a175cc9) Revise the validator & mutator webhook names (#90) +- [777b636](https://github.com/kubedb/installer/commit/777b636) Add kubedb-enterprise chart (#89) +- [6d4f4d8](https://github.com/kubedb/installer/commit/6d4f4d8) Update to Kubernetes v1.18.3 (#84) +- [8065729](https://github.com/kubedb/installer/commit/8065729) Update to Kubernetes v1.18.3 +- [87052ae](https://github.com/kubedb/installer/commit/87052ae) Create .kodiak.toml +- [8c8c122](https://github.com/kubedb/installer/commit/8c8c122) Add RBAC permission for generic garbage collector (#82) +- [f391304](https://github.com/kubedb/installer/commit/f391304) Permit configmap list/watch for delegated authentication (#81) +- [96dbad6](https://github.com/kubedb/installer/commit/96dbad6) Use updated image registry values field +- [a770d06](https://github.com/kubedb/installer/commit/a770d06) Generate both v1beta1 and v1 CRD YAML (#80) +- [ee01bf6](https://github.com/kubedb/installer/commit/ee01bf6) Update to Kubernetes v1.18.3 (#79) +- [7e6edc3](https://github.com/kubedb/installer/commit/7e6edc3) Update chart docs +- [71e999d](https://github.com/kubedb/installer/commit/71e999d) Remove combined redis catalog template +- [19aa0a1](https://github.com/kubedb/installer/commit/19aa0a1) Merge pull request #77 from kubedb/opsvalidator +- [103ca84](https://github.com/kubedb/installer/commit/103ca84) Use enterprise port values +- [5d538b8](https://github.com/kubedb/installer/commit/5d538b8) Add ops request validator +- [ce37683](https://github.com/kubedb/installer/commit/ce37683) Update Enterprise operator tag (#78) +- [9a08d70](https://github.com/kubedb/installer/commit/9a08d70) Merge pull request #76 from kubedb/mysqlnewversion +- [82a2d67](https://github.com/kubedb/installer/commit/82a2d67) remove unnecessary code and rename standAlone to standalone +- [f3f6d05](https://github.com/kubedb/installer/commit/f3f6d05) Add extra wrap for deprecated version +- [7206194](https://github.com/kubedb/installer/commit/7206194) Add mysql new version +- [f4e79c8](https://github.com/kubedb/installer/commit/f4e79c8) Rename api group to ops.kubedb.com (#75) +- [ee49da5](https://github.com/kubedb/installer/commit/ee49da5) Add skipDeprecated to catalog chart (#74) +- 
[dd6d4f9](https://github.com/kubedb/installer/commit/dd6d4f9) Split db catalog into separate files per version (#73) +- [4ab187b](https://github.com/kubedb/installer/commit/4ab187b) Merge pull request #71 from kubedb/fix-ci +- [bdbc6b5](https://github.com/kubedb/installer/commit/bdbc6b5) Remove PSP for Snapshot +- [b51576b](https://github.com/kubedb/installer/commit/b51576b) Use recommended kubernetes app labels +- [6f5a51c](https://github.com/kubedb/installer/commit/6f5a51c) Merge pull request #72 from pohly/memached-1.5.22 +- [a89d9bf](https://github.com/kubedb/installer/commit/a89d9bf) memcached: add 1.5.22 +- [a6b63d6](https://github.com/kubedb/installer/commit/a6b63d6) Trigger the workflow on push or pull request +- [600eb93](https://github.com/kubedb/installer/commit/600eb93) Update chart readme +- [df4bcb2](https://github.com/kubedb/installer/commit/df4bcb2) Auto generate chart readme file +- [00ca986](https://github.com/kubedb/installer/commit/00ca986) Use GCR_SERVICE_ACCOUNT_JSON_KEY env in CI +- [c0cdfe0](https://github.com/kubedb/installer/commit/c0cdfe0) Configure Docker credential helper +- [06ed3df](https://github.com/kubedb/installer/commit/06ed3df) Use gcr.io/appscode to host Enterprise operator image +- [9d0fbc9](https://github.com/kubedb/installer/commit/9d0fbc9) Update release.yml +- [e066043](https://github.com/kubedb/installer/commit/e066043) prometheus.io/coreos-operator -> prometheus.io/coreos-operator (#66) +- [91f37ec](https://github.com/kubedb/installer/commit/91f37ec) Use image.registry in catalog chart (#65) +- [a1ad35c](https://github.com/kubedb/installer/commit/a1ad35c) Move apireg annotation to operator pod (#64) +- [b02b054](https://github.com/kubedb/installer/commit/b02b054) Add fuzz tests for CRDs (#63) +- [12c1d4f](https://github.com/kubedb/installer/commit/12c1d4f) Various fixes (#62) +- [9b572fa](https://github.com/kubedb/installer/commit/9b572fa) Use kubectl v1.17.0 (#61) +- [2825f18](https://github.com/kubedb/installer/commit/2825f18) Fix helm install --wait flag (#57) +- [3e205ae](https://github.com/kubedb/installer/commit/3e205ae) Fix tolerations indentation for deployment (#58) +- [bed096d](https://github.com/kubedb/installer/commit/bed096d) Add cluster-role for dba.kubedb.com (#54) +- [a684c02](https://github.com/kubedb/installer/commit/a684c02) Update user roles for KubeDB crds (#60) +- [9e4f924](https://github.com/kubedb/installer/commit/9e4f924) Add release script to upload charts (#55) +- [1a9ba37](https://github.com/kubedb/installer/commit/1a9ba37) Updated Mongodb Init images (#51) +- [1ab4bed](https://github.com/kubedb/installer/commit/1ab4bed) Run checks once in CI pipeline (#53) +- [5265527](https://github.com/kubedb/installer/commit/5265527) Properly mark optional fields (#52) +- [6153622](https://github.com/kubedb/installer/commit/6153622) Add replicationModeDetector image field into MySQLVersion CRD (#50) +- [d546169](https://github.com/kubedb/installer/commit/d546169) Add Enterprise operator sidecar (#49) +- [cbc7f03](https://github.com/kubedb/installer/commit/cbc7f03) Add deletecollection verbs to kubedb roles (#44) +- [c598e90](https://github.com/kubedb/installer/commit/c598e90) Allow specifying rather than generating certs (#48) +- [b39d710](https://github.com/kubedb/installer/commit/b39d710) RBAC for cert manager, issuer watcher, and secret watcher (#43) +- [0276c34](https://github.com/kubedb/installer/commit/0276c34) Add missing permissions for PgBouncer operator (#47) +- 
[b85efed](https://github.com/kubedb/installer/commit/b85efed) Update Installer for ProxySQL and PerconaXtraDB (#46) +- [5c8212a](https://github.com/kubedb/installer/commit/5c8212a) Don't install PSP policy when catalog is disabled. (#45) +- [b8ebcdb](https://github.com/kubedb/installer/commit/b8ebcdb) Bring back support for k8s 1.11 (#42) +- [b9453f2](https://github.com/kubedb/installer/commit/b9453f2) Change minimum k8s req to 1.12 and use helm 3 in chart readme (#41) +- [29b4a96](https://github.com/kubedb/installer/commit/29b4a96) Add catalog for percona standalone (#40) +- [1ff9a1f](https://github.com/kubedb/installer/commit/1ff9a1f) Avoid creating apiservices when webhooks are disabled (#39) +- [b96eeba](https://github.com/kubedb/installer/commit/b96eeba) Update kubedb-catalog values +- [c6bc91a](https://github.com/kubedb/installer/commit/c6bc91a) Conditionally create validating and mutating webhooks. (#38) +- [e390c04](https://github.com/kubedb/installer/commit/e390c04) Delete script based installer (#36) +- [ab0f799](https://github.com/kubedb/installer/commit/ab0f799) Update installer for ProxySQL (#17) +- [5762cbf](https://github.com/kubedb/installer/commit/5762cbf) Update installer for PerconaXtraDB (#14) +- [6b5565a](https://github.com/kubedb/installer/commit/6b5565a) Pass imagePullSecrets as an array to service accounts (#37) +- [3a552a1](https://github.com/kubedb/installer/commit/3a552a1) Use helmpack/chart-testing:v3.0.0-beta.1 (#35) +- [13fc00b](https://github.com/kubedb/installer/commit/13fc00b) Fix RBAC permissions for Stash restoresessions (#34) +- [0023d58](https://github.com/kubedb/installer/commit/0023d58) Mark optional fields in installer CRD +- [b24b05e](https://github.com/kubedb/installer/commit/b24b05e) Add installer api CRD (#31) +- [51f80ea](https://github.com/kubedb/installer/commit/51f80ea) Always create rbac resources (#32) +- [f36f6c8](https://github.com/kubedb/installer/commit/f36f6c8) Use kind v0.6.1 (#30) +- [b13266e](https://github.com/kubedb/installer/commit/b13266e) Properly handle empty image pull secret name in installer (#29) +- [0243c9e](https://github.com/kubedb/installer/commit/0243c9e) Test installers (#27) +- [5aaba63](https://github.com/kubedb/installer/commit/5aaba63) Fix typo (#28) +- [dd2595d](https://github.com/kubedb/installer/commit/dd2595d) Use separate docker registry for operator and catalog images (#26) +- [316f340](https://github.com/kubedb/installer/commit/316f340) Use pgbouncer_exporter:v0.1.1 +- [29843a0](https://github.com/kubedb/installer/commit/29843a0) Support for pgbouncers (#11) +- [2f2f902](https://github.com/kubedb/installer/commit/2f2f902) Ensure operator service points to its own pod. 
(#25) +- [d187265](https://github.com/kubedb/installer/commit/d187265) Update postgres versions (#24) +- [167fe46](https://github.com/kubedb/installer/commit/167fe46) Remove --enable-status-subresource flag (#23) +- [025afcb](https://github.com/kubedb/installer/commit/025afcb) ESVersion 7.3.2 and 7.3 added (#21) +- [adb433f](https://github.com/kubedb/installer/commit/adb433f) Support for xpack in es6.8 and es7.2 (#20) +- [22634fa](https://github.com/kubedb/installer/commit/22634fa) Add crd for elasticsearch 7.2.0 (#9) +- [b06b2ea](https://github.com/kubedb/installer/commit/b06b2ea) Add namespace to cleaner Job (#18) +- [ac173e6](https://github.com/kubedb/installer/commit/ac173e6) Download onessl version v0.13.1 for Kubernetes 1.16 fix (#19) +- [3375df9](https://github.com/kubedb/installer/commit/3375df9) Use percona mongodb exporter from 0.13.0 (#16) +- [fdc6105](https://github.com/kubedb/installer/commit/fdc6105) Add support for Elasticsearch 6.8.0 (#7)    ## [kubedb/memcached](https://github.com/kubedb/memcached)  ### [v0.7.0-beta.1](https://github.com/kubedb/memcached/releases/tag/v0.7.0-beta.1)  - [3f7c1b90](https://github.com/kubedb/memcached/commit/3f7c1b90) Prepare for release v0.7.0-beta.1 (#160) +- [1278cd57](https://github.com/kubedb/memcached/commit/1278cd57) include Makefile.env (#158) +- [676222b7](https://github.com/kubedb/memcached/commit/676222b7) Update License (#157) +- [216fdcd4](https://github.com/kubedb/memcached/commit/216fdcd4) Update to Kubernetes v1.18.3 (#156) +- [dc59abf4](https://github.com/kubedb/memcached/commit/dc59abf4) Update ci.yml +- [071589c5](https://github.com/kubedb/memcached/commit/071589c5) Update update-release-tracker.sh +- [79bc96d8](https://github.com/kubedb/memcached/commit/79bc96d8) Update update-release-tracker.sh +- [31f5fca6](https://github.com/kubedb/memcached/commit/31f5fca6) Add script to update release tracker on pr merge (#155) +- [05d1d6ab](https://github.com/kubedb/memcached/commit/05d1d6ab) Update .kodiak.toml +- [522b617f](https://github.com/kubedb/memcached/commit/522b617f) Various fixes (#154) +- [2ed2c3a0](https://github.com/kubedb/memcached/commit/2ed2c3a0) Update to Kubernetes v1.18.3 (#152) +- [10cea9ad](https://github.com/kubedb/memcached/commit/10cea9ad) Update to Kubernetes v1.18.3 +- [582177b0](https://github.com/kubedb/memcached/commit/582177b0) Create .kodiak.toml +- [bf1900b6](https://github.com/kubedb/memcached/commit/bf1900b6) Run flaky e2e test (#151) +- [aa09abfc](https://github.com/kubedb/memcached/commit/aa09abfc) Use CRD v1 for Kubernetes >= 1.16 (#150) +- [b2586151](https://github.com/kubedb/memcached/commit/b2586151) Merge pull request #146 from pohly/pmem +- [dbd5b2b0](https://github.com/kubedb/memcached/commit/dbd5b2b0) Fix build +- [d0722c34](https://github.com/kubedb/memcached/commit/d0722c34) WIP: implement PMEM support +- [f16b1198](https://github.com/kubedb/memcached/commit/f16b1198) Makefile: adapt to recent installer repo changes +- [32f71c56](https://github.com/kubedb/memcached/commit/32f71c56) Makefile: support e2e testing with arbitrary KUBECONFIG file +- [6ed07efc](https://github.com/kubedb/memcached/commit/6ed07efc) Update to Kubernetes v1.18.3 (#149) +- [ce702669](https://github.com/kubedb/memcached/commit/ce702669) Fix e2e tests (#148) +- [18917f8d](https://github.com/kubedb/memcached/commit/18917f8d) Revendor kubedb.dev/apimachinery@master (#147) +- [e51d327c](https://github.com/kubedb/memcached/commit/e51d327c) Update crazy-max/ghaction-docker-buildx flag +- 
[1202c059](https://github.com/kubedb/memcached/commit/1202c059) Use updated operator labels in e2e tests (#144) +- [e02d42a4](https://github.com/kubedb/memcached/commit/e02d42a4) Pass annotations from CRD to AppBinding (#145) +- [2c91d63b](https://github.com/kubedb/memcached/commit/2c91d63b) Trigger the workflow on push or pull request +- [67c83a9a](https://github.com/kubedb/memcached/commit/67c83a9a) Update CHANGELOG.md +- [85e3cf54](https://github.com/kubedb/memcached/commit/85e3cf54) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#143) +- [e61dd2e6](https://github.com/kubedb/memcached/commit/e61dd2e6) Update error msg to reject halt when termination policy is 'DoNotTerminate' +- [bc079b7b](https://github.com/kubedb/memcached/commit/bc079b7b) Introduce spec.halted and removed dormant crd (#142) +- [f31610c3](https://github.com/kubedb/memcached/commit/f31610c3) Refactor CI pipeline to run build once (#141) +- [f5eec5e4](https://github.com/kubedb/memcached/commit/f5eec5e4) Update kubernetes client-go to 1.16.3 (#140) +- [f645174a](https://github.com/kubedb/memcached/commit/f645174a) Update catalog values for make install command +- [2a297c89](https://github.com/kubedb/memcached/commit/2a297c89) Use charts to install operator (#139) +- [83e2ba17](https://github.com/kubedb/memcached/commit/83e2ba17) Moved out docker files and added matrix github actions ci/cd (#138) +- [97e3a5bd](https://github.com/kubedb/memcached/commit/97e3a5bd) Add add-license make target +- [7b79fbfe](https://github.com/kubedb/memcached/commit/7b79fbfe) Add license header to files (#137) +- [2afa406f](https://github.com/kubedb/memcached/commit/2afa406f) Enable make ci (#136) +- [bab32534](https://github.com/kubedb/memcached/commit/bab32534) Remove EnableStatusSubresource (#135) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-beta.1](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-beta.1) + +- [b82a8fa7](https://github.com/kubedb/mongodb/commit/b82a8fa7) Prepare for release v0.7.0-beta.1 (#211) +- [a63d53ae](https://github.com/kubedb/mongodb/commit/a63d53ae) Update for release Stash@v2020.07.09-beta.0 (#209) +- [4e33e978](https://github.com/kubedb/mongodb/commit/4e33e978) include Makefile.env +- [1aa81a18](https://github.com/kubedb/mongodb/commit/1aa81a18) Allow customizing chart registry (#208) +- [05355e75](https://github.com/kubedb/mongodb/commit/05355e75) Update for release Stash@v2020.07.08-beta.0 (#207) +- [4f6be7b4](https://github.com/kubedb/mongodb/commit/4f6be7b4) Update License (#206) +- [cc54f7d3](https://github.com/kubedb/mongodb/commit/cc54f7d3) Update to Kubernetes v1.18.3 (#204) +- [d1a51b8e](https://github.com/kubedb/mongodb/commit/d1a51b8e) Update ci.yml +- [3a993329](https://github.com/kubedb/mongodb/commit/3a993329) Load stash version from .env file for make (#203) +- [7180a98c](https://github.com/kubedb/mongodb/commit/7180a98c) Update update-release-tracker.sh +- [745085fd](https://github.com/kubedb/mongodb/commit/745085fd) Update update-release-tracker.sh +- [07d83ac0](https://github.com/kubedb/mongodb/commit/07d83ac0) Add script to update release tracker on pr merge (#202) +- [bbe205bb](https://github.com/kubedb/mongodb/commit/bbe205bb) Update .kodiak.toml +- [998e656e](https://github.com/kubedb/mongodb/commit/998e656e) Various fixes (#201) +- [ca03db09](https://github.com/kubedb/mongodb/commit/ca03db09) Update to Kubernetes v1.18.3 (#200) +- [975fc700](https://github.com/kubedb/mongodb/commit/975fc700) Update to Kubernetes v1.18.3 +- 
[52972dcf](https://github.com/kubedb/mongodb/commit/52972dcf) Create .kodiak.toml +- [39168e53](https://github.com/kubedb/mongodb/commit/39168e53) Use CRD v1 for Kubernetes >= 1.16 (#199) +- [d6d87e16](https://github.com/kubedb/mongodb/commit/d6d87e16) Update to Kubernetes v1.18.3 (#198) +- [09cd5809](https://github.com/kubedb/mongodb/commit/09cd5809) Fix e2e tests (#197) +- [f47c4846](https://github.com/kubedb/mongodb/commit/f47c4846) Update stash install commands +- [010d0294](https://github.com/kubedb/mongodb/commit/010d0294) Revendor kubedb.dev/apimachinery@master (#196) +- [31ef2632](https://github.com/kubedb/mongodb/commit/31ef2632) Pass annotations from CRD to AppBinding (#195) +- [9594e92f](https://github.com/kubedb/mongodb/commit/9594e92f) Update crazy-max/ghaction-docker-buildx flag +- [0693d7a0](https://github.com/kubedb/mongodb/commit/0693d7a0) Use updated operator labels in e2e tests (#193) +- [5aaeeb90](https://github.com/kubedb/mongodb/commit/5aaeeb90) Trigger the workflow on push or pull request +- [2af16e3c](https://github.com/kubedb/mongodb/commit/2af16e3c) Update CHANGELOG.md +- [288c5d2f](https://github.com/kubedb/mongodb/commit/288c5d2f) Use SHARD_INDEX constant from apimachinery +- [4482edf3](https://github.com/kubedb/mongodb/commit/4482edf3) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#191) +- [0f20ff3a](https://github.com/kubedb/mongodb/commit/0f20ff3a) Manage SSL certificates using cert-manager (#190) +- [6f0c1aef](https://github.com/kubedb/mongodb/commit/6f0c1aef) Use Minio storage for testing (#188) +- [f8c56bac](https://github.com/kubedb/mongodb/commit/f8c56bac) Support affinity templating in mongodb-shard (#186) +- [71283767](https://github.com/kubedb/mongodb/commit/71283767) Use stash@v0.9.0-rc.4 release (#185) +- [f480de35](https://github.com/kubedb/mongodb/commit/f480de35) Fix `Pause` Logic (#184) +- [263e1bac](https://github.com/kubedb/mongodb/commit/263e1bac) Refactor CI pipeline to build once (#182) +- [e383f271](https://github.com/kubedb/mongodb/commit/e383f271) Add `Pause` Feature (#181) +- [584ecde6](https://github.com/kubedb/mongodb/commit/584ecde6) Delete backupconfig before attempting restoresession. 
(#180) +- [a78bc2a7](https://github.com/kubedb/mongodb/commit/a78bc2a7) Wipeout if custom databaseSecret has been deleted (#179) +- [e90cd386](https://github.com/kubedb/mongodb/commit/e90cd386) Matrix test and Moved out mongo docker files (#178) +- [c132db8f](https://github.com/kubedb/mongodb/commit/c132db8f) Add add-license makefile target +- [cc545e04](https://github.com/kubedb/mongodb/commit/cc545e04) Update Makefile +- [7a2eab2c](https://github.com/kubedb/mongodb/commit/7a2eab2c) Add license header to files (#177) +- [eecdb2cb](https://github.com/kubedb/mongodb/commit/eecdb2cb) Fix E2E tests in github action (#176) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.0-beta.1](https://github.com/kubedb/mysql/releases/tag/v0.7.0-beta.1) + +- [19ccc5b8](https://github.com/kubedb/mysql/commit/19ccc5b8) Prepare for release v0.7.0-beta.1 (#201) +- [e61de0e7](https://github.com/kubedb/mysql/commit/e61de0e7) Update for release Stash@v2020.07.09-beta.0 (#199) +- [3269df76](https://github.com/kubedb/mysql/commit/3269df76) Allow customizing chart registry (#198) +- [c487e68e](https://github.com/kubedb/mysql/commit/c487e68e) Update for release Stash@v2020.07.08-beta.0 (#197) +- [4f288ef0](https://github.com/kubedb/mysql/commit/4f288ef0) Update License (#196) +- [858a5e03](https://github.com/kubedb/mysql/commit/858a5e03) Update to Kubernetes v1.18.3 (#195) +- [88dec378](https://github.com/kubedb/mysql/commit/88dec378) Update ci.yml +- [31ef7c2a](https://github.com/kubedb/mysql/commit/31ef7c2a) Load stash version from .env file for make (#194) +- [872954a9](https://github.com/kubedb/mysql/commit/872954a9) Update update-release-tracker.sh +- [771059b9](https://github.com/kubedb/mysql/commit/771059b9) Update update-release-tracker.sh +- [0e625902](https://github.com/kubedb/mysql/commit/0e625902) Add script to update release tracker on pr merge (#193) +- [6a204efd](https://github.com/kubedb/mysql/commit/6a204efd) Update .kodiak.toml +- [de6fc09b](https://github.com/kubedb/mysql/commit/de6fc09b) Various fixes (#192) +- [86eb3313](https://github.com/kubedb/mysql/commit/86eb3313) Update to Kubernetes v1.18.3 (#191) +- [937afcc8](https://github.com/kubedb/mysql/commit/937afcc8) Update to Kubernetes v1.18.3 +- [8646a9c8](https://github.com/kubedb/mysql/commit/8646a9c8) Create .kodiak.toml +- [9f3d2e3c](https://github.com/kubedb/mysql/commit/9f3d2e3c) Use helm --wait in make install command +- [3d1e9cf3](https://github.com/kubedb/mysql/commit/3d1e9cf3) Use CRD v1 for Kubernetes >= 1.16 (#188) +- [5df90daa](https://github.com/kubedb/mysql/commit/5df90daa) Merge pull request #187 from kubedb/k-1.18.3 +- [179207de](https://github.com/kubedb/mysql/commit/179207de) Pass context +- [76c3fc86](https://github.com/kubedb/mysql/commit/76c3fc86) Update to Kubernetes v1.18.3 +- [da9ad307](https://github.com/kubedb/mysql/commit/da9ad307) Fix e2e tests (#186) +- [d7f2c63d](https://github.com/kubedb/mysql/commit/d7f2c63d) Update stash install commands +- [cfee601b](https://github.com/kubedb/mysql/commit/cfee601b) Revendor kubedb.dev/apimachinery@master (#185) +- [741fada4](https://github.com/kubedb/mysql/commit/741fada4) Update crazy-max/ghaction-docker-buildx flag +- [27291b98](https://github.com/kubedb/mysql/commit/27291b98) Use updated operator labels in e2e tests (#183) +- [16b00f9d](https://github.com/kubedb/mysql/commit/16b00f9d) Pass annotations from CRD to AppBinding (#184) +- [b70e0620](https://github.com/kubedb/mysql/commit/b70e0620) Trigger the workflow on push or pull request +- 
[6ea308d8](https://github.com/kubedb/mysql/commit/6ea308d8) Update CHANGELOG.md +- [188c3a91](https://github.com/kubedb/mysql/commit/188c3a91) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#181) +- [f4a67e95](https://github.com/kubedb/mysql/commit/f4a67e95) Introduce spec.halted and removed dormant and snapshot crd (#178) +- [8774a90c](https://github.com/kubedb/mysql/commit/8774a90c) Use stash@v0.9.0-rc.4 release (#179) +- [209653e6](https://github.com/kubedb/mysql/commit/209653e6) Use apache thrift v0.13.0 +- [e89fbe40](https://github.com/kubedb/mysql/commit/e89fbe40) Update github.com/apache/thrift v0.12.0 (#176) +- [c0d035c9](https://github.com/kubedb/mysql/commit/c0d035c9) Add Pause Feature (#177) +- [827a92b6](https://github.com/kubedb/mysql/commit/827a92b6) Mount mysql config dir and tmp dir as emptydir (#166) +- [2a84ed08](https://github.com/kubedb/mysql/commit/2a84ed08) Enable subresource for MySQL crd. (#175) +- [bc8ec773](https://github.com/kubedb/mysql/commit/bc8ec773) Update kubernetes client-go to 1.16.3 (#174) +- [014f6b0b](https://github.com/kubedb/mysql/commit/014f6b0b) Matrix tests for github actions (#172) +- [68f427db](https://github.com/kubedb/mysql/commit/68f427db) Fix default make command +- [76dc7d7b](https://github.com/kubedb/mysql/commit/76dc7d7b) Use charts to install operator (#173) +- [5ff41dc1](https://github.com/kubedb/mysql/commit/5ff41dc1) Add add-license make target +- [132b2a0e](https://github.com/kubedb/mysql/commit/132b2a0e) Add license header to files (#171) +- [aab6050e](https://github.com/kubedb/mysql/commit/aab6050e) Fix linter errors. (#169) +- [35043a15](https://github.com/kubedb/mysql/commit/35043a15) Enable make ci (#168) +- [e452bb4b](https://github.com/kubedb/mysql/commit/e452bb4b) Remove EnableStatusSubresource (#167) +- [28794570](https://github.com/kubedb/mysql/commit/28794570) Run e2e tests using GitHub actions (#164) +- [af3b284b](https://github.com/kubedb/mysql/commit/af3b284b) Validate DBVersionSpecs and fixed broken build (#165) +- [e4963763](https://github.com/kubedb/mysql/commit/e4963763) Update go.yml +- [a808e508](https://github.com/kubedb/mysql/commit/a808e508) Enable GitHub actions +- [6fe5dd42](https://github.com/kubedb/mysql/commit/6fe5dd42) Update changelog + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-beta.1](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-beta.1) + +- [3e62838](https://github.com/kubedb/mysql-replication-mode-detector/commit/3e62838) Prepare for release v0.1.0-beta.1 (#9) +- [e54c4c0](https://github.com/kubedb/mysql-replication-mode-detector/commit/e54c4c0) Update License (#7) +- [e071b02](https://github.com/kubedb/mysql-replication-mode-detector/commit/e071b02) Update to Kubernetes v1.18.3 (#6) +- [8992bcb](https://github.com/kubedb/mysql-replication-mode-detector/commit/8992bcb) Update update-release-tracker.sh +- [acc1038](https://github.com/kubedb/mysql-replication-mode-detector/commit/acc1038) Add script to update release tracker on pr merge (#5) +- [706b5b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/706b5b0) Update .kodiak.toml +- [4e52c03](https://github.com/kubedb/mysql-replication-mode-detector/commit/4e52c03) Update to Kubernetes v1.18.3 (#4) +- [adb05ae](https://github.com/kubedb/mysql-replication-mode-detector/commit/adb05ae) Merge branch 'master' into gomod-refresher-1591418508 +- [3a99f80](https://github.com/kubedb/mysql-replication-mode-detector/commit/3a99f80) 
Create .kodiak.toml +- [6289807](https://github.com/kubedb/mysql-replication-mode-detector/commit/6289807) Update to Kubernetes v1.18.3 +- [1dd24be](https://github.com/kubedb/mysql-replication-mode-detector/commit/1dd24be) Update to Kubernetes v1.18.3 (#3) +- [6d02366](https://github.com/kubedb/mysql-replication-mode-detector/commit/6d02366) Update Makefile and CI configuration (#2) +- [fc95884](https://github.com/kubedb/mysql-replication-mode-detector/commit/fc95884) Add primary role labeler controller (#1) +- [99dfb12](https://github.com/kubedb/mysql-replication-mode-detector/commit/99dfb12) add readme.md + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-beta.1](https://github.com/kubedb/operator/releases/tag/v0.14.0-beta.1) + +- [a2bba612](https://github.com/kubedb/operator/commit/a2bba612) Prepare for release v0.14.0-beta.1 (#262) +- [22bc85ec](https://github.com/kubedb/operator/commit/22bc85ec) Allow customizing chart registry (#261) +- [52cc1dc7](https://github.com/kubedb/operator/commit/52cc1dc7) Update for release Stash@v2020.07.09-beta.0 (#260) +- [2e8b709f](https://github.com/kubedb/operator/commit/2e8b709f) Update for release Stash@v2020.07.08-beta.0 (#259) +- [7b58b548](https://github.com/kubedb/operator/commit/7b58b548) Update License (#258) +- [d4cd1a93](https://github.com/kubedb/operator/commit/d4cd1a93) Update to Kubernetes v1.18.3 (#256) +- [f6091845](https://github.com/kubedb/operator/commit/f6091845) Update ci.yml +- [5324d2b6](https://github.com/kubedb/operator/commit/5324d2b6) Update ci.yml +- [c888d7fd](https://github.com/kubedb/operator/commit/c888d7fd) Add workflow to update docs (#255) +- [ba843e17](https://github.com/kubedb/operator/commit/ba843e17) Update update-release-tracker.sh +- [b93c5ab4](https://github.com/kubedb/operator/commit/b93c5ab4) Update update-release-tracker.sh +- [6b8d2149](https://github.com/kubedb/operator/commit/6b8d2149) Add script to update release tracker on pr merge (#254) +- [bb1290dc](https://github.com/kubedb/operator/commit/bb1290dc) Update .kodiak.toml +- [9bb85c3b](https://github.com/kubedb/operator/commit/9bb85c3b) Register validator & mutators for all supported dbs (#253) +- [1a524d9c](https://github.com/kubedb/operator/commit/1a524d9c) Various fixes (#252) +- [4860f2a7](https://github.com/kubedb/operator/commit/4860f2a7) Update to Kubernetes v1.18.3 (#251) +- [1a163c6a](https://github.com/kubedb/operator/commit/1a163c6a) Create .kodiak.toml +- [1eda36b9](https://github.com/kubedb/operator/commit/1eda36b9) Update to Kubernetes v1.18.3 (#247) +- [77b8b858](https://github.com/kubedb/operator/commit/77b8b858) Update Enterprise operator tag (#246) +- [96ca876e](https://github.com/kubedb/operator/commit/96ca876e) Revendor kubedb.dev/apimachinery@master (#245) +- [43a3a7f1](https://github.com/kubedb/operator/commit/43a3a7f1) Use recommended kubernetes app labels +- [1ae7045f](https://github.com/kubedb/operator/commit/1ae7045f) Update crazy-max/ghaction-docker-buildx flag +- [f25034ef](https://github.com/kubedb/operator/commit/f25034ef) Trigger the workflow on push or pull request +- [ba486319](https://github.com/kubedb/operator/commit/ba486319) Update readme (#244) +- [5f7191f4](https://github.com/kubedb/operator/commit/5f7191f4) Update CHANGELOG.md +- [5b14af4b](https://github.com/kubedb/operator/commit/5b14af4b) Add license scan report and status (#241) +- [9848932b](https://github.com/kubedb/operator/commit/9848932b) Pass the topology object to common controller +- 
[90d1c873](https://github.com/kubedb/operator/commit/90d1c873) Initialize topology for MongoDB webhooks (#243) +- [8ecb87c8](https://github.com/kubedb/operator/commit/8ecb87c8) Fix nil pointer exception (#242) +- [b12c3392](https://github.com/kubedb/operator/commit/b12c3392) Update operator dependencies (#237) +- [f714bb1b](https://github.com/kubedb/operator/commit/f714bb1b) Always create RBAC resources (#238) +- [f43a588e](https://github.com/kubedb/operator/commit/f43a588e) Use Go 1.13 in CI +- [e8ab3580](https://github.com/kubedb/operator/commit/e8ab3580) Update client-go to kubernetes-1.16.3 (#239) +- [1dc84a67](https://github.com/kubedb/operator/commit/1dc84a67) Update CI badge +- [d9d1cc0a](https://github.com/kubedb/operator/commit/d9d1cc0a) Bundle PgBouncer operator (#236) +- [720303c1](https://github.com/kubedb/operator/commit/720303c1) Fix linter errors (#235) +- [4c53a71f](https://github.com/kubedb/operator/commit/4c53a71f) Update go.yml +- [e65fc457](https://github.com/kubedb/operator/commit/e65fc457) Enable GitHub actions +- [2dcb0d6d](https://github.com/kubedb/operator/commit/2dcb0d6d) Update changelog    ## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb)  ### [v0.1.0-beta.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-beta.1)  - [28b9fc0f](https://github.com/kubedb/percona-xtradb/commit/28b9fc0f) Prepare for release v0.1.0-beta.1 (#41) +- [fb4f5444](https://github.com/kubedb/percona-xtradb/commit/fb4f5444) Update for release Stash@v2020.07.09-beta.0 (#39) +- [ad221aa2](https://github.com/kubedb/percona-xtradb/commit/ad221aa2) include Makefile.env +- [841ec855](https://github.com/kubedb/percona-xtradb/commit/841ec855) Allow customizing chart registry (#38) +- [bb608980](https://github.com/kubedb/percona-xtradb/commit/bb608980) Update License (#37) +- [cf8cd2fa](https://github.com/kubedb/percona-xtradb/commit/cf8cd2fa) Update for release Stash@v2020.07.08-beta.0 (#36) +- [7b28c4b9](https://github.com/kubedb/percona-xtradb/commit/7b28c4b9) Update to Kubernetes v1.18.3 (#35) +- [848ff94a](https://github.com/kubedb/percona-xtradb/commit/848ff94a) Update ci.yml +- [d124dd6a](https://github.com/kubedb/percona-xtradb/commit/d124dd6a) Load stash version from .env file for make (#34) +- [1de40e1d](https://github.com/kubedb/percona-xtradb/commit/1de40e1d) Update update-release-tracker.sh +- [7a4503be](https://github.com/kubedb/percona-xtradb/commit/7a4503be) Update update-release-tracker.sh +- [ad0dfaf8](https://github.com/kubedb/percona-xtradb/commit/ad0dfaf8) Add script to update release tracker on pr merge (#33) +- [aaca6bd9](https://github.com/kubedb/percona-xtradb/commit/aaca6bd9) Update .kodiak.toml +- [9a495724](https://github.com/kubedb/percona-xtradb/commit/9a495724) Various fixes (#32) +- [9b6c9a53](https://github.com/kubedb/percona-xtradb/commit/9b6c9a53) Update to Kubernetes v1.18.3 (#31) +- [67912547](https://github.com/kubedb/percona-xtradb/commit/67912547) Update to Kubernetes v1.18.3 +- [fc8ce4cc](https://github.com/kubedb/percona-xtradb/commit/fc8ce4cc) Create .kodiak.toml +- [8aba5ef2](https://github.com/kubedb/percona-xtradb/commit/8aba5ef2) Use CRD v1 for Kubernetes >= 1.16 (#30) +- [e81d2b4c](https://github.com/kubedb/percona-xtradb/commit/e81d2b4c) Update to Kubernetes v1.18.3 (#29) +- [2a32730a](https://github.com/kubedb/percona-xtradb/commit/2a32730a) Fix e2e tests (#28) +- [a79626d9](https://github.com/kubedb/percona-xtradb/commit/a79626d9) Update stash install commands +- 
[52fc2059](https://github.com/kubedb/percona-xtradb/commit/52fc2059) Use recommended kubernetes app labels (#27) +- [93dc10ec](https://github.com/kubedb/percona-xtradb/commit/93dc10ec) Update crazy-max/ghaction-docker-buildx flag +- [ce5717e2](https://github.com/kubedb/percona-xtradb/commit/ce5717e2) Revendor kubedb.dev/apimachinery@master (#26) +- [c1ca649d](https://github.com/kubedb/percona-xtradb/commit/c1ca649d) Pass annotations from CRD to AppBinding (#25) +- [f327cc01](https://github.com/kubedb/percona-xtradb/commit/f327cc01) Trigger the workflow on push or pull request +- [02432393](https://github.com/kubedb/percona-xtradb/commit/02432393) Update CHANGELOG.md +- [a89dbc55](https://github.com/kubedb/percona-xtradb/commit/a89dbc55) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#24) +- [e69742de](https://github.com/kubedb/percona-xtradb/commit/e69742de) Update for percona-xtradb standalone restoresession (#23) +- [958877a1](https://github.com/kubedb/percona-xtradb/commit/958877a1) Various fixes (#21) +- [fb0d7a35](https://github.com/kubedb/percona-xtradb/commit/fb0d7a35) Update kubernetes client-go to 1.16.3 (#20) +- [293fe9a4](https://github.com/kubedb/percona-xtradb/commit/293fe9a4) Fix default make command +- [39358e3b](https://github.com/kubedb/percona-xtradb/commit/39358e3b) Use charts to install operator (#19) +- [6c5b3395](https://github.com/kubedb/percona-xtradb/commit/6c5b3395) Several fixes and update tests (#18) +- [84ff139f](https://github.com/kubedb/percona-xtradb/commit/84ff139f) Various Makefile improvements (#16) +- [e2737f65](https://github.com/kubedb/percona-xtradb/commit/e2737f65) Remove EnableStatusSubresource (#17) +- [fb886b07](https://github.com/kubedb/percona-xtradb/commit/fb886b07) Run e2e tests using GitHub actions (#12) +- [35b155d9](https://github.com/kubedb/percona-xtradb/commit/35b155d9) Validate DBVersionSpecs and fixed broken build (#15) +- [67794bd9](https://github.com/kubedb/percona-xtradb/commit/67794bd9) Update go.yml +- [f7666354](https://github.com/kubedb/percona-xtradb/commit/f7666354) Various changes for Percona XtraDB (#13) +- [ceb7ba67](https://github.com/kubedb/percona-xtradb/commit/ceb7ba67) Enable GitHub actions +- [f5a112af](https://github.com/kubedb/percona-xtradb/commit/f5a112af) Refactor for ProxySQL Integration (#11) +- [26602049](https://github.com/kubedb/percona-xtradb/commit/26602049) Revendor +- [71957d40](https://github.com/kubedb/percona-xtradb/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10) +- [b526ccd8](https://github.com/kubedb/percona-xtradb/commit/b526ccd8) Set database version in AppBinding (#7) +- [336e7203](https://github.com/kubedb/percona-xtradb/commit/336e7203) Percona XtraDB Cluster support (#9) +- [71a42f7a](https://github.com/kubedb/percona-xtradb/commit/71a42f7a) Don't set annotation to AppBinding (#8) +- [282298cb](https://github.com/kubedb/percona-xtradb/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4) +- [2ab9dddf](https://github.com/kubedb/percona-xtradb/commit/2ab9dddf) Add license header to Makefiles (#6) +- [df135c08](https://github.com/kubedb/percona-xtradb/commit/df135c08) Add install, uninstall and purge command in Makefile (#3) +- [73d3a845](https://github.com/kubedb/percona-xtradb/commit/73d3a845) Update .gitignore +- [59a4e754](https://github.com/kubedb/percona-xtradb/commit/59a4e754) Add Makefile (#2) +- [f3551ddc](https://github.com/kubedb/percona-xtradb/commit/f3551ddc) Rename package path (#1) +- [56a241d6](https://github.com/kubedb/percona-xtradb/commit/56a241d6) 
Use explicit IP whitelist instead of automatic IP whitelist (#151) +- [9f0b5ca3](https://github.com/kubedb/percona-xtradb/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147) +- [73ad7c30](https://github.com/kubedb/percona-xtradb/commit/73ad7c30) Update changelog +- [ccc36b5c](https://github.com/kubedb/percona-xtradb/commit/ccc36b5c) Update README.md +- [9769e8e1](https://github.com/kubedb/percona-xtradb/commit/9769e8e1) Start next dev cycle +- [a3fa468a](https://github.com/kubedb/percona-xtradb/commit/a3fa468a) Prepare release 0.5.0 +- [6d8862de](https://github.com/kubedb/percona-xtradb/commit/6d8862de) Mysql Group Replication tests (#146) +- [49544e55](https://github.com/kubedb/percona-xtradb/commit/49544e55) Mysql Group Replication (#144) +- [a85d4b44](https://github.com/kubedb/percona-xtradb/commit/a85d4b44) Revendor dependencies +- [9c538460](https://github.com/kubedb/percona-xtradb/commit/9c538460) Changed Role to exclude psp without name (#143) +- [6cace93b](https://github.com/kubedb/percona-xtradb/commit/6cace93b) Modify mutator validator names (#142) +- [da0c19b9](https://github.com/kubedb/percona-xtradb/commit/da0c19b9) Update changelog +- [b79c80d6](https://github.com/kubedb/percona-xtradb/commit/b79c80d6) Start next dev cycle +- [838d9459](https://github.com/kubedb/percona-xtradb/commit/838d9459) Prepare release 0.4.0 +- [bf0f2c14](https://github.com/kubedb/percona-xtradb/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141) +- [3d227570](https://github.com/kubedb/percona-xtradb/commit/3d227570) Added PSP support for mySQL (#137) +- [7b766657](https://github.com/kubedb/percona-xtradb/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) +- [29e23470](https://github.com/kubedb/percona-xtradb/commit/29e23470) Support for init container (#139) +- [3e1556f6](https://github.com/kubedb/percona-xtradb/commit/3e1556f6) Add role label to stats service (#138) +- [ee078af9](https://github.com/kubedb/percona-xtradb/commit/ee078af9) Update changelog +- [978f1139](https://github.com/kubedb/percona-xtradb/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136) +- [821f23d1](https://github.com/kubedb/percona-xtradb/commit/821f23d1) Start next dev cycle +- [678b26aa](https://github.com/kubedb/percona-xtradb/commit/678b26aa) Prepare release 0.3.0 +- [40ad7a23](https://github.com/kubedb/percona-xtradb/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134) +- [98f03387](https://github.com/kubedb/percona-xtradb/commit/98f03387) Revendor dependencies (#135) +- [dfe92615](https://github.com/kubedb/percona-xtradb/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) +- [71f8a350](https://github.com/kubedb/percona-xtradb/commit/71f8a350) Added ephemeral StorageType support (#132) +- [0a6b6e46](https://github.com/kubedb/percona-xtradb/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131) +- [99e57a9e](https://github.com/kubedb/percona-xtradb/commit/99e57a9e) Use PVC spec from snapshot if provided (#130) +- [61497be6](https://github.com/kubedb/percona-xtradb/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) +- [7eafe088](https://github.com/kubedb/percona-xtradb/commit/7eafe088) Add certificate health checker (#128) +- [973ec416](https://github.com/kubedb/percona-xtradb/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127) +- 
[339975ff](https://github.com/kubedb/percona-xtradb/commit/339975ff) Fix AppBinding (#126) +- [62050a72](https://github.com/kubedb/percona-xtradb/commit/62050a72) Update changelog +- [2d454043](https://github.com/kubedb/percona-xtradb/commit/2d454043) Prepare release 0.2.0 +- [6941ea59](https://github.com/kubedb/percona-xtradb/commit/6941ea59) Reuse event recorder (#125) +- [b77e66c4](https://github.com/kubedb/percona-xtradb/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123) +- [c9228086](https://github.com/kubedb/percona-xtradb/commit/c9228086) Revendor dependencies (#124) +- [97837120](https://github.com/kubedb/percona-xtradb/commit/97837120) Test for faulty snapshot (#122) +- [c3e995b6](https://github.com/kubedb/percona-xtradb/commit/c3e995b6) Start next dev cycle +- [8a4f3b13](https://github.com/kubedb/percona-xtradb/commit/8a4f3b13) Prepare release 0.2.0-rc.2 +- [79942191](https://github.com/kubedb/percona-xtradb/commit/79942191) Upgrade database secret keys (#121) +- [1747fdf5](https://github.com/kubedb/percona-xtradb/commit/1747fdf5) Ignore mutation of fields to default values during update (#120) +- [d902d588](https://github.com/kubedb/percona-xtradb/commit/d902d588) Support configuration options for exporter sidecar (#119) +- [dd7c3f44](https://github.com/kubedb/percona-xtradb/commit/dd7c3f44) Use flags.DumpAll (#118) +- [bc1ef05b](https://github.com/kubedb/percona-xtradb/commit/bc1ef05b) Start next dev cycle +- [9d33c1a0](https://github.com/kubedb/percona-xtradb/commit/9d33c1a0) Prepare release 0.2.0-rc.1 +- [b076e141](https://github.com/kubedb/percona-xtradb/commit/b076e141) Apply cleanup (#117) +- [7dc5641f](https://github.com/kubedb/percona-xtradb/commit/7dc5641f) Set periodic analytics (#116) +- [90ea6acc](https://github.com/kubedb/percona-xtradb/commit/90ea6acc) Introduce AppBinding support (#115) +- [a882d76a](https://github.com/kubedb/percona-xtradb/commit/a882d76a) Fix Analytics (#114) +- [0961009c](https://github.com/kubedb/percona-xtradb/commit/0961009c) Error out from cron job for deprecated dbversion (#113) +- [da1f4e27](https://github.com/kubedb/percona-xtradb/commit/da1f4e27) Add CRDs without observation when operator starts (#112) +- [0a754d2f](https://github.com/kubedb/percona-xtradb/commit/0a754d2f) Update changelog +- [b09bc6e1](https://github.com/kubedb/percona-xtradb/commit/b09bc6e1) Start next dev cycle +- [0d467ccb](https://github.com/kubedb/percona-xtradb/commit/0d467ccb) Prepare release 0.2.0-rc.0 +- [c757007a](https://github.com/kubedb/percona-xtradb/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' +- [ddfe4be1](https://github.com/kubedb/percona-xtradb/commit/ddfe4be1) Support custom user password for backup (#111) +- [8c84ba20](https://github.com/kubedb/percona-xtradb/commit/8c84ba20) Support providing resources for monitoring container (#110) +- [7bcfbc48](https://github.com/kubedb/percona-xtradb/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109) +- [145bba2b](https://github.com/kubedb/percona-xtradb/commit/145bba2b) Add validation webhook xray (#108) +- [6da1887f](https://github.com/kubedb/percona-xtradb/commit/6da1887f) Various Fixes (#107) +- [111519e9](https://github.com/kubedb/percona-xtradb/commit/111519e9) Merge ports from service template (#105) +- [38147ef1](https://github.com/kubedb/percona-xtradb/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) +- [e28ebc47](https://github.com/kubedb/percona-xtradb/commit/e28ebc47) Pass resources to NamespaceValidator (#103) 
+- [aed12bf5](https://github.com/kubedb/percona-xtradb/commit/aed12bf5) Various fixes (#102) +- [3d372ef6](https://github.com/kubedb/percona-xtradb/commit/3d372ef6) Support Lifecycle hook and container probes (#101) +- [b6ef6887](https://github.com/kubedb/percona-xtradb/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100) +- [d89e7783](https://github.com/kubedb/percona-xtradb/commit/d89e7783) Update package alias (#99) +- [f0b44b3a](https://github.com/kubedb/percona-xtradb/commit/f0b44b3a) Start next dev cycle +- [a79ff03b](https://github.com/kubedb/percona-xtradb/commit/a79ff03b) Prepare release 0.2.0-beta.1 +- [0d8d3cca](https://github.com/kubedb/percona-xtradb/commit/0d8d3cca) Revendor api (#98) +- [2f850243](https://github.com/kubedb/percona-xtradb/commit/2f850243) Fix tests (#97) +- [4ced0bfe](https://github.com/kubedb/percona-xtradb/commit/4ced0bfe) Revendor api for catalog apigroup (#96) +- [e7695400](https://github.com/kubedb/percona-xtradb/commit/e7695400) Update changelog +- [8e358aea](https://github.com/kubedb/percona-xtradb/commit/8e358aea) Use --pull flag with docker build (#20) (#95) +- [d2a97d90](https://github.com/kubedb/percona-xtradb/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' +- [d1fe8a8a](https://github.com/kubedb/percona-xtradb/commit/d1fe8a8a) Start next dev cycle +- [04eb9bb5](https://github.com/kubedb/percona-xtradb/commit/04eb9bb5) Prepare release 0.2.0-beta.0 +- [9dfea960](https://github.com/kubedb/percona-xtradb/commit/9dfea960) Pass extra args to tools.sh (#93) +- [47dd3cad](https://github.com/kubedb/percona-xtradb/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92) +- [9c4d485b](https://github.com/kubedb/percona-xtradb/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91) +- [be72f784](https://github.com/kubedb/percona-xtradb/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) +- [05c8f14d](https://github.com/kubedb/percona-xtradb/commit/05c8f14d) Search used secrets within same namespace of DB object (#89) +- [0d94c946](https://github.com/kubedb/percona-xtradb/commit/0d94c946) Support Termination Policy (#88) +- [8775ddf7](https://github.com/kubedb/percona-xtradb/commit/8775ddf7) Update builddeps.sh +- [796c93da](https://github.com/kubedb/percona-xtradb/commit/796c93da) Revendor k8s.io/apiserver (#87) +- [5a1e3f57](https://github.com/kubedb/percona-xtradb/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86) +- [809a3c49](https://github.com/kubedb/percona-xtradb/commit/809a3c49) Support UpdateStrategy (#84) +- [372c52ef](https://github.com/kubedb/percona-xtradb/commit/372c52ef) Add TerminationPolicy for databases (#83) +- [c01b55e8](https://github.com/kubedb/percona-xtradb/commit/c01b55e8) Revendor api (#82) +- [5e196b95](https://github.com/kubedb/percona-xtradb/commit/5e196b95) Use IntHash as status.observedGeneration (#81) +- [2da3bb1b](https://github.com/kubedb/percona-xtradb/commit/2da3bb1b) fix github status (#80) +- [121d0a98](https://github.com/kubedb/percona-xtradb/commit/121d0a98) Update pipeline (#79) +- [532e3137](https://github.com/kubedb/percona-xtradb/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/percona-xtradb/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/percona-xtradb/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/percona-xtradb/commit/0b997855) Use official exporter image (#74) +- 
[702d5736](https://github.com/kubedb/percona-xtradb/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/percona-xtradb/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/percona-xtradb/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/percona-xtradb/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/percona-xtradb/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/percona-xtradb/commit/6a41d9bc) Fix uninstall for concourse (#69) +- [f1af09db](https://github.com/kubedb/percona-xtradb/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/percona-xtradb/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/percona-xtradb/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/percona-xtradb/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/percona-xtradb/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/percona-xtradb/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/percona-xtradb/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/percona-xtradb/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/percona-xtradb/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/percona-xtradb/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/percona-xtradb/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/percona-xtradb/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/percona-xtradb/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- [2fd02169](https://github.com/kubedb/percona-xtradb/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/percona-xtradb/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/percona-xtradb/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/percona-xtradb/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/percona-xtradb/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/percona-xtradb/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/percona-xtradb/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/percona-xtradb/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- [696135d5](https://github.com/kubedb/percona-xtradb/commit/696135d5) Fix travis build (#48) +- [c519ef89](https://github.com/kubedb/percona-xtradb/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/percona-xtradb/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/percona-xtradb/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/percona-xtradb/commit/520597a6) 
Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/percona-xtradb/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/percona-xtradb/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/percona-xtradb/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/percona-xtradb/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/percona-xtradb/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/percona-xtradb/commit/ca811a2e) Fix release script (#43) +- [b79541f6](https://github.com/kubedb/percona-xtradb/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/percona-xtradb/commit/a2d13c82) Concourse (#41) +- [95b2186e](https://github.com/kubedb/percona-xtradb/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/percona-xtradb/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/percona-xtradb/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/percona-xtradb/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/percona-xtradb/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/percona-xtradb/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/percona-xtradb/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/percona-xtradb/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/percona-xtradb/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/percona-xtradb/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/percona-xtradb/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- [bbcd48d6](https://github.com/kubedb/percona-xtradb/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/percona-xtradb/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/percona-xtradb/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/percona-xtradb/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/percona-xtradb/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/percona-xtradb/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/percona-xtradb/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/percona-xtradb/commit/b9ec877e) Add travis yaml (#25) +- [ade3571c](https://github.com/kubedb/percona-xtradb/commit/ade3571c) Start next dev cycle +- [b4b749df](https://github.com/kubedb/percona-xtradb/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/percona-xtradb/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/percona-xtradb/commit/5ee1ac8c) Update validation (#22) +- 
[dd023c50](https://github.com/kubedb/percona-xtradb/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/percona-xtradb/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/percona-xtradb/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/percona-xtradb/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/percona-xtradb/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) +- [a484e0e5](https://github.com/kubedb/percona-xtradb/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/percona-xtradb/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/percona-xtradb/commit/078098c8) Review Mysql docker images and Fixed monitoring (#14) +- [6877108a](https://github.com/kubedb/percona-xtradb/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/percona-xtradb/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/percona-xtradb/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/percona-xtradb/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/percona-xtradb/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/percona-xtradb/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/percona-xtradb/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/percona-xtradb/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/percona-xtradb/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/percona-xtradb/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/percona-xtradb/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/percona-xtradb/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/percona-xtradb/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/percona-xtradb/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/percona-xtradb/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/percona-xtradb/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/percona-xtradb/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/percona-xtradb/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/percona-xtradb/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/percona-xtradb/commit/0aa81cdf) Use client-go 5.x +- [3de10d7f](https://github.com/kubedb/percona-xtradb/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/percona-xtradb/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/percona-xtradb/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### 
[v0.2.0-beta.1](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-beta.1) + + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-beta.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-beta.1) + +- [bbf810c](https://github.com/kubedb/pgbouncer/commit/bbf810c) Prepare for release v0.1.0-beta.1 (#23) +- [5a6e361](https://github.com/kubedb/pgbouncer/commit/5a6e361) include Makefile.env (#22) +- [2d52d66](https://github.com/kubedb/pgbouncer/commit/2d52d66) Update License (#21) +- [33305d5](https://github.com/kubedb/pgbouncer/commit/33305d5) Update to Kubernetes v1.18.3 (#20) +- [b443a55](https://github.com/kubedb/pgbouncer/commit/b443a55) Update ci.yml +- [d3bedc9](https://github.com/kubedb/pgbouncer/commit/d3bedc9) Update update-release-tracker.sh +- [d9100ec](https://github.com/kubedb/pgbouncer/commit/d9100ec) Update update-release-tracker.sh +- [9b86bda](https://github.com/kubedb/pgbouncer/commit/9b86bda) Add script to update release tracker on pr merge (#19) +- [3362cef](https://github.com/kubedb/pgbouncer/commit/3362cef) Update .kodiak.toml +- [11ebebd](https://github.com/kubedb/pgbouncer/commit/11ebebd) Use POSTGRES_TAG v0.14.0-alpha.0 +- [dbe95b5](https://github.com/kubedb/pgbouncer/commit/dbe95b5) Various fixes (#18) +- [c50c65d](https://github.com/kubedb/pgbouncer/commit/c50c65d) Update to Kubernetes v1.18.3 (#17) +- [483fa43](https://github.com/kubedb/pgbouncer/commit/483fa43) Update to Kubernetes v1.18.3 +- [c0fa8e4](https://github.com/kubedb/pgbouncer/commit/c0fa8e4) Create .kodiak.toml +- [5e33801](https://github.com/kubedb/pgbouncer/commit/5e33801) Use CRD v1 for Kubernetes >= 1.16 (#16) +- [ef7fe47](https://github.com/kubedb/pgbouncer/commit/ef7fe47) Update to Kubernetes v1.18.3 (#15) +- [063339f](https://github.com/kubedb/pgbouncer/commit/063339f) Fix e2e tests (#14) +- [7cd92ba](https://github.com/kubedb/pgbouncer/commit/7cd92ba) Update crazy-max/ghaction-docker-buildx flag +- [e7a47a5](https://github.com/kubedb/pgbouncer/commit/e7a47a5) Revendor kubedb.dev/apimachinery@master (#13) +- [9d00916](https://github.com/kubedb/pgbouncer/commit/9d00916) Use updated operator labels in e2e tests (#12) +- [778924a](https://github.com/kubedb/pgbouncer/commit/778924a) Trigger the workflow on push or pull request +- [77be6b9](https://github.com/kubedb/pgbouncer/commit/77be6b9) Update CHANGELOG.md +- [a9decb9](https://github.com/kubedb/pgbouncer/commit/a9decb9) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#11) +- [cd4d272](https://github.com/kubedb/pgbouncer/commit/cd4d272) Fix build +- [b21b1a1](https://github.com/kubedb/pgbouncer/commit/b21b1a1) Revendor and update enterprise sidecar image (#10) +- [463f7bc](https://github.com/kubedb/pgbouncer/commit/463f7bc) Update Enterprise operator tag (#9) +- [6e01588](https://github.com/kubedb/pgbouncer/commit/6e01588) Use kubedb/installer master branch in CI +- [88b98a4](https://github.com/kubedb/pgbouncer/commit/88b98a4) Update pgbouncer controller (#8) +- [a6b71bc](https://github.com/kubedb/pgbouncer/commit/a6b71bc) Update variable names +- [1a6794b](https://github.com/kubedb/pgbouncer/commit/1a6794b) Fix plain text secret in exporter container of StatefulSet (#5) +- [ab104a9](https://github.com/kubedb/pgbouncer/commit/ab104a9) Update client-go to kubernetes-1.16.3 (#7) +- [68dbb14](https://github.com/kubedb/pgbouncer/commit/68dbb14) Use charts to install operator (#6) +- [30e3e72](https://github.com/kubedb/pgbouncer/commit/30e3e72) Add add-license make target +- 
[6c1a78a](https://github.com/kubedb/pgbouncer/commit/6c1a78a) Enable e2e tests in GitHub actions (#4) +- [0960f80](https://github.com/kubedb/pgbouncer/commit/0960f80) Initial implementation (#2) +- [a8a9b1d](https://github.com/kubedb/pgbouncer/commit/a8a9b1d) Update go.yml +- [bc3b262](https://github.com/kubedb/pgbouncer/commit/bc3b262) Enable GitHub actions +- [2e33db2](https://github.com/kubedb/pgbouncer/commit/2e33db2) Clone kubedb/postgres repo (#1) +- [45a7cac](https://github.com/kubedb/pgbouncer/commit/45a7cac) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-beta.1](https://github.com/kubedb/postgres/releases/tag/v0.14.0-beta.1) + +- [3848a43e](https://github.com/kubedb/postgres/commit/3848a43e) Prepare for release v0.14.0-beta.1 (#325) +- [d4ea0ba7](https://github.com/kubedb/postgres/commit/d4ea0ba7) Update for release Stash@v2020.07.09-beta.0 (#323) +- [6974afda](https://github.com/kubedb/postgres/commit/6974afda) Allow customizing kube namespace for Stash +- [d7d79ea1](https://github.com/kubedb/postgres/commit/d7d79ea1) Allow customizing chart registry (#322) +- [ba0423ac](https://github.com/kubedb/postgres/commit/ba0423ac) Update for release Stash@v2020.07.08-beta.0 (#321) +- [7e855763](https://github.com/kubedb/postgres/commit/7e855763) Update License +- [7bea404a](https://github.com/kubedb/postgres/commit/7bea404a) Update to Kubernetes v1.18.3 (#320) +- [eab0e83f](https://github.com/kubedb/postgres/commit/eab0e83f) Update ci.yml +- [4949f76e](https://github.com/kubedb/postgres/commit/4949f76e) Load stash version from .env file for make (#319) +- [79e9d8d9](https://github.com/kubedb/postgres/commit/79e9d8d9) Update update-release-tracker.sh +- [ca966b7b](https://github.com/kubedb/postgres/commit/ca966b7b) Update update-release-tracker.sh +- [31bbecfe](https://github.com/kubedb/postgres/commit/31bbecfe) Add script to update release tracker on pr merge (#318) +- [540d977f](https://github.com/kubedb/postgres/commit/540d977f) Update .kodiak.toml +- [3e7514a7](https://github.com/kubedb/postgres/commit/3e7514a7) Various fixes (#317) +- [1a5df17c](https://github.com/kubedb/postgres/commit/1a5df17c) Update to Kubernetes v1.18.3 (#315) +- [717cfb3f](https://github.com/kubedb/postgres/commit/717cfb3f) Update to Kubernetes v1.18.3 +- [95537169](https://github.com/kubedb/postgres/commit/95537169) Create .kodiak.toml +- [02579005](https://github.com/kubedb/postgres/commit/02579005) Use CRD v1 for Kubernetes >= 1.16 (#314) +- [6ce6deb1](https://github.com/kubedb/postgres/commit/6ce6deb1) Update to Kubernetes v1.18.3 (#313) +- [97f25ba0](https://github.com/kubedb/postgres/commit/97f25ba0) Fix e2e tests (#312) +- [a989c377](https://github.com/kubedb/postgres/commit/a989c377) Update stash install commands +- [6af12596](https://github.com/kubedb/postgres/commit/6af12596) Revendor kubedb.dev/apimachinery@master (#311) +- [9969b064](https://github.com/kubedb/postgres/commit/9969b064) Update crazy-max/ghaction-docker-buildx flag +- [e3360119](https://github.com/kubedb/postgres/commit/e3360119) Use updated operator labels in e2e tests (#309) +- [c183007c](https://github.com/kubedb/postgres/commit/c183007c) Pass annotations from CRD to AppBinding (#310) +- [55581f79](https://github.com/kubedb/postgres/commit/55581f79) Trigger the workflow on push or pull request +- [931b88cf](https://github.com/kubedb/postgres/commit/931b88cf) Update CHANGELOG.md +- 
[6f481749](https://github.com/kubedb/postgres/commit/6f481749) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#308) +- [15f0611d](https://github.com/kubedb/postgres/commit/15f0611d) Fix error msg to reject halt when termination policy is 'DoNotTerminate' +- [18aba058](https://github.com/kubedb/postgres/commit/18aba058) Change Pause to Halt (#307) +- [7e9b1c69](https://github.com/kubedb/postgres/commit/7e9b1c69) feat: allow changes to nodeSelector (#298) +- [a602faa1](https://github.com/kubedb/postgres/commit/a602faa1) Introduce spec.halted and removed dormant and snapshot crd (#305) +- [cdd384d7](https://github.com/kubedb/postgres/commit/cdd384d7) Moved leader election to kubedb/pg-leader-election (#304) +- [32c41db6](https://github.com/kubedb/postgres/commit/32c41db6) Use stash@v0.9.0-rc.4 release (#306) +- [fa55b472](https://github.com/kubedb/postgres/commit/fa55b472) Make e2e tests stable in github actions (#303) +- [afdc5fda](https://github.com/kubedb/postgres/commit/afdc5fda) Update client-go to kubernetes-1.16.3 (#301) +- [d28eb55a](https://github.com/kubedb/postgres/commit/d28eb55a) Take out postgres docker images and Matrix test (#297) +- [13fee32d](https://github.com/kubedb/postgres/commit/13fee32d) Fix default make command +- [55dfb368](https://github.com/kubedb/postgres/commit/55dfb368) Update catalog values for make install command +- [25f5b79c](https://github.com/kubedb/postgres/commit/25f5b79c) Use charts to install operator (#302) +- [c5a4ed77](https://github.com/kubedb/postgres/commit/c5a4ed77) Add add-license make target +- [aa1d98d0](https://github.com/kubedb/postgres/commit/aa1d98d0) Add license header to files (#296) +- [fd356006](https://github.com/kubedb/postgres/commit/fd356006) Fix E2E testing for github actions (#295) +- [6a3443a7](https://github.com/kubedb/postgres/commit/6a3443a7) Minio and S3 compatible storage fixes (#292) +- [5150cf34](https://github.com/kubedb/postgres/commit/5150cf34) Run e2e tests using GitHub actions (#293) +- [a4a3785b](https://github.com/kubedb/postgres/commit/a4a3785b) Validate DBVersionSpecs and fixed broken build (#294) +- [b171a244](https://github.com/kubedb/postgres/commit/b171a244) Update go.yml +- [1a61bf29](https://github.com/kubedb/postgres/commit/1a61bf29) Enable GitHub actions +- [6b869b15](https://github.com/kubedb/postgres/commit/6b869b15) Update changelog + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-beta.1](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-beta.1) + +- [2ed7d0e8](https://github.com/kubedb/proxysql/commit/2ed7d0e8) Prepare for release v0.1.0-beta.1 (#26) +- [3b5ee481](https://github.com/kubedb/proxysql/commit/3b5ee481) Update for release Stash@v2020.07.09-beta.0 (#25) +- [92b04b33](https://github.com/kubedb/proxysql/commit/92b04b33) include Makefile.env (#24) +- [eace7e26](https://github.com/kubedb/proxysql/commit/eace7e26) Update for release Stash@v2020.07.08-beta.0 (#23) +- [0c647c01](https://github.com/kubedb/proxysql/commit/0c647c01) Update License (#22) +- [3c1b41be](https://github.com/kubedb/proxysql/commit/3c1b41be) Update to Kubernetes v1.18.3 (#21) +- [dfa95bb8](https://github.com/kubedb/proxysql/commit/dfa95bb8) Update ci.yml +- [87390932](https://github.com/kubedb/proxysql/commit/87390932) Update update-release-tracker.sh +- [772a0c6a](https://github.com/kubedb/proxysql/commit/772a0c6a) Update update-release-tracker.sh +- [a3b2ae92](https://github.com/kubedb/proxysql/commit/a3b2ae92) Add script to update release tracker on pr merge (#20) +- 
[7578cae3](https://github.com/kubedb/proxysql/commit/7578cae3) Update .kodiak.toml +- [4ba876bc](https://github.com/kubedb/proxysql/commit/4ba876bc) Update operator tags +- [399aa60b](https://github.com/kubedb/proxysql/commit/399aa60b) Various fixes (#19) +- [7235b0c5](https://github.com/kubedb/proxysql/commit/7235b0c5) Update to Kubernetes v1.18.3 (#18) +- [427c1f21](https://github.com/kubedb/proxysql/commit/427c1f21) Update to Kubernetes v1.18.3 +- [1ac8da55](https://github.com/kubedb/proxysql/commit/1ac8da55) Create .kodiak.toml +- [3243d446](https://github.com/kubedb/proxysql/commit/3243d446) Use CRD v1 for Kubernetes >= 1.16 (#17) +- [4f5bea8d](https://github.com/kubedb/proxysql/commit/4f5bea8d) Update to Kubernetes v1.18.3 (#16) +- [a0d2611a](https://github.com/kubedb/proxysql/commit/a0d2611a) Fix e2e tests (#15) +- [987fbf60](https://github.com/kubedb/proxysql/commit/987fbf60) Update crazy-max/ghaction-docker-buildx flag +- [c2fad78e](https://github.com/kubedb/proxysql/commit/c2fad78e) Use updated operator labels in e2e tests (#14) +- [c5a01db8](https://github.com/kubedb/proxysql/commit/c5a01db8) Revendor kubedb.dev/apimachinery@master (#13) +- [756c8f8f](https://github.com/kubedb/proxysql/commit/756c8f8f) Trigger the workflow on push or pull request +- [fdf84e27](https://github.com/kubedb/proxysql/commit/fdf84e27) Update CHANGELOG.md +- [9075b453](https://github.com/kubedb/proxysql/commit/9075b453) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12) +- [f4d1c024](https://github.com/kubedb/proxysql/commit/f4d1c024) Matrix Tests on Github Actions (#11) +- [4e021072](https://github.com/kubedb/proxysql/commit/4e021072) Update mount path for custom config (#8) +- [b0922173](https://github.com/kubedb/proxysql/commit/b0922173) Enable ProxySQL monitoring (#6) +- [70be4e67](https://github.com/kubedb/proxysql/commit/70be4e67) ProxySQL test for MySQL (#4) +- [0a444b9e](https://github.com/kubedb/proxysql/commit/0a444b9e) Use charts to install operator (#7) +- [a51fbb51](https://github.com/kubedb/proxysql/commit/a51fbb51) ProxySQL operator for MySQL databases (#2) +- [883fa437](https://github.com/kubedb/proxysql/commit/883fa437) Update go.yml +- [2c0cf51c](https://github.com/kubedb/proxysql/commit/2c0cf51c) Enable GitHub actions +- [52e15cd2](https://github.com/kubedb/proxysql/commit/52e15cd2) percona-xtradb -> proxysql (#1) +- [dc71bffe](https://github.com/kubedb/proxysql/commit/dc71bffe) Revendor +- [71957d40](https://github.com/kubedb/proxysql/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10) +- [b526ccd8](https://github.com/kubedb/proxysql/commit/b526ccd8) Set database version in AppBinding (#7) +- [336e7203](https://github.com/kubedb/proxysql/commit/336e7203) Percona XtraDB Cluster support (#9) +- [71a42f7a](https://github.com/kubedb/proxysql/commit/71a42f7a) Don't set annotation to AppBinding (#8) +- [282298cb](https://github.com/kubedb/proxysql/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4) +- [2ab9dddf](https://github.com/kubedb/proxysql/commit/2ab9dddf) Add license header to Makefiles (#6) +- [df135c08](https://github.com/kubedb/proxysql/commit/df135c08) Add install, uninstall and purge command in Makefile (#3) +- [73d3a845](https://github.com/kubedb/proxysql/commit/73d3a845) Update .gitignore +- [59a4e754](https://github.com/kubedb/proxysql/commit/59a4e754) Add Makefile (#2) +- [f3551ddc](https://github.com/kubedb/proxysql/commit/f3551ddc) Rename package path (#1) +- [56a241d6](https://github.com/kubedb/proxysql/commit/56a241d6) Use explicit IP 
whitelist instead of automatic IP whitelist (#151) +- [9f0b5ca3](https://github.com/kubedb/proxysql/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147) +- [73ad7c30](https://github.com/kubedb/proxysql/commit/73ad7c30) Update changelog +- [ccc36b5c](https://github.com/kubedb/proxysql/commit/ccc36b5c) Update README.md +- [9769e8e1](https://github.com/kubedb/proxysql/commit/9769e8e1) Start next dev cycle +- [a3fa468a](https://github.com/kubedb/proxysql/commit/a3fa468a) Prepare release 0.5.0 +- [6d8862de](https://github.com/kubedb/proxysql/commit/6d8862de) Mysql Group Replication tests (#146) +- [49544e55](https://github.com/kubedb/proxysql/commit/49544e55) Mysql Group Replication (#144) +- [a85d4b44](https://github.com/kubedb/proxysql/commit/a85d4b44) Revendor dependencies +- [9c538460](https://github.com/kubedb/proxysql/commit/9c538460) Changed Role to exclude psp without name (#143) +- [6cace93b](https://github.com/kubedb/proxysql/commit/6cace93b) Modify mutator validator names (#142) +- [da0c19b9](https://github.com/kubedb/proxysql/commit/da0c19b9) Update changelog +- [b79c80d6](https://github.com/kubedb/proxysql/commit/b79c80d6) Start next dev cycle +- [838d9459](https://github.com/kubedb/proxysql/commit/838d9459) Prepare release 0.4.0 +- [bf0f2c14](https://github.com/kubedb/proxysql/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141) +- [3d227570](https://github.com/kubedb/proxysql/commit/3d227570) Added PSP support for mySQL (#137) +- [7b766657](https://github.com/kubedb/proxysql/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) +- [29e23470](https://github.com/kubedb/proxysql/commit/29e23470) Support for init container (#139) +- [3e1556f6](https://github.com/kubedb/proxysql/commit/3e1556f6) Add role label to stats service (#138) +- [ee078af9](https://github.com/kubedb/proxysql/commit/ee078af9) Update changelog +- [978f1139](https://github.com/kubedb/proxysql/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136) +- [821f23d1](https://github.com/kubedb/proxysql/commit/821f23d1) Start next dev cycle +- [678b26aa](https://github.com/kubedb/proxysql/commit/678b26aa) Prepare release 0.3.0 +- [40ad7a23](https://github.com/kubedb/proxysql/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134) +- [98f03387](https://github.com/kubedb/proxysql/commit/98f03387) Revendor dependencies (#135) +- [dfe92615](https://github.com/kubedb/proxysql/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) +- [71f8a350](https://github.com/kubedb/proxysql/commit/71f8a350) Added ephemeral StorageType support (#132) +- [0a6b6e46](https://github.com/kubedb/proxysql/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131) +- [99e57a9e](https://github.com/kubedb/proxysql/commit/99e57a9e) Use PVC spec from snapshot if provided (#130) +- [61497be6](https://github.com/kubedb/proxysql/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) +- [7eafe088](https://github.com/kubedb/proxysql/commit/7eafe088) Add certificate health checker (#128) +- [973ec416](https://github.com/kubedb/proxysql/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127) +- [339975ff](https://github.com/kubedb/proxysql/commit/339975ff) Fix AppBinding (#126) +- [62050a72](https://github.com/kubedb/proxysql/commit/62050a72) Update changelog +- [2d454043](https://github.com/kubedb/proxysql/commit/2d454043) 
Prepare release 0.2.0 +- [6941ea59](https://github.com/kubedb/proxysql/commit/6941ea59) Reuse event recorder (#125) +- [b77e66c4](https://github.com/kubedb/proxysql/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123) +- [c9228086](https://github.com/kubedb/proxysql/commit/c9228086) Revendor dependencies (#124) +- [97837120](https://github.com/kubedb/proxysql/commit/97837120) Test for faulty snapshot (#122) +- [c3e995b6](https://github.com/kubedb/proxysql/commit/c3e995b6) Start next dev cycle +- [8a4f3b13](https://github.com/kubedb/proxysql/commit/8a4f3b13) Prepare release 0.2.0-rc.2 +- [79942191](https://github.com/kubedb/proxysql/commit/79942191) Upgrade database secret keys (#121) +- [1747fdf5](https://github.com/kubedb/proxysql/commit/1747fdf5) Ignore mutation of fields to default values during update (#120) +- [d902d588](https://github.com/kubedb/proxysql/commit/d902d588) Support configuration options for exporter sidecar (#119) +- [dd7c3f44](https://github.com/kubedb/proxysql/commit/dd7c3f44) Use flags.DumpAll (#118) +- [bc1ef05b](https://github.com/kubedb/proxysql/commit/bc1ef05b) Start next dev cycle +- [9d33c1a0](https://github.com/kubedb/proxysql/commit/9d33c1a0) Prepare release 0.2.0-rc.1 +- [b076e141](https://github.com/kubedb/proxysql/commit/b076e141) Apply cleanup (#117) +- [7dc5641f](https://github.com/kubedb/proxysql/commit/7dc5641f) Set periodic analytics (#116) +- [90ea6acc](https://github.com/kubedb/proxysql/commit/90ea6acc) Introduce AppBinding support (#115) +- [a882d76a](https://github.com/kubedb/proxysql/commit/a882d76a) Fix Analytics (#114) +- [0961009c](https://github.com/kubedb/proxysql/commit/0961009c) Error out from cron job for deprecated dbversion (#113) +- [da1f4e27](https://github.com/kubedb/proxysql/commit/da1f4e27) Add CRDs without observation when operator starts (#112) +- [0a754d2f](https://github.com/kubedb/proxysql/commit/0a754d2f) Update changelog +- [b09bc6e1](https://github.com/kubedb/proxysql/commit/b09bc6e1) Start next dev cycle +- [0d467ccb](https://github.com/kubedb/proxysql/commit/0d467ccb) Prepare release 0.2.0-rc.0 +- [c757007a](https://github.com/kubedb/proxysql/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' +- [ddfe4be1](https://github.com/kubedb/proxysql/commit/ddfe4be1) Support custom user password for backup (#111) +- [8c84ba20](https://github.com/kubedb/proxysql/commit/8c84ba20) Support providing resources for monitoring container (#110) +- [7bcfbc48](https://github.com/kubedb/proxysql/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109) +- [145bba2b](https://github.com/kubedb/proxysql/commit/145bba2b) Add validation webhook xray (#108) +- [6da1887f](https://github.com/kubedb/proxysql/commit/6da1887f) Various Fixes (#107) +- [111519e9](https://github.com/kubedb/proxysql/commit/111519e9) Merge ports from service template (#105) +- [38147ef1](https://github.com/kubedb/proxysql/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) +- [e28ebc47](https://github.com/kubedb/proxysql/commit/e28ebc47) Pass resources to NamespaceValidator (#103) +- [aed12bf5](https://github.com/kubedb/proxysql/commit/aed12bf5) Various fixes (#102) +- [3d372ef6](https://github.com/kubedb/proxysql/commit/3d372ef6) Support Lifecycle hook and container probes (#101) +- [b6ef6887](https://github.com/kubedb/proxysql/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100) +- [d89e7783](https://github.com/kubedb/proxysql/commit/d89e7783) Update package alias 
(#99) +- [f0b44b3a](https://github.com/kubedb/proxysql/commit/f0b44b3a) Start next dev cycle +- [a79ff03b](https://github.com/kubedb/proxysql/commit/a79ff03b) Prepare release 0.2.0-beta.1 +- [0d8d3cca](https://github.com/kubedb/proxysql/commit/0d8d3cca) Revendor api (#98) +- [2f850243](https://github.com/kubedb/proxysql/commit/2f850243) Fix tests (#97) +- [4ced0bfe](https://github.com/kubedb/proxysql/commit/4ced0bfe) Revendor api for catalog apigroup (#96) +- [e7695400](https://github.com/kubedb/proxysql/commit/e7695400) Update changelog +- [8e358aea](https://github.com/kubedb/proxysql/commit/8e358aea) Use --pull flag with docker build (#20) (#95) +- [d2a97d90](https://github.com/kubedb/proxysql/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' +- [d1fe8a8a](https://github.com/kubedb/proxysql/commit/d1fe8a8a) Start next dev cycle +- [04eb9bb5](https://github.com/kubedb/proxysql/commit/04eb9bb5) Prepare release 0.2.0-beta.0 +- [9dfea960](https://github.com/kubedb/proxysql/commit/9dfea960) Pass extra args to tools.sh (#93) +- [47dd3cad](https://github.com/kubedb/proxysql/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92) +- [9c4d485b](https://github.com/kubedb/proxysql/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91) +- [be72f784](https://github.com/kubedb/proxysql/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) +- [05c8f14d](https://github.com/kubedb/proxysql/commit/05c8f14d) Search used secrets within same namespace of DB object (#89) +- [0d94c946](https://github.com/kubedb/proxysql/commit/0d94c946) Support Termination Policy (#88) +- [8775ddf7](https://github.com/kubedb/proxysql/commit/8775ddf7) Update builddeps.sh +- [796c93da](https://github.com/kubedb/proxysql/commit/796c93da) Revendor k8s.io/apiserver (#87) +- [5a1e3f57](https://github.com/kubedb/proxysql/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86) +- [809a3c49](https://github.com/kubedb/proxysql/commit/809a3c49) Support UpdateStrategy (#84) +- [372c52ef](https://github.com/kubedb/proxysql/commit/372c52ef) Add TerminationPolicy for databases (#83) +- [c01b55e8](https://github.com/kubedb/proxysql/commit/c01b55e8) Revendor api (#82) +- [5e196b95](https://github.com/kubedb/proxysql/commit/5e196b95) Use IntHash as status.observedGeneration (#81) +- [2da3bb1b](https://github.com/kubedb/proxysql/commit/2da3bb1b) fix github status (#80) +- [121d0a98](https://github.com/kubedb/proxysql/commit/121d0a98) Update pipeline (#79) +- [532e3137](https://github.com/kubedb/proxysql/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/proxysql/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/proxysql/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/proxysql/commit/0b997855) Use official exporter image (#74) +- [702d5736](https://github.com/kubedb/proxysql/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/proxysql/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/proxysql/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/proxysql/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/proxysql/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/proxysql/commit/6a41d9bc) Fix uninstall for 
concourse (#69) +- [f1af09db](https://github.com/kubedb/proxysql/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/proxysql/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/proxysql/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/proxysql/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/proxysql/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/proxysql/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/proxysql/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/proxysql/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/proxysql/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/proxysql/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/proxysql/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/proxysql/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/proxysql/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- [2fd02169](https://github.com/kubedb/proxysql/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/proxysql/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/proxysql/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/proxysql/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/proxysql/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/proxysql/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/proxysql/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/proxysql/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- [696135d5](https://github.com/kubedb/proxysql/commit/696135d5) Fix travis build (#48) +- [c519ef89](https://github.com/kubedb/proxysql/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/proxysql/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/proxysql/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/proxysql/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/proxysql/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/proxysql/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/proxysql/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/proxysql/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/proxysql/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/proxysql/commit/ca811a2e) Fix release script (#43) +- [b79541f6](https://github.com/kubedb/proxysql/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/proxysql/commit/a2d13c82) Concourse (#41) +- 
[95b2186e](https://github.com/kubedb/proxysql/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/proxysql/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/proxysql/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/proxysql/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/proxysql/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/proxysql/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/proxysql/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/proxysql/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/proxysql/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/proxysql/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/proxysql/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- [bbcd48d6](https://github.com/kubedb/proxysql/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/proxysql/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/proxysql/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/proxysql/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/proxysql/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/proxysql/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/proxysql/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/proxysql/commit/b9ec877e) Add travis yaml (#25) +- [ade3571c](https://github.com/kubedb/proxysql/commit/ade3571c) Start next dev cycle +- [b4b749df](https://github.com/kubedb/proxysql/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/proxysql/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/proxysql/commit/5ee1ac8c) Update validation (#22) +- [dd023c50](https://github.com/kubedb/proxysql/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/proxysql/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/proxysql/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/proxysql/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/proxysql/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) +- [a484e0e5](https://github.com/kubedb/proxysql/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/proxysql/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/proxysql/commit/078098c8) Review Mysql docker images and Fixed monitoring 
(#14) +- [6877108a](https://github.com/kubedb/proxysql/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/proxysql/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/proxysql/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/proxysql/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/proxysql/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/proxysql/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/proxysql/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/proxysql/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/proxysql/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/proxysql/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/proxysql/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/proxysql/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/proxysql/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/proxysql/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/proxysql/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/proxysql/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/proxysql/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/proxysql/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/proxysql/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/proxysql/commit/0aa81cdf) Use client-go 5.x +- [3de10d7f](https://github.com/kubedb/proxysql/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/proxysql/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/proxysql/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-beta.1](https://github.com/kubedb/redis/releases/tag/v0.7.0-beta.1) + +- [768962f4](https://github.com/kubedb/redis/commit/768962f4) Prepare for release v0.7.0-beta.1 (#173) +- [9efbb8e4](https://github.com/kubedb/redis/commit/9efbb8e4) include Makefile.env (#171) +- [b343c559](https://github.com/kubedb/redis/commit/b343c559) Update License (#170) +- [d666ac18](https://github.com/kubedb/redis/commit/d666ac18) Update to Kubernetes v1.18.3 (#169) +- [602354f6](https://github.com/kubedb/redis/commit/602354f6) Update ci.yml +- [59f2d238](https://github.com/kubedb/redis/commit/59f2d238) Update update-release-tracker.sh +- [64c96db5](https://github.com/kubedb/redis/commit/64c96db5) Update update-release-tracker.sh +- [49cd15a9](https://github.com/kubedb/redis/commit/49cd15a9) Add script to update release tracker on pr merge (#167) +- [c711be8f](https://github.com/kubedb/redis/commit/c711be8f) chore: replica alert typo (#166) +- [2d752316](https://github.com/kubedb/redis/commit/2d752316) Update .kodiak.toml +- [ea3b206d](https://github.com/kubedb/redis/commit/ea3b206d) Various fixes (#165) +- [e441809c](https://github.com/kubedb/redis/commit/e441809c) Update to Kubernetes v1.18.3 (#164) +- 
[1e5ecfb7](https://github.com/kubedb/redis/commit/1e5ecfb7) Update to Kubernetes v1.18.3 +- [742679dd](https://github.com/kubedb/redis/commit/742679dd) Create .kodiak.toml +- [2eb77b80](https://github.com/kubedb/redis/commit/2eb77b80) Update apis (#163) +- [7cf9e7d3](https://github.com/kubedb/redis/commit/7cf9e7d3) Use CRD v1 for Kubernetes >= 1.16 (#162) +- [bf072134](https://github.com/kubedb/redis/commit/bf072134) Update kind command +- [cb2a748d](https://github.com/kubedb/redis/commit/cb2a748d) Update dependencies +- [a30cd6eb](https://github.com/kubedb/redis/commit/a30cd6eb) Update to Kubernetes v1.18.3 (#161) +- [9cdac95f](https://github.com/kubedb/redis/commit/9cdac95f) Fix e2e tests (#160) +- [429141b4](https://github.com/kubedb/redis/commit/429141b4) Revendor kubedb.dev/apimachinery@master (#159) +- [664c086b](https://github.com/kubedb/redis/commit/664c086b) Use recommended kubernetes app labels +- [2e6a2f03](https://github.com/kubedb/redis/commit/2e6a2f03) Update crazy-max/ghaction-docker-buildx flag +- [88417e86](https://github.com/kubedb/redis/commit/88417e86) Pass annotations from CRD to AppBinding (#158) +- [84167d7a](https://github.com/kubedb/redis/commit/84167d7a) Trigger the workflow on push or pull request +- [2f43dd9a](https://github.com/kubedb/redis/commit/2f43dd9a) Use helm --wait +- [36399173](https://github.com/kubedb/redis/commit/36399173) Use updated operator labels in e2e tests (#156) +- [c6582491](https://github.com/kubedb/redis/commit/c6582491) Update CHANGELOG.md +- [197b4973](https://github.com/kubedb/redis/commit/197b4973) Support PodAffinity Templating (#155) +- [cdfbb77d](https://github.com/kubedb/redis/commit/cdfbb77d) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#154) +- [c1db4c43](https://github.com/kubedb/redis/commit/c1db4c43) Version update to resolve security issue in github.com/apache/th… (#153) +- [7acc502b](https://github.com/kubedb/redis/commit/7acc502b) Use rancher/local-path-provisioner@v0.0.12 (#152) +- [d00f765e](https://github.com/kubedb/redis/commit/d00f765e) Introduce spec.halted and removed dormant crd (#151) +- [9ed1d97e](https://github.com/kubedb/redis/commit/9ed1d97e) Add `Pause` Feature (#150) +- [39ed60c4](https://github.com/kubedb/redis/commit/39ed60c4) Refactor CI pipeline to build once (#149) +- [1707e0c7](https://github.com/kubedb/redis/commit/1707e0c7) Update kubernetes client-go to 1.16.3 (#148) +- [dcbb4be4](https://github.com/kubedb/redis/commit/dcbb4be4) Update catalog values for make install command +- [9fa3ef1c](https://github.com/kubedb/redis/commit/9fa3ef1c) Update catalog values for make install command (#147) +- [44538409](https://github.com/kubedb/redis/commit/44538409) Use charts to install operator (#146) +- [05e3b95a](https://github.com/kubedb/redis/commit/05e3b95a) Matrix test for github actions (#145) +- [e76f96f6](https://github.com/kubedb/redis/commit/e76f96f6) Add add-license make target +- [6ccd651c](https://github.com/kubedb/redis/commit/6ccd651c) Update Makefile +- [2a56f27f](https://github.com/kubedb/redis/commit/2a56f27f) Add license header to files (#144) +- [5ce5e5e0](https://github.com/kubedb/redis/commit/5ce5e5e0) Run e2e tests in parallel (#142) +- [77012ddf](https://github.com/kubedb/redis/commit/77012ddf) Use log.Fatal instead of Must() (#143) +- [aa7f1673](https://github.com/kubedb/redis/commit/aa7f1673) Enable make ci (#141) +- [abd6a605](https://github.com/kubedb/redis/commit/abd6a605) Remove EnableStatusSubresource (#140) +- [08cfe0ca](https://github.com/kubedb/redis/commit/08cfe0ca) Fix 
tests for github actions (#139) +- [09e72f63](https://github.com/kubedb/redis/commit/09e72f63) Prepend redis.conf to args list (#136) +- [101afa35](https://github.com/kubedb/redis/commit/101afa35) Run e2e tests using GitHub actions (#137) +- [bbf5cb9f](https://github.com/kubedb/redis/commit/bbf5cb9f) Validate DBVersionSpecs and fixed broken build (#138) +- [26f0c88b](https://github.com/kubedb/redis/commit/26f0c88b) Update go.yml +- [9dab8c06](https://github.com/kubedb/redis/commit/9dab8c06) Enable GitHub actions +- [6a722f20](https://github.com/kubedb/redis/commit/6a722f20) Update changelog + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.09.04-beta.0.md b/content/docs/v2024.1.31/CHANGELOG-v2020.09.04-beta.0.md new file mode 100644 index 0000000000..c5f6bd9840 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.09.04-beta.0.md @@ -0,0 +1,423 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.09.04-beta.0 + name: Changelog-v2020.09.04-beta.0 + parent: welcome + weight: 20200904 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.09.04-beta.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.09.04-beta.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.09.04-beta.0 (2020-09-04) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-beta.2](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-beta.2) + +- [76ac9bc0](https://github.com/kubedb/apimachinery/commit/76ac9bc0) Remove CertManagerClient client +- [b99048f4](https://github.com/kubedb/apimachinery/commit/b99048f4) Remove unused constants for ProxySQL +- [152cef57](https://github.com/kubedb/apimachinery/commit/152cef57) Update Kubernetes v1.18.3 dependencies (#578) +- [24c5e829](https://github.com/kubedb/apimachinery/commit/24c5e829) Update redis constants (#575) +- [7075b38d](https://github.com/kubedb/apimachinery/commit/7075b38d) Remove spec.updateStrategy field (#577) +- [dfd11955](https://github.com/kubedb/apimachinery/commit/dfd11955) Remove description from CRD yamls (#576) +- [2d1b5878](https://github.com/kubedb/apimachinery/commit/2d1b5878) Add autoscaling crds (#554) +- [68ed8127](https://github.com/kubedb/apimachinery/commit/68ed8127) Fix build +- [63d18f0d](https://github.com/kubedb/apimachinery/commit/63d18f0d) Rename PgBouncer archiver to client +- [a219c251](https://github.com/kubedb/apimachinery/commit/a219c251) Handle shard scenario for MongoDB cert names (#574) +- [d2c80e55](https://github.com/kubedb/apimachinery/commit/d2c80e55) Add MongoDB Custom Config Spec (#562) +- [1e69fb02](https://github.com/kubedb/apimachinery/commit/1e69fb02) Support multiple certificates per DB (#555) +- [9bbed3d1](https://github.com/kubedb/apimachinery/commit/9bbed3d1) Update Kubernetes v1.18.3 dependencies (#573) +- [7df78c7a](https://github.com/kubedb/apimachinery/commit/7df78c7a) Update CRD yamls +- [406d895d](https://github.com/kubedb/apimachinery/commit/406d895d) Implement ServiceMonitorAdditionalLabels method (#572) +- [cfe4374a](https://github.com/kubedb/apimachinery/commit/cfe4374a) Make ServiceMonitor name same as stats service (#563) +- [d2ed6b4a](https://github.com/kubedb/apimachinery/commit/d2ed6b4a) Update for release 
Stash@v2020.08.27 (#571) +- [749b9084](https://github.com/kubedb/apimachinery/commit/749b9084) Update for release Stash@v2020.08.27-rc.0 (#570) +- [5d8bf42c](https://github.com/kubedb/apimachinery/commit/5d8bf42c) Update for release Stash@v2020.08.26-rc.1 (#569) +- [6edc4782](https://github.com/kubedb/apimachinery/commit/6edc4782) Update for release Stash@v2020.08.26-rc.0 (#568) +- [c451ff3a](https://github.com/kubedb/apimachinery/commit/c451ff3a) Update Kubernetes v1.18.3 dependencies (#565) +- [fdc6e2d6](https://github.com/kubedb/apimachinery/commit/fdc6e2d6) Update Kubernetes v1.18.3 dependencies (#564) +- [2f509c26](https://github.com/kubedb/apimachinery/commit/2f509c26) Update Kubernetes v1.18.3 dependencies (#561) +- [da655afe](https://github.com/kubedb/apimachinery/commit/da655afe) Update Kubernetes v1.18.3 dependencies (#560) +- [9c2c06a9](https://github.com/kubedb/apimachinery/commit/9c2c06a9) Fix MySQL enterprise condition's constant (#559) +- [81ed2724](https://github.com/kubedb/apimachinery/commit/81ed2724) Update Kubernetes v1.18.3 dependencies (#558) +- [738b7ade](https://github.com/kubedb/apimachinery/commit/738b7ade) Update Kubernetes v1.18.3 dependencies (#557) +- [93f0af4b](https://github.com/kubedb/apimachinery/commit/93f0af4b) Add MySQL Constants (#553) +- [6049554d](https://github.com/kubedb/apimachinery/commit/6049554d) Add {Horizontal,Vertical}ScalingSpec for Redis (#534) +- [28552272](https://github.com/kubedb/apimachinery/commit/28552272) Enable TLS for Redis (#546) +- [68e00844](https://github.com/kubedb/apimachinery/commit/68e00844) Add Spec for MongoDB Volume Expansion (#548) +- [759a800a](https://github.com/kubedb/apimachinery/commit/759a800a) Add Subject spec for Certificate (#552) +- [b1552628](https://github.com/kubedb/apimachinery/commit/b1552628) Add email SANs for certificate (#551) +- [fdfad57e](https://github.com/kubedb/apimachinery/commit/fdfad57e) Update to cert-manager@v0.16.0 (#550) +- [3b5e9ece](https://github.com/kubedb/apimachinery/commit/3b5e9ece) Update to Kubernetes v1.18.3 (#549) +- [0c5a1e9b](https://github.com/kubedb/apimachinery/commit/0c5a1e9b) Make ElasticsearchVersion spec.tools optional (#526) +- [01a0b4b3](https://github.com/kubedb/apimachinery/commit/01a0b4b3) Add Conditions Constant for MongoDBOpsRequest (#535) +- [34a9ed61](https://github.com/kubedb/apimachinery/commit/34a9ed61) Update to Kubernetes v1.18.3 (#547) +- [6392f19e](https://github.com/kubedb/apimachinery/commit/6392f19e) Add Storage Engine Support for Percona Server MongoDB (#538) +- [02d205bc](https://github.com/kubedb/apimachinery/commit/02d205bc) Remove extra - from prefix/suffix (#543) +- [06158f51](https://github.com/kubedb/apimachinery/commit/06158f51) Update to Kubernetes v1.18.3 (#542) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0-beta.2](https://github.com/kubedb/cli/releases/tag/v0.14.0-beta.2) + +- [58b39094](https://github.com/kubedb/cli/commit/58b39094) Prepare for release v0.14.0-beta.2 (#484) +- [0f8819ce](https://github.com/kubedb/cli/commit/0f8819ce) Update Kubernetes v1.18.3 dependencies (#483) +- [86a92381](https://github.com/kubedb/cli/commit/86a92381) Update Kubernetes v1.18.3 dependencies (#482) +- [05e5cef2](https://github.com/kubedb/cli/commit/05e5cef2) Update for release Stash@v2020.08.27 (#481) +- [b1aa1dc2](https://github.com/kubedb/cli/commit/b1aa1dc2) Update for release Stash@v2020.08.27-rc.0 (#480) +- [36716efc](https://github.com/kubedb/cli/commit/36716efc) Update for release Stash@v2020.08.26-rc.1 (#479) +- 
[a30f21e0](https://github.com/kubedb/cli/commit/a30f21e0) Update for release Stash@v2020.08.26-rc.0 (#478) +- [836d6227](https://github.com/kubedb/cli/commit/836d6227) Update Kubernetes v1.18.3 dependencies (#477) +- [8a81d715](https://github.com/kubedb/cli/commit/8a81d715) Update Kubernetes v1.18.3 dependencies (#476) +- [7ce2101d](https://github.com/kubedb/cli/commit/7ce2101d) Update Kubernetes v1.18.3 dependencies (#475) +- [3c617e66](https://github.com/kubedb/cli/commit/3c617e66) Update Kubernetes v1.18.3 dependencies (#474) +- [f70b2ba4](https://github.com/kubedb/cli/commit/f70b2ba4) Update Kubernetes v1.18.3 dependencies (#473) +- [ba77ba2b](https://github.com/kubedb/cli/commit/ba77ba2b) Update Kubernetes v1.18.3 dependencies (#472) +- [b296035f](https://github.com/kubedb/cli/commit/b296035f) Use actions/upload-artifact@v2 +- [7bb95619](https://github.com/kubedb/cli/commit/7bb95619) Update to Kubernetes v1.18.3 (#471) +- [6e5789a2](https://github.com/kubedb/cli/commit/6e5789a2) Update to Kubernetes v1.18.3 (#470) +- [9d550ebc](https://github.com/kubedb/cli/commit/9d550ebc) Update to Kubernetes v1.18.3 (#469) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-beta.2](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-beta.2) + +- [3b83c316](https://github.com/kubedb/elasticsearch/commit/3b83c316) Prepare for release v0.14.0-beta.2 (#339) +- [662823ae](https://github.com/kubedb/elasticsearch/commit/662823ae) Update release.yml +- [ada6c2d3](https://github.com/kubedb/elasticsearch/commit/ada6c2d3) Add support for Open-Distro-for-Elasticsearch (#303) +- [a9c7ba33](https://github.com/kubedb/elasticsearch/commit/a9c7ba33) Update Kubernetes v1.18.3 dependencies (#333) +- [c67b1290](https://github.com/kubedb/elasticsearch/commit/c67b1290) Update Kubernetes v1.18.3 dependencies (#332) +- [aa1d64ad](https://github.com/kubedb/elasticsearch/commit/aa1d64ad) Update Kubernetes v1.18.3 dependencies (#331) +- [3d6c3e91](https://github.com/kubedb/elasticsearch/commit/3d6c3e91) Update Kubernetes v1.18.3 dependencies (#330) +- [bb318e74](https://github.com/kubedb/elasticsearch/commit/bb318e74) Update Kubernetes v1.18.3 dependencies (#329) +- [6b6b4d2d](https://github.com/kubedb/elasticsearch/commit/6b6b4d2d) Update Kubernetes v1.18.3 dependencies (#328) +- [06cef782](https://github.com/kubedb/elasticsearch/commit/06cef782) Remove dependency on enterprise operator (#327) +- [20a2c7d4](https://github.com/kubedb/elasticsearch/commit/20a2c7d4) Update to cert-manager v0.16.0 (#326) +- [e767c356](https://github.com/kubedb/elasticsearch/commit/e767c356) Build images in e2e workflow (#325) +- [ae696dbe](https://github.com/kubedb/elasticsearch/commit/ae696dbe) Update to Kubernetes v1.18.3 (#324) +- [a511d8d6](https://github.com/kubedb/elasticsearch/commit/a511d8d6) Allow configuring k8s & db version in e2e tests (#323) +- [a50b503d](https://github.com/kubedb/elasticsearch/commit/a50b503d) Trigger e2e tests on /ok-to-test command (#322) +- [107faff2](https://github.com/kubedb/elasticsearch/commit/107faff2) Update to Kubernetes v1.18.3 (#321) +- [60fb6d9b](https://github.com/kubedb/elasticsearch/commit/60fb6d9b) Update to Kubernetes v1.18.3 (#320) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-beta.2](https://github.com/kubedb/installer/releases/tag/v0.14.0-beta.2) + +- [cb0e278](https://github.com/kubedb/installer/commit/cb0e278) Prepare for release v0.14.0-beta.2 (#128) +- 
[b31ccbf](https://github.com/kubedb/installer/commit/b31ccbf) Update Kubernetes v1.18.3 dependencies (#127) +- [389ce6a](https://github.com/kubedb/installer/commit/389ce6a) Update Kubernetes v1.18.3 dependencies (#126) +- [db6f1e9](https://github.com/kubedb/installer/commit/db6f1e9) Update chart icons +- [9f41f2d](https://github.com/kubedb/installer/commit/9f41f2d) Update Kubernetes v1.18.3 dependencies (#124) +- [004373e](https://github.com/kubedb/installer/commit/004373e) Update Kubernetes v1.18.3 dependencies (#123) +- [e517626](https://github.com/kubedb/installer/commit/e517626) Prefix catalog files with non-patched versions deprecated- (#119) +- [2bf8715](https://github.com/kubedb/installer/commit/2bf8715) Update Kubernetes v1.18.3 dependencies (#121) +- [9a5cc7b](https://github.com/kubedb/installer/commit/9a5cc7b) Update Kubernetes v1.18.3 dependencies (#120) +- [e2f8ebd](https://github.com/kubedb/installer/commit/e2f8ebd) Add MySQL New catalog (#116) +- [72ad85e](https://github.com/kubedb/installer/commit/72ad85e) Update Kubernetes v1.18.3 dependencies (#118) +- [94ebcb2](https://github.com/kubedb/installer/commit/94ebcb2) Update Kubernetes v1.18.3 dependencies (#117) +- [5dc2808](https://github.com/kubedb/installer/commit/5dc2808) Remove excess permission (#115) +- [65b4443](https://github.com/kubedb/installer/commit/65b4443) Update redis exporter image tag +- [7191679](https://github.com/kubedb/installer/commit/7191679) Add support for Redis 6.0.6 (#99) +- [902f00e](https://github.com/kubedb/installer/commit/902f00e) Add Pod `exec` permission in ClusterRole (#102) +- [4a83599](https://github.com/kubedb/installer/commit/4a83599) Update to Kubernetes v1.18.3 (#114) +- [df8412a](https://github.com/kubedb/installer/commit/df8412a) Add Permissions for PVC (#112) +- [99d6e66](https://github.com/kubedb/installer/commit/99d6e66) Update elasticsearchversion crds (#111) +- [57561a3](https://github.com/kubedb/installer/commit/57561a3) Use `percona` as Suffix in MongoDBVersion Name (#110) +- [7706f93](https://github.com/kubedb/installer/commit/7706f93) Update to Kubernetes v1.18.3 (#109) +- [513db6d](https://github.com/kubedb/installer/commit/513db6d) Add Percona MongoDB Server Catalogs (#103) +- [2b10a12](https://github.com/kubedb/installer/commit/2b10a12) Update to Kubernetes v1.18.3 (#108) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.0-beta.2](https://github.com/kubedb/memcached/releases/tag/v0.7.0-beta.2) + +- [b8fe927b](https://github.com/kubedb/memcached/commit/b8fe927b) Prepare for release v0.7.0-beta.2 (#177) +- [0f5014d2](https://github.com/kubedb/memcached/commit/0f5014d2) Update release.yml +- [1b627013](https://github.com/kubedb/memcached/commit/1b627013) Remove updateStrategy field (#176) +- [66f008d6](https://github.com/kubedb/memcached/commit/66f008d6) Update Kubernetes v1.18.3 dependencies (#175) +- [09ff8589](https://github.com/kubedb/memcached/commit/09ff8589) Update Kubernetes v1.18.3 dependencies (#174) +- [92e344d8](https://github.com/kubedb/memcached/commit/92e344d8) Update Kubernetes v1.18.3 dependencies (#173) +- [51e977f3](https://github.com/kubedb/memcached/commit/51e977f3) Update Kubernetes v1.18.3 dependencies (#172) +- [f32d7e9c](https://github.com/kubedb/memcached/commit/f32d7e9c) Update Kubernetes v1.18.3 dependencies (#171) +- [2cdba698](https://github.com/kubedb/memcached/commit/2cdba698) Update Kubernetes v1.18.3 dependencies (#170) +- [9486876e](https://github.com/kubedb/memcached/commit/9486876e) Update Kubernetes v1.18.3 
dependencies (#169) +- [81648447](https://github.com/kubedb/memcached/commit/81648447) Update Kubernetes v1.18.3 dependencies (#168) +- [e9c3f98d](https://github.com/kubedb/memcached/commit/e9c3f98d) Fix install target +- [6dff8f7b](https://github.com/kubedb/memcached/commit/6dff8f7b) Remove dependency on enterprise operator (#167) +- [707d4d83](https://github.com/kubedb/memcached/commit/707d4d83) Build images in e2e workflow (#166) +- [ff1b144e](https://github.com/kubedb/memcached/commit/ff1b144e) Allow configuring k8s & db version in e2e tests (#165) +- [0b1699d8](https://github.com/kubedb/memcached/commit/0b1699d8) Update to Kubernetes v1.18.3 (#164) +- [b141122a](https://github.com/kubedb/memcached/commit/b141122a) Trigger e2e tests on /ok-to-test command (#163) +- [36b03266](https://github.com/kubedb/memcached/commit/36b03266) Update to Kubernetes v1.18.3 (#162) +- [3ede9dcc](https://github.com/kubedb/memcached/commit/3ede9dcc) Update to Kubernetes v1.18.3 (#161) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-beta.2](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-beta.2) + +- [8fd389de](https://github.com/kubedb/mongodb/commit/8fd389de) Prepare for release v0.7.0-beta.2 (#234) +- [3e4981ee](https://github.com/kubedb/mongodb/commit/3e4981ee) Update release.yml +- [c1d5cdb8](https://github.com/kubedb/mongodb/commit/c1d5cdb8) Always use OnDelete UpdateStrategy (#233) +- [a135b2c7](https://github.com/kubedb/mongodb/commit/a135b2c7) Fix build (#232) +- [cfb1788b](https://github.com/kubedb/mongodb/commit/cfb1788b) Use updated certificate spec (#221) +- [486e820a](https://github.com/kubedb/mongodb/commit/486e820a) Remove `storage` Validation Check (#231) +- [12e621ed](https://github.com/kubedb/mongodb/commit/12e621ed) Update Kubernetes v1.18.3 dependencies (#225) +- [0d7ea7d7](https://github.com/kubedb/mongodb/commit/0d7ea7d7) Update Kubernetes v1.18.3 dependencies (#224) +- [e79d1dfe](https://github.com/kubedb/mongodb/commit/e79d1dfe) Update Kubernetes v1.18.3 dependencies (#223) +- [d0ff5e1d](https://github.com/kubedb/mongodb/commit/d0ff5e1d) Update Kubernetes v1.18.3 dependencies (#222) +- [d22ade32](https://github.com/kubedb/mongodb/commit/d22ade32) Add `inMemory` Storage Engine Support for Percona MongoDB Server (#205) +- [90847996](https://github.com/kubedb/mongodb/commit/90847996) Update Kubernetes v1.18.3 dependencies (#220) +- [1098974f](https://github.com/kubedb/mongodb/commit/1098974f) Update Kubernetes v1.18.3 dependencies (#219) +- [e7d1407a](https://github.com/kubedb/mongodb/commit/e7d1407a) Fix install target +- [a5742d11](https://github.com/kubedb/mongodb/commit/a5742d11) Remove dependency on enterprise operator (#218) +- [1de4fbee](https://github.com/kubedb/mongodb/commit/1de4fbee) Build images in e2e workflow (#217) +- [b736c57e](https://github.com/kubedb/mongodb/commit/b736c57e) Update to Kubernetes v1.18.3 (#216) +- [180ae28d](https://github.com/kubedb/mongodb/commit/180ae28d) Allow configuring k8s & db version in e2e tests (#215) +- [c2f09a6f](https://github.com/kubedb/mongodb/commit/c2f09a6f) Trigger e2e tests on /ok-to-test command (#214) +- [c1c7fa39](https://github.com/kubedb/mongodb/commit/c1c7fa39) Update to Kubernetes v1.18.3 (#213) +- [8fb6cf78](https://github.com/kubedb/mongodb/commit/8fb6cf78) Update to Kubernetes v1.18.3 (#212) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.0-beta.2](https://github.com/kubedb/mysql/releases/tag/v0.7.0-beta.2) + +- 
[6010c034](https://github.com/kubedb/mysql/commit/6010c034) Prepare for release v0.7.0-beta.2 (#224) +- [4b530066](https://github.com/kubedb/mysql/commit/4b530066) Update release.yml +- [184a6cbc](https://github.com/kubedb/mysql/commit/184a6cbc) Update dependencies (#223) +- [903b13b6](https://github.com/kubedb/mysql/commit/903b13b6) Always use OnDelete update strategy +- [1c10224a](https://github.com/kubedb/mysql/commit/1c10224a) Update Kubernetes v1.18.3 dependencies (#222) +- [4e9e5e44](https://github.com/kubedb/mysql/commit/4e9e5e44) Added TLS/SSL Configuration in MySQL Server (#204) +- [d08209b8](https://github.com/kubedb/mysql/commit/d08209b8) Use username/password constants from core/v1 +- [87238c42](https://github.com/kubedb/mysql/commit/87238c42) Update MySQL vendor for changes of prometheus coreos operator (#216) +- [999005ed](https://github.com/kubedb/mysql/commit/999005ed) Update Kubernetes v1.18.3 dependencies (#215) +- [3eb5086e](https://github.com/kubedb/mysql/commit/3eb5086e) Update Kubernetes v1.18.3 dependencies (#214) +- [cd58f276](https://github.com/kubedb/mysql/commit/cd58f276) Update Kubernetes v1.18.3 dependencies (#213) +- [4dcfcd14](https://github.com/kubedb/mysql/commit/4dcfcd14) Update Kubernetes v1.18.3 dependencies (#212) +- [d41015c9](https://github.com/kubedb/mysql/commit/d41015c9) Update Kubernetes v1.18.3 dependencies (#211) +- [4350cb79](https://github.com/kubedb/mysql/commit/4350cb79) Update Kubernetes v1.18.3 dependencies (#210) +- [617af851](https://github.com/kubedb/mysql/commit/617af851) Fix install target +- [fc308cc3](https://github.com/kubedb/mysql/commit/fc308cc3) Remove dependency on enterprise operator (#209) +- [1b717aee](https://github.com/kubedb/mysql/commit/1b717aee) Detect primary pod in MySQL group replication (#190) +- [c3e516f4](https://github.com/kubedb/mysql/commit/c3e516f4) Support MySQL new version for group replication and standalone (#189) +- [8bedade3](https://github.com/kubedb/mysql/commit/8bedade3) Build images in e2e workflow (#208) +- [02c9434c](https://github.com/kubedb/mysql/commit/02c9434c) Allow configuring k8s & db version in e2e tests (#207) +- [ae5d757c](https://github.com/kubedb/mysql/commit/ae5d757c) Update to Kubernetes v1.18.3 (#206) +- [16bdc23f](https://github.com/kubedb/mysql/commit/16bdc23f) Trigger e2e tests on /ok-to-test command (#205) +- [7be13878](https://github.com/kubedb/mysql/commit/7be13878) Update to Kubernetes v1.18.3 (#203) +- [d69fe478](https://github.com/kubedb/mysql/commit/d69fe478) Update to Kubernetes v1.18.3 (#202) + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-beta.2](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-beta.2) + +- [eb878dc](https://github.com/kubedb/mysql-replication-mode-detector/commit/eb878dc) Prepare for release v0.1.0-beta.2 (#21) +- [6c214b8](https://github.com/kubedb/mysql-replication-mode-detector/commit/6c214b8) Update Kubernetes v1.18.3 dependencies (#19) +- [00800e8](https://github.com/kubedb/mysql-replication-mode-detector/commit/00800e8) Update Kubernetes v1.18.3 dependencies (#18) +- [373ab6d](https://github.com/kubedb/mysql-replication-mode-detector/commit/373ab6d) Update Kubernetes v1.18.3 dependencies (#17) +- [8b61313](https://github.com/kubedb/mysql-replication-mode-detector/commit/8b61313) Update Kubernetes v1.18.3 dependencies (#16) +- [f2a68e3](https://github.com/kubedb/mysql-replication-mode-detector/commit/f2a68e3) Update Kubernetes v1.18.3 
dependencies (#15) +- [3bce396](https://github.com/kubedb/mysql-replication-mode-detector/commit/3bce396) Update Kubernetes v1.18.3 dependencies (#14) +- [32603a2](https://github.com/kubedb/mysql-replication-mode-detector/commit/32603a2) Don't push binary with release + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-beta.2](https://github.com/kubedb/operator/releases/tag/v0.14.0-beta.2) + +- [a13ca48b](https://github.com/kubedb/operator/commit/a13ca48b) Prepare for release v0.14.0-beta.2 (#281) +- [fc6c1e9e](https://github.com/kubedb/operator/commit/fc6c1e9e) Update Kubernetes v1.18.3 dependencies (#280) +- [cd74716b](https://github.com/kubedb/operator/commit/cd74716b) Update Kubernetes v1.18.3 dependencies (#275) +- [5b3c76ed](https://github.com/kubedb/operator/commit/5b3c76ed) Update Kubernetes v1.18.3 dependencies (#274) +- [397a7e60](https://github.com/kubedb/operator/commit/397a7e60) Update Kubernetes v1.18.3 dependencies (#273) +- [616ea78d](https://github.com/kubedb/operator/commit/616ea78d) Update Kubernetes v1.18.3 dependencies (#272) +- [b7b0d2b9](https://github.com/kubedb/operator/commit/b7b0d2b9) Update Kubernetes v1.18.3 dependencies (#271) +- [3afadb7a](https://github.com/kubedb/operator/commit/3afadb7a) Update Kubernetes v1.18.3 dependencies (#270) +- [60b15632](https://github.com/kubedb/operator/commit/60b15632) Remove dependency on enterprise operator (#269) +- [b3648cde](https://github.com/kubedb/operator/commit/b3648cde) Build images in e2e workflow (#268) +- [73dee065](https://github.com/kubedb/operator/commit/73dee065) Update to Kubernetes v1.18.3 (#266) +- [a8a42ab8](https://github.com/kubedb/operator/commit/a8a42ab8) Allow configuring k8s in e2e tests (#267) +- [4b7d6ee3](https://github.com/kubedb/operator/commit/4b7d6ee3) Trigger e2e tests on /ok-to-test command (#265) +- [024fc40a](https://github.com/kubedb/operator/commit/024fc40a) Update to Kubernetes v1.18.3 (#264) +- [bd1da662](https://github.com/kubedb/operator/commit/bd1da662) Update to Kubernetes v1.18.3 (#263) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.0-beta.2](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-beta.2) + +- [471b6def](https://github.com/kubedb/percona-xtradb/commit/471b6def) Prepare for release v0.1.0-beta.2 (#60) +- [9423a70f](https://github.com/kubedb/percona-xtradb/commit/9423a70f) Update release.yml +- [85d1d036](https://github.com/kubedb/percona-xtradb/commit/85d1d036) Use updated apis (#59) +- [6811b8dc](https://github.com/kubedb/percona-xtradb/commit/6811b8dc) Update Kubernetes v1.18.3 dependencies (#53) +- [4212d2a0](https://github.com/kubedb/percona-xtradb/commit/4212d2a0) Update Kubernetes v1.18.3 dependencies (#52) +- [659d646c](https://github.com/kubedb/percona-xtradb/commit/659d646c) Update Kubernetes v1.18.3 dependencies (#51) +- [a868e0c3](https://github.com/kubedb/percona-xtradb/commit/a868e0c3) Update Kubernetes v1.18.3 dependencies (#50) +- [162e6ca4](https://github.com/kubedb/percona-xtradb/commit/162e6ca4) Update Kubernetes v1.18.3 dependencies (#49) +- [a7fa1fbf](https://github.com/kubedb/percona-xtradb/commit/a7fa1fbf) Update Kubernetes v1.18.3 dependencies (#48) +- [b6a4583f](https://github.com/kubedb/percona-xtradb/commit/b6a4583f) Remove dependency on enterprise operator (#47) +- [a8909b38](https://github.com/kubedb/percona-xtradb/commit/a8909b38) Allow configuring k8s & db version in e2e tests (#46) +- [4d79d26e](https://github.com/kubedb/percona-xtradb/commit/4d79d26e) Update 
to Kubernetes v1.18.3 (#45) +- [189f3212](https://github.com/kubedb/percona-xtradb/commit/189f3212) Trigger e2e tests on /ok-to-test command (#44) +- [a037bd03](https://github.com/kubedb/percona-xtradb/commit/a037bd03) Update to Kubernetes v1.18.3 (#43) +- [33cabdf3](https://github.com/kubedb/percona-xtradb/commit/33cabdf3) Update to Kubernetes v1.18.3 (#42) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-beta.2](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-beta.2) + +- [f92f350](https://github.com/kubedb/pg-leader-election/commit/f92f350) Update Kubernetes v1.18.3 dependencies (#17) +- [65c551f](https://github.com/kubedb/pg-leader-election/commit/65c551f) Update Kubernetes v1.18.3 dependencies (#16) +- [c7b516d](https://github.com/kubedb/pg-leader-election/commit/c7b516d) Update Kubernetes v1.18.3 dependencies (#15) +- [8440ee3](https://github.com/kubedb/pg-leader-election/commit/8440ee3) Update Kubernetes v1.18.3 dependencies (#14) +- [33b175b](https://github.com/kubedb/pg-leader-election/commit/33b175b) Update Kubernetes v1.18.3 dependencies (#13) +- [102fbfa](https://github.com/kubedb/pg-leader-election/commit/102fbfa) Update Kubernetes v1.18.3 dependencies (#12) +- [d850da1](https://github.com/kubedb/pg-leader-election/commit/d850da1) Update Kubernetes v1.18.3 dependencies (#11) +- [0505eaf](https://github.com/kubedb/pg-leader-election/commit/0505eaf) Update Kubernetes v1.18.3 dependencies (#10) +- [d46e56c](https://github.com/kubedb/pg-leader-election/commit/d46e56c) Use actions/upload-artifact@v2 +- [37fb860](https://github.com/kubedb/pg-leader-election/commit/37fb860) Update to Kubernetes v1.18.3 (#9) +- [7566bf3](https://github.com/kubedb/pg-leader-election/commit/7566bf3) Update to Kubernetes v1.18.3 (#8) +- [07c4965](https://github.com/kubedb/pg-leader-election/commit/07c4965) Update to Kubernetes v1.18.3 (#7) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-beta.2](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-beta.2) + +- [e083d55](https://github.com/kubedb/pgbouncer/commit/e083d55) Prepare for release v0.1.0-beta.2 (#41) +- [fe84790](https://github.com/kubedb/pgbouncer/commit/fe84790) Update release.yml +- [ddf5a85](https://github.com/kubedb/pgbouncer/commit/ddf5a85) Use updated certificate spec (#35) +- [d5cd5bf](https://github.com/kubedb/pgbouncer/commit/d5cd5bf) Update Kubernetes v1.18.3 dependencies (#39) +- [21693c7](https://github.com/kubedb/pgbouncer/commit/21693c7) Update Kubernetes v1.18.3 dependencies (#38) +- [39ad48d](https://github.com/kubedb/pgbouncer/commit/39ad48d) Update Kubernetes v1.18.3 dependencies (#37) +- [7f1ecc7](https://github.com/kubedb/pgbouncer/commit/7f1ecc7) Update Kubernetes v1.18.3 dependencies (#36) +- [8d9d379](https://github.com/kubedb/pgbouncer/commit/8d9d379) Update Kubernetes v1.18.3 dependencies (#34) +- [c9b8300](https://github.com/kubedb/pgbouncer/commit/c9b8300) Update Kubernetes v1.18.3 dependencies (#33) +- [66c72a4](https://github.com/kubedb/pgbouncer/commit/66c72a4) Remove dependency on enterprise operator (#32) +- [757dc10](https://github.com/kubedb/pgbouncer/commit/757dc10) Update to cert-manager v0.16.0 (#30) +- [0a183d1](https://github.com/kubedb/pgbouncer/commit/0a183d1) Build images in e2e workflow (#29) +- [ca61e88](https://github.com/kubedb/pgbouncer/commit/ca61e88) Allow configuring k8s & db version in e2e tests (#28) +- [a87278b](https://github.com/kubedb/pgbouncer/commit/a87278b) Update to Kubernetes v1.18.3 
(#27) +- [5abe86f](https://github.com/kubedb/pgbouncer/commit/5abe86f) Fix formatting +- [845f7a3](https://github.com/kubedb/pgbouncer/commit/845f7a3) Trigger e2e tests on /ok-to-test command (#26) +- [2cc23c0](https://github.com/kubedb/pgbouncer/commit/2cc23c0) Fix cert-manager integration for PgBouncer (#25) +- [2a148c2](https://github.com/kubedb/pgbouncer/commit/2a148c2) Update to Kubernetes v1.18.3 (#24) +- [f6eb812](https://github.com/kubedb/pgbouncer/commit/f6eb812) Update Makefile.env + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-beta.2](https://github.com/kubedb/postgres/releases/tag/v0.14.0-beta.2) + +- [6e6fe6fe](https://github.com/kubedb/postgres/commit/6e6fe6fe) Prepare for release v0.14.0-beta.2 (#345) +- [5ee33bb8](https://github.com/kubedb/postgres/commit/5ee33bb8) Update release.yml +- [9208f754](https://github.com/kubedb/postgres/commit/9208f754) Always use OnDelete update strategy +- [74367d01](https://github.com/kubedb/postgres/commit/74367d01) Update Kubernetes v1.18.3 dependencies (#344) +- [01843533](https://github.com/kubedb/postgres/commit/01843533) Update Kubernetes v1.18.3 dependencies (#343) +- [34a3a460](https://github.com/kubedb/postgres/commit/34a3a460) Update Kubernetes v1.18.3 dependencies (#338) +- [455bf56a](https://github.com/kubedb/postgres/commit/455bf56a) Update Kubernetes v1.18.3 dependencies (#337) +- [960d1efa](https://github.com/kubedb/postgres/commit/960d1efa) Update Kubernetes v1.18.3 dependencies (#336) +- [9b428745](https://github.com/kubedb/postgres/commit/9b428745) Update Kubernetes v1.18.3 dependencies (#335) +- [cc95c5f5](https://github.com/kubedb/postgres/commit/cc95c5f5) Update Kubernetes v1.18.3 dependencies (#334) +- [c0694d83](https://github.com/kubedb/postgres/commit/c0694d83) Update Kubernetes v1.18.3 dependencies (#333) +- [8d0977d3](https://github.com/kubedb/postgres/commit/8d0977d3) Remove dependency on enterprise operator (#332) +- [daa5b77c](https://github.com/kubedb/postgres/commit/daa5b77c) Build images in e2e workflow (#331) +- [197f1b2b](https://github.com/kubedb/postgres/commit/197f1b2b) Update to Kubernetes v1.18.3 (#329) +- [e732d319](https://github.com/kubedb/postgres/commit/e732d319) Allow configuring k8s & db version in e2e tests (#330) +- [f37180ec](https://github.com/kubedb/postgres/commit/f37180ec) Trigger e2e tests on /ok-to-test command (#328) +- [becb3e2c](https://github.com/kubedb/postgres/commit/becb3e2c) Update to Kubernetes v1.18.3 (#327) +- [91bf7440](https://github.com/kubedb/postgres/commit/91bf7440) Update to Kubernetes v1.18.3 (#326) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-beta.2](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-beta.2) + +- [f86bb6cd](https://github.com/kubedb/proxysql/commit/f86bb6cd) Prepare for release v0.1.0-beta.2 (#46) +- [e74f3803](https://github.com/kubedb/proxysql/commit/e74f3803) Update release.yml +- [7f5349cc](https://github.com/kubedb/proxysql/commit/7f5349cc) Use updated apis (#45) +- [27faefef](https://github.com/kubedb/proxysql/commit/27faefef) Update for release Stash@v2020.08.27 (#43) +- [65bc5bca](https://github.com/kubedb/proxysql/commit/65bc5bca) Update for release Stash@v2020.08.27-rc.0 (#42) +- [833ac78b](https://github.com/kubedb/proxysql/commit/833ac78b) Update for release Stash@v2020.08.26-rc.1 (#41) +- [fe13ce42](https://github.com/kubedb/proxysql/commit/fe13ce42) Update for release Stash@v2020.08.26-rc.0 (#40) +- [b1a72843](https://github.com/kubedb/proxysql/commit/b1a72843) Update 
Kubernetes v1.18.3 dependencies (#39) +- [a9c40618](https://github.com/kubedb/proxysql/commit/a9c40618) Update Kubernetes v1.18.3 dependencies (#38) +- [664c974a](https://github.com/kubedb/proxysql/commit/664c974a) Update Kubernetes v1.18.3 dependencies (#37) +- [69ed46d5](https://github.com/kubedb/proxysql/commit/69ed46d5) Update Kubernetes v1.18.3 dependencies (#36) +- [a93d80d4](https://github.com/kubedb/proxysql/commit/a93d80d4) Update Kubernetes v1.18.3 dependencies (#35) +- [84fc9e37](https://github.com/kubedb/proxysql/commit/84fc9e37) Update Kubernetes v1.18.3 dependencies (#34) +- [b09f89d0](https://github.com/kubedb/proxysql/commit/b09f89d0) Remove dependency on enterprise operator (#33) +- [78ad5a88](https://github.com/kubedb/proxysql/commit/78ad5a88) Build images in e2e workflow (#32) +- [6644058e](https://github.com/kubedb/proxysql/commit/6644058e) Update to Kubernetes v1.18.3 (#30) +- [2c03dadd](https://github.com/kubedb/proxysql/commit/2c03dadd) Allow configuring k8s & db version in e2e tests (#31) +- [2c6e04bc](https://github.com/kubedb/proxysql/commit/2c6e04bc) Trigger e2e tests on /ok-to-test command (#29) +- [c7830af8](https://github.com/kubedb/proxysql/commit/c7830af8) Update to Kubernetes v1.18.3 (#28) +- [f2da8746](https://github.com/kubedb/proxysql/commit/f2da8746) Update to Kubernetes v1.18.3 (#27) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-beta.2](https://github.com/kubedb/redis/releases/tag/v0.7.0-beta.2) + +- [73cf267e](https://github.com/kubedb/redis/commit/73cf267e) Prepare for release v0.7.0-beta.2 (#192) +- [d2911ea9](https://github.com/kubedb/redis/commit/d2911ea9) Update release.yml +- [c76ee46e](https://github.com/kubedb/redis/commit/c76ee46e) Update dependencies (#191) +- [0b030534](https://github.com/kubedb/redis/commit/0b030534) Fix build +- [408216ab](https://github.com/kubedb/redis/commit/408216ab) Add support for Redis v6.0.6 and TLS (#180) +- [944327df](https://github.com/kubedb/redis/commit/944327df) Update Kubernetes v1.18.3 dependencies (#187) +- [40b7cde6](https://github.com/kubedb/redis/commit/40b7cde6) Update Kubernetes v1.18.3 dependencies (#186) +- [f2bf110d](https://github.com/kubedb/redis/commit/f2bf110d) Update Kubernetes v1.18.3 dependencies (#184) +- [61485cfa](https://github.com/kubedb/redis/commit/61485cfa) Update Kubernetes v1.18.3 dependencies (#183) +- [184ae35d](https://github.com/kubedb/redis/commit/184ae35d) Update Kubernetes v1.18.3 dependencies (#182) +- [bc72b51b](https://github.com/kubedb/redis/commit/bc72b51b) Update Kubernetes v1.18.3 dependencies (#181) +- [ca540560](https://github.com/kubedb/redis/commit/ca540560) Remove dependency on enterprise operator (#179) +- [09bade2e](https://github.com/kubedb/redis/commit/09bade2e) Allow configuring k8s & db version in e2e tests (#178) +- [2bafb114](https://github.com/kubedb/redis/commit/2bafb114) Update to Kubernetes v1.18.3 (#177) +- [b2fe59ef](https://github.com/kubedb/redis/commit/b2fe59ef) Trigger e2e tests on /ok-to-test command (#176) +- [df5131e1](https://github.com/kubedb/redis/commit/df5131e1) Update to Kubernetes v1.18.3 (#175) +- [a404ae08](https://github.com/kubedb/redis/commit/a404ae08) Update to Kubernetes v1.18.3 (#174) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.10.24-beta.0.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.24-beta.0.md new file mode 100644 index 0000000000..b723e3d834 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.24-beta.0.md @@ -0,0 +1,742 @@ +--- +title: Changelog | KubeDB 
+description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.24-beta.0 + name: Changelog-v2020.10.24-beta.0 + parent: welcome + weight: 20201024 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.24-beta.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.24-beta.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.10.24-beta.0 (2020-10-24) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0-beta.4](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0-beta.4) + +- [d045bd2d](https://github.com/appscode/kubedb-enterprise/commit/d045bd2d) Prepare for release v0.1.0-beta.4 (#78) +- [5fbe4b48](https://github.com/appscode/kubedb-enterprise/commit/5fbe4b48) Update KubeDB api (#73) +- [00db6203](https://github.com/appscode/kubedb-enterprise/commit/00db6203) Replace getConditions with kmapi.NewCondition (#71) +- [aea1f64a](https://github.com/appscode/kubedb-enterprise/commit/aea1f64a) Update License header (#70) +- [1c15c2b8](https://github.com/appscode/kubedb-enterprise/commit/1c15c2b8) Add RedisOpsRequest Controller (#28) +- [5cedb8fd](https://github.com/appscode/kubedb-enterprise/commit/5cedb8fd) Add MySQL OpsRequest Controller (#14) +- [f0f282c0](https://github.com/appscode/kubedb-enterprise/commit/f0f282c0) Add Reconfigure TLS (#69) +- [cea85618](https://github.com/appscode/kubedb-enterprise/commit/cea85618) Add Restart Operation, Readiness Criteria and Remove Configuration (#59) +- [68cd3dcc](https://github.com/appscode/kubedb-enterprise/commit/68cd3dcc) Update repository config (#66) +- [feef09ab](https://github.com/appscode/kubedb-enterprise/commit/feef09ab) Publish docker images to ghcr.io (#65) +- [199d4bd2](https://github.com/appscode/kubedb-enterprise/commit/199d4bd2) Update repository config (#60) +- [2ae29633](https://github.com/appscode/kubedb-enterprise/commit/2ae29633) Reconfigure MongoDB with Vertical Scaling (#57) +- [9a98fc29](https://github.com/appscode/kubedb-enterprise/commit/9a98fc29) Fix MongoDB Upgrade (#51) +- [9a1a792a](https://github.com/appscode/kubedb-enterprise/commit/9a1a792a) Integrate cert-manager for Elasticsearch (#56) +- [b02cda77](https://github.com/appscode/kubedb-enterprise/commit/b02cda77) Update repository config (#54) +- [947c33e2](https://github.com/appscode/kubedb-enterprise/commit/947c33e2) Update repository config (#52) +- [12edf6f1](https://github.com/appscode/kubedb-enterprise/commit/12edf6f1) Update Kubernetes v1.18.9 dependencies (#49) +- [08f6a4ac](https://github.com/appscode/kubedb-enterprise/commit/08f6a4ac) Add license verifier (#50) +- [30ceb1a5](https://github.com/appscode/kubedb-enterprise/commit/30ceb1a5) Add MongoDBOpsRequest Controller (#20) +- [164ed838](https://github.com/appscode/kubedb-enterprise/commit/164ed838) Use cert-manager v1 api (#47) +- [7612ec19](https://github.com/appscode/kubedb-enterprise/commit/7612ec19) Update apis (#45) +- [00550fe0](https://github.com/appscode/kubedb-enterprise/commit/00550fe0) Dynamically Generate Cluster Domain (#43) +- [e1c3193f](https://github.com/appscode/kubedb-enterprise/commit/e1c3193f) Use updated certstore & blobfs (#42) +- [0d5d05bb](https://github.com/appscode/kubedb-enterprise/commit/0d5d05bb) Add TLS support for redis (#35) 
+- [bb53fc86](https://github.com/appscode/kubedb-enterprise/commit/bb53fc86) Various fixes (#41) +- [023c5dfd](https://github.com/appscode/kubedb-enterprise/commit/023c5dfd) Add TLS/SSL configuration using Cert Manager for MySQL (#34) +- [e1795b97](https://github.com/appscode/kubedb-enterprise/commit/e1795b97) Update certificate spec for MongoDB and PgBouncer (#40) +- [5e82443d](https://github.com/appscode/kubedb-enterprise/commit/5e82443d) Update new Subject spec for certificates (#38) +- [099abfb8](https://github.com/appscode/kubedb-enterprise/commit/099abfb8) Update to cert-manager v0.16.0 (#37) +- [b14346d3](https://github.com/appscode/kubedb-enterprise/commit/b14346d3) Update to Kubernetes v1.18.3 (#36) +- [c569a8eb](https://github.com/appscode/kubedb-enterprise/commit/c569a8eb) Fix cert-manager integration for PgBouncer (#32) +- [28548950](https://github.com/appscode/kubedb-enterprise/commit/28548950) Update to Kubernetes v1.18.3 (#31) +- [1ba9573e](https://github.com/appscode/kubedb-enterprise/commit/1ba9573e) Include Makefile.env (#30) +- [54133b44](https://github.com/appscode/kubedb-enterprise/commit/54133b44) Disable e2e tests (#29) +- [3939ece7](https://github.com/appscode/kubedb-enterprise/commit/3939ece7) Update to Kubernetes v1.18.3 (#27) +- [95c6b535](https://github.com/appscode/kubedb-enterprise/commit/95c6b535) Update .kodiak.toml +- [a88032cd](https://github.com/appscode/kubedb-enterprise/commit/a88032cd) Add script to update release tracker on pr merge (#26) +- [a90f68e7](https://github.com/appscode/kubedb-enterprise/commit/a90f68e7) Rename docker image to kubedb-enterprise +- [ccb9967f](https://github.com/appscode/kubedb-enterprise/commit/ccb9967f) Create .kodiak.toml +- [fb6222ab](https://github.com/appscode/kubedb-enterprise/commit/fb6222ab) Format CI files +- [93756db8](https://github.com/appscode/kubedb-enterprise/commit/93756db8) Fix e2e tests (#25) +- [48ada32b](https://github.com/appscode/kubedb-enterprise/commit/48ada32b) Fix e2e tests using self-hosted GitHub action runners (#23) +- [12b15d00](https://github.com/appscode/kubedb-enterprise/commit/12b15d00) Update to kubedb.dev/apimachinery@v0.14.0-alpha.6 (#24) +- [9f32ab11](https://github.com/appscode/kubedb-enterprise/commit/9f32ab11) Update to Kubernetes v1.18.3 (#21) +- [cd3422a7](https://github.com/appscode/kubedb-enterprise/commit/cd3422a7) Use CRD v1 for Kubernetes >= 1.16 (#19) +- [4cc2f714](https://github.com/appscode/kubedb-enterprise/commit/4cc2f714) Update to Kubernetes v1.18.3 (#18) +- [7fb86dfb](https://github.com/appscode/kubedb-enterprise/commit/7fb86dfb) Update cert-manager util +- [1c8e1e32](https://github.com/appscode/kubedb-enterprise/commit/1c8e1e32) Configure GCR Docker credential helper in release pipeline +- [cd74a0c2](https://github.com/appscode/kubedb-enterprise/commit/cd74a0c2) Vendor kubedb.dev/apimachinery@v0.14.0-beta.0 +- [5522f7ef](https://github.com/appscode/kubedb-enterprise/commit/5522f7ef) Revendor kubedb.dev/apimachinery@master +- [e52cecfb](https://github.com/appscode/kubedb-enterprise/commit/e52cecfb) Update crazy-max/ghaction-docker-buildx flag +- [9ce414ca](https://github.com/appscode/kubedb-enterprise/commit/9ce414ca) Merge pull request #17 from appscode/x7 +- [1938de61](https://github.com/appscode/kubedb-enterprise/commit/1938de61) Remove existing cluster +- [262dae05](https://github.com/appscode/kubedb-enterprise/commit/262dae05) Remove support for k8s 1.11 +- [a00f342c](https://github.com/appscode/kubedb-enterprise/commit/a00f342c) Run e2e tests on GitHub actions +- 
[b615b1ac](https://github.com/appscode/kubedb-enterprise/commit/b615b1ac) Use GCR_SERVICE_ACCOUNT_JSON_KEY env in CI +- [41668265](https://github.com/appscode/kubedb-enterprise/commit/41668265) Use gcr.io/appscode as docker registry (#16) +- [2e5df236](https://github.com/appscode/kubedb-enterprise/commit/2e5df236) Run on self-hosted hosts +- [3da6adef](https://github.com/appscode/kubedb-enterprise/commit/3da6adef) Store enterprise images in `gcr.io/appscode` (#15) +- [bd4a8eb1](https://github.com/appscode/kubedb-enterprise/commit/bd4a8eb1) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12) +- [c5436b50](https://github.com/appscode/kubedb-enterprise/commit/c5436b50) Don't handle deleted objects. (#11) +- [ee5eea66](https://github.com/appscode/kubedb-enterprise/commit/ee5eea66) Fix MongoDB cert-manager integration (#10) +- [105f08b8](https://github.com/appscode/kubedb-enterprise/commit/105f08b8) Add cert-manager integration for MongoDB (#9) +- [b2a3af53](https://github.com/appscode/kubedb-enterprise/commit/b2a3af53) Refactor PgBouncer controller into its pkg (#8) +- [b0e90f75](https://github.com/appscode/kubedb-enterprise/commit/b0e90f75) Use SecretInformer from apimachinery (#5) +- [8dabbb1b](https://github.com/appscode/kubedb-enterprise/commit/8dabbb1b) Use non-deprecated Exporter fields (#4) +- [de22842e](https://github.com/appscode/kubedb-enterprise/commit/de22842e) Cert-Manager support for PgBouncer [Client TLS] (#2) +- [1a6794b7](https://github.com/appscode/kubedb-enterprise/commit/1a6794b7) Fix plain text secret in exporter container of StatefulSet (#5) +- [ab104a9f](https://github.com/appscode/kubedb-enterprise/commit/ab104a9f) Update client-go to kubernetes-1.16.3 (#7) +- [68dbb142](https://github.com/appscode/kubedb-enterprise/commit/68dbb142) Use charts to install operator (#6) +- [30e3e729](https://github.com/appscode/kubedb-enterprise/commit/30e3e729) Add add-license make target +- [6c1a78a0](https://github.com/appscode/kubedb-enterprise/commit/6c1a78a0) Enable e2e tests in GitHub actions (#4) +- [0960f805](https://github.com/appscode/kubedb-enterprise/commit/0960f805) Initial implementation (#2) +- [a8a9b1db](https://github.com/appscode/kubedb-enterprise/commit/a8a9b1db) Update go.yml +- [bc3b2624](https://github.com/appscode/kubedb-enterprise/commit/bc3b2624) Enable GitHub actions +- [2e33db2b](https://github.com/appscode/kubedb-enterprise/commit/2e33db2b) Clone kubedb/postgres repo (#1) +- [45a7cace](https://github.com/appscode/kubedb-enterprise/commit/45a7cace) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-beta.4](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-beta.4) + +- [61b26532](https://github.com/kubedb/apimachinery/commit/61b26532) Add MySQL constants (#633) +- [42888647](https://github.com/kubedb/apimachinery/commit/42888647) Update Kubernetes v1.18.9 dependencies (#632) +- [a57a7df5](https://github.com/kubedb/apimachinery/commit/a57a7df5) Set prx as ProxySQL short code (#631) +- [282992ea](https://github.com/kubedb/apimachinery/commit/282992ea) Update for release Stash@v2020.10.21 (#630) +- [5f17e1b4](https://github.com/kubedb/apimachinery/commit/5f17e1b4) Set default CA secret name even if the SSL is disabled. 
(#624) +- [c3710b61](https://github.com/kubedb/apimachinery/commit/c3710b61) Add host functions for different components of MongoDB (#625) +- [028d939d](https://github.com/kubedb/apimachinery/commit/028d939d) Refine api (#629) +- [4f4cfb3b](https://github.com/kubedb/apimachinery/commit/4f4cfb3b) Update Kubernetes v1.18.9 dependencies (#626) +- [47eaa486](https://github.com/kubedb/apimachinery/commit/47eaa486) Add MongoDBCustomConfigFile constant +- [5201c39b](https://github.com/kubedb/apimachinery/commit/5201c39b) Update MySQL ops request custom config api (#623) +- [06c2076f](https://github.com/kubedb/apimachinery/commit/06c2076f) Rename redis ConfigMapName to ConfigSecretName +- [0d4040b4](https://github.com/kubedb/apimachinery/commit/0d4040b4) API refinement (#622) +- [2eabe4c2](https://github.com/kubedb/apimachinery/commit/2eabe4c2) Update Kubernetes v1.18.9 dependencies (#621) +- [ac3ff1a6](https://github.com/kubedb/apimachinery/commit/ac3ff1a6) Handle halted condition (#620) +- [8ed26973](https://github.com/kubedb/apimachinery/commit/8ed26973) Update constants for Elasticsearch conditions (#618) +- [97c32f71](https://github.com/kubedb/apimachinery/commit/97c32f71) Use core/v1 ConditionStatus (#619) +- [304c48b8](https://github.com/kubedb/apimachinery/commit/304c48b8) Update Kubernetes v1.18.9 dependencies (#617) +- [a841401e](https://github.com/kubedb/apimachinery/commit/a841401e) Fix StatefulSet controller (#616) +- [517285ea](https://github.com/kubedb/apimachinery/commit/517285ea) Add spec.init.initialized field (#615) +- [057d3aef](https://github.com/kubedb/apimachinery/commit/057d3aef) Implement ReplicasAreReady (#614) +- [32105113](https://github.com/kubedb/apimachinery/commit/32105113) Update appcatalog dependency +- [34bf142e](https://github.com/kubedb/apimachinery/commit/34bf142e) Update swagger.json +- [7d9095af](https://github.com/kubedb/apimachinery/commit/7d9095af) Fix build (#613) +- [ad7988a8](https://github.com/kubedb/apimachinery/commit/ad7988a8) Fix build +- [0cf6469d](https://github.com/kubedb/apimachinery/commit/0cf6469d) Switch kubedb apiVersion to v1alpha2 (#612) +- [fd3131cd](https://github.com/kubedb/apimachinery/commit/fd3131cd) Add Volume Expansion and Configuration for MySQL OpsRequest (#607) +- [fd285012](https://github.com/kubedb/apimachinery/commit/fd285012) Add `alias` in the name of MongoDB server certificates (#611) +- [e562def9](https://github.com/kubedb/apimachinery/commit/e562def9) Remove GetMonitoringVendor method +- [a71f9b7e](https://github.com/kubedb/apimachinery/commit/a71f9b7e) Fix build +- [c97abe0d](https://github.com/kubedb/apimachinery/commit/c97abe0d) Update monitoring api dependency (#610) +- [d6070fc7](https://github.com/kubedb/apimachinery/commit/d6070fc7) Remove deprecated fields for monitoring (#609) +- [8d2f606a](https://github.com/kubedb/apimachinery/commit/8d2f606a) Add framework support for conditions (#608) +- [a74ea7a4](https://github.com/kubedb/apimachinery/commit/a74ea7a4) Bring back mysql ops spec StatefulSetOrdinal field +- [bda2d85a](https://github.com/kubedb/apimachinery/commit/bda2d85a) Add VerticalAutoscaler type (#606) +- [b9b22a35](https://github.com/kubedb/apimachinery/commit/b9b22a35) Add MySQL constant (#604) +- [2b887957](https://github.com/kubedb/apimachinery/commit/2b887957) Fix typo +- [c31cd2fd](https://github.com/kubedb/apimachinery/commit/c31cd2fd) Update ops request enumerations +- [41083a9d](https://github.com/kubedb/apimachinery/commit/41083a9d) Revise ops request apis (#603) +- 
[acfb1564](https://github.com/kubedb/apimachinery/commit/acfb1564) Revise api conditions (#602) +- [5c12de3a](https://github.com/kubedb/apimachinery/commit/5c12de3a) Update DB condition types and phases (#598) +- [f27cb720](https://github.com/kubedb/apimachinery/commit/f27cb720) Write data restore completion event using dynamic client (#601) +- [60ada14c](https://github.com/kubedb/apimachinery/commit/60ada14c) Update Kubernetes v1.18.9 dependencies (#600) +- [5779a5d7](https://github.com/kubedb/apimachinery/commit/5779a5d7) Update for release Stash@v2020.09.29 (#599) +- [86121dad](https://github.com/kubedb/apimachinery/commit/86121dad) Update Kubernetes v1.18.9 dependencies (#597) +- [da9fbe59](https://github.com/kubedb/apimachinery/commit/da9fbe59) Add DB conditions +- [7399d13f](https://github.com/kubedb/apimachinery/commit/7399d13f) Rename ES root-cert to ca-cert (#594) +- [1cd75609](https://github.com/kubedb/apimachinery/commit/1cd75609) Remove spec.paused & deprecated fields DB crds (#596) +- [9c85f9f1](https://github.com/kubedb/apimachinery/commit/9c85f9f1) Use `status.conditions` to handle database initialization (#593) +- [87e8e58b](https://github.com/kubedb/apimachinery/commit/87e8e58b) Update Kubernetes v1.18.9 dependencies (#595) +- [32206db2](https://github.com/kubedb/apimachinery/commit/32206db2) Add helper methods for MySQL (#592) +- [10aca81a](https://github.com/kubedb/apimachinery/commit/10aca81a) Rename client node to ingest node (#583) +- [d8bbd5ec](https://github.com/kubedb/apimachinery/commit/d8bbd5ec) Update repository config (#591) +- [4d51a066](https://github.com/kubedb/apimachinery/commit/4d51a066) Update repository config (#590) +- [5905c2cb](https://github.com/kubedb/apimachinery/commit/5905c2cb) Update Kubernetes v1.18.9 dependencies (#589) +- [3dc3d970](https://github.com/kubedb/apimachinery/commit/3dc3d970) Update Kubernetes v1.18.3 dependencies (#588) +- [53b42277](https://github.com/kubedb/apimachinery/commit/53b42277) Add event recorder in controller struct (#587) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-beta.4](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-beta.4) + +- [d6f5ae41](https://github.com/kubedb/elasticsearch/commit/d6f5ae41) Prepare for release v0.14.0-beta.4 (#387) +- [149314b5](https://github.com/kubedb/elasticsearch/commit/149314b5) Update KubeDB api (#386) +- [1de4b578](https://github.com/kubedb/elasticsearch/commit/1de4b578) Make database's phase NotReady as soon as the halted is removed (#375) +- [57704afa](https://github.com/kubedb/elasticsearch/commit/57704afa) Update Kubernetes v1.18.9 dependencies (#385) +- [16d37657](https://github.com/kubedb/elasticsearch/commit/16d37657) Update Kubernetes v1.18.9 dependencies (#383) +- [828f8ab8](https://github.com/kubedb/elasticsearch/commit/828f8ab8) Update KubeDB api (#382) +- [d70e68a8](https://github.com/kubedb/elasticsearch/commit/d70e68a8) Update for release Stash@v2020.10.21 (#381) +- [05a687bc](https://github.com/kubedb/elasticsearch/commit/05a687bc) Fix init validator (#379) +- [24d7f2c8](https://github.com/kubedb/elasticsearch/commit/24d7f2c8) Update KubeDB api (#380) +- [8c981e08](https://github.com/kubedb/elasticsearch/commit/8c981e08) Update KubeDB api (#378) +- [cf833e49](https://github.com/kubedb/elasticsearch/commit/cf833e49) Update Kubernetes v1.18.9 dependencies (#377) +- [fb335a43](https://github.com/kubedb/elasticsearch/commit/fb335a43) Update KubeDB api (#376) +- 
[e652a7ec](https://github.com/kubedb/elasticsearch/commit/e652a7ec) Update KubeDB api (#374) +- [c22b7f31](https://github.com/kubedb/elasticsearch/commit/c22b7f31) Update KubeDB api (#373) +- [a7d8e3b0](https://github.com/kubedb/elasticsearch/commit/a7d8e3b0) Integrate cert-manager and status.conditions (#357) +- [370f0df1](https://github.com/kubedb/elasticsearch/commit/370f0df1) Update repository config (#372) +- [78bdc59e](https://github.com/kubedb/elasticsearch/commit/78bdc59e) Update repository config (#371) +- [b8003d4b](https://github.com/kubedb/elasticsearch/commit/b8003d4b) Update repository config (#370) +- [d4ff1ac2](https://github.com/kubedb/elasticsearch/commit/d4ff1ac2) Publish docker images to ghcr.io (#369) +- [5f5ef393](https://github.com/kubedb/elasticsearch/commit/5f5ef393) Update repository config (#363) +- [e537ae40](https://github.com/kubedb/elasticsearch/commit/e537ae40) Update Kubernetes v1.18.9 dependencies (#362) +- [a5a5b084](https://github.com/kubedb/elasticsearch/commit/a5a5b084) Update for release Stash@v2020.09.29 (#361) +- [11eebe39](https://github.com/kubedb/elasticsearch/commit/11eebe39) Update Kubernetes v1.18.9 dependencies (#360) +- [a5b47b08](https://github.com/kubedb/elasticsearch/commit/a5b47b08) Update Kubernetes v1.18.9 dependencies (#358) +- [91f1dc00](https://github.com/kubedb/elasticsearch/commit/91f1dc00) Rename client node to ingest node (#346) +- [318a8b19](https://github.com/kubedb/elasticsearch/commit/318a8b19) Update repository config (#356) +- [a8773921](https://github.com/kubedb/elasticsearch/commit/a8773921) Update repository config (#355) +- [55bef891](https://github.com/kubedb/elasticsearch/commit/55bef891) Update Kubernetes v1.18.9 dependencies (#354) +- [1a3e421a](https://github.com/kubedb/elasticsearch/commit/1a3e421a) Use common event recorder (#353) +- [4df32f60](https://github.com/kubedb/elasticsearch/commit/4df32f60) Update Kubernetes v1.18.3 dependencies (#352) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-beta.4](https://github.com/kubedb/installer/releases/tag/v0.14.0-beta.4) + +- [051bc36](https://github.com/kubedb/installer/commit/051bc36) Prepare for release v0.14.0-beta.4 (#165) +- [1de504a](https://github.com/kubedb/installer/commit/1de504a) Update Kubernetes v1.18.9 dependencies (#164) +- [d97e36e](https://github.com/kubedb/installer/commit/d97e36e) Update KubeDB api (#163) +- [f3f0049](https://github.com/kubedb/installer/commit/f3f0049) Update Kubernetes v1.18.9 dependencies (#162) +- [fd73fd3](https://github.com/kubedb/installer/commit/fd73fd3) Remove caFile from ServiceMonitor (#160) +- [2ada956](https://github.com/kubedb/installer/commit/2ada956) Update KubeDB api (#159) +- [2709602](https://github.com/kubedb/installer/commit/2709602) Create ServiceMonitor in the same namespace as the operator (#158) +- [3cb27a1](https://github.com/kubedb/installer/commit/3cb27a1) Update Kubernetes v1.18.9 dependencies (#157) +- [1edeecd](https://github.com/kubedb/installer/commit/1edeecd) Update KubeDB api (#156) +- [16d3089](https://github.com/kubedb/installer/commit/16d3089) Update repository config (#155) +- [307b8cc](https://github.com/kubedb/installer/commit/307b8cc) Update Kubernetes v1.18.9 dependencies (#154) +- [b1b1569](https://github.com/kubedb/installer/commit/b1b1569) Add StatefulSet watch permission (#153) +- [8c80de8](https://github.com/kubedb/installer/commit/8c80de8) Add permission to update cert-manager certificates status (#150) +- 
[f03cec7](https://github.com/kubedb/installer/commit/f03cec7) Merge branch 'master' into permission +- [54f36d9](https://github.com/kubedb/installer/commit/54f36d9) Update repository config (#152) +- [150d180](https://github.com/kubedb/installer/commit/150d180) Update KubeDB api (#151) +- [639157f](https://github.com/kubedb/installer/commit/639157f) Add permission to update cert-manager certificates status +- [bcb1116](https://github.com/kubedb/installer/commit/bcb1116) Update KubeDB api (#149) +- [d6d4875](https://github.com/kubedb/installer/commit/d6d4875) Update Kubernetes v1.18.9 dependencies (#148) +- [7a00b13](https://github.com/kubedb/installer/commit/7a00b13) Update KubeDB api (#147) +- [a05c122](https://github.com/kubedb/installer/commit/a05c122) Update Kubernetes v1.18.9 dependencies (#146) +- [9f97ec4](https://github.com/kubedb/installer/commit/9f97ec4) Update KubeDB api (#145) +- [c487f64](https://github.com/kubedb/installer/commit/c487f64) Update Kubernetes v1.18.9 dependencies (#144) +- [2fff4f4](https://github.com/kubedb/installer/commit/2fff4f4) Update repository config (#143) +- [627c725](https://github.com/kubedb/installer/commit/627c725) Update repository config (#142) +- [c98d8f3](https://github.com/kubedb/installer/commit/c98d8f3) Update Kubernetes v1.18.9 dependencies (#141) +- [53b5a39](https://github.com/kubedb/installer/commit/53b5a39) Update elasticsearch-init images (#140) +- [d6f6abd](https://github.com/kubedb/installer/commit/d6f6abd) Update Kubernetes v1.18.3 dependencies (#139) +- [d4d55e9](https://github.com/kubedb/installer/commit/d4d55e9) Fix RBAC for Stash (#138) +- [0f96519](https://github.com/kubedb/installer/commit/0f96519) Update Kubernetes v1.18.3 dependencies (#136) +- [a459cbe](https://github.com/kubedb/installer/commit/a459cbe) Add license verification (#134) +- [26fa351](https://github.com/kubedb/installer/commit/26fa351) Change cluster-role of kubedb for stash/restorebatch (#125) +- [ab75b29](https://github.com/kubedb/installer/commit/ab75b29) Update Kubernetes v1.18.3 dependencies (#133) +- [a38a493](https://github.com/kubedb/installer/commit/a38a493) Add support for elasticsearch version 7.9.1-xpack (#132) +- [3a1c525](https://github.com/kubedb/installer/commit/3a1c525) Update Kubernetes v1.18.3 dependencies (#131) +- [55b5040](https://github.com/kubedb/installer/commit/55b5040) Mount emptyDir as /tmp for enterprise operator (#130) +- [7c11eea](https://github.com/kubedb/installer/commit/7c11eea) Use AppsCode Community License (#129) +- [9f08d2f](https://github.com/kubedb/installer/commit/9f08d2f) Update Elasticsearch metrics-exporter image version (#122) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.0-beta.4](https://github.com/kubedb/memcached/releases/tag/v0.7.0-beta.4) + +- [49da218c](https://github.com/kubedb/memcached/commit/49da218c) Prepare for release v0.7.0-beta.4 (#221) +- [25677a68](https://github.com/kubedb/memcached/commit/25677a68) Update KubeDB api (#220) +- [b4cd7a06](https://github.com/kubedb/memcached/commit/b4cd7a06) Update Kubernetes v1.18.9 dependencies (#219) +- [553c98d4](https://github.com/kubedb/memcached/commit/553c98d4) Update KubeDB api (#218) +- [2e9af5f1](https://github.com/kubedb/memcached/commit/2e9af5f1) Update KubeDB api (#217) +- [86b20622](https://github.com/kubedb/memcached/commit/86b20622) Update KubeDB api (#216) +- [8a46e900](https://github.com/kubedb/memcached/commit/8a46e900) Update Kubernetes v1.18.9 dependencies (#215) +- 
[366531e0](https://github.com/kubedb/memcached/commit/366531e0) Update KubeDB api (#214) +- [1a45a5d3](https://github.com/kubedb/memcached/commit/1a45a5d3) Update KubeDB api (#213) +- [40afd78d](https://github.com/kubedb/memcached/commit/40afd78d) Update KubeDB api (#212) +- [bee3d626](https://github.com/kubedb/memcached/commit/bee3d626) Update KubeDB api (#211) +- [3a71917a](https://github.com/kubedb/memcached/commit/3a71917a) Update Kubernetes v1.18.9 dependencies (#210) +- [efaeb8f1](https://github.com/kubedb/memcached/commit/efaeb8f1) Update KubeDB api (#209) +- [f8bcc2ac](https://github.com/kubedb/memcached/commit/f8bcc2ac) Update KubeDB api (#208) +- [de050491](https://github.com/kubedb/memcached/commit/de050491) Update KubeDB api (#207) +- [f59d7b22](https://github.com/kubedb/memcached/commit/f59d7b22) Update repository config (#206) +- [ef1b61d7](https://github.com/kubedb/memcached/commit/ef1b61d7) Update repository config (#205) +- [2401e6a4](https://github.com/kubedb/memcached/commit/2401e6a4) Update repository config (#204) +- [59b4a20b](https://github.com/kubedb/memcached/commit/59b4a20b) Update KubeDB api (#203) +- [7ceab937](https://github.com/kubedb/memcached/commit/7ceab937) Update Kubernetes v1.18.9 dependencies (#202) +- [22ed0d2f](https://github.com/kubedb/memcached/commit/22ed0d2f) Publish docker images to ghcr.io (#201) +- [059535f1](https://github.com/kubedb/memcached/commit/059535f1) Update KubeDB api (#200) +- [480c5281](https://github.com/kubedb/memcached/commit/480c5281) Update KubeDB api (#199) +- [60980557](https://github.com/kubedb/memcached/commit/60980557) Update KubeDB api (#198) +- [57091fac](https://github.com/kubedb/memcached/commit/57091fac) Update KubeDB api (#197) +- [4fa3793d](https://github.com/kubedb/memcached/commit/4fa3793d) Update repository config (#196) +- [9891c8e3](https://github.com/kubedb/memcached/commit/9891c8e3) Update Kubernetes v1.18.9 dependencies (#195) +- [d4dbb4a6](https://github.com/kubedb/memcached/commit/d4dbb4a6) Update KubeDB api (#192) +- [8e27b6ef](https://github.com/kubedb/memcached/commit/8e27b6ef) Update Kubernetes v1.18.9 dependencies (#193) +- [f8fefd18](https://github.com/kubedb/memcached/commit/f8fefd18) Update Kubernetes v1.18.9 dependencies (#191) +- [0c8250d9](https://github.com/kubedb/memcached/commit/0c8250d9) Update repository config (#190) +- [08cd9670](https://github.com/kubedb/memcached/commit/08cd9670) Update repository config (#189) +- [c15513f2](https://github.com/kubedb/memcached/commit/c15513f2) Update Kubernetes v1.18.9 dependencies (#188) +- [f6115aaa](https://github.com/kubedb/memcached/commit/f6115aaa) Use common event recorder (#187) +- [bbf717a9](https://github.com/kubedb/memcached/commit/bbf717a9) Update Kubernetes v1.18.3 dependencies (#186) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-beta.4](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-beta.4) + +- [007e3ccd](https://github.com/kubedb/mongodb/commit/007e3ccd) Prepare for release v0.7.0-beta.4 (#289) +- [11f6573e](https://github.com/kubedb/mongodb/commit/11f6573e) Update MongoDB Conditions (#280) +- [a964af9b](https://github.com/kubedb/mongodb/commit/a964af9b) Update KubeDB api (#288) +- [38fd31b3](https://github.com/kubedb/mongodb/commit/38fd31b3) Update Kubernetes v1.18.9 dependencies (#287) +- [b0110bea](https://github.com/kubedb/mongodb/commit/b0110bea) Update KubeDB api (#286) +- [bfad7e48](https://github.com/kubedb/mongodb/commit/bfad7e48) Update for release Stash@v2020.10.21 (#285) +- 
[2eebd6eb](https://github.com/kubedb/mongodb/commit/2eebd6eb) Fix init validator (#283) +- [7912e726](https://github.com/kubedb/mongodb/commit/7912e726) Update KubeDB api (#284) +- [ebf85b6d](https://github.com/kubedb/mongodb/commit/ebf85b6d) Update KubeDB api (#282) +- [7fa4958c](https://github.com/kubedb/mongodb/commit/7fa4958c) Update Kubernetes v1.18.9 dependencies (#281) +- [705843b8](https://github.com/kubedb/mongodb/commit/705843b8) Use MongoDBCustomConfigFile constant +- [dac6262d](https://github.com/kubedb/mongodb/commit/dac6262d) Update KubeDB api (#279) +- [7e7a960e](https://github.com/kubedb/mongodb/commit/7e7a960e) Update KubeDB api (#278) +- [aed9bd49](https://github.com/kubedb/mongodb/commit/aed9bd49) Update KubeDB api (#277) +- [18ec2e99](https://github.com/kubedb/mongodb/commit/18ec2e99) Update Kubernetes v1.18.9 dependencies (#276) +- [dbec1f66](https://github.com/kubedb/mongodb/commit/dbec1f66) Update KubeDB api (#275) +- [ad028b51](https://github.com/kubedb/mongodb/commit/ad028b51) Update KubeDB api (#274) +- [a21dfd6a](https://github.com/kubedb/mongodb/commit/a21dfd6a) Update KubeDB api (#272) +- [932ac34b](https://github.com/kubedb/mongodb/commit/932ac34b) Update repository config (#271) +- [3f52a364](https://github.com/kubedb/mongodb/commit/3f52a364) Update repository config (#270) +- [d3bf87db](https://github.com/kubedb/mongodb/commit/d3bf87db) Initialize statefulset watcher from cmd/server/options.go (#269) +- [e3e15b7f](https://github.com/kubedb/mongodb/commit/e3e15b7f) Update KubeDB api (#268) +- [406ae5a2](https://github.com/kubedb/mongodb/commit/406ae5a2) Update Kubernetes v1.18.9 dependencies (#267) +- [0339503d](https://github.com/kubedb/mongodb/commit/0339503d) Publish docker images to ghcr.io (#266) +- [ffccdc3c](https://github.com/kubedb/mongodb/commit/ffccdc3c) Update KubeDB api (#265) +- [05b7a0bd](https://github.com/kubedb/mongodb/commit/05b7a0bd) Update KubeDB api (#264) +- [d6447024](https://github.com/kubedb/mongodb/commit/d6447024) Update KubeDB api (#263) +- [e7c1e3a3](https://github.com/kubedb/mongodb/commit/e7c1e3a3) Update KubeDB api (#262) +- [5647960a](https://github.com/kubedb/mongodb/commit/5647960a) Update repository config (#261) +- [e7481d8d](https://github.com/kubedb/mongodb/commit/e7481d8d) Use conditions to handle initialization (#258) +- [d406586a](https://github.com/kubedb/mongodb/commit/d406586a) Update Kubernetes v1.18.9 dependencies (#260) +- [93708d02](https://github.com/kubedb/mongodb/commit/93708d02) Remove redundant volume mounts (#259) +- [bf28af80](https://github.com/kubedb/mongodb/commit/bf28af80) Update for release Stash@v2020.09.29 (#257) +- [b34e2326](https://github.com/kubedb/mongodb/commit/b34e2326) Update Kubernetes v1.18.9 dependencies (#256) +- [86e84d48](https://github.com/kubedb/mongodb/commit/86e84d48) Remove bootstrap container (#248) +- [0b66e225](https://github.com/kubedb/mongodb/commit/0b66e225) Update Kubernetes v1.18.9 dependencies (#254) +- [1a06f223](https://github.com/kubedb/mongodb/commit/1a06f223) Update repository config (#253) +- [c199b164](https://github.com/kubedb/mongodb/commit/c199b164) Update repository config (#252) +- [1268868d](https://github.com/kubedb/mongodb/commit/1268868d) Update Kubernetes v1.18.9 dependencies (#251) +- [de63158f](https://github.com/kubedb/mongodb/commit/de63158f) Use common event recorder (#249) +- [2f96b75a](https://github.com/kubedb/mongodb/commit/2f96b75a) Update Kubernetes v1.18.3 dependencies (#250) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### 
[v0.7.0-beta.4](https://github.com/kubedb/mysql/releases/tag/v0.7.0-beta.4) + +- [da0ee5ac](https://github.com/kubedb/mysql/commit/da0ee5ac) Prepare for release v0.7.0-beta.4 (#281) +- [dcab13f9](https://github.com/kubedb/mysql/commit/dcab13f9) Add conditions to MySQL status (#275) +- [972f4ade](https://github.com/kubedb/mysql/commit/972f4ade) Update KubeDB api (#280) +- [0a16d2f0](https://github.com/kubedb/mysql/commit/0a16d2f0) Update Kubernetes v1.18.9 dependencies (#279) +- [7fef1045](https://github.com/kubedb/mysql/commit/7fef1045) Update KubeDB api (#278) +- [489927ab](https://github.com/kubedb/mysql/commit/489927ab) Update for release Stash@v2020.10.21 (#277) +- [2491868c](https://github.com/kubedb/mysql/commit/2491868c) Update KubeDB api (#276) +- [5f6a0f6e](https://github.com/kubedb/mysql/commit/5f6a0f6e) Update KubeDB api (#274) +- [08c0720c](https://github.com/kubedb/mysql/commit/08c0720c) Update Kubernetes v1.18.9 dependencies (#273) +- [22fbdd3f](https://github.com/kubedb/mysql/commit/22fbdd3f) Update KubeDB api (#272) +- [7f4fb5e4](https://github.com/kubedb/mysql/commit/7f4fb5e4) Update KubeDB api (#271) +- [09d4743d](https://github.com/kubedb/mysql/commit/09d4743d) Update KubeDB api (#270) +- [d055fb11](https://github.com/kubedb/mysql/commit/d055fb11) Add Pod name to mysql replication-mode-detector container envs (#269) +- [4d1eea70](https://github.com/kubedb/mysql/commit/4d1eea70) Update KubeDB api (#268) +- [58fd9385](https://github.com/kubedb/mysql/commit/58fd9385) Update Kubernetes v1.18.9 dependencies (#267) +- [fb445df6](https://github.com/kubedb/mysql/commit/fb445df6) Update KubeDB api (#266) +- [3717609e](https://github.com/kubedb/mysql/commit/3717609e) Update KubeDB api (#265) +- [b9ba8cc7](https://github.com/kubedb/mysql/commit/b9ba8cc7) Update KubeDB api (#263) +- [1c2a7704](https://github.com/kubedb/mysql/commit/1c2a7704) Update repository config (#262) +- [6cb5d9d0](https://github.com/kubedb/mysql/commit/6cb5d9d0) Update repository config (#261) +- [3eadb17a](https://github.com/kubedb/mysql/commit/3eadb17a) Update repository config (#260) +- [03661faa](https://github.com/kubedb/mysql/commit/03661faa) Initialize statefulset watcher from cmd/server/options.go (#259) +- [e03649bb](https://github.com/kubedb/mysql/commit/e03649bb) Update KubeDB api (#258) +- [91e983b0](https://github.com/kubedb/mysql/commit/91e983b0) Update Kubernetes v1.18.9 dependencies (#257) +- [a03f4d24](https://github.com/kubedb/mysql/commit/a03f4d24) Publish docker images to ghcr.io (#256) +- [252902b5](https://github.com/kubedb/mysql/commit/252902b5) Update KubeDB api (#255) +- [d490e95c](https://github.com/kubedb/mysql/commit/d490e95c) Update KubeDB api (#254) +- [476de6f3](https://github.com/kubedb/mysql/commit/476de6f3) Update KubeDB api (#253) +- [54a36140](https://github.com/kubedb/mysql/commit/54a36140) Pass mysql name by flag for replication-mode-detector container (#247) +- [c2836d86](https://github.com/kubedb/mysql/commit/c2836d86) Update KubeDB api (#252) +- [69756664](https://github.com/kubedb/mysql/commit/69756664) Update repository config (#251) +- [6d1c0fa8](https://github.com/kubedb/mysql/commit/6d1c0fa8) Cleanup monitoring spec api (#250) +- [c971158c](https://github.com/kubedb/mysql/commit/c971158c) Use condition to handle database initialization (#243) +- [a839fa52](https://github.com/kubedb/mysql/commit/a839fa52) Update Kubernetes v1.18.9 dependencies (#249) +- [1b231f81](https://github.com/kubedb/mysql/commit/1b231f81) Use offshootSelectors to find statefulset (#248) +- 
[e6c6db76](https://github.com/kubedb/mysql/commit/e6c6db76) Update for release Stash@v2020.09.29 (#246) +- [fb577f93](https://github.com/kubedb/mysql/commit/fb577f93) Update Kubernetes v1.18.9 dependencies (#245) +- [dfe700ff](https://github.com/kubedb/mysql/commit/dfe700ff) Update Kubernetes v1.18.9 dependencies (#242) +- [928c15fe](https://github.com/kubedb/mysql/commit/928c15fe) Add separate services for primary and secondary Replicas (#229) +- [ac7161c9](https://github.com/kubedb/mysql/commit/ac7161c9) Update repository config (#241) +- [6344c1df](https://github.com/kubedb/mysql/commit/6344c1df) Update repository config (#240) +- [3389dbd8](https://github.com/kubedb/mysql/commit/3389dbd8) Update Kubernetes v1.18.9 dependencies (#239) +- [b22787a7](https://github.com/kubedb/mysql/commit/b22787a7) Remove unused StashClient (#238) +- [c1c1de57](https://github.com/kubedb/mysql/commit/c1c1de57) Update Kubernetes v1.18.3 dependencies (#237) +- [b2e37ce5](https://github.com/kubedb/mysql/commit/b2e37ce5) Use common event recorder (#236) + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-beta.4](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-beta.4) + +- [557e8f7](https://github.com/kubedb/mysql-replication-mode-detector/commit/557e8f7) Prepare for release v0.1.0-beta.4 (#70) +- [4dd885a](https://github.com/kubedb/mysql-replication-mode-detector/commit/4dd885a) Update KubeDB api (#69) +- [dc0ed39](https://github.com/kubedb/mysql-replication-mode-detector/commit/dc0ed39) Update Kubernetes v1.18.9 dependencies (#68) +- [f49a1d1](https://github.com/kubedb/mysql-replication-mode-detector/commit/f49a1d1) Update Kubernetes v1.18.9 dependencies (#65) +- [306235a](https://github.com/kubedb/mysql-replication-mode-detector/commit/306235a) Update KubeDB api (#64) +- [3c9e99a](https://github.com/kubedb/mysql-replication-mode-detector/commit/3c9e99a) Update KubeDB api (#63) +- [974a940](https://github.com/kubedb/mysql-replication-mode-detector/commit/974a940) Update KubeDB api (#62) +- [8521462](https://github.com/kubedb/mysql-replication-mode-detector/commit/8521462) Update Kubernetes v1.18.9 dependencies (#61) +- [38f7a4c](https://github.com/kubedb/mysql-replication-mode-detector/commit/38f7a4c) Update KubeDB api (#60) +- [a7b7c87](https://github.com/kubedb/mysql-replication-mode-detector/commit/a7b7c87) Update KubeDB api (#59) +- [daa02dd](https://github.com/kubedb/mysql-replication-mode-detector/commit/daa02dd) Update KubeDB api (#58) +- [341b6b6](https://github.com/kubedb/mysql-replication-mode-detector/commit/341b6b6) Add tls config (#40) +- [04161c8](https://github.com/kubedb/mysql-replication-mode-detector/commit/04161c8) Update KubeDB api (#57) +- [fdd705d](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdd705d) Update Kubernetes v1.18.9 dependencies (#56) +- [22cb410](https://github.com/kubedb/mysql-replication-mode-detector/commit/22cb410) Update KubeDB api (#55) +- [11b1758](https://github.com/kubedb/mysql-replication-mode-detector/commit/11b1758) Update KubeDB api (#54) +- [9df3045](https://github.com/kubedb/mysql-replication-mode-detector/commit/9df3045) Update KubeDB api (#53) +- [6557f92](https://github.com/kubedb/mysql-replication-mode-detector/commit/6557f92) Update KubeDB api (#52) +- [43c3694](https://github.com/kubedb/mysql-replication-mode-detector/commit/43c3694) Update Kubernetes v1.18.9 dependencies (#51) +- 
[511e974](https://github.com/kubedb/mysql-replication-mode-detector/commit/511e974) Publish docker images to ghcr.io (#50) +- [093a995](https://github.com/kubedb/mysql-replication-mode-detector/commit/093a995) Update KubeDB api (#49) +- [49c07e9](https://github.com/kubedb/mysql-replication-mode-detector/commit/49c07e9) Update KubeDB api (#48) +- [91ead1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/91ead1c) Update KubeDB api (#47) +- [45956b4](https://github.com/kubedb/mysql-replication-mode-detector/commit/45956b4) Update KubeDB api (#46) +- [a6c57a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/a6c57a7) Update KubeDB api (#45) +- [8a2fd20](https://github.com/kubedb/mysql-replication-mode-detector/commit/8a2fd20) Update KubeDB api (#44) +- [be63987](https://github.com/kubedb/mysql-replication-mode-detector/commit/be63987) Update KubeDB api (#43) +- [f33220a](https://github.com/kubedb/mysql-replication-mode-detector/commit/f33220a) Update KubeDB api (#42) +- [46b7d44](https://github.com/kubedb/mysql-replication-mode-detector/commit/46b7d44) Update KubeDB api (#41) +- [c151070](https://github.com/kubedb/mysql-replication-mode-detector/commit/c151070) Update KubeDB api (#38) +- [7a04763](https://github.com/kubedb/mysql-replication-mode-detector/commit/7a04763) Update KubeDB api (#37) +- [4367ef5](https://github.com/kubedb/mysql-replication-mode-detector/commit/4367ef5) Update KubeDB api (#36) +- [6bc4f1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/6bc4f1c) Update Kubernetes v1.18.9 dependencies (#35) +- [fdaff01](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdaff01) Update KubeDB api (#34) +- [087170a](https://github.com/kubedb/mysql-replication-mode-detector/commit/087170a) Update KubeDB api (#33) +- [127efe7](https://github.com/kubedb/mysql-replication-mode-detector/commit/127efe7) Update Kubernetes v1.18.9 dependencies (#32) +- [1df3573](https://github.com/kubedb/mysql-replication-mode-detector/commit/1df3573) Move constant to apimachinery repo (#24) +- [74b41b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/74b41b0) Update repository config (#31) +- [b0932a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/b0932a7) Update repository config (#30) +- [8e9c235](https://github.com/kubedb/mysql-replication-mode-detector/commit/8e9c235) Update Kubernetes v1.18.9 dependencies (#29) +- [8f61ebc](https://github.com/kubedb/mysql-replication-mode-detector/commit/8f61ebc) Update Kubernetes v1.18.3 dependencies (#28) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-beta.4](https://github.com/kubedb/operator/releases/tag/v0.14.0-beta.4) + +- [2145978d](https://github.com/kubedb/operator/commit/2145978d) Prepare for release v0.14.0-beta.4 (#326) +- [8fd3b682](https://github.com/kubedb/operator/commit/8fd3b682) Add --readiness-probe-interval flag (#325) +- [7bf0c3c5](https://github.com/kubedb/operator/commit/7bf0c3c5) Update KubeDB api (#324) +- [25c7dc21](https://github.com/kubedb/operator/commit/25c7dc21) Update Kubernetes v1.18.9 dependencies (#323) +- [bb7525d6](https://github.com/kubedb/operator/commit/bb7525d6) Update Kubernetes v1.18.9 dependencies (#321) +- [6db45b57](https://github.com/kubedb/operator/commit/6db45b57) Update KubeDB api (#320) +- [fa1438e3](https://github.com/kubedb/operator/commit/fa1438e3) Update KubeDB api (#319) +- [6be49e7e](https://github.com/kubedb/operator/commit/6be49e7e) Update KubeDB api (#318) +- 
[00bf9bec](https://github.com/kubedb/operator/commit/00bf9bec) Update Kubernetes v1.18.9 dependencies (#317) +- [fd529403](https://github.com/kubedb/operator/commit/fd529403) Update KubeDB api (#316) +- [f03305e1](https://github.com/kubedb/operator/commit/f03305e1) Update KubeDB api (#315) +- [fb5e4873](https://github.com/kubedb/operator/commit/fb5e4873) Update KubeDB api (#312) +- [f3843a05](https://github.com/kubedb/operator/commit/f3843a05) Update repository config (#311) +- [18f29e73](https://github.com/kubedb/operator/commit/18f29e73) Update repository config (#310) +- [25405c38](https://github.com/kubedb/operator/commit/25405c38) Update repository config (#309) +- [e464d336](https://github.com/kubedb/operator/commit/e464d336) Update KubeDB api (#308) +- [eeccd59e](https://github.com/kubedb/operator/commit/eeccd59e) Update Kubernetes v1.18.9 dependencies (#307) +- [dd2f176f](https://github.com/kubedb/operator/commit/dd2f176f) Publish docker images to ghcr.io (#306) +- [d65d299f](https://github.com/kubedb/operator/commit/d65d299f) Update KubeDB api (#305) +- [3f681cef](https://github.com/kubedb/operator/commit/3f681cef) Update KubeDB api (#304) +- [bc58d3d7](https://github.com/kubedb/operator/commit/bc58d3d7) Refactor initializer code + Use common event recorder (#292) +- [952e1b33](https://github.com/kubedb/operator/commit/952e1b33) Update repository config (#301) +- [66bee9c3](https://github.com/kubedb/operator/commit/66bee9c3) Update Kubernetes v1.18.9 dependencies (#300) +- [4e508002](https://github.com/kubedb/operator/commit/4e508002) Update for release Stash@v2020.09.29 (#299) +- [b6a4caa4](https://github.com/kubedb/operator/commit/b6a4caa4) Update Kubernetes v1.18.9 dependencies (#298) +- [201aed32](https://github.com/kubedb/operator/commit/201aed32) Update Kubernetes v1.18.9 dependencies (#296) +- [36ed325d](https://github.com/kubedb/operator/commit/36ed325d) Update repository config (#295) +- [36ec3035](https://github.com/kubedb/operator/commit/36ec3035) Update repository config (#294) +- [32e61f43](https://github.com/kubedb/operator/commit/32e61f43) Update Kubernetes v1.18.9 dependencies (#293) +- [078e7062](https://github.com/kubedb/operator/commit/078e7062) Update Kubernetes v1.18.3 dependencies (#291) +- [900626dd](https://github.com/kubedb/operator/commit/900626dd) Update Kubernetes v1.18.3 dependencies (#290) +- [7bf1e16e](https://github.com/kubedb/operator/commit/7bf1e16e) Use AppsCode Community license (#289) +- [ba436a4b](https://github.com/kubedb/operator/commit/ba436a4b) Add license verifier (#288) +- [0a02a313](https://github.com/kubedb/operator/commit/0a02a313) Update for release Stash@v2020.09.16 (#287) +- [9ae202e1](https://github.com/kubedb/operator/commit/9ae202e1) Update Kubernetes v1.18.3 dependencies (#286) +- [5bea03b9](https://github.com/kubedb/operator/commit/5bea03b9) Update Kubernetes v1.18.3 dependencies (#284) +- [b1375565](https://github.com/kubedb/operator/commit/b1375565) Update Kubernetes v1.18.3 dependencies (#282) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.0-beta.4](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-beta.4) + +- [14b2f1b2](https://github.com/kubedb/percona-xtradb/commit/14b2f1b2) Prepare for release v0.1.0-beta.4 (#113) +- [eff1d265](https://github.com/kubedb/percona-xtradb/commit/eff1d265) Update KubeDB api (#112) +- [a2878d4a](https://github.com/kubedb/percona-xtradb/commit/a2878d4a) Update Kubernetes v1.18.9 dependencies (#111) +- 
[51f0d104](https://github.com/kubedb/percona-xtradb/commit/51f0d104) Update KubeDB api (#110) +- [fcf5343b](https://github.com/kubedb/percona-xtradb/commit/fcf5343b) Update for release Stash@v2020.10.21 (#109) +- [9fe68d43](https://github.com/kubedb/percona-xtradb/commit/9fe68d43) Fix init validator (#107) +- [1c528cff](https://github.com/kubedb/percona-xtradb/commit/1c528cff) Update KubeDB api (#108) +- [99d23f3d](https://github.com/kubedb/percona-xtradb/commit/99d23f3d) Update KubeDB api (#106) +- [d0807640](https://github.com/kubedb/percona-xtradb/commit/d0807640) Update Kubernetes v1.18.9 dependencies (#105) +- [bac7705b](https://github.com/kubedb/percona-xtradb/commit/bac7705b) Update KubeDB api (#104) +- [475aabd5](https://github.com/kubedb/percona-xtradb/commit/475aabd5) Update KubeDB api (#103) +- [60f7e5a9](https://github.com/kubedb/percona-xtradb/commit/60f7e5a9) Update KubeDB api (#102) +- [84a97ced](https://github.com/kubedb/percona-xtradb/commit/84a97ced) Update KubeDB api (#101) +- [d4a7b7c5](https://github.com/kubedb/percona-xtradb/commit/d4a7b7c5) Update Kubernetes v1.18.9 dependencies (#100) +- [b818a4c5](https://github.com/kubedb/percona-xtradb/commit/b818a4c5) Update KubeDB api (#99) +- [03df7739](https://github.com/kubedb/percona-xtradb/commit/03df7739) Update KubeDB api (#98) +- [2f3ce0e6](https://github.com/kubedb/percona-xtradb/commit/2f3ce0e6) Update KubeDB api (#96) +- [94e009e8](https://github.com/kubedb/percona-xtradb/commit/94e009e8) Update repository config (#95) +- [fc61d440](https://github.com/kubedb/percona-xtradb/commit/fc61d440) Update repository config (#94) +- [35f5b2bb](https://github.com/kubedb/percona-xtradb/commit/35f5b2bb) Update repository config (#93) +- [d01e39dd](https://github.com/kubedb/percona-xtradb/commit/d01e39dd) Initialize statefulset watcher from cmd/server/options.go (#92) +- [41bf932f](https://github.com/kubedb/percona-xtradb/commit/41bf932f) Update KubeDB api (#91) +- [da92a1f3](https://github.com/kubedb/percona-xtradb/commit/da92a1f3) Update Kubernetes v1.18.9 dependencies (#90) +- [554beafb](https://github.com/kubedb/percona-xtradb/commit/554beafb) Publish docker images to ghcr.io (#89) +- [4c7031e1](https://github.com/kubedb/percona-xtradb/commit/4c7031e1) Update KubeDB api (#88) +- [418c767a](https://github.com/kubedb/percona-xtradb/commit/418c767a) Update KubeDB api (#87) +- [94eef91e](https://github.com/kubedb/percona-xtradb/commit/94eef91e) Update KubeDB api (#86) +- [f3c2a360](https://github.com/kubedb/percona-xtradb/commit/f3c2a360) Update KubeDB api (#85) +- [107bb6a6](https://github.com/kubedb/percona-xtradb/commit/107bb6a6) Update repository config (#84) +- [938e64bc](https://github.com/kubedb/percona-xtradb/commit/938e64bc) Cleanup monitoring spec api (#83) +- [deeaad8f](https://github.com/kubedb/percona-xtradb/commit/deeaad8f) Use conditions to handle database initialization (#80) +- [798c3ddc](https://github.com/kubedb/percona-xtradb/commit/798c3ddc) Update Kubernetes v1.18.9 dependencies (#82) +- [16c72ba6](https://github.com/kubedb/percona-xtradb/commit/16c72ba6) Updated the exporter port and service (#81) +- [9314faf1](https://github.com/kubedb/percona-xtradb/commit/9314faf1) Update for release Stash@v2020.09.29 (#79) +- [6cb53efc](https://github.com/kubedb/percona-xtradb/commit/6cb53efc) Update Kubernetes v1.18.9 dependencies (#78) +- [fd2b8cdd](https://github.com/kubedb/percona-xtradb/commit/fd2b8cdd) Update Kubernetes v1.18.9 dependencies (#76) +- 
[9d1038db](https://github.com/kubedb/percona-xtradb/commit/9d1038db) Update repository config (#75) +- [41a05a44](https://github.com/kubedb/percona-xtradb/commit/41a05a44) Update repository config (#74) +- [eccd2acd](https://github.com/kubedb/percona-xtradb/commit/eccd2acd) Update Kubernetes v1.18.9 dependencies (#73) +- [27635f1c](https://github.com/kubedb/percona-xtradb/commit/27635f1c) Update Kubernetes v1.18.3 dependencies (#72) +- [792326c7](https://github.com/kubedb/percona-xtradb/commit/792326c7) Use common event recorder (#71) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-beta.4](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-beta.4) + +- [46c6bf5](https://github.com/kubedb/pg-leader-election/commit/46c6bf5) Update KubeDB api (#36) +- [e662de8](https://github.com/kubedb/pg-leader-election/commit/e662de8) Update Kubernetes v1.18.9 dependencies (#35) +- [f4167b5](https://github.com/kubedb/pg-leader-election/commit/f4167b5) Update KubeDB api (#34) +- [e1d3199](https://github.com/kubedb/pg-leader-election/commit/e1d3199) Update Kubernetes v1.18.9 dependencies (#33) +- [933baa7](https://github.com/kubedb/pg-leader-election/commit/933baa7) Update KubeDB api (#32) +- [3edfc19](https://github.com/kubedb/pg-leader-election/commit/3edfc19) Update KubeDB api (#31) +- [cef0f38](https://github.com/kubedb/pg-leader-election/commit/cef0f38) Update KubeDB api (#30) +- [319a452](https://github.com/kubedb/pg-leader-election/commit/319a452) Update Kubernetes v1.18.9 dependencies (#29) +- [7d82228](https://github.com/kubedb/pg-leader-election/commit/7d82228) Update KubeDB api (#28) +- [bd6617b](https://github.com/kubedb/pg-leader-election/commit/bd6617b) Update Kubernetes v1.18.9 dependencies (#27) +- [465c6a9](https://github.com/kubedb/pg-leader-election/commit/465c6a9) Update KubeDB api (#26) +- [e90b2ba](https://github.com/kubedb/pg-leader-election/commit/e90b2ba) Update Kubernetes v1.18.9 dependencies (#25) +- [2feb7bc](https://github.com/kubedb/pg-leader-election/commit/2feb7bc) Update repository config (#24) +- [32ca246](https://github.com/kubedb/pg-leader-election/commit/32ca246) Update Kubernetes v1.18.9 dependencies (#23) +- [03fe9f9](https://github.com/kubedb/pg-leader-election/commit/03fe9f9) Update Kubernetes v1.18.3 dependencies (#22) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-beta.4](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-beta.4) + +- [4c292933](https://github.com/kubedb/pgbouncer/commit/4c292933) Prepare for release v0.1.0-beta.4 (#85) +- [c3daaa90](https://github.com/kubedb/pgbouncer/commit/c3daaa90) Update KubeDB api (#84) +- [19784f7a](https://github.com/kubedb/pgbouncer/commit/19784f7a) Update Kubernetes v1.18.9 dependencies (#83) +- [a7ea74e4](https://github.com/kubedb/pgbouncer/commit/a7ea74e4) Update KubeDB api (#82) +- [49391b30](https://github.com/kubedb/pgbouncer/commit/49391b30) Update KubeDB api (#81) +- [2ad0016d](https://github.com/kubedb/pgbouncer/commit/2ad0016d) Update KubeDB api (#80) +- [e0169139](https://github.com/kubedb/pgbouncer/commit/e0169139) Update Kubernetes v1.18.9 dependencies (#79) +- [ade8edf9](https://github.com/kubedb/pgbouncer/commit/ade8edf9) Update KubeDB api (#78) +- [86387966](https://github.com/kubedb/pgbouncer/commit/86387966) Update KubeDB api (#77) +- [d5fa2ce7](https://github.com/kubedb/pgbouncer/commit/d5fa2ce7) Update KubeDB api (#76) +- [938d61f6](https://github.com/kubedb/pgbouncer/commit/938d61f6) Update KubeDB api 
(#75) +- [89ceecb1](https://github.com/kubedb/pgbouncer/commit/89ceecb1) Update Kubernetes v1.18.9 dependencies (#74) +- [3b8fc849](https://github.com/kubedb/pgbouncer/commit/3b8fc849) Update KubeDB api (#73) +- [89ed5bf0](https://github.com/kubedb/pgbouncer/commit/89ed5bf0) Update KubeDB api (#72) +- [187eaff5](https://github.com/kubedb/pgbouncer/commit/187eaff5) Update KubeDB api (#71) +- [1222c935](https://github.com/kubedb/pgbouncer/commit/1222c935) Update repository config (#70) +- [f9c72f8c](https://github.com/kubedb/pgbouncer/commit/f9c72f8c) Update repository config (#69) +- [a55e0a9f](https://github.com/kubedb/pgbouncer/commit/a55e0a9f) Update repository config (#68) +- [20f01c3b](https://github.com/kubedb/pgbouncer/commit/20f01c3b) Update KubeDB api (#67) +- [ea907c2f](https://github.com/kubedb/pgbouncer/commit/ea907c2f) Update Kubernetes v1.18.9 dependencies (#66) +- [86f92e64](https://github.com/kubedb/pgbouncer/commit/86f92e64) Publish docker images to ghcr.io (#65) +- [189ab8b8](https://github.com/kubedb/pgbouncer/commit/189ab8b8) Update KubeDB api (#64) +- [d30a59c2](https://github.com/kubedb/pgbouncer/commit/d30a59c2) Update KubeDB api (#63) +- [545ee043](https://github.com/kubedb/pgbouncer/commit/545ee043) Update KubeDB api (#62) +- [cc01e1ca](https://github.com/kubedb/pgbouncer/commit/cc01e1ca) Update KubeDB api (#61) +- [40bc916f](https://github.com/kubedb/pgbouncer/commit/40bc916f) Update repository config (#60) +- [00313b21](https://github.com/kubedb/pgbouncer/commit/00313b21) Update Kubernetes v1.18.9 dependencies (#59) +- [080b77f3](https://github.com/kubedb/pgbouncer/commit/080b77f3) Update KubeDB api (#56) +- [fa479841](https://github.com/kubedb/pgbouncer/commit/fa479841) Update Kubernetes v1.18.9 dependencies (#57) +- [559d7421](https://github.com/kubedb/pgbouncer/commit/559d7421) Update Kubernetes v1.18.9 dependencies (#55) +- [1bfe4067](https://github.com/kubedb/pgbouncer/commit/1bfe4067) Update repository config (#54) +- [5ac28f25](https://github.com/kubedb/pgbouncer/commit/5ac28f25) Update repository config (#53) +- [162034f0](https://github.com/kubedb/pgbouncer/commit/162034f0) Update Kubernetes v1.18.9 dependencies (#52) +- [71697842](https://github.com/kubedb/pgbouncer/commit/71697842) Update Kubernetes v1.18.3 dependencies (#51) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-beta.4](https://github.com/kubedb/postgres/releases/tag/v0.14.0-beta.4) + +- [ed9a22ac](https://github.com/kubedb/postgres/commit/ed9a22ac) Prepare for release v0.14.0-beta.4 (#396) +- [e6b37365](https://github.com/kubedb/postgres/commit/e6b37365) Update KubeDB api (#395) +- [825f55c3](https://github.com/kubedb/postgres/commit/825f55c3) Update Kubernetes v1.18.9 dependencies (#394) +- [c879e7e8](https://github.com/kubedb/postgres/commit/c879e7e8) Update KubeDB api (#393) +- [c90ad84e](https://github.com/kubedb/postgres/commit/c90ad84e) Update for release Stash@v2020.10.21 (#392) +- [9db225c0](https://github.com/kubedb/postgres/commit/9db225c0) Fix init validator (#390) +- [e56e5ae6](https://github.com/kubedb/postgres/commit/e56e5ae6) Update KubeDB api (#391) +- [5da16a5c](https://github.com/kubedb/postgres/commit/5da16a5c) Update KubeDB api (#389) +- [221eb7cf](https://github.com/kubedb/postgres/commit/221eb7cf) Update Kubernetes v1.18.9 dependencies (#388) +- [261aaaf3](https://github.com/kubedb/postgres/commit/261aaaf3) Update KubeDB api (#387) +- [6d8efe23](https://github.com/kubedb/postgres/commit/6d8efe23) Update KubeDB api (#386) +- 
[0df8a375](https://github.com/kubedb/postgres/commit/0df8a375) Update KubeDB api (#385) +- [b0b4f7e7](https://github.com/kubedb/postgres/commit/b0b4f7e7) Update KubeDB api (#384) +- [c10ff311](https://github.com/kubedb/postgres/commit/c10ff311) Update Kubernetes v1.18.9 dependencies (#383) +- [4f237fc0](https://github.com/kubedb/postgres/commit/4f237fc0) Update KubeDB api (#382) +- [b31defb8](https://github.com/kubedb/postgres/commit/b31defb8) Update KubeDB api (#381) +- [667a4ec8](https://github.com/kubedb/postgres/commit/667a4ec8) Update KubeDB api (#379) +- [da86f8d7](https://github.com/kubedb/postgres/commit/da86f8d7) Update repository config (#378) +- [1da3afb9](https://github.com/kubedb/postgres/commit/1da3afb9) Update repository config (#377) +- [29b8a231](https://github.com/kubedb/postgres/commit/29b8a231) Update repository config (#376) +- [22612534](https://github.com/kubedb/postgres/commit/22612534) Initialize statefulset watcher from cmd/server/options.go (#375) +- [bfd6eae7](https://github.com/kubedb/postgres/commit/bfd6eae7) Update KubeDB api (#374) +- [10566771](https://github.com/kubedb/postgres/commit/10566771) Update Kubernetes v1.18.9 dependencies (#373) +- [1eb7c29b](https://github.com/kubedb/postgres/commit/1eb7c29b) Publish docker images to ghcr.io (#372) +- [49dd7946](https://github.com/kubedb/postgres/commit/49dd7946) Only keep username/password keys in Postgres secret +- [f1131a2c](https://github.com/kubedb/postgres/commit/f1131a2c) Update KubeDB api (#371) +- [ccadf274](https://github.com/kubedb/postgres/commit/ccadf274) Update KubeDB api (#370) +- [bddd6692](https://github.com/kubedb/postgres/commit/bddd6692) Update KubeDB api (#369) +- [d76bbe3d](https://github.com/kubedb/postgres/commit/d76bbe3d) Don't add secretTransformation in AppBinding section by default (#316) +- [ae29ba5e](https://github.com/kubedb/postgres/commit/ae29ba5e) Update KubeDB api (#368) +- [4bb1c171](https://github.com/kubedb/postgres/commit/4bb1c171) Update repository config (#367) +- [a7b1138f](https://github.com/kubedb/postgres/commit/a7b1138f) Use conditions to handle initialization (#365) +- [126e20f1](https://github.com/kubedb/postgres/commit/126e20f1) Update Kubernetes v1.18.9 dependencies (#366) +- [29a99b8d](https://github.com/kubedb/postgres/commit/29a99b8d) Update for release Stash@v2020.09.29 (#364) +- [b097b330](https://github.com/kubedb/postgres/commit/b097b330) Update Kubernetes v1.18.9 dependencies (#363) +- [26e2f90c](https://github.com/kubedb/postgres/commit/26e2f90c) Update Kubernetes v1.18.9 dependencies (#361) +- [67c6d618](https://github.com/kubedb/postgres/commit/67c6d618) Update repository config (#360) +- [6fc5fbce](https://github.com/kubedb/postgres/commit/6fc5fbce) Update repository config (#359) +- [4e566391](https://github.com/kubedb/postgres/commit/4e566391) Update Kubernetes v1.18.9 dependencies (#358) +- [7236b6e1](https://github.com/kubedb/postgres/commit/7236b6e1) Use common event recorder (#357) +- [d1293558](https://github.com/kubedb/postgres/commit/d1293558) Update Kubernetes v1.18.3 dependencies (#356) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-beta.4](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-beta.4) + +- [d344e43f](https://github.com/kubedb/proxysql/commit/d344e43f) Prepare for release v0.1.0-beta.4 (#94) +- [15deb4df](https://github.com/kubedb/proxysql/commit/15deb4df) Update KubeDB api (#93) +- [dc59184c](https://github.com/kubedb/proxysql/commit/dc59184c) Update Kubernetes v1.18.9 dependencies (#92) +- 
[b2b11084](https://github.com/kubedb/proxysql/commit/b2b11084) Update KubeDB api (#91) +- [535820ff](https://github.com/kubedb/proxysql/commit/535820ff) Update for release Stash@v2020.10.21 (#90) +- [c00f0b6a](https://github.com/kubedb/proxysql/commit/c00f0b6a) Update KubeDB api (#89) +- [af8ab91c](https://github.com/kubedb/proxysql/commit/af8ab91c) Update KubeDB api (#88) +- [154fff60](https://github.com/kubedb/proxysql/commit/154fff60) Update Kubernetes v1.18.9 dependencies (#87) +- [608ca467](https://github.com/kubedb/proxysql/commit/608ca467) Update KubeDB api (#86) +- [c0b1286b](https://github.com/kubedb/proxysql/commit/c0b1286b) Update KubeDB api (#85) +- [d2f326c7](https://github.com/kubedb/proxysql/commit/d2f326c7) Update KubeDB api (#84) +- [01ea3c3c](https://github.com/kubedb/proxysql/commit/01ea3c3c) Update KubeDB api (#83) +- [4ae700ed](https://github.com/kubedb/proxysql/commit/4ae700ed) Update Kubernetes v1.18.9 dependencies (#82) +- [d0ad0b70](https://github.com/kubedb/proxysql/commit/d0ad0b70) Update KubeDB api (#81) +- [8f1e0d51](https://github.com/kubedb/proxysql/commit/8f1e0d51) Update KubeDB api (#80) +- [7b02bebb](https://github.com/kubedb/proxysql/commit/7b02bebb) Update KubeDB api (#79) +- [4f95e854](https://github.com/kubedb/proxysql/commit/4f95e854) Update repository config (#78) +- [c229a939](https://github.com/kubedb/proxysql/commit/c229a939) Update repository config (#77) +- [89dbb47f](https://github.com/kubedb/proxysql/commit/89dbb47f) Update repository config (#76) +- [d28494ab](https://github.com/kubedb/proxysql/commit/d28494ab) Update KubeDB api (#75) +- [b25cb7db](https://github.com/kubedb/proxysql/commit/b25cb7db) Update Kubernetes v1.18.9 dependencies (#74) +- [d4b026a4](https://github.com/kubedb/proxysql/commit/d4b026a4) Publish docker images to ghcr.io (#73) +- [e263f9c3](https://github.com/kubedb/proxysql/commit/e263f9c3) Update KubeDB api (#72) +- [07ea3acb](https://github.com/kubedb/proxysql/commit/07ea3acb) Update KubeDB api (#71) +- [946e292b](https://github.com/kubedb/proxysql/commit/946e292b) Update KubeDB api (#70) +- [66eb2156](https://github.com/kubedb/proxysql/commit/66eb2156) Update KubeDB api (#69) +- [d3fe09ae](https://github.com/kubedb/proxysql/commit/d3fe09ae) Update repository config (#68) +- [10c7cde0](https://github.com/kubedb/proxysql/commit/10c7cde0) Update Kubernetes v1.18.9 dependencies (#67) +- [ed5d24a9](https://github.com/kubedb/proxysql/commit/ed5d24a9) Update KubeDB api (#65) +- [a4f6dd4c](https://github.com/kubedb/proxysql/commit/a4f6dd4c) Update KubeDB api (#62) +- [2956b1bd](https://github.com/kubedb/proxysql/commit/2956b1bd) Update for release Stash@v2020.09.29 (#64) +- [9cbd0244](https://github.com/kubedb/proxysql/commit/9cbd0244) Update Kubernetes v1.18.9 dependencies (#63) +- [4cd9bb02](https://github.com/kubedb/proxysql/commit/4cd9bb02) Update Kubernetes v1.18.9 dependencies (#61) +- [a9a9caf0](https://github.com/kubedb/proxysql/commit/a9a9caf0) Update repository config (#60) +- [af3a2a68](https://github.com/kubedb/proxysql/commit/af3a2a68) Update repository config (#59) +- [25f47ff4](https://github.com/kubedb/proxysql/commit/25f47ff4) Update Kubernetes v1.18.9 dependencies (#58) +- [05e57476](https://github.com/kubedb/proxysql/commit/05e57476) Update Kubernetes v1.18.3 dependencies (#57) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-beta.4](https://github.com/kubedb/redis/releases/tag/v0.7.0-beta.4) + +- [d31b919a](https://github.com/kubedb/redis/commit/d31b919a) Prepare for release 
v0.7.0-beta.4 (#240) +- [bfecc0c5](https://github.com/kubedb/redis/commit/bfecc0c5) Update KubeDB api (#239) +- [307efbef](https://github.com/kubedb/redis/commit/307efbef) Update Kubernetes v1.18.9 dependencies (#238) +- [34b09d4c](https://github.com/kubedb/redis/commit/34b09d4c) Update KubeDB api (#237) +- [4aefb939](https://github.com/kubedb/redis/commit/4aefb939) Fix init validator (#236) +- [4ea47108](https://github.com/kubedb/redis/commit/4ea47108) Update KubeDB api (#235) +- [8c4c8a54](https://github.com/kubedb/redis/commit/8c4c8a54) Update KubeDB api (#234) +- [cbee9597](https://github.com/kubedb/redis/commit/cbee9597) Update Kubernetes v1.18.9 dependencies (#233) +- [9fb1b23c](https://github.com/kubedb/redis/commit/9fb1b23c) Update KubeDB api (#232) +- [c5fb9a6d](https://github.com/kubedb/redis/commit/c5fb9a6d) Update KubeDB api (#230) +- [2e2f2d7b](https://github.com/kubedb/redis/commit/2e2f2d7b) Update KubeDB api (#229) +- [3c8e6c6d](https://github.com/kubedb/redis/commit/3c8e6c6d) Update KubeDB api (#228) +- [8467464d](https://github.com/kubedb/redis/commit/8467464d) Update Kubernetes v1.18.9 dependencies (#227) +- [5febd393](https://github.com/kubedb/redis/commit/5febd393) Update KubeDB api (#226) +- [d8024e4d](https://github.com/kubedb/redis/commit/d8024e4d) Update KubeDB api (#225) +- [12d112de](https://github.com/kubedb/redis/commit/12d112de) Update KubeDB api (#223) +- [8a9f5398](https://github.com/kubedb/redis/commit/8a9f5398) Update repository config (#222) +- [b3b48a91](https://github.com/kubedb/redis/commit/b3b48a91) Update repository config (#221) +- [2fa45230](https://github.com/kubedb/redis/commit/2fa45230) Update repository config (#220) +- [552f1f80](https://github.com/kubedb/redis/commit/552f1f80) Initialize statefulset watcher from cmd/server/options.go (#219) +- [446b4b55](https://github.com/kubedb/redis/commit/446b4b55) Update KubeDB api (#218) +- [f6203009](https://github.com/kubedb/redis/commit/f6203009) Update Kubernetes v1.18.9 dependencies (#217) +- [b7172fb8](https://github.com/kubedb/redis/commit/b7172fb8) Publish docker images to ghcr.io (#216) +- [9897bab9](https://github.com/kubedb/redis/commit/9897bab9) Update KubeDB api (#215) +- [00f07b4f](https://github.com/kubedb/redis/commit/00f07b4f) Update KubeDB api (#214) +- [f2133f26](https://github.com/kubedb/redis/commit/f2133f26) Update KubeDB api (#213) +- [b1f3b76a](https://github.com/kubedb/redis/commit/b1f3b76a) Update KubeDB api (#212) +- [a3144e30](https://github.com/kubedb/redis/commit/a3144e30) Update repository config (#211) +- [8472ff88](https://github.com/kubedb/redis/commit/8472ff88) Add support to initialize Redis using Stash (#188) +- [20ba04a7](https://github.com/kubedb/redis/commit/20ba04a7) Update Kubernetes v1.18.9 dependencies (#210) +- [457611a1](https://github.com/kubedb/redis/commit/457611a1) Update Kubernetes v1.18.9 dependencies (#209) +- [2bd8b281](https://github.com/kubedb/redis/commit/2bd8b281) Update Kubernetes v1.18.9 dependencies (#207) +- [8779c7ea](https://github.com/kubedb/redis/commit/8779c7ea) Update repository config (#206) +- [db9280b7](https://github.com/kubedb/redis/commit/db9280b7) Update repository config (#205) +- [ada18bca](https://github.com/kubedb/redis/commit/ada18bca) Update Kubernetes v1.18.9 dependencies (#204) +- [17a55147](https://github.com/kubedb/redis/commit/17a55147) Use common event recorder (#203) +- [71a34b6a](https://github.com/kubedb/redis/commit/71a34b6a) Update Kubernetes v1.18.3 dependencies (#202) + + + + diff --git 
a/content/docs/v2024.1.31/CHANGELOG-v2020.10.26-beta.0.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.26-beta.0.md new file mode 100644 index 0000000000..54933c0a93 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.26-beta.0.md @@ -0,0 +1,229 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.26-beta.0 + name: Changelog-v2020.10.26-beta.0 + parent: welcome + weight: 20201026 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.26-beta.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.26-beta.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.10.26-beta.0 (2020-10-27) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0-beta.5](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0-beta.5) + +- [c7bf3943](https://github.com/appscode/kubedb-enterprise/commit/c7bf3943) Prepare for release v0.1.0-beta.5 (#81) +- [1bf37b01](https://github.com/appscode/kubedb-enterprise/commit/1bf37b01) Update KubeDB api (#80) +- [a99c4e9f](https://github.com/appscode/kubedb-enterprise/commit/a99c4e9f) Update readme +- [2ad24272](https://github.com/appscode/kubedb-enterprise/commit/2ad24272) Update repository config (#79) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-beta.5](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-beta.5) + +- [b72968d5](https://github.com/kubedb/apimachinery/commit/b72968d5) Add port constants (#635) +- [6ce39fbe](https://github.com/kubedb/apimachinery/commit/6ce39fbe) Create separate governing service for each database (#634) +- [ecfb5d85](https://github.com/kubedb/apimachinery/commit/ecfb5d85) Update readme + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0-beta.5](https://github.com/kubedb/cli/releases/tag/v0.14.0-beta.5) + +- [a4af36b9](https://github.com/kubedb/cli/commit/a4af36b9) Prepare for release v0.14.0-beta.5 (#529) +- [2f1cb09d](https://github.com/kubedb/cli/commit/2f1cb09d) Update KubeDB api (#528) +- [87eb5ad3](https://github.com/kubedb/cli/commit/87eb5ad3) Update readme +- [4dfe0da7](https://github.com/kubedb/cli/commit/4dfe0da7) Update KubeDB api (#527) +- [5448e521](https://github.com/kubedb/cli/commit/5448e521) Update repository config (#526) +- [e1e9dbe2](https://github.com/kubedb/cli/commit/e1e9dbe2) Update KubeDB api (#525) +- [e49d303a](https://github.com/kubedb/cli/commit/e49d303a) Update Kubernetes v1.18.9 dependencies (#524) +- [9f54a783](https://github.com/kubedb/cli/commit/9f54a783) Update KubeDB api (#523) +- [ad764956](https://github.com/kubedb/cli/commit/ad764956) Update for release Stash@v2020.10.21 (#522) +- [46ae22cd](https://github.com/kubedb/cli/commit/46ae22cd) Update KubeDB api (#521) +- [2914b270](https://github.com/kubedb/cli/commit/2914b270) Update KubeDB api (#520) +- [87ce0033](https://github.com/kubedb/cli/commit/87ce0033) Update Kubernetes v1.18.9 dependencies (#519) +- [ab524afe](https://github.com/kubedb/cli/commit/ab524afe) Update KubeDB api (#518) +- [899e2b21](https://github.com/kubedb/cli/commit/899e2b21) Update KubeDB api (#517) +- [37a5da4b](https://github.com/kubedb/cli/commit/37a5da4b) Update KubeDB api (#516) +- 
[5c87d6e8](https://github.com/kubedb/cli/commit/5c87d6e8) Update KubeDB api (#515) +- [dfc9e245](https://github.com/kubedb/cli/commit/dfc9e245) Update Kubernetes v1.18.9 dependencies (#514) +- [c0650bb7](https://github.com/kubedb/cli/commit/c0650bb7) Update KubeDB api (#513) +- [278dccbe](https://github.com/kubedb/cli/commit/278dccbe) Update KubeDB api (#512) +- [221be742](https://github.com/kubedb/cli/commit/221be742) Update KubeDB api (#511) +- [2a301cd0](https://github.com/kubedb/cli/commit/2a301cd0) Don't update krew manifest for pre-releases + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-beta.5](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-beta.5) + +- [97f34417](https://github.com/kubedb/elasticsearch/commit/97f34417) Prepare for release v0.14.0-beta.5 (#391) +- [a3e9a733](https://github.com/kubedb/elasticsearch/commit/a3e9a733) Create separate governing service for each database (#390) +- [ce8f80b5](https://github.com/kubedb/elasticsearch/commit/ce8f80b5) Update KubeDB api (#389) +- [0fe8d617](https://github.com/kubedb/elasticsearch/commit/0fe8d617) Update readme +- [657797fe](https://github.com/kubedb/elasticsearch/commit/657797fe) Update repository config (#388) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-beta.5](https://github.com/kubedb/installer/releases/tag/v0.14.0-beta.5) + +- [5ae7fce](https://github.com/kubedb/installer/commit/5ae7fce) Prepare for release v0.14.0-beta.5 (#170) +- [127eda0](https://github.com/kubedb/installer/commit/127eda0) Update Kubernetes v1.18.9 dependencies (#169) +- [984c86f](https://github.com/kubedb/installer/commit/984c86f) Update KubeDB api (#168) +- [8956af1](https://github.com/kubedb/installer/commit/8956af1) Update readme +- [a1eea93](https://github.com/kubedb/installer/commit/a1eea93) Update Kubernetes v1.18.9 dependencies (#167) +- [2c7ab1e](https://github.com/kubedb/installer/commit/2c7ab1e) Update KubeDB api (#166) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.0-beta.5](https://github.com/kubedb/memcached/releases/tag/v0.7.0-beta.5) + +- [0fbfc766](https://github.com/kubedb/memcached/commit/0fbfc766) Prepare for release v0.7.0-beta.5 (#224) +- [7a01e878](https://github.com/kubedb/memcached/commit/7a01e878) Create separate governing service for each database (#223) +- [6cecdfec](https://github.com/kubedb/memcached/commit/6cecdfec) Update KubeDB api (#222) +- [5942b1ff](https://github.com/kubedb/memcached/commit/5942b1ff) Update readme + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-beta.5](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-beta.5) + +- [f1818bb1](https://github.com/kubedb/mongodb/commit/f1818bb1) Prepare for release v0.7.0-beta.5 (#292) +- [7d1586f7](https://github.com/kubedb/mongodb/commit/7d1586f7) Create separate governing service for each database (#291) +- [1e281abb](https://github.com/kubedb/mongodb/commit/1e281abb) Update KubeDB api (#290) +- [23d8785f](https://github.com/kubedb/mongodb/commit/23d8785f) Update readme + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.0-beta.5](https://github.com/kubedb/mysql/releases/tag/v0.7.0-beta.5) + +- [8dbd64c9](https://github.com/kubedb/mysql/commit/8dbd64c9) Prepare for release v0.7.0-beta.5 (#284) +- [ee4285c1](https://github.com/kubedb/mysql/commit/ee4285c1) Create separate governing service for each database (#283) +- [8e2fcbf4](https://github.com/kubedb/mysql/commit/8e2fcbf4) Update KubeDB api 
(#282) +- [ae962768](https://github.com/kubedb/mysql/commit/ae962768) Update readme + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-beta.5](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-beta.5) + +- [e251fd6](https://github.com/kubedb/mysql-replication-mode-detector/commit/e251fd6) Prepare for release v0.1.0-beta.5 (#72) +- [633ba00](https://github.com/kubedb/mysql-replication-mode-detector/commit/633ba00) Update KubeDB api (#71) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-beta.5](https://github.com/kubedb/operator/releases/tag/v0.14.0-beta.5) + +- [bcada180](https://github.com/kubedb/operator/commit/bcada180) Prepare for release v0.14.0-beta.5 (#331) +- [07d63285](https://github.com/kubedb/operator/commit/07d63285) Enable PgBouncer & ProxySQL for enterprise license (#330) +- [35b75a05](https://github.com/kubedb/operator/commit/35b75a05) Update readme.md +- [14304e05](https://github.com/kubedb/operator/commit/14304e05) Update KubeDB api (#329) +- [df61aae3](https://github.com/kubedb/operator/commit/df61aae3) Update readme +- [c9882619](https://github.com/kubedb/operator/commit/c9882619) Format readme +- [73b725e3](https://github.com/kubedb/operator/commit/73b725e3) Update readme (#328) +- [541c2460](https://github.com/kubedb/operator/commit/541c2460) Update repository config (#327) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.0-beta.5](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-beta.5) + +- [9866a420](https://github.com/kubedb/percona-xtradb/commit/9866a420) Prepare for release v0.1.0-beta.5 (#116) +- [f92081d1](https://github.com/kubedb/percona-xtradb/commit/f92081d1) Create separate governing service for each database (#115) +- [6010b189](https://github.com/kubedb/percona-xtradb/commit/6010b189) Update KubeDB api (#114) +- [95b57c72](https://github.com/kubedb/percona-xtradb/commit/95b57c72) Update readme + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-beta.5](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-beta.5) + + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-beta.5](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-beta.5) + +- [96144773](https://github.com/kubedb/pgbouncer/commit/96144773) Prepare for release v0.1.0-beta.5 (#89) +- [bb574108](https://github.com/kubedb/pgbouncer/commit/bb574108) Create separate governing service for each database (#88) +- [28f29e3c](https://github.com/kubedb/pgbouncer/commit/28f29e3c) Update KubeDB api (#87) +- [79a3e3f7](https://github.com/kubedb/pgbouncer/commit/79a3e3f7) Update readme +- [f42d28f9](https://github.com/kubedb/pgbouncer/commit/f42d28f9) Update repository config (#86) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-beta.5](https://github.com/kubedb/postgres/releases/tag/v0.14.0-beta.5) + +- [c6e802a7](https://github.com/kubedb/postgres/commit/c6e802a7) Prepare for release v0.14.0-beta.5 (#401) +- [4da12584](https://github.com/kubedb/postgres/commit/4da12584) Simplify port assignment (#400) +- [71420f2b](https://github.com/kubedb/postgres/commit/71420f2b) Create separate governing service for each database (#399) +- [49792ddb](https://github.com/kubedb/postgres/commit/49792ddb) Update KubeDB api (#398) +- [721f5e16](https://github.com/kubedb/postgres/commit/721f5e16) Update readme +- 
[c036ee15](https://github.com/kubedb/postgres/commit/c036ee15) Update Kubernetes v1.18.9 dependencies (#397) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-beta.5](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-beta.5) + +- [4269db9c](https://github.com/kubedb/proxysql/commit/4269db9c) Prepare for release v0.1.0-beta.5 (#98) +- [e48bd006](https://github.com/kubedb/proxysql/commit/e48bd006) Create separate governing service for each database (#97) +- [23f1c6de](https://github.com/kubedb/proxysql/commit/23f1c6de) Update KubeDB api (#96) +- [13abe9ff](https://github.com/kubedb/proxysql/commit/13abe9ff) Update readme +- [78ef0d29](https://github.com/kubedb/proxysql/commit/78ef0d29) Update repository config (#95) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-beta.5](https://github.com/kubedb/redis/releases/tag/v0.7.0-beta.5) + +- [57743070](https://github.com/kubedb/redis/commit/57743070) Prepare for release v0.7.0-beta.5 (#243) +- [5e8f1a25](https://github.com/kubedb/redis/commit/5e8f1a25) Create separate governing service for each database (#242) +- [ebeda2c7](https://github.com/kubedb/redis/commit/ebeda2c7) Update KubeDB api (#241) +- [b0a39a3c](https://github.com/kubedb/redis/commit/b0a39a3c) Update readme + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.0.md new file mode 100644 index 0000000000..216d40ebe3 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.0.md @@ -0,0 +1,181 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.27-rc.0 + name: Changelog-v2020.10.27-rc.0 + parent: welcome + weight: 20201027 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.27-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.10.27-rc.0 (2020-10-27) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0-beta.6](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0-beta.6) + +- [5df3d1e9](https://github.com/appscode/kubedb-enterprise/commit/5df3d1e9) Prepare for release v0.1.0-beta.6 (#82) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-beta.6](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-beta.6) + +- [cd358dda](https://github.com/kubedb/apimachinery/commit/cd358dda) Update MergeServicePort and PatchServicePort apis + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0-beta.6](https://github.com/kubedb/cli/releases/tag/v0.14.0-beta.6) + +- [39c4d5a0](https://github.com/kubedb/cli/commit/39c4d5a0) Prepare for release v0.14.0-beta.6 (#531) +- [a7a8dfc0](https://github.com/kubedb/cli/commit/a7a8dfc0) Update KubeDB api (#530) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-beta.6](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-beta.6) + +- [58dac2ba](https://github.com/kubedb/elasticsearch/commit/58dac2ba) Prepare for release v0.14.0-beta.6 (#394) +- [5d4ad40c](https://github.com/kubedb/elasticsearch/commit/5d4ad40c) Update MergeServicePort and PatchServicePort apis 
(#393) +- [992edb90](https://github.com/kubedb/elasticsearch/commit/992edb90) Always set protocol for service ports +- [0f408cbf](https://github.com/kubedb/elasticsearch/commit/0f408cbf) Create SRV records for governing service (#392) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-beta.6](https://github.com/kubedb/installer/releases/tag/v0.14.0-beta.6) + +- [0ac5eb3](https://github.com/kubedb/installer/commit/0ac5eb3) Prepare for release v0.14.0-beta.6 (#174) +- [f6407c7](https://github.com/kubedb/installer/commit/f6407c7) Update KubeDB api (#173) +- [d3e28a7](https://github.com/kubedb/installer/commit/d3e28a7) Use kubedb-community release name for community chart +- [98553cb](https://github.com/kubedb/installer/commit/98553cb) Update Kubernetes v1.18.9 dependencies (#172) +- [1b35dfb](https://github.com/kubedb/installer/commit/1b35dfb) Update KubeDB api (#171) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.0-beta.6](https://github.com/kubedb/memcached/releases/tag/v0.7.0-beta.6) + +- [704cf9f2](https://github.com/kubedb/memcached/commit/704cf9f2) Prepare for release v0.7.0-beta.6 (#226) +- [47039c68](https://github.com/kubedb/memcached/commit/47039c68) Create SRV records for governing service (#225) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-beta.6](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-beta.6) + +- [0d32b697](https://github.com/kubedb/mongodb/commit/0d32b697) Prepare for release v0.7.0-beta.6 (#296) +- [1f75de65](https://github.com/kubedb/mongodb/commit/1f75de65) Update MergeServicePort and PatchServicePort apis (#295) +- [984fd7c2](https://github.com/kubedb/mongodb/commit/984fd7c2) Create SRV records for governing service (#294) +- [fc973dd0](https://github.com/kubedb/mongodb/commit/fc973dd0) Make database's phase NotReady as soon as `halted` is removed (#293) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.0-beta.6](https://github.com/kubedb/mysql/releases/tag/v0.7.0-beta.6) + +- [680da825](https://github.com/kubedb/mysql/commit/680da825) Prepare for release v0.7.0-beta.6 (#286) +- [a5066552](https://github.com/kubedb/mysql/commit/a5066552) Create SRV records for governing service (#285) + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-beta.6](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-beta.6) + +- [67ec09b](https://github.com/kubedb/mysql-replication-mode-detector/commit/67ec09b) Prepare for release v0.1.0-beta.6 (#74) +- [724eaa9](https://github.com/kubedb/mysql-replication-mode-detector/commit/724eaa9) Update KubeDB api (#73) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-beta.6](https://github.com/kubedb/operator/releases/tag/v0.14.0-beta.6) + +- [7c0e97a2](https://github.com/kubedb/operator/commit/7c0e97a2) Prepare for release v0.14.0-beta.6 (#334) +- [17b42fd3](https://github.com/kubedb/operator/commit/17b42fd3) Update KubeDB api (#333) +- [6dbde882](https://github.com/kubedb/operator/commit/6dbde882) Update Kubernetes v1.18.9 dependencies (#332) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.0-beta.6](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-beta.6) + +- [397607a3](https://github.com/kubedb/percona-xtradb/commit/397607a3) Prepare for release v0.1.0-beta.6 (#118) +- [a3b7642d](https://github.com/kubedb/percona-xtradb/commit/a3b7642d) 
Create SRV records for governing service (#117) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-beta.6](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-beta.6) + +- [4635eab](https://github.com/kubedb/pg-leader-election/commit/4635eab) Update KubeDB api (#37) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-beta.6](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-beta.6) + +- [e82f1017](https://github.com/kubedb/pgbouncer/commit/e82f1017) Prepare for release v0.1.0-beta.6 (#91) +- [8d2fa953](https://github.com/kubedb/pgbouncer/commit/8d2fa953) Create SRV records for governing service (#90) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-beta.6](https://github.com/kubedb/postgres/releases/tag/v0.14.0-beta.6) + +- [9e1a642e](https://github.com/kubedb/postgres/commit/9e1a642e) Prepare for release v0.14.0-beta.6 (#404) +- [8b869c02](https://github.com/kubedb/postgres/commit/8b869c02) Create SRV records for governing service (#402) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-beta.6](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-beta.6) + +- [d01512de](https://github.com/kubedb/proxysql/commit/d01512de) Prepare for release v0.1.0-beta.6 (#100) +- [6a0d52ff](https://github.com/kubedb/proxysql/commit/6a0d52ff) Create SRV records for governing service (#99) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-beta.6](https://github.com/kubedb/redis/releases/tag/v0.7.0-beta.6) + +- [50f709bf](https://github.com/kubedb/redis/commit/50f709bf) Prepare for release v0.7.0-beta.6 (#245) +- [d4aaaf38](https://github.com/kubedb/redis/commit/d4aaaf38) Create SRV records for governing service (#244) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.1.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.1.md new file mode 100644 index 0000000000..a8b153dfad --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.1.md @@ -0,0 +1,2041 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.27-rc.1 + name: Changelog-v2020.10.27-rc.1 + parent: welcome + weight: 20201027 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.27-rc.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.10.27-rc.1 (2020-10-27) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0-rc.1](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0-rc.1) + +- [095e631c](https://github.com/appscode/kubedb-enterprise/commit/095e631c) Prepare for release v0.1.0-rc.1 (#83) +- [5df3d1e9](https://github.com/appscode/kubedb-enterprise/commit/5df3d1e9) Prepare for release v0.1.0-beta.6 (#82) +- [c7bf3943](https://github.com/appscode/kubedb-enterprise/commit/c7bf3943) Prepare for release v0.1.0-beta.5 (#81) +- [1bf37b01](https://github.com/appscode/kubedb-enterprise/commit/1bf37b01) Update KubeDB api (#80) +- [a99c4e9f](https://github.com/appscode/kubedb-enterprise/commit/a99c4e9f) Update readme +- [2ad24272](https://github.com/appscode/kubedb-enterprise/commit/2ad24272) 
Update repository config (#79) +- [d045bd2d](https://github.com/appscode/kubedb-enterprise/commit/d045bd2d) Prepare for release v0.1.0-beta.4 (#78) +- [5fbe4b48](https://github.com/appscode/kubedb-enterprise/commit/5fbe4b48) Update KubeDB api (#73) +- [00db6203](https://github.com/appscode/kubedb-enterprise/commit/00db6203) Replace getConditions with kmapi.NewCondition (#71) +- [aea1f64a](https://github.com/appscode/kubedb-enterprise/commit/aea1f64a) Update License header (#70) +- [1c15c2b8](https://github.com/appscode/kubedb-enterprise/commit/1c15c2b8) Add RedisOpsRequest Controller (#28) +- [5cedb8fd](https://github.com/appscode/kubedb-enterprise/commit/5cedb8fd) Add MySQL OpsRequest Controller (#14) +- [f0f282c0](https://github.com/appscode/kubedb-enterprise/commit/f0f282c0) Add Reconfigure TLS (#69) +- [cea85618](https://github.com/appscode/kubedb-enterprise/commit/cea85618) Add Restart Operation, Readiness Criteria and Remove Configuration (#59) +- [68cd3dcc](https://github.com/appscode/kubedb-enterprise/commit/68cd3dcc) Update repository config (#66) +- [feef09ab](https://github.com/appscode/kubedb-enterprise/commit/feef09ab) Publish docker images to ghcr.io (#65) +- [199d4bd2](https://github.com/appscode/kubedb-enterprise/commit/199d4bd2) Update repository config (#60) +- [2ae29633](https://github.com/appscode/kubedb-enterprise/commit/2ae29633) Reconfigure MongoDB with Vertical Scaling (#57) +- [9a98fc29](https://github.com/appscode/kubedb-enterprise/commit/9a98fc29) Fix MongoDB Upgrade (#51) +- [9a1a792a](https://github.com/appscode/kubedb-enterprise/commit/9a1a792a) Integrate cert-manager for Elasticsearch (#56) +- [b02cda77](https://github.com/appscode/kubedb-enterprise/commit/b02cda77) Update repository config (#54) +- [947c33e2](https://github.com/appscode/kubedb-enterprise/commit/947c33e2) Update repository config (#52) +- [12edf6f1](https://github.com/appscode/kubedb-enterprise/commit/12edf6f1) Update Kubernetes v1.18.9 dependencies (#49) +- [08f6a4ac](https://github.com/appscode/kubedb-enterprise/commit/08f6a4ac) Add license verifier (#50) +- [30ceb1a5](https://github.com/appscode/kubedb-enterprise/commit/30ceb1a5) Add MongoDBOpsRequest Controller (#20) +- [164ed838](https://github.com/appscode/kubedb-enterprise/commit/164ed838) Use cert-manager v1 api (#47) +- [7612ec19](https://github.com/appscode/kubedb-enterprise/commit/7612ec19) Update apis (#45) +- [00550fe0](https://github.com/appscode/kubedb-enterprise/commit/00550fe0) Dynamically Generate Cluster Domain (#43) +- [e1c3193f](https://github.com/appscode/kubedb-enterprise/commit/e1c3193f) Use updated certstore & blobfs (#42) +- [0d5d05bb](https://github.com/appscode/kubedb-enterprise/commit/0d5d05bb) Add TLS support for redis (#35) +- [bb53fc86](https://github.com/appscode/kubedb-enterprise/commit/bb53fc86) Various fixes (#41) +- [023c5dfd](https://github.com/appscode/kubedb-enterprise/commit/023c5dfd) Add TLS/SSL configuration using Cert Manager for MySQL (#34) +- [e1795b97](https://github.com/appscode/kubedb-enterprise/commit/e1795b97) Update certificate spec for MongoDB and PgBouncer (#40) +- [5e82443d](https://github.com/appscode/kubedb-enterprise/commit/5e82443d) Update new Subject spec for certificates (#38) +- [099abfb8](https://github.com/appscode/kubedb-enterprise/commit/099abfb8) Update to cert-manager v0.16.0 (#37) +- [b14346d3](https://github.com/appscode/kubedb-enterprise/commit/b14346d3) Update to Kubernetes v1.18.3 (#36) +- [c569a8eb](https://github.com/appscode/kubedb-enterprise/commit/c569a8eb) Fix 
cert-manager integration for PgBouncer (#32) +- [28548950](https://github.com/appscode/kubedb-enterprise/commit/28548950) Update to Kubernetes v1.18.3 (#31) +- [1ba9573e](https://github.com/appscode/kubedb-enterprise/commit/1ba9573e) Include Makefile.env (#30) +- [54133b44](https://github.com/appscode/kubedb-enterprise/commit/54133b44) Disable e2e tests (#29) +- [3939ece7](https://github.com/appscode/kubedb-enterprise/commit/3939ece7) Update to Kubernetes v1.18.3 (#27) +- [95c6b535](https://github.com/appscode/kubedb-enterprise/commit/95c6b535) Update .kodiak.toml +- [a88032cd](https://github.com/appscode/kubedb-enterprise/commit/a88032cd) Add script to update release tracker on pr merge (#26) +- [a90f68e7](https://github.com/appscode/kubedb-enterprise/commit/a90f68e7) Rename docker image to kubedb-enterprise +- [ccb9967f](https://github.com/appscode/kubedb-enterprise/commit/ccb9967f) Create .kodiak.toml +- [fb6222ab](https://github.com/appscode/kubedb-enterprise/commit/fb6222ab) Format CI files +- [93756db8](https://github.com/appscode/kubedb-enterprise/commit/93756db8) Fix e2e tests (#25) +- [48ada32b](https://github.com/appscode/kubedb-enterprise/commit/48ada32b) Fix e2e tests using self-hosted GitHub action runners (#23) +- [12b15d00](https://github.com/appscode/kubedb-enterprise/commit/12b15d00) Update to kubedb.dev/apimachinery@v0.14.0-alpha.6 (#24) +- [9f32ab11](https://github.com/appscode/kubedb-enterprise/commit/9f32ab11) Update to Kubernetes v1.18.3 (#21) +- [cd3422a7](https://github.com/appscode/kubedb-enterprise/commit/cd3422a7) Use CRD v1 for Kubernetes >= 1.16 (#19) +- [4cc2f714](https://github.com/appscode/kubedb-enterprise/commit/4cc2f714) Update to Kubernetes v1.18.3 (#18) +- [7fb86dfb](https://github.com/appscode/kubedb-enterprise/commit/7fb86dfb) Update cert-manager util +- [1c8e1e32](https://github.com/appscode/kubedb-enterprise/commit/1c8e1e32) Configure GCR Docker credential helper in release pipeline +- [cd74a0c2](https://github.com/appscode/kubedb-enterprise/commit/cd74a0c2) Vendor kubedb.dev/apimachinery@v0.14.0-beta.0 +- [5522f7ef](https://github.com/appscode/kubedb-enterprise/commit/5522f7ef) Revendor kubedb.dev/apimachinery@master +- [e52cecfb](https://github.com/appscode/kubedb-enterprise/commit/e52cecfb) Update crazy-max/ghaction-docker-buildx flag +- [9ce414ca](https://github.com/appscode/kubedb-enterprise/commit/9ce414ca) Merge pull request #17 from appscode/x7 +- [1938de61](https://github.com/appscode/kubedb-enterprise/commit/1938de61) Remove existing cluster +- [262dae05](https://github.com/appscode/kubedb-enterprise/commit/262dae05) Remove support for k8s 1.11 +- [a00f342c](https://github.com/appscode/kubedb-enterprise/commit/a00f342c) Run e2e tests on GitHub actions +- [b615b1ac](https://github.com/appscode/kubedb-enterprise/commit/b615b1ac) Use GCR_SERVICE_ACCOUNT_JSON_KEY env in CI +- [41668265](https://github.com/appscode/kubedb-enterprise/commit/41668265) Use gcr.io/appscode as docker registry (#16) +- [2e5df236](https://github.com/appscode/kubedb-enterprise/commit/2e5df236) Run on self-hosted hosts +- [3da6adef](https://github.com/appscode/kubedb-enterprise/commit/3da6adef) Store enterprise images in `gcr.io/appscode` (#15) +- [bd4a8eb1](https://github.com/appscode/kubedb-enterprise/commit/bd4a8eb1) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12) +- [c5436b50](https://github.com/appscode/kubedb-enterprise/commit/c5436b50) Don't handle deleted objects. 
(#11) +- [ee5eea66](https://github.com/appscode/kubedb-enterprise/commit/ee5eea66) Fix MongoDB cert-manager integration (#10) +- [105f08b8](https://github.com/appscode/kubedb-enterprise/commit/105f08b8) Add cert-manager integration for MongoDB (#9) +- [b2a3af53](https://github.com/appscode/kubedb-enterprise/commit/b2a3af53) Refactor PgBouncer controller into its pkg (#8) +- [b0e90f75](https://github.com/appscode/kubedb-enterprise/commit/b0e90f75) Use SecretInformer from apimachinery (#5) +- [8dabbb1b](https://github.com/appscode/kubedb-enterprise/commit/8dabbb1b) Use non-deprecated Exporter fields (#4) +- [de22842e](https://github.com/appscode/kubedb-enterprise/commit/de22842e) Cert-Manager support for PgBouncer [Client TLS] (#2) +- [1a6794b7](https://github.com/appscode/kubedb-enterprise/commit/1a6794b7) Fix plain text secret in exporter container of StatefulSet (#5) +- [ab104a9f](https://github.com/appscode/kubedb-enterprise/commit/ab104a9f) Update client-go to kubernetes-1.16.3 (#7) +- [68dbb142](https://github.com/appscode/kubedb-enterprise/commit/68dbb142) Use charts to install operator (#6) +- [30e3e729](https://github.com/appscode/kubedb-enterprise/commit/30e3e729) Add add-license make target +- [6c1a78a0](https://github.com/appscode/kubedb-enterprise/commit/6c1a78a0) Enable e2e tests in GitHub actions (#4) +- [0960f805](https://github.com/appscode/kubedb-enterprise/commit/0960f805) Initial implementation (#2) +- [a8a9b1db](https://github.com/appscode/kubedb-enterprise/commit/a8a9b1db) Update go.yml +- [bc3b2624](https://github.com/appscode/kubedb-enterprise/commit/bc3b2624) Enable GitHub actions +- [2e33db2b](https://github.com/appscode/kubedb-enterprise/commit/2e33db2b) Clone kubedb/postgres repo (#1) +- [45a7cace](https://github.com/appscode/kubedb-enterprise/commit/45a7cace) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-rc.1](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-rc.1) + +- [57468b4d](https://github.com/kubedb/apimachinery/commit/57468b4d) Add docker badge +- [cd358dda](https://github.com/kubedb/apimachinery/commit/cd358dda) Update MergeServicePort and PatchServicePort apis +- [b72968d5](https://github.com/kubedb/apimachinery/commit/b72968d5) Add port constants (#635) +- [6ce39fbe](https://github.com/kubedb/apimachinery/commit/6ce39fbe) Create separate governing service for each database (#634) +- [ecfb5d85](https://github.com/kubedb/apimachinery/commit/ecfb5d85) Update readme +- [61b26532](https://github.com/kubedb/apimachinery/commit/61b26532) Add MySQL constants (#633) +- [42888647](https://github.com/kubedb/apimachinery/commit/42888647) Update Kubernetes v1.18.9 dependencies (#632) +- [a57a7df5](https://github.com/kubedb/apimachinery/commit/a57a7df5) Set prx as ProxySQL short code (#631) +- [282992ea](https://github.com/kubedb/apimachinery/commit/282992ea) Update for release Stash@v2020.10.21 (#630) +- [5f17e1b4](https://github.com/kubedb/apimachinery/commit/5f17e1b4) Set default CA secret name even if the SSL is disabled. 
(#624) +- [c3710b61](https://github.com/kubedb/apimachinery/commit/c3710b61) Add host functions for different components of MongoDB (#625) +- [028d939d](https://github.com/kubedb/apimachinery/commit/028d939d) Refine api (#629) +- [4f4cfb3b](https://github.com/kubedb/apimachinery/commit/4f4cfb3b) Update Kubernetes v1.18.9 dependencies (#626) +- [47eaa486](https://github.com/kubedb/apimachinery/commit/47eaa486) Add MongoDBCustomConfigFile constant +- [5201c39b](https://github.com/kubedb/apimachinery/commit/5201c39b) Update MySQL ops request custom config api (#623) +- [06c2076f](https://github.com/kubedb/apimachinery/commit/06c2076f) Rename redis ConfigMapName to ConfigSecretName +- [0d4040b4](https://github.com/kubedb/apimachinery/commit/0d4040b4) API refinement (#622) +- [2eabe4c2](https://github.com/kubedb/apimachinery/commit/2eabe4c2) Update Kubernetes v1.18.9 dependencies (#621) +- [ac3ff1a6](https://github.com/kubedb/apimachinery/commit/ac3ff1a6) Handle halted condition (#620) +- [8ed26973](https://github.com/kubedb/apimachinery/commit/8ed26973) Update constants for Elasticsearch conditions (#618) +- [97c32f71](https://github.com/kubedb/apimachinery/commit/97c32f71) Use core/v1 ConditionStatus (#619) +- [304c48b8](https://github.com/kubedb/apimachinery/commit/304c48b8) Update Kubernetes v1.18.9 dependencies (#617) +- [a841401e](https://github.com/kubedb/apimachinery/commit/a841401e) Fix StatefulSet controller (#616) +- [517285ea](https://github.com/kubedb/apimachinery/commit/517285ea) Add spec.init.initialized field (#615) +- [057d3aef](https://github.com/kubedb/apimachinery/commit/057d3aef) Implement ReplicasAreReady (#614) +- [32105113](https://github.com/kubedb/apimachinery/commit/32105113) Update appcatalog dependency +- [34bf142e](https://github.com/kubedb/apimachinery/commit/34bf142e) Update swagger.json +- [7d9095af](https://github.com/kubedb/apimachinery/commit/7d9095af) Fix build (#613) +- [ad7988a8](https://github.com/kubedb/apimachinery/commit/ad7988a8) Fix build +- [0cf6469d](https://github.com/kubedb/apimachinery/commit/0cf6469d) Switch kubedb apiVersion to v1alpha2 (#612) +- [fd3131cd](https://github.com/kubedb/apimachinery/commit/fd3131cd) Add Volume Expansion and Configuration for MySQL OpsRequest (#607) +- [fd285012](https://github.com/kubedb/apimachinery/commit/fd285012) Add `alias` in the name of MongoDB server certificates (#611) +- [e562def9](https://github.com/kubedb/apimachinery/commit/e562def9) Remove GetMonitoringVendor method +- [a71f9b7e](https://github.com/kubedb/apimachinery/commit/a71f9b7e) Fix build +- [c97abe0d](https://github.com/kubedb/apimachinery/commit/c97abe0d) Update monitoring api dependency (#610) +- [d6070fc7](https://github.com/kubedb/apimachinery/commit/d6070fc7) Remove deprecated fields for monitoring (#609) +- [8d2f606a](https://github.com/kubedb/apimachinery/commit/8d2f606a) Add framework support for conditions (#608) +- [a74ea7a4](https://github.com/kubedb/apimachinery/commit/a74ea7a4) Bring back mysql ops spec StatefulSetOrdinal field +- [bda2d85a](https://github.com/kubedb/apimachinery/commit/bda2d85a) Add VerticalAutoscaler type (#606) +- [b9b22a35](https://github.com/kubedb/apimachinery/commit/b9b22a35) Add MySQL constant (#604) +- [2b887957](https://github.com/kubedb/apimachinery/commit/2b887957) Fix typo +- [c31cd2fd](https://github.com/kubedb/apimachinery/commit/c31cd2fd) Update ops request enumerations +- [41083a9d](https://github.com/kubedb/apimachinery/commit/41083a9d) Revise ops request apis (#603) +- 
[acfb1564](https://github.com/kubedb/apimachinery/commit/acfb1564) Revise api conditions (#602) +- [5c12de3a](https://github.com/kubedb/apimachinery/commit/5c12de3a) Update DB condition types and phases (#598) +- [f27cb720](https://github.com/kubedb/apimachinery/commit/f27cb720) Write data restore completion event using dynamic client (#601) +- [60ada14c](https://github.com/kubedb/apimachinery/commit/60ada14c) Update Kubernetes v1.18.9 dependencies (#600) +- [5779a5d7](https://github.com/kubedb/apimachinery/commit/5779a5d7) Update for release Stash@v2020.09.29 (#599) +- [86121dad](https://github.com/kubedb/apimachinery/commit/86121dad) Update Kubernetes v1.18.9 dependencies (#597) +- [da9fbe59](https://github.com/kubedb/apimachinery/commit/da9fbe59) Add DB conditions +- [7399d13f](https://github.com/kubedb/apimachinery/commit/7399d13f) Rename ES root-cert to ca-cert (#594) +- [1cd75609](https://github.com/kubedb/apimachinery/commit/1cd75609) Remove spec.paused & deprecated fields DB crds (#596) +- [9c85f9f1](https://github.com/kubedb/apimachinery/commit/9c85f9f1) Use `status.conditions` to handle database initialization (#593) +- [87e8e58b](https://github.com/kubedb/apimachinery/commit/87e8e58b) Update Kubernetes v1.18.9 dependencies (#595) +- [32206db2](https://github.com/kubedb/apimachinery/commit/32206db2) Add helper methods for MySQL (#592) +- [10aca81a](https://github.com/kubedb/apimachinery/commit/10aca81a) Rename client node to ingest node (#583) +- [d8bbd5ec](https://github.com/kubedb/apimachinery/commit/d8bbd5ec) Update repository config (#591) +- [4d51a066](https://github.com/kubedb/apimachinery/commit/4d51a066) Update repository config (#590) +- [5905c2cb](https://github.com/kubedb/apimachinery/commit/5905c2cb) Update Kubernetes v1.18.9 dependencies (#589) +- [3dc3d970](https://github.com/kubedb/apimachinery/commit/3dc3d970) Update Kubernetes v1.18.3 dependencies (#588) +- [53b42277](https://github.com/kubedb/apimachinery/commit/53b42277) Add event recorder in controller struct (#587) +- [ec58309a](https://github.com/kubedb/apimachinery/commit/ec58309a) Update Kubernetes v1.18.3 dependencies (#586) +- [38050bae](https://github.com/kubedb/apimachinery/commit/38050bae) Initialize db from stash restoresession/restoreBatch (#567) +- [ec3efa91](https://github.com/kubedb/apimachinery/commit/ec3efa91) Update for release Stash@v2020.09.16 (#585) +- [5ddfd53a](https://github.com/kubedb/apimachinery/commit/5ddfd53a) Update Kubernetes v1.18.3 dependencies (#584) +- [24398515](https://github.com/kubedb/apimachinery/commit/24398515) Add some `MongoDB` and `MongoDBOpsRequest` Constants (#582) +- [584a4bf6](https://github.com/kubedb/apimachinery/commit/584a4bf6) Add primary and secondary role constant for MySQL (#581) +- [82299808](https://github.com/kubedb/apimachinery/commit/82299808) Update Kubernetes v1.18.3 dependencies (#580) +- [ecd1d17f](https://github.com/kubedb/apimachinery/commit/ecd1d17f) Add Functions to get Default Probes (#579) +- [76ac9bc0](https://github.com/kubedb/apimachinery/commit/76ac9bc0) Remove CertManagerClient client +- [b99048f4](https://github.com/kubedb/apimachinery/commit/b99048f4) Remove unused constants for ProxySQL +- [152cef57](https://github.com/kubedb/apimachinery/commit/152cef57) Update Kubernetes v1.18.3 dependencies (#578) +- [24c5e829](https://github.com/kubedb/apimachinery/commit/24c5e829) Update redis constants (#575) +- [7075b38d](https://github.com/kubedb/apimachinery/commit/7075b38d) Remove spec.updateStrategy field (#577) +- 
[dfd11955](https://github.com/kubedb/apimachinery/commit/dfd11955) Remove description from CRD yamls (#576) +- [2d1b5878](https://github.com/kubedb/apimachinery/commit/2d1b5878) Add autoscaling crds (#554) +- [68ed8127](https://github.com/kubedb/apimachinery/commit/68ed8127) Fix build +- [63d18f0d](https://github.com/kubedb/apimachinery/commit/63d18f0d) Rename PgBouncer archiver to client +- [a219c251](https://github.com/kubedb/apimachinery/commit/a219c251) Handle shard scenario for MongoDB cert names (#574) +- [d2c80e55](https://github.com/kubedb/apimachinery/commit/d2c80e55) Add MongoDB Custom Config Spec (#562) +- [1e69fb02](https://github.com/kubedb/apimachinery/commit/1e69fb02) Support multiple certificates per DB (#555) +- [9bbed3d1](https://github.com/kubedb/apimachinery/commit/9bbed3d1) Update Kubernetes v1.18.3 dependencies (#573) +- [7df78c7a](https://github.com/kubedb/apimachinery/commit/7df78c7a) Update CRD yamls +- [406d895d](https://github.com/kubedb/apimachinery/commit/406d895d) Implement ServiceMonitorAdditionalLabels method (#572) +- [cfe4374a](https://github.com/kubedb/apimachinery/commit/cfe4374a) Make ServiceMonitor name same as stats service (#563) +- [d2ed6b4a](https://github.com/kubedb/apimachinery/commit/d2ed6b4a) Update for release Stash@v2020.08.27 (#571) +- [749b9084](https://github.com/kubedb/apimachinery/commit/749b9084) Update for release Stash@v2020.08.27-rc.0 (#570) +- [5d8bf42c](https://github.com/kubedb/apimachinery/commit/5d8bf42c) Update for release Stash@v2020.08.26-rc.1 (#569) +- [6edc4782](https://github.com/kubedb/apimachinery/commit/6edc4782) Update for release Stash@v2020.08.26-rc.0 (#568) +- [c451ff3a](https://github.com/kubedb/apimachinery/commit/c451ff3a) Update Kubernetes v1.18.3 dependencies (#565) +- [fdc6e2d6](https://github.com/kubedb/apimachinery/commit/fdc6e2d6) Update Kubernetes v1.18.3 dependencies (#564) +- [2f509c26](https://github.com/kubedb/apimachinery/commit/2f509c26) Update Kubernetes v1.18.3 dependencies (#561) +- [da655afe](https://github.com/kubedb/apimachinery/commit/da655afe) Update Kubernetes v1.18.3 dependencies (#560) +- [9c2c06a9](https://github.com/kubedb/apimachinery/commit/9c2c06a9) Fix MySQL enterprise condition's constant (#559) +- [81ed2724](https://github.com/kubedb/apimachinery/commit/81ed2724) Update Kubernetes v1.18.3 dependencies (#558) +- [738b7ade](https://github.com/kubedb/apimachinery/commit/738b7ade) Update Kubernetes v1.18.3 dependencies (#557) +- [93f0af4b](https://github.com/kubedb/apimachinery/commit/93f0af4b) Add MySQL Constants (#553) +- [6049554d](https://github.com/kubedb/apimachinery/commit/6049554d) Add {Horizontal,Vertical}ScalingSpec for Redis (#534) +- [28552272](https://github.com/kubedb/apimachinery/commit/28552272) Enable TLS for Redis (#546) +- [68e00844](https://github.com/kubedb/apimachinery/commit/68e00844) Add Spec for MongoDB Volume Expansion (#548) +- [759a800a](https://github.com/kubedb/apimachinery/commit/759a800a) Add Subject spec for Certificate (#552) +- [b1552628](https://github.com/kubedb/apimachinery/commit/b1552628) Add email SANs for certificate (#551) +- [fdfad57e](https://github.com/kubedb/apimachinery/commit/fdfad57e) Update to cert-manager@v0.16.0 (#550) +- [3b5e9ece](https://github.com/kubedb/apimachinery/commit/3b5e9ece) Update to Kubernetes v1.18.3 (#549) +- [0c5a1e9b](https://github.com/kubedb/apimachinery/commit/0c5a1e9b) Make ElasticsearchVersion spec.tools optional (#526) +- [01a0b4b3](https://github.com/kubedb/apimachinery/commit/01a0b4b3) Add Conditions 
Constant for MongoDBOpsRequest (#535) +- [34a9ed61](https://github.com/kubedb/apimachinery/commit/34a9ed61) Update to Kubernetes v1.18.3 (#547) +- [6392f19e](https://github.com/kubedb/apimachinery/commit/6392f19e) Add Storage Engine Support for Percona Server MongoDB (#538) +- [02d205bc](https://github.com/kubedb/apimachinery/commit/02d205bc) Remove extra - from prefix/suffix (#543) +- [06158f51](https://github.com/kubedb/apimachinery/commit/06158f51) Update to Kubernetes v1.18.3 (#542) +- [157a8724](https://github.com/kubedb/apimachinery/commit/157a8724) Update for release Stash@v2020.07.09-beta.0 (#541) +- [0e86bdbd](https://github.com/kubedb/apimachinery/commit/0e86bdbd) Update for release Stash@v2020.07.08-beta.0 (#540) +- [f4a22d0c](https://github.com/kubedb/apimachinery/commit/f4a22d0c) Update License notice (#539) +- [3c598500](https://github.com/kubedb/apimachinery/commit/3c598500) Use Allowlist and Denylist in MySQLVersion (#537) +- [3c58c062](https://github.com/kubedb/apimachinery/commit/3c58c062) Update to Kubernetes v1.18.3 (#536) +- [e1f3d603](https://github.com/kubedb/apimachinery/commit/e1f3d603) Update update-release-tracker.sh +- [0cf4a01f](https://github.com/kubedb/apimachinery/commit/0cf4a01f) Update update-release-tracker.sh +- [bfbd1f8d](https://github.com/kubedb/apimachinery/commit/bfbd1f8d) Add script to update release tracker on pr merge (#533) +- [b817d87c](https://github.com/kubedb/apimachinery/commit/b817d87c) Update .kodiak.toml +- [772e8d2f](https://github.com/kubedb/apimachinery/commit/772e8d2f) Add Ops Request const (#529) +- [453d67ca](https://github.com/kubedb/apimachinery/commit/453d67ca) Add constants for mutator & validator group names (#532) +- [69f997b5](https://github.com/kubedb/apimachinery/commit/69f997b5) Unwrap top level api folder (#531) +- [a8ccec51](https://github.com/kubedb/apimachinery/commit/a8ccec51) Make RedisOpsRequest Namespaced (#530) +- [8a076bfb](https://github.com/kubedb/apimachinery/commit/8a076bfb) Update .kodiak.toml +- [6a8e51b9](https://github.com/kubedb/apimachinery/commit/6a8e51b9) Update to Kubernetes v1.18.3 (#527) +- [2ef41962](https://github.com/kubedb/apimachinery/commit/2ef41962) Create .kodiak.toml +- [8e596d4e](https://github.com/kubedb/apimachinery/commit/8e596d4e) Update to Kubernetes v1.18.3 +- [31f72200](https://github.com/kubedb/apimachinery/commit/31f72200) Update comments +- [27bc9265](https://github.com/kubedb/apimachinery/commit/27bc9265) Use CRD v1 for Kubernetes >= 1.16 (#525) +- [d1be7d1d](https://github.com/kubedb/apimachinery/commit/d1be7d1d) Remove defaults from CRD v1beta1 +- [5c73d507](https://github.com/kubedb/apimachinery/commit/5c73d507) Use crd.Interface in Controller (#524) +- [27763544](https://github.com/kubedb/apimachinery/commit/27763544) Generate both v1beta1 and v1 CRD YAML (#523) +- [5a0f0a93](https://github.com/kubedb/apimachinery/commit/5a0f0a93) Update to Kubernetes v1.18.3 (#520) +- [25008c1a](https://github.com/kubedb/apimachinery/commit/25008c1a) Change MySQL `[]ContainerResources` to `core.ResourceRequirements` (#522) +- [abc99620](https://github.com/kubedb/apimachinery/commit/abc99620) Merge pull request #521 from kubedb/mongo-vertical +- [f38a109c](https://github.com/kubedb/apimachinery/commit/f38a109c) Change `[]ContainerResources` to `core.ResourceRequirements` +- [e3058f85](https://github.com/kubedb/apimachinery/commit/e3058f85) Update `modification request` to `ops request` (#519) +- [bd3c7d01](https://github.com/kubedb/apimachinery/commit/bd3c7d01) Fix linter warnings +- 
[d70848d7](https://github.com/kubedb/apimachinery/commit/d70848d7) Rename api group to ops.kubedb.com (#518) +- [745f2438](https://github.com/kubedb/apimachinery/commit/745f2438) Merge pull request #511 from pohly/memcached-pmem +- [75c949aa](https://github.com/kubedb/apimachinery/commit/75c949aa) memcached: add dataVolume +- [3e5cdc03](https://github.com/kubedb/apimachinery/commit/3e5cdc03) Merge pull request #517 from kubedb/mg-scaling +- [0c9e2b4f](https://github.com/kubedb/apimachinery/commit/0c9e2b4f) Flatten api structure +- [9c98fbc1](https://github.com/kubedb/apimachinery/commit/9c98fbc1) Add MongoDBModificationRequest Scaling Spec +- [22b199b6](https://github.com/kubedb/apimachinery/commit/22b199b6) Update comment for UpgradeSpec +- [c66fda4b](https://github.com/kubedb/apimachinery/commit/c66fda4b) Review DBA crds (#516) +- [bc1e13f7](https://github.com/kubedb/apimachinery/commit/bc1e13f7) Merge pull request #509 from kubedb/mysql-upgrade +- [2c9ae147](https://github.com/kubedb/apimachinery/commit/2c9ae147) Fix type names and definition +- [4c7c5074](https://github.com/kubedb/apimachinery/commit/4c7c5074) Update MySQLModificationRequest CRD +- [4096642c](https://github.com/kubedb/apimachinery/commit/4096642c) Merge pull request #501 from kubedb/redis-modification +- [3d683e58](https://github.com/kubedb/apimachinery/commit/3d683e58) Use standard condition from kmodules +- [7be4a3dd](https://github.com/kubedb/apimachinery/commit/7be4a3dd) Update RedisModificationRequest CRD +- [a594bdb9](https://github.com/kubedb/apimachinery/commit/a594bdb9) Merge pull request #503 from kubedb/elastic-upgrade +- [ee0eada4](https://github.com/kubedb/apimachinery/commit/ee0eada4) Use standard conditions from kmodules +- [22cb24f6](https://github.com/kubedb/apimachinery/commit/22cb24f6) Update dba api for elasticsearchModificationRequest +- [a2768752](https://github.com/kubedb/apimachinery/commit/a2768752) Merge pull request #499 from kubedb/mongodb-modification +- [be5dde87](https://github.com/kubedb/apimachinery/commit/be5dde87) Use standard conditions from kmodules +- [9bf2c80e](https://github.com/kubedb/apimachinery/commit/9bf2c80e) Add MongoDBModificationRequest Spec +- [9ee80efd](https://github.com/kubedb/apimachinery/commit/9ee80efd) Fix Update***Status helpers (#515) +- [2c75e77d](https://github.com/kubedb/apimachinery/commit/2c75e77d) Merge pull request #512 from kubedb/prestop-mongos +- [e13d73c5](https://github.com/kubedb/apimachinery/commit/e13d73c5) Use recommended kubernetes app labels (#514) +- [50856267](https://github.com/kubedb/apimachinery/commit/50856267) Add Enum markers to api types +- [95e00c8e](https://github.com/kubedb/apimachinery/commit/95e00c8e) Add Default PreStop Hook for Mongos +- [d99a1001](https://github.com/kubedb/apimachinery/commit/d99a1001) Trigger the workflow on push or pull request +- [b8047fc0](https://github.com/kubedb/apimachinery/commit/b8047fc0) Regenerate api types +- [83c8e40a](https://github.com/kubedb/apimachinery/commit/83c8e40a) Update CHANGELOG.md +- [ddb1f266](https://github.com/kubedb/apimachinery/commit/ddb1f266) Add requireSSL field to MySQL crd (#506) +- [c0c293bd](https://github.com/kubedb/apimachinery/commit/c0c293bd) Rename Elasticsearch NODE_ROLE constant +- [9bfe7f2c](https://github.com/kubedb/apimachinery/commit/9bfe7f2c) Rename Mongo SHARD_INDEX constant +- [e6f72c37](https://github.com/kubedb/apimachinery/commit/e6f72c37) Add default affinity rules for Redis (#508) +- [ab738acf](https://github.com/kubedb/apimachinery/commit/ab738acf) Set 
default affinity if not provided for Elasticsearch (#507) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0-rc.1](https://github.com/kubedb/cli/releases/tag/v0.14.0-rc.1) + +- [77d036ff](https://github.com/kubedb/cli/commit/77d036ff) Prepare for release v0.14.0-rc.1 (#532) +- [39c4d5a0](https://github.com/kubedb/cli/commit/39c4d5a0) Prepare for release v0.14.0-beta.6 (#531) +- [a7a8dfc0](https://github.com/kubedb/cli/commit/a7a8dfc0) Update KubeDB api (#530) +- [a4af36b9](https://github.com/kubedb/cli/commit/a4af36b9) Prepare for release v0.14.0-beta.5 (#529) +- [2f1cb09d](https://github.com/kubedb/cli/commit/2f1cb09d) Update KubeDB api (#528) +- [87eb5ad3](https://github.com/kubedb/cli/commit/87eb5ad3) Update readme +- [4dfe0da7](https://github.com/kubedb/cli/commit/4dfe0da7) Update KubeDB api (#527) +- [5448e521](https://github.com/kubedb/cli/commit/5448e521) Update repository config (#526) +- [e1e9dbe2](https://github.com/kubedb/cli/commit/e1e9dbe2) Update KubeDB api (#525) +- [e49d303a](https://github.com/kubedb/cli/commit/e49d303a) Update Kubernetes v1.18.9 dependencies (#524) +- [9f54a783](https://github.com/kubedb/cli/commit/9f54a783) Update KubeDB api (#523) +- [ad764956](https://github.com/kubedb/cli/commit/ad764956) Update for release Stash@v2020.10.21 (#522) +- [46ae22cd](https://github.com/kubedb/cli/commit/46ae22cd) Update KubeDB api (#521) +- [2914b270](https://github.com/kubedb/cli/commit/2914b270) Update KubeDB api (#520) +- [87ce0033](https://github.com/kubedb/cli/commit/87ce0033) Update Kubernetes v1.18.9 dependencies (#519) +- [ab524afe](https://github.com/kubedb/cli/commit/ab524afe) Update KubeDB api (#518) +- [899e2b21](https://github.com/kubedb/cli/commit/899e2b21) Update KubeDB api (#517) +- [37a5da4b](https://github.com/kubedb/cli/commit/37a5da4b) Update KubeDB api (#516) +- [5c87d6e8](https://github.com/kubedb/cli/commit/5c87d6e8) Update KubeDB api (#515) +- [dfc9e245](https://github.com/kubedb/cli/commit/dfc9e245) Update Kubernetes v1.18.9 dependencies (#514) +- [c0650bb7](https://github.com/kubedb/cli/commit/c0650bb7) Update KubeDB api (#513) +- [278dccbe](https://github.com/kubedb/cli/commit/278dccbe) Update KubeDB api (#512) +- [221be742](https://github.com/kubedb/cli/commit/221be742) Update KubeDB api (#511) +- [2a301cd0](https://github.com/kubedb/cli/commit/2a301cd0) Don't update krew manifest for pre-releases +- [14d23b77](https://github.com/kubedb/cli/commit/14d23b77) Publish to krew index (#510) +- [d486fa21](https://github.com/kubedb/cli/commit/d486fa21) Update KubeDB api (#509) +- [0432d8a8](https://github.com/kubedb/cli/commit/0432d8a8) Update Kubernetes v1.18.9 dependencies (#508) +- [4e46c763](https://github.com/kubedb/cli/commit/4e46c763) Add completion command (#507) +- [a9262fd5](https://github.com/kubedb/cli/commit/a9262fd5) Update KubeDB api (#506) +- [0af538be](https://github.com/kubedb/cli/commit/0af538be) Update KubeDB api (#505) +- [08c1a5b2](https://github.com/kubedb/cli/commit/08c1a5b2) Update KubeDB api (#504) +- [9f2a907d](https://github.com/kubedb/cli/commit/9f2a907d) Update KubeDB api (#503) +- [463f94e9](https://github.com/kubedb/cli/commit/463f94e9) Update Kubernetes v1.18.9 dependencies (#502) +- [dee2d9d9](https://github.com/kubedb/cli/commit/dee2d9d9) Update KubeDB api (#500) +- [05334cb5](https://github.com/kubedb/cli/commit/05334cb5) Update KubeDB api (#499) +- [db82a5f6](https://github.com/kubedb/cli/commit/db82a5f6) Update KubeDB api (#498) +- [cbfcaaac](https://github.com/kubedb/cli/commit/cbfcaaac) Update 
for release Stash@v2020.09.29 (#497) +- [f0c19a83](https://github.com/kubedb/cli/commit/f0c19a83) Update Kubernetes v1.18.9 dependencies (#496) +- [63902e98](https://github.com/kubedb/cli/commit/63902e98) Update KubeDB api (#495) +- [270e305f](https://github.com/kubedb/cli/commit/270e305f) Update Kubernetes v1.18.9 dependencies (#494) +- [db8e67a5](https://github.com/kubedb/cli/commit/db8e67a5) Update repository config (#493) +- [9fc2c143](https://github.com/kubedb/cli/commit/9fc2c143) Update Kubernetes v1.18.9 dependencies (#492) +- [97fcc3cd](https://github.com/kubedb/cli/commit/97fcc3cd) Update Kubernetes v1.18.3 dependencies (#491) +- [58fd85bb](https://github.com/kubedb/cli/commit/58fd85bb) Prepare for release v0.14.0-beta.3 (#490) +- [a6225a64](https://github.com/kubedb/cli/commit/a6225a64) Update Kubernetes v1.18.3 dependencies (#489) +- [4c3afba3](https://github.com/kubedb/cli/commit/4c3afba3) Update for release Stash@v2020.09.16 (#488) +- [df224a1a](https://github.com/kubedb/cli/commit/df224a1a) Update Kubernetes v1.18.3 dependencies (#487) +- [dcb30ecf](https://github.com/kubedb/cli/commit/dcb30ecf) Update Kubernetes v1.18.3 dependencies (#486) +- [faab5e0e](https://github.com/kubedb/cli/commit/faab5e0e) Use AppsCode Community License (#485) +- [58b39094](https://github.com/kubedb/cli/commit/58b39094) Prepare for release v0.14.0-beta.2 (#484) +- [0f8819ce](https://github.com/kubedb/cli/commit/0f8819ce) Update Kubernetes v1.18.3 dependencies (#483) +- [86a92381](https://github.com/kubedb/cli/commit/86a92381) Update Kubernetes v1.18.3 dependencies (#482) +- [05e5cef2](https://github.com/kubedb/cli/commit/05e5cef2) Update for release Stash@v2020.08.27 (#481) +- [b1aa1dc2](https://github.com/kubedb/cli/commit/b1aa1dc2) Update for release Stash@v2020.08.27-rc.0 (#480) +- [36716efc](https://github.com/kubedb/cli/commit/36716efc) Update for release Stash@v2020.08.26-rc.1 (#479) +- [a30f21e0](https://github.com/kubedb/cli/commit/a30f21e0) Update for release Stash@v2020.08.26-rc.0 (#478) +- [836d6227](https://github.com/kubedb/cli/commit/836d6227) Update Kubernetes v1.18.3 dependencies (#477) +- [8a81d715](https://github.com/kubedb/cli/commit/8a81d715) Update Kubernetes v1.18.3 dependencies (#476) +- [7ce2101d](https://github.com/kubedb/cli/commit/7ce2101d) Update Kubernetes v1.18.3 dependencies (#475) +- [3c617e66](https://github.com/kubedb/cli/commit/3c617e66) Update Kubernetes v1.18.3 dependencies (#474) +- [f70b2ba4](https://github.com/kubedb/cli/commit/f70b2ba4) Update Kubernetes v1.18.3 dependencies (#473) +- [ba77ba2b](https://github.com/kubedb/cli/commit/ba77ba2b) Update Kubernetes v1.18.3 dependencies (#472) +- [b296035f](https://github.com/kubedb/cli/commit/b296035f) Use actions/upload-artifact@v2 +- [7bb95619](https://github.com/kubedb/cli/commit/7bb95619) Update to Kubernetes v1.18.3 (#471) +- [6e5789a2](https://github.com/kubedb/cli/commit/6e5789a2) Update to Kubernetes v1.18.3 (#470) +- [9d550ebc](https://github.com/kubedb/cli/commit/9d550ebc) Update to Kubernetes v1.18.3 (#469) +- [dd09f8e9](https://github.com/kubedb/cli/commit/dd09f8e9) Fix binary build path +- [80e77588](https://github.com/kubedb/cli/commit/80e77588) Prepare for release v0.14.0-beta.1 (#468) +- [6925c726](https://github.com/kubedb/cli/commit/6925c726) Update for release Stash@v2020.07.09-beta.0 (#466) +- [6036e14f](https://github.com/kubedb/cli/commit/6036e14f) Update for release Stash@v2020.07.08-beta.0 (#465) +- [03de8e3f](https://github.com/kubedb/cli/commit/03de8e3f) Disable autogen tags in docs 
(#464)
+- [3bcfa7ef](https://github.com/kubedb/cli/commit/3bcfa7ef) Update License (#463)
+- [0aa91f93](https://github.com/kubedb/cli/commit/0aa91f93) Update to Kubernetes v1.18.3 (#462)
+- [023555ef](https://github.com/kubedb/cli/commit/023555ef) Add workflow to update docs (#461)
+- [abd9d054](https://github.com/kubedb/cli/commit/abd9d054) Update update-release-tracker.sh
+- [0a9527d4](https://github.com/kubedb/cli/commit/0a9527d4) Update update-release-tracker.sh
+- [69c644a2](https://github.com/kubedb/cli/commit/69c644a2) Add script to update release tracker on pr merge (#460)
+- [595679ba](https://github.com/kubedb/cli/commit/595679ba) Make release non-draft
+- [880d3492](https://github.com/kubedb/cli/commit/880d3492) Update .kodiak.toml
+- [a7607798](https://github.com/kubedb/cli/commit/a7607798) Update to Kubernetes v1.18.3 (#459)
+- [3197b4b7](https://github.com/kubedb/cli/commit/3197b4b7) Update to Kubernetes v1.18.3
+- [8ed52c84](https://github.com/kubedb/cli/commit/8ed52c84) Create .kodiak.toml
+- [cfda68d4](https://github.com/kubedb/cli/commit/cfda68d4) Update to Kubernetes v1.18.3 (#458)
+- [7395c039](https://github.com/kubedb/cli/commit/7395c039) Update dependencies
+- [542e6709](https://github.com/kubedb/cli/commit/542e6709) Update crazy-max/ghaction-docker-buildx flag
+- [972d8119](https://github.com/kubedb/cli/commit/972d8119) Revendor kubedb.dev/apimachinery@master
+- [540e5a7d](https://github.com/kubedb/cli/commit/540e5a7d) Cleanup cli commands (#454)
+- [98649b0a](https://github.com/kubedb/cli/commit/98649b0a) Trigger the workflow on push or pull request
+- [a0dbdab5](https://github.com/kubedb/cli/commit/a0dbdab5) Update readme (#457)
+- [a52927ed](https://github.com/kubedb/cli/commit/a52927ed) Create draft GitHub release when tagged (#456)
+- [42838aec](https://github.com/kubedb/cli/commit/42838aec) Convert kubedb cli into a `kubectl dba` plugin (#455)
+- [aec37df2](https://github.com/kubedb/cli/commit/aec37df2) Revendor dependencies
+- [2c120d1a](https://github.com/kubedb/cli/commit/2c120d1a) Update client-go to kubernetes-1.16.3 (#453)
+- [ce221024](https://github.com/kubedb/cli/commit/ce221024) Add add-license make target
+- [84a6a1e8](https://github.com/kubedb/cli/commit/84a6a1e8) Add license header to files (#452)
+- [1ced65ea](https://github.com/kubedb/cli/commit/1ced65ea) Split imports into 3 parts (#451)
+- [8e533f69](https://github.com/kubedb/cli/commit/8e533f69) Add release workflow script (#450)
+- [0735ce0c](https://github.com/kubedb/cli/commit/0735ce0c) Enable GitHub actions
+- [8522ec74](https://github.com/kubedb/cli/commit/8522ec74) Update changelog
+
+
+
+## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch)
+
+### [v0.14.0-rc.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-rc.1)
+
+- [709ba7d2](https://github.com/kubedb/elasticsearch/commit/709ba7d2) Prepare for release v0.14.0-rc.1 (#395)
+- [58dac2ba](https://github.com/kubedb/elasticsearch/commit/58dac2ba) Prepare for release v0.14.0-beta.6 (#394)
+- [5d4ad40c](https://github.com/kubedb/elasticsearch/commit/5d4ad40c) Update MergeServicePort and PatchServicePort apis (#393)
+- [992edb90](https://github.com/kubedb/elasticsearch/commit/992edb90) Always set protocol for service ports
+- [0f408cbf](https://github.com/kubedb/elasticsearch/commit/0f408cbf) Create SRV records for governing service (#392)
+- [97f34417](https://github.com/kubedb/elasticsearch/commit/97f34417) Prepare for release v0.14.0-beta.5 (#391)
+- 
[a3e9a733](https://github.com/kubedb/elasticsearch/commit/a3e9a733) Create separate governing service for each database (#390) +- [ce8f80b5](https://github.com/kubedb/elasticsearch/commit/ce8f80b5) Update KubeDB api (#389) +- [0fe8d617](https://github.com/kubedb/elasticsearch/commit/0fe8d617) Update readme +- [657797fe](https://github.com/kubedb/elasticsearch/commit/657797fe) Update repository config (#388) +- [d6f5ae41](https://github.com/kubedb/elasticsearch/commit/d6f5ae41) Prepare for release v0.14.0-beta.4 (#387) +- [149314b5](https://github.com/kubedb/elasticsearch/commit/149314b5) Update KubeDB api (#386) +- [1de4b578](https://github.com/kubedb/elasticsearch/commit/1de4b578) Make database's phase NotReady as soon as the halted is removed (#375) +- [57704afa](https://github.com/kubedb/elasticsearch/commit/57704afa) Update Kubernetes v1.18.9 dependencies (#385) +- [16d37657](https://github.com/kubedb/elasticsearch/commit/16d37657) Update Kubernetes v1.18.9 dependencies (#383) +- [828f8ab8](https://github.com/kubedb/elasticsearch/commit/828f8ab8) Update KubeDB api (#382) +- [d70e68a8](https://github.com/kubedb/elasticsearch/commit/d70e68a8) Update for release Stash@v2020.10.21 (#381) +- [05a687bc](https://github.com/kubedb/elasticsearch/commit/05a687bc) Fix init validator (#379) +- [24d7f2c8](https://github.com/kubedb/elasticsearch/commit/24d7f2c8) Update KubeDB api (#380) +- [8c981e08](https://github.com/kubedb/elasticsearch/commit/8c981e08) Update KubeDB api (#378) +- [cf833e49](https://github.com/kubedb/elasticsearch/commit/cf833e49) Update Kubernetes v1.18.9 dependencies (#377) +- [fb335a43](https://github.com/kubedb/elasticsearch/commit/fb335a43) Update KubeDB api (#376) +- [e652a7ec](https://github.com/kubedb/elasticsearch/commit/e652a7ec) Update KubeDB api (#374) +- [c22b7f31](https://github.com/kubedb/elasticsearch/commit/c22b7f31) Update KubeDB api (#373) +- [a7d8e3b0](https://github.com/kubedb/elasticsearch/commit/a7d8e3b0) Integrate cert-manager and status.conditions (#357) +- [370f0df1](https://github.com/kubedb/elasticsearch/commit/370f0df1) Update repository config (#372) +- [78bdc59e](https://github.com/kubedb/elasticsearch/commit/78bdc59e) Update repository config (#371) +- [b8003d4b](https://github.com/kubedb/elasticsearch/commit/b8003d4b) Update repository config (#370) +- [d4ff1ac2](https://github.com/kubedb/elasticsearch/commit/d4ff1ac2) Publish docker images to ghcr.io (#369) +- [5f5ef393](https://github.com/kubedb/elasticsearch/commit/5f5ef393) Update repository config (#363) +- [e537ae40](https://github.com/kubedb/elasticsearch/commit/e537ae40) Update Kubernetes v1.18.9 dependencies (#362) +- [a5a5b084](https://github.com/kubedb/elasticsearch/commit/a5a5b084) Update for release Stash@v2020.09.29 (#361) +- [11eebe39](https://github.com/kubedb/elasticsearch/commit/11eebe39) Update Kubernetes v1.18.9 dependencies (#360) +- [a5b47b08](https://github.com/kubedb/elasticsearch/commit/a5b47b08) Update Kubernetes v1.18.9 dependencies (#358) +- [91f1dc00](https://github.com/kubedb/elasticsearch/commit/91f1dc00) Rename client node to ingest node (#346) +- [318a8b19](https://github.com/kubedb/elasticsearch/commit/318a8b19) Update repository config (#356) +- [a8773921](https://github.com/kubedb/elasticsearch/commit/a8773921) Update repository config (#355) +- [55bef891](https://github.com/kubedb/elasticsearch/commit/55bef891) Update Kubernetes v1.18.9 dependencies (#354) +- [1a3e421a](https://github.com/kubedb/elasticsearch/commit/1a3e421a) Use common event recorder (#353) +- 
[4df32f60](https://github.com/kubedb/elasticsearch/commit/4df32f60) Update Kubernetes v1.18.3 dependencies (#352) +- [9fb43795](https://github.com/kubedb/elasticsearch/commit/9fb43795) Prepare for release v0.14.0-beta.3 (#351) +- [a279a60c](https://github.com/kubedb/elasticsearch/commit/a279a60c) Use new `spec.init` section (#350) +- [a1e2e2f6](https://github.com/kubedb/elasticsearch/commit/a1e2e2f6) Update Kubernetes v1.18.3 dependencies (#349) +- [0aaf4530](https://github.com/kubedb/elasticsearch/commit/0aaf4530) Add license verifier (#348) +- [bbacb00b](https://github.com/kubedb/elasticsearch/commit/bbacb00b) Update for release Stash@v2020.09.16 (#347) +- [98c1ad83](https://github.com/kubedb/elasticsearch/commit/98c1ad83) Update Kubernetes v1.18.3 dependencies (#345) +- [1ebf168d](https://github.com/kubedb/elasticsearch/commit/1ebf168d) Use background propagation policy +- [9d7997df](https://github.com/kubedb/elasticsearch/commit/9d7997df) Update Kubernetes v1.18.3 dependencies (#343) +- [42786958](https://github.com/kubedb/elasticsearch/commit/42786958) Use AppsCode Community License (#342) +- [a96b0bd3](https://github.com/kubedb/elasticsearch/commit/a96b0bd3) Fix unit tests (#341) +- [c9905966](https://github.com/kubedb/elasticsearch/commit/c9905966) Update Kubernetes v1.18.3 dependencies (#340) +- [3b83c316](https://github.com/kubedb/elasticsearch/commit/3b83c316) Prepare for release v0.14.0-beta.2 (#339) +- [662823ae](https://github.com/kubedb/elasticsearch/commit/662823ae) Update release.yml +- [ada6c2d3](https://github.com/kubedb/elasticsearch/commit/ada6c2d3) Add support for Open-Distro-for-Elasticsearch (#303) +- [a9c7ba33](https://github.com/kubedb/elasticsearch/commit/a9c7ba33) Update Kubernetes v1.18.3 dependencies (#333) +- [c67b1290](https://github.com/kubedb/elasticsearch/commit/c67b1290) Update Kubernetes v1.18.3 dependencies (#332) +- [aa1d64ad](https://github.com/kubedb/elasticsearch/commit/aa1d64ad) Update Kubernetes v1.18.3 dependencies (#331) +- [3d6c3e91](https://github.com/kubedb/elasticsearch/commit/3d6c3e91) Update Kubernetes v1.18.3 dependencies (#330) +- [bb318e74](https://github.com/kubedb/elasticsearch/commit/bb318e74) Update Kubernetes v1.18.3 dependencies (#329) +- [6b6b4d2d](https://github.com/kubedb/elasticsearch/commit/6b6b4d2d) Update Kubernetes v1.18.3 dependencies (#328) +- [06cef782](https://github.com/kubedb/elasticsearch/commit/06cef782) Remove dependency on enterprise operator (#327) +- [20a2c7d4](https://github.com/kubedb/elasticsearch/commit/20a2c7d4) Update to cert-manager v0.16.0 (#326) +- [e767c356](https://github.com/kubedb/elasticsearch/commit/e767c356) Build images in e2e workflow (#325) +- [ae696dbe](https://github.com/kubedb/elasticsearch/commit/ae696dbe) Update to Kubernetes v1.18.3 (#324) +- [a511d8d6](https://github.com/kubedb/elasticsearch/commit/a511d8d6) Allow configuring k8s & db version in e2e tests (#323) +- [a50b503d](https://github.com/kubedb/elasticsearch/commit/a50b503d) Trigger e2e tests on /ok-to-test command (#322) +- [107faff2](https://github.com/kubedb/elasticsearch/commit/107faff2) Update to Kubernetes v1.18.3 (#321) +- [60fb6d9b](https://github.com/kubedb/elasticsearch/commit/60fb6d9b) Update to Kubernetes v1.18.3 (#320) +- [9aae4782](https://github.com/kubedb/elasticsearch/commit/9aae4782) Prepare for release v0.14.0-beta.1 (#319) +- [312e5682](https://github.com/kubedb/elasticsearch/commit/312e5682) Update for release Stash@v2020.07.09-beta.0 (#317) +- 
[681f3e87](https://github.com/kubedb/elasticsearch/commit/681f3e87) Include Makefile.env +- [e460af51](https://github.com/kubedb/elasticsearch/commit/e460af51) Allow customizing chart registry (#316) +- [64e15a33](https://github.com/kubedb/elasticsearch/commit/64e15a33) Update for release Stash@v2020.07.08-beta.0 (#315) +- [1f2ef7a6](https://github.com/kubedb/elasticsearch/commit/1f2ef7a6) Update License (#314) +- [16ce6c90](https://github.com/kubedb/elasticsearch/commit/16ce6c90) Update to Kubernetes v1.18.3 (#313) +- [3357faa3](https://github.com/kubedb/elasticsearch/commit/3357faa3) Update ci.yml +- [cb44a1eb](https://github.com/kubedb/elasticsearch/commit/cb44a1eb) Load stash version from .env file for make (#312) +- [cf212019](https://github.com/kubedb/elasticsearch/commit/cf212019) Update update-release-tracker.sh +- [5127428e](https://github.com/kubedb/elasticsearch/commit/5127428e) Update update-release-tracker.sh +- [7f790940](https://github.com/kubedb/elasticsearch/commit/7f790940) Add script to update release tracker on pr merge (#311) +- [340b6112](https://github.com/kubedb/elasticsearch/commit/340b6112) Update .kodiak.toml +- [e01c4eec](https://github.com/kubedb/elasticsearch/commit/e01c4eec) Various fixes (#310) +- [11517f71](https://github.com/kubedb/elasticsearch/commit/11517f71) Update to Kubernetes v1.18.3 (#309) +- [53d7b117](https://github.com/kubedb/elasticsearch/commit/53d7b117) Update to Kubernetes v1.18.3 +- [7eacc7dd](https://github.com/kubedb/elasticsearch/commit/7eacc7dd) Create .kodiak.toml +- [b91b23d9](https://github.com/kubedb/elasticsearch/commit/b91b23d9) Use CRD v1 for Kubernetes >= 1.16 (#308) +- [08c1d2a8](https://github.com/kubedb/elasticsearch/commit/08c1d2a8) Update to Kubernetes v1.18.3 (#307) +- [32cdb8a4](https://github.com/kubedb/elasticsearch/commit/32cdb8a4) Fix e2e tests (#306) +- [0bca1a04](https://github.com/kubedb/elasticsearch/commit/0bca1a04) Merge pull request #302 from kubedb/multi-region +- [bf0c26ee](https://github.com/kubedb/elasticsearch/commit/bf0c26ee) Revendor kubedb.dev/apimachinery@v0.14.0-beta.0 +- [7c00c63c](https://github.com/kubedb/elasticsearch/commit/7c00c63c) Add support for multi-regional cluster +- [363322df](https://github.com/kubedb/elasticsearch/commit/363322df) Update stash install commands +- [a0138a36](https://github.com/kubedb/elasticsearch/commit/a0138a36) Update crazy-max/ghaction-docker-buildx flag +- [3076eb46](https://github.com/kubedb/elasticsearch/commit/3076eb46) Use updated operator labels in e2e tests (#304) +- [d537b91b](https://github.com/kubedb/elasticsearch/commit/d537b91b) Pass annotations from CRD to AppBinding (#305) +- [48f9399c](https://github.com/kubedb/elasticsearch/commit/48f9399c) Trigger the workflow on push or pull request +- [7b8d56cb](https://github.com/kubedb/elasticsearch/commit/7b8d56cb) Update CHANGELOG.md +- [939f6882](https://github.com/kubedb/elasticsearch/commit/939f6882) Update labelSelector for statefulsets (#300) +- [ed1c0553](https://github.com/kubedb/elasticsearch/commit/ed1c0553) Make master service headless & add rest-port to all db nodes (#299) +- [b7e7c8d7](https://github.com/kubedb/elasticsearch/commit/b7e7c8d7) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#301) +- [e51555d5](https://github.com/kubedb/elasticsearch/commit/e51555d5) Introduce spec.halted and removed dormant and snapshot crd (#296) +- [8255276f](https://github.com/kubedb/elasticsearch/commit/8255276f) Add spec.selector fields to the governing service (#297) +- 
[13bc760f](https://github.com/kubedb/elasticsearch/commit/13bc760f) Use stash@v0.9.0-rc.4 release (#298) +- [6a21fb86](https://github.com/kubedb/elasticsearch/commit/6a21fb86) Add `Pause` feature (#295) +- [1b25070c](https://github.com/kubedb/elasticsearch/commit/1b25070c) Refactor CI pipeline to build once (#294) +- [ace3d779](https://github.com/kubedb/elasticsearch/commit/ace3d779) Fix e2e tests on GitHub actions (#292) +- [7a7eb8d1](https://github.com/kubedb/elasticsearch/commit/7a7eb8d1) fix bug (#293) +- [0641649e](https://github.com/kubedb/elasticsearch/commit/0641649e) Use Go 1.13 in CI (#291) +- [97790e1e](https://github.com/kubedb/elasticsearch/commit/97790e1e) Take out elasticsearch docker images and Matrix test (#289) +- [3a20c1db](https://github.com/kubedb/elasticsearch/commit/3a20c1db) Fix default make command +- [ece073a2](https://github.com/kubedb/elasticsearch/commit/ece073a2) Update catalog values for make install command +- [8df4697b](https://github.com/kubedb/elasticsearch/commit/8df4697b) Use charts to install operator (#290) +- [5cbde391](https://github.com/kubedb/elasticsearch/commit/5cbde391) Add add-license make target +- [b7012bc5](https://github.com/kubedb/elasticsearch/commit/b7012bc5) Skip libbuild folder from checking license +- [d56db3a0](https://github.com/kubedb/elasticsearch/commit/d56db3a0) Add license header to files (#288) +- [1d0c368a](https://github.com/kubedb/elasticsearch/commit/1d0c368a) Enable make ci (#287) +- [2e835dff](https://github.com/kubedb/elasticsearch/commit/2e835dff) Remove EnableStatusSubresource (#286) +- [bcd0ebd9](https://github.com/kubedb/elasticsearch/commit/bcd0ebd9) Fix E2E tests in github action (#285) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-rc.1](https://github.com/kubedb/installer/releases/tag/v0.14.0-rc.1) + + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-rc.1](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-rc.1) + +- [f428010d](https://github.com/kubedb/mongodb/commit/f428010d) Prepare for release v0.7.0-rc.1 (#297) +- [0d32b697](https://github.com/kubedb/mongodb/commit/0d32b697) Prepare for release v0.7.0-beta.6 (#296) +- [1f75de65](https://github.com/kubedb/mongodb/commit/1f75de65) Update MergeServicePort and PatchServicePort apis (#295) +- [984fd7c2](https://github.com/kubedb/mongodb/commit/984fd7c2) Create SRV records for governing service (#294) +- [fc973dd0](https://github.com/kubedb/mongodb/commit/fc973dd0) Make database's phase NotReady as soon as the halted is removed (#293) +- [f1818bb1](https://github.com/kubedb/mongodb/commit/f1818bb1) Prepare for release v0.7.0-beta.5 (#292) +- [7d1586f7](https://github.com/kubedb/mongodb/commit/7d1586f7) Create separate governing service for each database (#291) +- [1e281abb](https://github.com/kubedb/mongodb/commit/1e281abb) Update KubeDB api (#290) +- [23d8785f](https://github.com/kubedb/mongodb/commit/23d8785f) Update readme +- [007e3ccd](https://github.com/kubedb/mongodb/commit/007e3ccd) Prepare for release v0.7.0-beta.4 (#289) +- [11f6573e](https://github.com/kubedb/mongodb/commit/11f6573e) Update MongoDB Conditions (#280) +- [a964af9b](https://github.com/kubedb/mongodb/commit/a964af9b) Update KubeDB api (#288) +- [38fd31b3](https://github.com/kubedb/mongodb/commit/38fd31b3) Update Kubernetes v1.18.9 dependencies (#287) +- [b0110bea](https://github.com/kubedb/mongodb/commit/b0110bea) Update KubeDB api (#286) +- [bfad7e48](https://github.com/kubedb/mongodb/commit/bfad7e48) Update for release 
Stash@v2020.10.21 (#285) +- [2eebd6eb](https://github.com/kubedb/mongodb/commit/2eebd6eb) Fix init validator (#283) +- [7912e726](https://github.com/kubedb/mongodb/commit/7912e726) Update KubeDB api (#284) +- [ebf85b6d](https://github.com/kubedb/mongodb/commit/ebf85b6d) Update KubeDB api (#282) +- [7fa4958c](https://github.com/kubedb/mongodb/commit/7fa4958c) Update Kubernetes v1.18.9 dependencies (#281) +- [705843b8](https://github.com/kubedb/mongodb/commit/705843b8) Use MongoDBCustomConfigFile constant +- [dac6262d](https://github.com/kubedb/mongodb/commit/dac6262d) Update KubeDB api (#279) +- [7e7a960e](https://github.com/kubedb/mongodb/commit/7e7a960e) Update KubeDB api (#278) +- [aed9bd49](https://github.com/kubedb/mongodb/commit/aed9bd49) Update KubeDB api (#277) +- [18ec2e99](https://github.com/kubedb/mongodb/commit/18ec2e99) Update Kubernetes v1.18.9 dependencies (#276) +- [dbec1f66](https://github.com/kubedb/mongodb/commit/dbec1f66) Update KubeDB api (#275) +- [ad028b51](https://github.com/kubedb/mongodb/commit/ad028b51) Update KubeDB api (#274) +- [a21dfd6a](https://github.com/kubedb/mongodb/commit/a21dfd6a) Update KubeDB api (#272) +- [932ac34b](https://github.com/kubedb/mongodb/commit/932ac34b) Update repository config (#271) +- [3f52a364](https://github.com/kubedb/mongodb/commit/3f52a364) Update repository config (#270) +- [d3bf87db](https://github.com/kubedb/mongodb/commit/d3bf87db) Initialize statefulset watcher from cmd/server/options.go (#269) +- [e3e15b7f](https://github.com/kubedb/mongodb/commit/e3e15b7f) Update KubeDB api (#268) +- [406ae5a2](https://github.com/kubedb/mongodb/commit/406ae5a2) Update Kubernetes v1.18.9 dependencies (#267) +- [0339503d](https://github.com/kubedb/mongodb/commit/0339503d) Publish docker images to ghcr.io (#266) +- [ffccdc3c](https://github.com/kubedb/mongodb/commit/ffccdc3c) Update KubeDB api (#265) +- [05b7a0bd](https://github.com/kubedb/mongodb/commit/05b7a0bd) Update KubeDB api (#264) +- [d6447024](https://github.com/kubedb/mongodb/commit/d6447024) Update KubeDB api (#263) +- [e7c1e3a3](https://github.com/kubedb/mongodb/commit/e7c1e3a3) Update KubeDB api (#262) +- [5647960a](https://github.com/kubedb/mongodb/commit/5647960a) Update repository config (#261) +- [e7481d8d](https://github.com/kubedb/mongodb/commit/e7481d8d) Use conditions to handle initialization (#258) +- [d406586a](https://github.com/kubedb/mongodb/commit/d406586a) Update Kubernetes v1.18.9 dependencies (#260) +- [93708d02](https://github.com/kubedb/mongodb/commit/93708d02) Remove redundant volume mounts (#259) +- [bf28af80](https://github.com/kubedb/mongodb/commit/bf28af80) Update for release Stash@v2020.09.29 (#257) +- [b34e2326](https://github.com/kubedb/mongodb/commit/b34e2326) Update Kubernetes v1.18.9 dependencies (#256) +- [86e84d48](https://github.com/kubedb/mongodb/commit/86e84d48) Remove bootstrap container (#248) +- [0b66e225](https://github.com/kubedb/mongodb/commit/0b66e225) Update Kubernetes v1.18.9 dependencies (#254) +- [1a06f223](https://github.com/kubedb/mongodb/commit/1a06f223) Update repository config (#253) +- [c199b164](https://github.com/kubedb/mongodb/commit/c199b164) Update repository config (#252) +- [1268868d](https://github.com/kubedb/mongodb/commit/1268868d) Update Kubernetes v1.18.9 dependencies (#251) +- [de63158f](https://github.com/kubedb/mongodb/commit/de63158f) Use common event recorder (#249) +- [2f96b75a](https://github.com/kubedb/mongodb/commit/2f96b75a) Update Kubernetes v1.18.3 dependencies (#250) +- 
[2867a4ef](https://github.com/kubedb/mongodb/commit/2867a4ef) Prepare for release v0.7.0-beta.3 (#247) +- [8e6c12e7](https://github.com/kubedb/mongodb/commit/8e6c12e7) Use new `spec.init` section (#246) +- [96aefe31](https://github.com/kubedb/mongodb/commit/96aefe31) Update Kubernetes v1.18.3 dependencies (#245) +- [59e2a89c](https://github.com/kubedb/mongodb/commit/59e2a89c) Add license verifier (#244) +- [2824cb71](https://github.com/kubedb/mongodb/commit/2824cb71) Update for release Stash@v2020.09.16 (#243) +- [3c626235](https://github.com/kubedb/mongodb/commit/3c626235) Update Kubernetes v1.18.3 dependencies (#242) +- [86b205ef](https://github.com/kubedb/mongodb/commit/86b205ef) Update Constants (#241) +- [1910e947](https://github.com/kubedb/mongodb/commit/1910e947) Use common constant across MongoDB Community and Enterprise operator (#240) +- [05364676](https://github.com/kubedb/mongodb/commit/05364676) Run e2e tests from kubedb/tests repo (#238) +- [80a78fe7](https://github.com/kubedb/mongodb/commit/80a78fe7) Set Delete Propagation Policy to Background (#237) +- [9a9d101c](https://github.com/kubedb/mongodb/commit/9a9d101c) Update Kubernetes v1.18.3 dependencies (#236) +- [d596ca68](https://github.com/kubedb/mongodb/commit/d596ca68) Use AppsCode Community License (#235) +- [8fd389de](https://github.com/kubedb/mongodb/commit/8fd389de) Prepare for release v0.7.0-beta.2 (#234) +- [3e4981ee](https://github.com/kubedb/mongodb/commit/3e4981ee) Update release.yml +- [c1d5cdb8](https://github.com/kubedb/mongodb/commit/c1d5cdb8) Always use OnDelete UpdateStrategy (#233) +- [a135b2c7](https://github.com/kubedb/mongodb/commit/a135b2c7) Fix build (#232) +- [cfb1788b](https://github.com/kubedb/mongodb/commit/cfb1788b) Use updated certificate spec (#221) +- [486e820a](https://github.com/kubedb/mongodb/commit/486e820a) Remove `storage` Validation Check (#231) +- [12e621ed](https://github.com/kubedb/mongodb/commit/12e621ed) Update Kubernetes v1.18.3 dependencies (#225) +- [0d7ea7d7](https://github.com/kubedb/mongodb/commit/0d7ea7d7) Update Kubernetes v1.18.3 dependencies (#224) +- [e79d1dfe](https://github.com/kubedb/mongodb/commit/e79d1dfe) Update Kubernetes v1.18.3 dependencies (#223) +- [d0ff5e1d](https://github.com/kubedb/mongodb/commit/d0ff5e1d) Update Kubernetes v1.18.3 dependencies (#222) +- [d22ade32](https://github.com/kubedb/mongodb/commit/d22ade32) Add `inMemory` Storage Engine Support for Percona MongoDB Server (#205) +- [90847996](https://github.com/kubedb/mongodb/commit/90847996) Update Kubernetes v1.18.3 dependencies (#220) +- [1098974f](https://github.com/kubedb/mongodb/commit/1098974f) Update Kubernetes v1.18.3 dependencies (#219) +- [e7d1407a](https://github.com/kubedb/mongodb/commit/e7d1407a) Fix install target +- [a5742d11](https://github.com/kubedb/mongodb/commit/a5742d11) Remove dependency on enterprise operator (#218) +- [1de4fbee](https://github.com/kubedb/mongodb/commit/1de4fbee) Build images in e2e workflow (#217) +- [b736c57e](https://github.com/kubedb/mongodb/commit/b736c57e) Update to Kubernetes v1.18.3 (#216) +- [180ae28d](https://github.com/kubedb/mongodb/commit/180ae28d) Allow configuring k8s & db version in e2e tests (#215) +- [c2f09a6f](https://github.com/kubedb/mongodb/commit/c2f09a6f) Trigger e2e tests on /ok-to-test command (#214) +- [c1c7fa39](https://github.com/kubedb/mongodb/commit/c1c7fa39) Update to Kubernetes v1.18.3 (#213) +- [8fb6cf78](https://github.com/kubedb/mongodb/commit/8fb6cf78) Update to Kubernetes v1.18.3 (#212) +- 
[b82a8fa7](https://github.com/kubedb/mongodb/commit/b82a8fa7) Prepare for release v0.7.0-beta.1 (#211) +- [a63d53ae](https://github.com/kubedb/mongodb/commit/a63d53ae) Update for release Stash@v2020.07.09-beta.0 (#209) +- [4e33e978](https://github.com/kubedb/mongodb/commit/4e33e978) include Makefile.env +- [1aa81a18](https://github.com/kubedb/mongodb/commit/1aa81a18) Allow customizing chart registry (#208) +- [05355e75](https://github.com/kubedb/mongodb/commit/05355e75) Update for release Stash@v2020.07.08-beta.0 (#207) +- [4f6be7b4](https://github.com/kubedb/mongodb/commit/4f6be7b4) Update License (#206) +- [cc54f7d3](https://github.com/kubedb/mongodb/commit/cc54f7d3) Update to Kubernetes v1.18.3 (#204) +- [d1a51b8e](https://github.com/kubedb/mongodb/commit/d1a51b8e) Update ci.yml +- [3a993329](https://github.com/kubedb/mongodb/commit/3a993329) Load stash version from .env file for make (#203) +- [7180a98c](https://github.com/kubedb/mongodb/commit/7180a98c) Update update-release-tracker.sh +- [745085fd](https://github.com/kubedb/mongodb/commit/745085fd) Update update-release-tracker.sh +- [07d83ac0](https://github.com/kubedb/mongodb/commit/07d83ac0) Add script to update release tracker on pr merge (#202) +- [bbe205bb](https://github.com/kubedb/mongodb/commit/bbe205bb) Update .kodiak.toml +- [998e656e](https://github.com/kubedb/mongodb/commit/998e656e) Various fixes (#201) +- [ca03db09](https://github.com/kubedb/mongodb/commit/ca03db09) Update to Kubernetes v1.18.3 (#200) +- [975fc700](https://github.com/kubedb/mongodb/commit/975fc700) Update to Kubernetes v1.18.3 +- [52972dcf](https://github.com/kubedb/mongodb/commit/52972dcf) Create .kodiak.toml +- [39168e53](https://github.com/kubedb/mongodb/commit/39168e53) Use CRD v1 for Kubernetes >= 1.16 (#199) +- [d6d87e16](https://github.com/kubedb/mongodb/commit/d6d87e16) Update to Kubernetes v1.18.3 (#198) +- [09cd5809](https://github.com/kubedb/mongodb/commit/09cd5809) Fix e2e tests (#197) +- [f47c4846](https://github.com/kubedb/mongodb/commit/f47c4846) Update stash install commands +- [010d0294](https://github.com/kubedb/mongodb/commit/010d0294) Revendor kubedb.dev/apimachinery@master (#196) +- [31ef2632](https://github.com/kubedb/mongodb/commit/31ef2632) Pass annotations from CRD to AppBinding (#195) +- [9594e92f](https://github.com/kubedb/mongodb/commit/9594e92f) Update crazy-max/ghaction-docker-buildx flag +- [0693d7a0](https://github.com/kubedb/mongodb/commit/0693d7a0) Use updated operator labels in e2e tests (#193) +- [5aaeeb90](https://github.com/kubedb/mongodb/commit/5aaeeb90) Trigger the workflow on push or pull request +- [2af16e3c](https://github.com/kubedb/mongodb/commit/2af16e3c) Update CHANGELOG.md +- [288c5d2f](https://github.com/kubedb/mongodb/commit/288c5d2f) Use SHARD_INDEX constant from apimachinery +- [4482edf3](https://github.com/kubedb/mongodb/commit/4482edf3) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#191) +- [0f20ff3a](https://github.com/kubedb/mongodb/commit/0f20ff3a) Manage SSL certificates using cert-manager (#190) +- [6f0c1aef](https://github.com/kubedb/mongodb/commit/6f0c1aef) Use Minio storage for testing (#188) +- [f8c56bac](https://github.com/kubedb/mongodb/commit/f8c56bac) Support affinity templating in mongodb-shard (#186) +- [71283767](https://github.com/kubedb/mongodb/commit/71283767) Use stash@v0.9.0-rc.4 release (#185) +- [f480de35](https://github.com/kubedb/mongodb/commit/f480de35) Fix `Pause` Logic (#184) +- [263e1bac](https://github.com/kubedb/mongodb/commit/263e1bac) Refactor CI pipeline to build 
+- [e383f271](https://github.com/kubedb/mongodb/commit/e383f271) Add `Pause` Feature (#181)
+- [584ecde6](https://github.com/kubedb/mongodb/commit/584ecde6) Delete backupconfig before attempting restoresession. (#180)
+- [a78bc2a7](https://github.com/kubedb/mongodb/commit/a78bc2a7) Wipeout if custom databaseSecret has been deleted (#179)
+- [e90cd386](https://github.com/kubedb/mongodb/commit/e90cd386) Matrix test and Moved out mongo docker files (#178)
+- [c132db8f](https://github.com/kubedb/mongodb/commit/c132db8f) Add add-license makefile target
+- [cc545e04](https://github.com/kubedb/mongodb/commit/cc545e04) Update Makefile
+- [7a2eab2c](https://github.com/kubedb/mongodb/commit/7a2eab2c) Add license header to files (#177)
+- [eecdb2cb](https://github.com/kubedb/mongodb/commit/eecdb2cb) Fix E2E tests in github action (#176)
+
+
+
+## [kubedb/mysql](https://github.com/kubedb/mysql)
+
+### [v0.7.0-rc.1](https://github.com/kubedb/mysql/releases/tag/v0.7.0-rc.1)
+
+- [2f598065](https://github.com/kubedb/mysql/commit/2f598065) Prepare for release v0.7.0-rc.1 (#287)
+- [680da825](https://github.com/kubedb/mysql/commit/680da825) Prepare for release v0.7.0-beta.6 (#286)
+- [a5066552](https://github.com/kubedb/mysql/commit/a5066552) Create SRV records for governing service (#285)
+- [8dbd64c9](https://github.com/kubedb/mysql/commit/8dbd64c9) Prepare for release v0.7.0-beta.5 (#284)
+- [ee4285c1](https://github.com/kubedb/mysql/commit/ee4285c1) Create separate governing service for each database (#283)
+- [8e2fcbf4](https://github.com/kubedb/mysql/commit/8e2fcbf4) Update KubeDB api (#282)
+- [ae962768](https://github.com/kubedb/mysql/commit/ae962768) Update readme
+- [da0ee5ac](https://github.com/kubedb/mysql/commit/da0ee5ac) Prepare for release v0.7.0-beta.4 (#281)
+- [dcab13f9](https://github.com/kubedb/mysql/commit/dcab13f9) Add conditions to MySQL status (#275)
+- [972f4ade](https://github.com/kubedb/mysql/commit/972f4ade) Update KubeDB api (#280)
+- [0a16d2f0](https://github.com/kubedb/mysql/commit/0a16d2f0) Update Kubernetes v1.18.9 dependencies (#279)
+- [7fef1045](https://github.com/kubedb/mysql/commit/7fef1045) Update KubeDB api (#278)
+- [489927ab](https://github.com/kubedb/mysql/commit/489927ab) Update for release Stash@v2020.10.21 (#277)
+- [2491868c](https://github.com/kubedb/mysql/commit/2491868c) Update KubeDB api (#276)
+- [5f6a0f6e](https://github.com/kubedb/mysql/commit/5f6a0f6e) Update KubeDB api (#274)
+- [08c0720c](https://github.com/kubedb/mysql/commit/08c0720c) Update Kubernetes v1.18.9 dependencies (#273)
+- [22fbdd3f](https://github.com/kubedb/mysql/commit/22fbdd3f) Update KubeDB api (#272)
+- [7f4fb5e4](https://github.com/kubedb/mysql/commit/7f4fb5e4) Update KubeDB api (#271)
+- [09d4743d](https://github.com/kubedb/mysql/commit/09d4743d) Update KubeDB api (#270)
+- [d055fb11](https://github.com/kubedb/mysql/commit/d055fb11) Add Pod name to mysql replication-mode-detector container envs (#269)
+- [4d1eea70](https://github.com/kubedb/mysql/commit/4d1eea70) Update KubeDB api (#268)
+- [58fd9385](https://github.com/kubedb/mysql/commit/58fd9385) Update Kubernetes v1.18.9 dependencies (#267)
+- [fb445df6](https://github.com/kubedb/mysql/commit/fb445df6) Update KubeDB api (#266)
+- [3717609e](https://github.com/kubedb/mysql/commit/3717609e) Update KubeDB api (#265)
+- [b9ba8cc7](https://github.com/kubedb/mysql/commit/b9ba8cc7) Update KubeDB api (#263)
+- [1c2a7704](https://github.com/kubedb/mysql/commit/1c2a7704) Update repository config (#262)
+- [6cb5d9d0](https://github.com/kubedb/mysql/commit/6cb5d9d0) Update repository config (#261)
+- [3eadb17a](https://github.com/kubedb/mysql/commit/3eadb17a) Update repository config (#260)
+- [03661faa](https://github.com/kubedb/mysql/commit/03661faa) Initialize statefulset watcher from cmd/server/options.go (#259)
+- [e03649bb](https://github.com/kubedb/mysql/commit/e03649bb) Update KubeDB api (#258)
+- [91e983b0](https://github.com/kubedb/mysql/commit/91e983b0) Update Kubernetes v1.18.9 dependencies (#257)
+- [a03f4d24](https://github.com/kubedb/mysql/commit/a03f4d24) Publish docker images to ghcr.io (#256)
+- [252902b5](https://github.com/kubedb/mysql/commit/252902b5) Update KubeDB api (#255)
+- [d490e95c](https://github.com/kubedb/mysql/commit/d490e95c) Update KubeDB api (#254)
+- [476de6f3](https://github.com/kubedb/mysql/commit/476de6f3) Update KubeDB api (#253)
+- [54a36140](https://github.com/kubedb/mysql/commit/54a36140) Pass mysql name by flag for replication-mode-detector container (#247)
+- [c2836d86](https://github.com/kubedb/mysql/commit/c2836d86) Update KubeDB api (#252)
+- [69756664](https://github.com/kubedb/mysql/commit/69756664) Update repository config (#251)
+- [6d1c0fa8](https://github.com/kubedb/mysql/commit/6d1c0fa8) Cleanup monitoring spec api (#250)
+- [c971158c](https://github.com/kubedb/mysql/commit/c971158c) Use condition to handle database initialization (#243)
+- [a839fa52](https://github.com/kubedb/mysql/commit/a839fa52) Update Kubernetes v1.18.9 dependencies (#249)
+- [1b231f81](https://github.com/kubedb/mysql/commit/1b231f81) Use offshootSelectors to find statefulset (#248)
+- [e6c6db76](https://github.com/kubedb/mysql/commit/e6c6db76) Update for release Stash@v2020.09.29 (#246)
+- [fb577f93](https://github.com/kubedb/mysql/commit/fb577f93) Update Kubernetes v1.18.9 dependencies (#245)
+- [dfe700ff](https://github.com/kubedb/mysql/commit/dfe700ff) Update Kubernetes v1.18.9 dependencies (#242)
+- [928c15fe](https://github.com/kubedb/mysql/commit/928c15fe) Add separate services for primary and secondary Replicas (#229)
+- [ac7161c9](https://github.com/kubedb/mysql/commit/ac7161c9) Update repository config (#241)
+- [6344c1df](https://github.com/kubedb/mysql/commit/6344c1df) Update repository config (#240)
+- [3389dbd8](https://github.com/kubedb/mysql/commit/3389dbd8) Update Kubernetes v1.18.9 dependencies (#239)
+- [b22787a7](https://github.com/kubedb/mysql/commit/b22787a7) Remove unused StashClient (#238)
+- [c1c1de57](https://github.com/kubedb/mysql/commit/c1c1de57) Update Kubernetes v1.18.3 dependencies (#237)
+- [b2e37ce5](https://github.com/kubedb/mysql/commit/b2e37ce5) Use common event recorder (#236)
+- [8e85dd18](https://github.com/kubedb/mysql/commit/8e85dd18) Prepare for release v0.7.0-beta.3 (#235)
+- [bb1f0869](https://github.com/kubedb/mysql/commit/bb1f0869) Update Kubernetes v1.18.3 dependencies (#234)
+- [aa33e58e](https://github.com/kubedb/mysql/commit/aa33e58e) Add license verifier (#233)
+- [d25054c0](https://github.com/kubedb/mysql/commit/d25054c0) Use new `spec.init` section (#230)
+- [3936e5b8](https://github.com/kubedb/mysql/commit/3936e5b8) Update for release Stash@v2020.09.16 (#232)
+- [5162a530](https://github.com/kubedb/mysql/commit/5162a530) Update Kubernetes v1.18.3 dependencies (#231)
+- [db42056e](https://github.com/kubedb/mysql/commit/db42056e) Use background deletion policy
+- [4045a502](https://github.com/kubedb/mysql/commit/4045a502) Update Kubernetes v1.18.3 dependencies (#227)
+- [93916a12](https://github.com/kubedb/mysql/commit/93916a12) Use AppsCode Community License (#226)
+- [e3eefff3](https://github.com/kubedb/mysql/commit/e3eefff3) Update Kubernetes v1.18.3 dependencies (#225)
+- [6010c034](https://github.com/kubedb/mysql/commit/6010c034) Prepare for release v0.7.0-beta.2 (#224)
+- [4b530066](https://github.com/kubedb/mysql/commit/4b530066) Update release.yml
+- [184a6cbc](https://github.com/kubedb/mysql/commit/184a6cbc) Update dependencies (#223)
+- [903b13b6](https://github.com/kubedb/mysql/commit/903b13b6) Always use OnDelete update strategy
+- [1c10224a](https://github.com/kubedb/mysql/commit/1c10224a) Update Kubernetes v1.18.3 dependencies (#222)
+- [4e9e5e44](https://github.com/kubedb/mysql/commit/4e9e5e44) Added TLS/SSL Configuration in MySQL Server (#204)
+- [d08209b8](https://github.com/kubedb/mysql/commit/d08209b8) Use username/password constants from core/v1
+- [87238c42](https://github.com/kubedb/mysql/commit/87238c42) Update MySQL vendor for changes of prometheus coreos operator (#216)
+- [999005ed](https://github.com/kubedb/mysql/commit/999005ed) Update Kubernetes v1.18.3 dependencies (#215)
+- [3eb5086e](https://github.com/kubedb/mysql/commit/3eb5086e) Update Kubernetes v1.18.3 dependencies (#214)
+- [cd58f276](https://github.com/kubedb/mysql/commit/cd58f276) Update Kubernetes v1.18.3 dependencies (#213)
+- [4dcfcd14](https://github.com/kubedb/mysql/commit/4dcfcd14) Update Kubernetes v1.18.3 dependencies (#212)
+- [d41015c9](https://github.com/kubedb/mysql/commit/d41015c9) Update Kubernetes v1.18.3 dependencies (#211)
+- [4350cb79](https://github.com/kubedb/mysql/commit/4350cb79) Update Kubernetes v1.18.3 dependencies (#210)
+- [617af851](https://github.com/kubedb/mysql/commit/617af851) Fix install target
+- [fc308cc3](https://github.com/kubedb/mysql/commit/fc308cc3) Remove dependency on enterprise operator (#209)
+- [1b717aee](https://github.com/kubedb/mysql/commit/1b717aee) Detect primary pod in MySQL group replication (#190)
+- [c3e516f4](https://github.com/kubedb/mysql/commit/c3e516f4) Support MySQL new version for group replication and standalone (#189)
+- [8bedade3](https://github.com/kubedb/mysql/commit/8bedade3) Build images in e2e workflow (#208)
+- [02c9434c](https://github.com/kubedb/mysql/commit/02c9434c) Allow configuring k8s & db version in e2e tests (#207)
+- [ae5d757c](https://github.com/kubedb/mysql/commit/ae5d757c) Update to Kubernetes v1.18.3 (#206)
+- [16bdc23f](https://github.com/kubedb/mysql/commit/16bdc23f) Trigger e2e tests on /ok-to-test command (#205)
+- [7be13878](https://github.com/kubedb/mysql/commit/7be13878) Update to Kubernetes v1.18.3 (#203)
+- [d69fe478](https://github.com/kubedb/mysql/commit/d69fe478) Update to Kubernetes v1.18.3 (#202)
+- [19ccc5b8](https://github.com/kubedb/mysql/commit/19ccc5b8) Prepare for release v0.7.0-beta.1 (#201)
+- [e61de0e7](https://github.com/kubedb/mysql/commit/e61de0e7) Update for release Stash@v2020.07.09-beta.0 (#199)
+- [3269df76](https://github.com/kubedb/mysql/commit/3269df76) Allow customizing chart registry (#198)
+- [c487e68e](https://github.com/kubedb/mysql/commit/c487e68e) Update for release Stash@v2020.07.08-beta.0 (#197)
+- [4f288ef0](https://github.com/kubedb/mysql/commit/4f288ef0) Update License (#196)
+- [858a5e03](https://github.com/kubedb/mysql/commit/858a5e03) Update to Kubernetes v1.18.3 (#195)
+- [88dec378](https://github.com/kubedb/mysql/commit/88dec378) Update ci.yml
+- [31ef7c2a](https://github.com/kubedb/mysql/commit/31ef7c2a) Load stash version from .env file for make (#194)
+- [872954a9](https://github.com/kubedb/mysql/commit/872954a9) Update update-release-tracker.sh
+- [771059b9](https://github.com/kubedb/mysql/commit/771059b9) Update update-release-tracker.sh
+- [0e625902](https://github.com/kubedb/mysql/commit/0e625902) Add script to update release tracker on pr merge (#193)
+- [6a204efd](https://github.com/kubedb/mysql/commit/6a204efd) Update .kodiak.toml
+- [de6fc09b](https://github.com/kubedb/mysql/commit/de6fc09b) Various fixes (#192)
+- [86eb3313](https://github.com/kubedb/mysql/commit/86eb3313) Update to Kubernetes v1.18.3 (#191)
+- [937afcc8](https://github.com/kubedb/mysql/commit/937afcc8) Update to Kubernetes v1.18.3
+- [8646a9c8](https://github.com/kubedb/mysql/commit/8646a9c8) Create .kodiak.toml
+- [9f3d2e3c](https://github.com/kubedb/mysql/commit/9f3d2e3c) Use helm --wait in make install command
+- [3d1e9cf3](https://github.com/kubedb/mysql/commit/3d1e9cf3) Use CRD v1 for Kubernetes >= 1.16 (#188)
+- [5df90daa](https://github.com/kubedb/mysql/commit/5df90daa) Merge pull request #187 from kubedb/k-1.18.3
+- [179207de](https://github.com/kubedb/mysql/commit/179207de) Pass context
+- [76c3fc86](https://github.com/kubedb/mysql/commit/76c3fc86) Update to Kubernetes v1.18.3
+- [da9ad307](https://github.com/kubedb/mysql/commit/da9ad307) Fix e2e tests (#186)
+- [d7f2c63d](https://github.com/kubedb/mysql/commit/d7f2c63d) Update stash install commands
+- [cfee601b](https://github.com/kubedb/mysql/commit/cfee601b) Revendor kubedb.dev/apimachinery@master (#185)
+- [741fada4](https://github.com/kubedb/mysql/commit/741fada4) Update crazy-max/ghaction-docker-buildx flag
+- [27291b98](https://github.com/kubedb/mysql/commit/27291b98) Use updated operator labels in e2e tests (#183)
+- [16b00f9d](https://github.com/kubedb/mysql/commit/16b00f9d) Pass annotations from CRD to AppBinding (#184)
+- [b70e0620](https://github.com/kubedb/mysql/commit/b70e0620) Trigger the workflow on push or pull request
+- [6ea308d8](https://github.com/kubedb/mysql/commit/6ea308d8) Update CHANGELOG.md
+- [188c3a91](https://github.com/kubedb/mysql/commit/188c3a91) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#181)
+- [f4a67e95](https://github.com/kubedb/mysql/commit/f4a67e95) Introduce spec.halted and removed dormant and snapshot crd (#178)
+- [8774a90c](https://github.com/kubedb/mysql/commit/8774a90c) Use stash@v0.9.0-rc.4 release (#179)
+- [209653e6](https://github.com/kubedb/mysql/commit/209653e6) Use apache thrift v0.13.0
+- [e89fbe40](https://github.com/kubedb/mysql/commit/e89fbe40) Update github.com/apache/thrift v0.12.0 (#176)
+- [c0d035c9](https://github.com/kubedb/mysql/commit/c0d035c9) Add Pause Feature (#177)
+- [827a92b6](https://github.com/kubedb/mysql/commit/827a92b6) Mount mysql config dir and tmp dir as emptydir (#166)
+- [2a84ed08](https://github.com/kubedb/mysql/commit/2a84ed08) Enable subresource for MySQL crd. (#175)
+- [bc8ec773](https://github.com/kubedb/mysql/commit/bc8ec773) Update kubernetes client-go to 1.16.3 (#174)
+- [014f6b0b](https://github.com/kubedb/mysql/commit/014f6b0b) Matrix tests for github actions (#172)
+- [68f427db](https://github.com/kubedb/mysql/commit/68f427db) Fix default make command
+- [76dc7d7b](https://github.com/kubedb/mysql/commit/76dc7d7b) Use charts to install operator (#173)
+- [5ff41dc1](https://github.com/kubedb/mysql/commit/5ff41dc1) Add add-license make target
+- [132b2a0e](https://github.com/kubedb/mysql/commit/132b2a0e) Add license header to files (#171)
+- [aab6050e](https://github.com/kubedb/mysql/commit/aab6050e) Fix linter errors. (#169)
+- [35043a15](https://github.com/kubedb/mysql/commit/35043a15) Enable make ci (#168)
+- [e452bb4b](https://github.com/kubedb/mysql/commit/e452bb4b) Remove EnableStatusSubresource (#167)
+- [28794570](https://github.com/kubedb/mysql/commit/28794570) Run e2e tests using GitHub actions (#164)
+- [af3b284b](https://github.com/kubedb/mysql/commit/af3b284b) Validate DBVersionSpecs and fixed broken build (#165)
+- [e4963763](https://github.com/kubedb/mysql/commit/e4963763) Update go.yml
+- [a808e508](https://github.com/kubedb/mysql/commit/a808e508) Enable GitHub actions
+- [6fe5dd42](https://github.com/kubedb/mysql/commit/6fe5dd42) Update changelog
+
+
+
+## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector)
+
+### [v0.1.0-rc.1](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-rc.1)
+
+- [7c0b82f](https://github.com/kubedb/mysql-replication-mode-detector/commit/7c0b82f) Prepare for release v0.1.0-rc.1 (#75)
+- [67ec09b](https://github.com/kubedb/mysql-replication-mode-detector/commit/67ec09b) Prepare for release v0.1.0-beta.6 (#74)
+- [724eaa9](https://github.com/kubedb/mysql-replication-mode-detector/commit/724eaa9) Update KubeDB api (#73)
+- [a82a26e](https://github.com/kubedb/mysql-replication-mode-detector/commit/a82a26e) Fix sql query to find primary host for different version of MySQL (#66)
+- [e251fd6](https://github.com/kubedb/mysql-replication-mode-detector/commit/e251fd6) Prepare for release v0.1.0-beta.5 (#72)
+- [633ba00](https://github.com/kubedb/mysql-replication-mode-detector/commit/633ba00) Update KubeDB api (#71)
+- [557e8f7](https://github.com/kubedb/mysql-replication-mode-detector/commit/557e8f7) Prepare for release v0.1.0-beta.4 (#70)
+- [4dd885a](https://github.com/kubedb/mysql-replication-mode-detector/commit/4dd885a) Update KubeDB api (#69)
+- [dc0ed39](https://github.com/kubedb/mysql-replication-mode-detector/commit/dc0ed39) Update Kubernetes v1.18.9 dependencies (#68)
+- [f49a1d1](https://github.com/kubedb/mysql-replication-mode-detector/commit/f49a1d1) Update Kubernetes v1.18.9 dependencies (#65)
+- [306235a](https://github.com/kubedb/mysql-replication-mode-detector/commit/306235a) Update KubeDB api (#64)
+- [3c9e99a](https://github.com/kubedb/mysql-replication-mode-detector/commit/3c9e99a) Update KubeDB api (#63)
+- [974a940](https://github.com/kubedb/mysql-replication-mode-detector/commit/974a940) Update KubeDB api (#62)
+- [8521462](https://github.com/kubedb/mysql-replication-mode-detector/commit/8521462) Update Kubernetes v1.18.9 dependencies (#61)
+- [38f7a4c](https://github.com/kubedb/mysql-replication-mode-detector/commit/38f7a4c) Update KubeDB api (#60)
+- [a7b7c87](https://github.com/kubedb/mysql-replication-mode-detector/commit/a7b7c87) Update KubeDB api (#59)
+- [daa02dd](https://github.com/kubedb/mysql-replication-mode-detector/commit/daa02dd) Update KubeDB api (#58)
+- [341b6b6](https://github.com/kubedb/mysql-replication-mode-detector/commit/341b6b6) Add tls config (#40)
+- [04161c8](https://github.com/kubedb/mysql-replication-mode-detector/commit/04161c8) Update KubeDB api (#57)
+- [fdd705d](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdd705d) Update Kubernetes v1.18.9 dependencies (#56)
+- [22cb410](https://github.com/kubedb/mysql-replication-mode-detector/commit/22cb410) Update KubeDB api (#55)
+- [11b1758](https://github.com/kubedb/mysql-replication-mode-detector/commit/11b1758) Update KubeDB api (#54)
+- [9df3045](https://github.com/kubedb/mysql-replication-mode-detector/commit/9df3045) Update KubeDB api (#53)
+- [6557f92](https://github.com/kubedb/mysql-replication-mode-detector/commit/6557f92) Update KubeDB api (#52)
+- [43c3694](https://github.com/kubedb/mysql-replication-mode-detector/commit/43c3694) Update Kubernetes v1.18.9 dependencies (#51)
+- [511e974](https://github.com/kubedb/mysql-replication-mode-detector/commit/511e974) Publish docker images to ghcr.io (#50)
+- [093a995](https://github.com/kubedb/mysql-replication-mode-detector/commit/093a995) Update KubeDB api (#49)
+- [49c07e9](https://github.com/kubedb/mysql-replication-mode-detector/commit/49c07e9) Update KubeDB api (#48)
+- [91ead1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/91ead1c) Update KubeDB api (#47)
+- [45956b4](https://github.com/kubedb/mysql-replication-mode-detector/commit/45956b4) Update KubeDB api (#46)
+- [a6c57a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/a6c57a7) Update KubeDB api (#45)
+- [8a2fd20](https://github.com/kubedb/mysql-replication-mode-detector/commit/8a2fd20) Update KubeDB api (#44)
+- [be63987](https://github.com/kubedb/mysql-replication-mode-detector/commit/be63987) Update KubeDB api (#43)
+- [f33220a](https://github.com/kubedb/mysql-replication-mode-detector/commit/f33220a) Update KubeDB api (#42)
+- [46b7d44](https://github.com/kubedb/mysql-replication-mode-detector/commit/46b7d44) Update KubeDB api (#41)
+- [c151070](https://github.com/kubedb/mysql-replication-mode-detector/commit/c151070) Update KubeDB api (#38)
+- [7a04763](https://github.com/kubedb/mysql-replication-mode-detector/commit/7a04763) Update KubeDB api (#37)
+- [4367ef5](https://github.com/kubedb/mysql-replication-mode-detector/commit/4367ef5) Update KubeDB api (#36)
+- [6bc4f1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/6bc4f1c) Update Kubernetes v1.18.9 dependencies (#35)
+- [fdaff01](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdaff01) Update KubeDB api (#34)
+- [087170a](https://github.com/kubedb/mysql-replication-mode-detector/commit/087170a) Update KubeDB api (#33)
+- [127efe7](https://github.com/kubedb/mysql-replication-mode-detector/commit/127efe7) Update Kubernetes v1.18.9 dependencies (#32)
+- [1df3573](https://github.com/kubedb/mysql-replication-mode-detector/commit/1df3573) Move constant to apimachinery repo (#24)
+- [74b41b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/74b41b0) Update repository config (#31)
+- [b0932a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/b0932a7) Update repository config (#30)
+- [8e9c235](https://github.com/kubedb/mysql-replication-mode-detector/commit/8e9c235) Update Kubernetes v1.18.9 dependencies (#29)
+- [8f61ebc](https://github.com/kubedb/mysql-replication-mode-detector/commit/8f61ebc) Update Kubernetes v1.18.3 dependencies (#28)
+- [eedb970](https://github.com/kubedb/mysql-replication-mode-detector/commit/eedb970) Prepare for release v0.1.0-beta.3 (#27)
+- [e4c3962](https://github.com/kubedb/mysql-replication-mode-detector/commit/e4c3962) Update Kubernetes v1.18.3 dependencies (#26)
+- [9c20bfb](https://github.com/kubedb/mysql-replication-mode-detector/commit/9c20bfb) Update Kubernetes v1.18.3 dependencies (#25)
+- [a1f5dbd](https://github.com/kubedb/mysql-replication-mode-detector/commit/a1f5dbd) Update Kubernetes v1.18.3 dependencies (#23)
+- [feedb97](https://github.com/kubedb/mysql-replication-mode-detector/commit/feedb97) Use AppsCode Community License (#22)
+- [eb878dc](https://github.com/kubedb/mysql-replication-mode-detector/commit/eb878dc) Prepare for release v0.1.0-beta.2 (#21)
+- [6c214b8](https://github.com/kubedb/mysql-replication-mode-detector/commit/6c214b8) Update Kubernetes v1.18.3 dependencies (#19)
+- [00800e8](https://github.com/kubedb/mysql-replication-mode-detector/commit/00800e8) Update Kubernetes v1.18.3 dependencies (#18)
+- [373ab6d](https://github.com/kubedb/mysql-replication-mode-detector/commit/373ab6d) Update Kubernetes v1.18.3 dependencies (#17)
+- [8b61313](https://github.com/kubedb/mysql-replication-mode-detector/commit/8b61313) Update Kubernetes v1.18.3 dependencies (#16)
+- [f2a68e3](https://github.com/kubedb/mysql-replication-mode-detector/commit/f2a68e3) Update Kubernetes v1.18.3 dependencies (#15)
+- [3bce396](https://github.com/kubedb/mysql-replication-mode-detector/commit/3bce396) Update Kubernetes v1.18.3 dependencies (#14)
+- [32603a2](https://github.com/kubedb/mysql-replication-mode-detector/commit/32603a2) Don't push binary with release
+- [bb47e58](https://github.com/kubedb/mysql-replication-mode-detector/commit/bb47e58) Remove port-forwarding and Refactor Code (#13)
+- [df73419](https://github.com/kubedb/mysql-replication-mode-detector/commit/df73419) Update to Kubernetes v1.18.3 (#12)
+- [61fe2ea](https://github.com/kubedb/mysql-replication-mode-detector/commit/61fe2ea) Update to Kubernetes v1.18.3 (#11)
+- [b7ccc85](https://github.com/kubedb/mysql-replication-mode-detector/commit/b7ccc85) Update to Kubernetes v1.18.3 (#10)
+- [3e62838](https://github.com/kubedb/mysql-replication-mode-detector/commit/3e62838) Prepare for release v0.1.0-beta.1 (#9)
+- [e54c4c0](https://github.com/kubedb/mysql-replication-mode-detector/commit/e54c4c0) Update License (#7)
+- [e071b02](https://github.com/kubedb/mysql-replication-mode-detector/commit/e071b02) Update to Kubernetes v1.18.3 (#6)
+- [8992bcb](https://github.com/kubedb/mysql-replication-mode-detector/commit/8992bcb) Update update-release-tracker.sh
+- [acc1038](https://github.com/kubedb/mysql-replication-mode-detector/commit/acc1038) Add script to update release tracker on pr merge (#5)
+- [706b5b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/706b5b0) Update .kodiak.toml
+- [4e52c03](https://github.com/kubedb/mysql-replication-mode-detector/commit/4e52c03) Update to Kubernetes v1.18.3 (#4)
+- [adb05ae](https://github.com/kubedb/mysql-replication-mode-detector/commit/adb05ae) Merge branch 'master' into gomod-refresher-1591418508
+- [3a99f80](https://github.com/kubedb/mysql-replication-mode-detector/commit/3a99f80) Create .kodiak.toml
+- [6289807](https://github.com/kubedb/mysql-replication-mode-detector/commit/6289807) Update to Kubernetes v1.18.3
+- [1dd24be](https://github.com/kubedb/mysql-replication-mode-detector/commit/1dd24be) Update to Kubernetes v1.18.3 (#3)
+- [6d02366](https://github.com/kubedb/mysql-replication-mode-detector/commit/6d02366) Update Makefile and CI configuration (#2)
+- [fc95884](https://github.com/kubedb/mysql-replication-mode-detector/commit/fc95884) Add primary role labeler controller (#1)
+- [99dfb12](https://github.com/kubedb/mysql-replication-mode-detector/commit/99dfb12) add readme.md
+
+
+
+## [kubedb/operator](https://github.com/kubedb/operator)
+
+### [v0.14.0-rc.1](https://github.com/kubedb/operator/releases/tag/v0.14.0-rc.1)
+
+- [7a74c49c](https://github.com/kubedb/operator/commit/7a74c49c) Prepare for release v0.14.0-rc.1 (#335)
+- [7c0e97a2](https://github.com/kubedb/operator/commit/7c0e97a2) Prepare for release v0.14.0-beta.6 (#334)
+- [17b42fd3](https://github.com/kubedb/operator/commit/17b42fd3) Update KubeDB api (#333)
+- [6dbde882](https://github.com/kubedb/operator/commit/6dbde882) Update Kubernetes v1.18.9 dependencies (#332)
+- [ce62c61a](https://github.com/kubedb/operator/commit/ce62c61a) Use go.bytebuilders.dev/license-verifier v0.4.0
+- [bcada180](https://github.com/kubedb/operator/commit/bcada180) Prepare for release v0.14.0-beta.5 (#331)
+- [07d63285](https://github.com/kubedb/operator/commit/07d63285) Enable PgBouncer & ProxySQL for enterprise license (#330)
+- [35b75a05](https://github.com/kubedb/operator/commit/35b75a05) Update readme.md
+- [14304e05](https://github.com/kubedb/operator/commit/14304e05) Update KubeDB api (#329)
+- [df61aae3](https://github.com/kubedb/operator/commit/df61aae3) Update readme
+- [c9882619](https://github.com/kubedb/operator/commit/c9882619) Format readme
+- [73b725e3](https://github.com/kubedb/operator/commit/73b725e3) Update readme (#328)
+- [541c2460](https://github.com/kubedb/operator/commit/541c2460) Update repository config (#327)
+- [2145978d](https://github.com/kubedb/operator/commit/2145978d) Prepare for release v0.14.0-beta.4 (#326)
+- [8fd3b682](https://github.com/kubedb/operator/commit/8fd3b682) Add --readiness-probe-interval flag (#325)
+- [7bf0c3c5](https://github.com/kubedb/operator/commit/7bf0c3c5) Update KubeDB api (#324)
+- [25c7dc21](https://github.com/kubedb/operator/commit/25c7dc21) Update Kubernetes v1.18.9 dependencies (#323)
+- [bb7525d6](https://github.com/kubedb/operator/commit/bb7525d6) Update Kubernetes v1.18.9 dependencies (#321)
+- [6db45b57](https://github.com/kubedb/operator/commit/6db45b57) Update KubeDB api (#320)
+- [fa1438e3](https://github.com/kubedb/operator/commit/fa1438e3) Update KubeDB api (#319)
+- [6be49e7e](https://github.com/kubedb/operator/commit/6be49e7e) Update KubeDB api (#318)
+- [00bf9bec](https://github.com/kubedb/operator/commit/00bf9bec) Update Kubernetes v1.18.9 dependencies (#317)
+- [fd529403](https://github.com/kubedb/operator/commit/fd529403) Update KubeDB api (#316)
+- [f03305e1](https://github.com/kubedb/operator/commit/f03305e1) Update KubeDB api (#315)
+- [fb5e4873](https://github.com/kubedb/operator/commit/fb5e4873) Update KubeDB api (#312)
+- [f3843a05](https://github.com/kubedb/operator/commit/f3843a05) Update repository config (#311)
+- [18f29e73](https://github.com/kubedb/operator/commit/18f29e73) Update repository config (#310)
+- [25405c38](https://github.com/kubedb/operator/commit/25405c38) Update repository config (#309)
+- [e464d336](https://github.com/kubedb/operator/commit/e464d336) Update KubeDB api (#308)
+- [eeccd59e](https://github.com/kubedb/operator/commit/eeccd59e) Update Kubernetes v1.18.9 dependencies (#307)
+- [dd2f176f](https://github.com/kubedb/operator/commit/dd2f176f) Publish docker images to ghcr.io (#306)
+- [d65d299f](https://github.com/kubedb/operator/commit/d65d299f) Update KubeDB api (#305)
+- [3f681cef](https://github.com/kubedb/operator/commit/3f681cef) Update KubeDB api (#304)
+- [bc58d3d7](https://github.com/kubedb/operator/commit/bc58d3d7) Refactor initializer code + Use common event recorder (#292)
+- [952e1b33](https://github.com/kubedb/operator/commit/952e1b33) Update repository config (#301)
+- [66bee9c3](https://github.com/kubedb/operator/commit/66bee9c3) Update Kubernetes v1.18.9 dependencies (#300)
+- [4e508002](https://github.com/kubedb/operator/commit/4e508002) Update for release Stash@v2020.09.29 (#299)
+- [b6a4caa4](https://github.com/kubedb/operator/commit/b6a4caa4) Update Kubernetes v1.18.9 dependencies (#298)
+- [201aed32](https://github.com/kubedb/operator/commit/201aed32) Update Kubernetes v1.18.9 dependencies (#296)
+- [36ed325d](https://github.com/kubedb/operator/commit/36ed325d) Update repository config (#295)
+- [36ec3035](https://github.com/kubedb/operator/commit/36ec3035) Update repository config (#294)
+- [32e61f43](https://github.com/kubedb/operator/commit/32e61f43) Update Kubernetes v1.18.9 dependencies (#293)
+- [078e7062](https://github.com/kubedb/operator/commit/078e7062) Update Kubernetes v1.18.3 dependencies (#291)
+- [900626dd](https://github.com/kubedb/operator/commit/900626dd) Update Kubernetes v1.18.3 dependencies (#290)
+- [7bf1e16e](https://github.com/kubedb/operator/commit/7bf1e16e) Use AppsCode Community license (#289)
+- [ba436a4b](https://github.com/kubedb/operator/commit/ba436a4b) Add license verifier (#288)
+- [0a02a313](https://github.com/kubedb/operator/commit/0a02a313) Update for release Stash@v2020.09.16 (#287)
+- [9ae202e1](https://github.com/kubedb/operator/commit/9ae202e1) Update Kubernetes v1.18.3 dependencies (#286)
+- [5bea03b9](https://github.com/kubedb/operator/commit/5bea03b9) Update Kubernetes v1.18.3 dependencies (#284)
+- [b1375565](https://github.com/kubedb/operator/commit/b1375565) Update Kubernetes v1.18.3 dependencies (#282)
+- [a13ca48b](https://github.com/kubedb/operator/commit/a13ca48b) Prepare for release v0.14.0-beta.2 (#281)
+- [fc6c1e9e](https://github.com/kubedb/operator/commit/fc6c1e9e) Update Kubernetes v1.18.3 dependencies (#280)
+- [cd74716b](https://github.com/kubedb/operator/commit/cd74716b) Update Kubernetes v1.18.3 dependencies (#275)
+- [5b3c76ed](https://github.com/kubedb/operator/commit/5b3c76ed) Update Kubernetes v1.18.3 dependencies (#274)
+- [397a7e60](https://github.com/kubedb/operator/commit/397a7e60) Update Kubernetes v1.18.3 dependencies (#273)
+- [616ea78d](https://github.com/kubedb/operator/commit/616ea78d) Update Kubernetes v1.18.3 dependencies (#272)
+- [b7b0d2b9](https://github.com/kubedb/operator/commit/b7b0d2b9) Update Kubernetes v1.18.3 dependencies (#271)
+- [3afadb7a](https://github.com/kubedb/operator/commit/3afadb7a) Update Kubernetes v1.18.3 dependencies (#270)
+- [60b15632](https://github.com/kubedb/operator/commit/60b15632) Remove dependency on enterprise operator (#269)
+- [b3648cde](https://github.com/kubedb/operator/commit/b3648cde) Build images in e2e workflow (#268)
+- [73dee065](https://github.com/kubedb/operator/commit/73dee065) Update to Kubernetes v1.18.3 (#266)
+- [a8a42ab8](https://github.com/kubedb/operator/commit/a8a42ab8) Allow configuring k8s in e2e tests (#267)
+- [4b7d6ee3](https://github.com/kubedb/operator/commit/4b7d6ee3) Trigger e2e tests on /ok-to-test command (#265)
+- [024fc40a](https://github.com/kubedb/operator/commit/024fc40a) Update to Kubernetes v1.18.3 (#264)
+- [bd1da662](https://github.com/kubedb/operator/commit/bd1da662) Update to Kubernetes v1.18.3 (#263)
+- [a2bba612](https://github.com/kubedb/operator/commit/a2bba612) Prepare for release v0.14.0-beta.1 (#262)
+- [22bc85ec](https://github.com/kubedb/operator/commit/22bc85ec) Allow customizing chart registry (#261)
+- [52cc1dc7](https://github.com/kubedb/operator/commit/52cc1dc7) Update for release Stash@v2020.07.09-beta.0 (#260)
+- [2e8b709f](https://github.com/kubedb/operator/commit/2e8b709f) Update for release Stash@v2020.07.08-beta.0 (#259)
+- [7b58b548](https://github.com/kubedb/operator/commit/7b58b548) Update License (#258)
+- [d4cd1a93](https://github.com/kubedb/operator/commit/d4cd1a93) Update to Kubernetes v1.18.3 (#256)
+- [f6091845](https://github.com/kubedb/operator/commit/f6091845) Update ci.yml
+- [5324d2b6](https://github.com/kubedb/operator/commit/5324d2b6) Update ci.yml
+- [c888d7fd](https://github.com/kubedb/operator/commit/c888d7fd) Add workflow to update docs (#255)
+- [ba843e17](https://github.com/kubedb/operator/commit/ba843e17) Update update-release-tracker.sh
+- [b93c5ab4](https://github.com/kubedb/operator/commit/b93c5ab4) Update update-release-tracker.sh
+- [6b8d2149](https://github.com/kubedb/operator/commit/6b8d2149) Add script to update release tracker on pr merge (#254)
+- [bb1290dc](https://github.com/kubedb/operator/commit/bb1290dc) Update .kodiak.toml
+- [9bb85c3b](https://github.com/kubedb/operator/commit/9bb85c3b) Register validator & mutators for all supported dbs (#253)
+- [1a524d9c](https://github.com/kubedb/operator/commit/1a524d9c) Various fixes (#252)
+- [4860f2a7](https://github.com/kubedb/operator/commit/4860f2a7) Update to Kubernetes v1.18.3 (#251)
+- [1a163c6a](https://github.com/kubedb/operator/commit/1a163c6a) Create .kodiak.toml
+- [1eda36b9](https://github.com/kubedb/operator/commit/1eda36b9) Update to Kubernetes v1.18.3 (#247)
+- [77b8b858](https://github.com/kubedb/operator/commit/77b8b858) Update enterprise operator tag (#246)
+- [96ca876e](https://github.com/kubedb/operator/commit/96ca876e) Revendor kubedb.dev/apimachinery@master (#245)
+- [43a3a7f1](https://github.com/kubedb/operator/commit/43a3a7f1) Use recommended kubernetes app labels
+- [1ae7045f](https://github.com/kubedb/operator/commit/1ae7045f) Update crazy-max/ghaction-docker-buildx flag
+- [f25034ef](https://github.com/kubedb/operator/commit/f25034ef) Trigger the workflow on push or pull request
+- [ba486319](https://github.com/kubedb/operator/commit/ba486319) Update readme (#244)
+- [5f7191f4](https://github.com/kubedb/operator/commit/5f7191f4) Update CHANGELOG.md
+- [5b14af4b](https://github.com/kubedb/operator/commit/5b14af4b) Add license scan report and status (#241)
+- [9848932b](https://github.com/kubedb/operator/commit/9848932b) Pass the topology object to common controller
+- [90d1c873](https://github.com/kubedb/operator/commit/90d1c873) Initialize topology for MongoDB webhooks (#243)
+- [8ecb87c8](https://github.com/kubedb/operator/commit/8ecb87c8) Fix nil pointer exception (#242)
+- [b12c3392](https://github.com/kubedb/operator/commit/b12c3392) Update operator dependencies (#237)
+- [f714bb1b](https://github.com/kubedb/operator/commit/f714bb1b) Always create RBAC resources (#238)
+- [f43a588e](https://github.com/kubedb/operator/commit/f43a588e) Use Go 1.13 in CI
+- [e8ab3580](https://github.com/kubedb/operator/commit/e8ab3580) Update client-go to kubernetes-1.16.3 (#239)
+- [1dc84a67](https://github.com/kubedb/operator/commit/1dc84a67) Update CI badge
+- [d9d1cc0a](https://github.com/kubedb/operator/commit/d9d1cc0a) Bundle PgBouncer operator (#236)
+- [720303c1](https://github.com/kubedb/operator/commit/720303c1) Fix linter errors (#235)
+- [4c53a71f](https://github.com/kubedb/operator/commit/4c53a71f) Update go.yml
+- [e65fc457](https://github.com/kubedb/operator/commit/e65fc457) Enable GitHub actions
+- [2dcb0d6d](https://github.com/kubedb/operator/commit/2dcb0d6d) Update changelog
+
+
+
+## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb)
+
+### [v0.1.0-rc.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-rc.1)
+
+- [4ac07f08](https://github.com/kubedb/percona-xtradb/commit/4ac07f08) Prepare for release v0.1.0-rc.1 (#119)
+- [397607a3](https://github.com/kubedb/percona-xtradb/commit/397607a3) Prepare for release v0.1.0-beta.6 (#118)
+- [a3b7642d](https://github.com/kubedb/percona-xtradb/commit/a3b7642d) Create SRV records for governing service (#117)
+- [9866a420](https://github.com/kubedb/percona-xtradb/commit/9866a420) Prepare for release v0.1.0-beta.5 (#116)
+- [f92081d1](https://github.com/kubedb/percona-xtradb/commit/f92081d1) Create separate governing service for each database (#115)
+- [6010b189](https://github.com/kubedb/percona-xtradb/commit/6010b189) Update KubeDB api (#114)
+- [95b57c72](https://github.com/kubedb/percona-xtradb/commit/95b57c72) Update readme
+- [14b2f1b2](https://github.com/kubedb/percona-xtradb/commit/14b2f1b2) Prepare for release v0.1.0-beta.4 (#113)
+- [eff1d265](https://github.com/kubedb/percona-xtradb/commit/eff1d265) Update KubeDB api (#112)
+- [a2878d4a](https://github.com/kubedb/percona-xtradb/commit/a2878d4a) Update Kubernetes v1.18.9 dependencies (#111)
+- [51f0d104](https://github.com/kubedb/percona-xtradb/commit/51f0d104) Update KubeDB api (#110)
+- [fcf5343b](https://github.com/kubedb/percona-xtradb/commit/fcf5343b) Update for release Stash@v2020.10.21 (#109)
+- [9fe68d43](https://github.com/kubedb/percona-xtradb/commit/9fe68d43) Fix init validator (#107)
+- [1c528cff](https://github.com/kubedb/percona-xtradb/commit/1c528cff) Update KubeDB api (#108)
+- [99d23f3d](https://github.com/kubedb/percona-xtradb/commit/99d23f3d) Update KubeDB api (#106)
+- [d0807640](https://github.com/kubedb/percona-xtradb/commit/d0807640) Update Kubernetes v1.18.9 dependencies (#105)
+- [bac7705b](https://github.com/kubedb/percona-xtradb/commit/bac7705b) Update KubeDB api (#104)
+- [475aabd5](https://github.com/kubedb/percona-xtradb/commit/475aabd5) Update KubeDB api (#103)
+- [60f7e5a9](https://github.com/kubedb/percona-xtradb/commit/60f7e5a9) Update KubeDB api (#102)
+- [84a97ced](https://github.com/kubedb/percona-xtradb/commit/84a97ced) Update KubeDB api (#101)
+- [d4a7b7c5](https://github.com/kubedb/percona-xtradb/commit/d4a7b7c5) Update Kubernetes v1.18.9 dependencies (#100)
+- [b818a4c5](https://github.com/kubedb/percona-xtradb/commit/b818a4c5) Update KubeDB api (#99)
+- [03df7739](https://github.com/kubedb/percona-xtradb/commit/03df7739) Update KubeDB api (#98)
+- [2f3ce0e6](https://github.com/kubedb/percona-xtradb/commit/2f3ce0e6) Update KubeDB api (#96)
+- [94e009e8](https://github.com/kubedb/percona-xtradb/commit/94e009e8) Update repository config (#95)
+- [fc61d440](https://github.com/kubedb/percona-xtradb/commit/fc61d440) Update repository config (#94)
+- [35f5b2bb](https://github.com/kubedb/percona-xtradb/commit/35f5b2bb) Update repository config (#93)
+- [d01e39dd](https://github.com/kubedb/percona-xtradb/commit/d01e39dd) Initialize statefulset watcher from cmd/server/options.go (#92)
+- [41bf932f](https://github.com/kubedb/percona-xtradb/commit/41bf932f) Update KubeDB api (#91)
+- [da92a1f3](https://github.com/kubedb/percona-xtradb/commit/da92a1f3) Update Kubernetes v1.18.9 dependencies (#90)
+- [554beafb](https://github.com/kubedb/percona-xtradb/commit/554beafb) Publish docker images to ghcr.io (#89)
+- [4c7031e1](https://github.com/kubedb/percona-xtradb/commit/4c7031e1) Update KubeDB api (#88)
+- [418c767a](https://github.com/kubedb/percona-xtradb/commit/418c767a) Update KubeDB api (#87)
+- [94eef91e](https://github.com/kubedb/percona-xtradb/commit/94eef91e) Update KubeDB api (#86)
+- [f3c2a360](https://github.com/kubedb/percona-xtradb/commit/f3c2a360) Update KubeDB api (#85)
+- [107bb6a6](https://github.com/kubedb/percona-xtradb/commit/107bb6a6) Update repository config (#84)
+- [938e64bc](https://github.com/kubedb/percona-xtradb/commit/938e64bc) Cleanup monitoring spec api (#83)
+- [deeaad8f](https://github.com/kubedb/percona-xtradb/commit/deeaad8f) Use conditions to handle database initialization (#80)
+- [798c3ddc](https://github.com/kubedb/percona-xtradb/commit/798c3ddc) Update Kubernetes v1.18.9 dependencies (#82)
+- [16c72ba6](https://github.com/kubedb/percona-xtradb/commit/16c72ba6) Updated the exporter port and service (#81)
+- [9314faf1](https://github.com/kubedb/percona-xtradb/commit/9314faf1) Update for release Stash@v2020.09.29 (#79)
+- [6cb53efc](https://github.com/kubedb/percona-xtradb/commit/6cb53efc) Update Kubernetes v1.18.9 dependencies (#78)
+- [fd2b8cdd](https://github.com/kubedb/percona-xtradb/commit/fd2b8cdd) Update Kubernetes v1.18.9 dependencies (#76)
+- [9d1038db](https://github.com/kubedb/percona-xtradb/commit/9d1038db) Update repository config (#75)
+- [41a05a44](https://github.com/kubedb/percona-xtradb/commit/41a05a44) Update repository config (#74)
+- [eccd2acd](https://github.com/kubedb/percona-xtradb/commit/eccd2acd) Update Kubernetes v1.18.9 dependencies (#73)
+- [27635f1c](https://github.com/kubedb/percona-xtradb/commit/27635f1c) Update Kubernetes v1.18.3 dependencies (#72)
+- [792326c7](https://github.com/kubedb/percona-xtradb/commit/792326c7) Use common event recorder (#71)
+- [0ff583b8](https://github.com/kubedb/percona-xtradb/commit/0ff583b8) Prepare for release v0.1.0-beta.3 (#70)
+- [627bc039](https://github.com/kubedb/percona-xtradb/commit/627bc039) Use new `spec.init` section (#69)
+- [f79e4771](https://github.com/kubedb/percona-xtradb/commit/f79e4771) Update Kubernetes v1.18.3 dependencies (#68)
+- [257954c2](https://github.com/kubedb/percona-xtradb/commit/257954c2) Add license verifier (#67)
+- [e06eec6b](https://github.com/kubedb/percona-xtradb/commit/e06eec6b) Update for release Stash@v2020.09.16 (#66)
+- [29901348](https://github.com/kubedb/percona-xtradb/commit/29901348) Update Kubernetes v1.18.3 dependencies (#65)
+- [02d5bfde](https://github.com/kubedb/percona-xtradb/commit/02d5bfde) Use background deletion policy
+- [6e6d8b5b](https://github.com/kubedb/percona-xtradb/commit/6e6d8b5b) Update Kubernetes v1.18.3 dependencies (#63)
+- [7601a237](https://github.com/kubedb/percona-xtradb/commit/7601a237) Use AppsCode Community License (#62)
+- [4d1a2424](https://github.com/kubedb/percona-xtradb/commit/4d1a2424) Update Kubernetes v1.18.3 dependencies (#61)
+- [471b6def](https://github.com/kubedb/percona-xtradb/commit/471b6def) Prepare for release v0.1.0-beta.2 (#60)
+- [9423a70f](https://github.com/kubedb/percona-xtradb/commit/9423a70f) Update release.yml
+- [85d1d036](https://github.com/kubedb/percona-xtradb/commit/85d1d036) Use updated apis (#59)
+- [6811b8dc](https://github.com/kubedb/percona-xtradb/commit/6811b8dc) Update Kubernetes v1.18.3 dependencies (#53)
+- [4212d2a0](https://github.com/kubedb/percona-xtradb/commit/4212d2a0) Update Kubernetes v1.18.3 dependencies (#52)
+- [659d646c](https://github.com/kubedb/percona-xtradb/commit/659d646c) Update Kubernetes v1.18.3 dependencies (#51)
+- [a868e0c3](https://github.com/kubedb/percona-xtradb/commit/a868e0c3) Update Kubernetes v1.18.3 dependencies (#50)
+- [162e6ca4](https://github.com/kubedb/percona-xtradb/commit/162e6ca4) Update Kubernetes v1.18.3 dependencies (#49)
+- [a7fa1fbf](https://github.com/kubedb/percona-xtradb/commit/a7fa1fbf) Update Kubernetes v1.18.3 dependencies (#48)
+- [b6a4583f](https://github.com/kubedb/percona-xtradb/commit/b6a4583f) Remove dependency on enterprise operator (#47)
+- [a8909b38](https://github.com/kubedb/percona-xtradb/commit/a8909b38) Allow configuring k8s & db version in e2e tests (#46)
+- [4d79d26e](https://github.com/kubedb/percona-xtradb/commit/4d79d26e) Update to Kubernetes v1.18.3 (#45)
+- [189f3212](https://github.com/kubedb/percona-xtradb/commit/189f3212) Trigger e2e tests on /ok-to-test command (#44)
+- [a037bd03](https://github.com/kubedb/percona-xtradb/commit/a037bd03) Update to Kubernetes v1.18.3 (#43)
+- [33cabdf3](https://github.com/kubedb/percona-xtradb/commit/33cabdf3) Update to Kubernetes v1.18.3 (#42)
+- [28b9fc0f](https://github.com/kubedb/percona-xtradb/commit/28b9fc0f) Prepare for release v0.1.0-beta.1 (#41)
+- [fb4f5444](https://github.com/kubedb/percona-xtradb/commit/fb4f5444) Update for release Stash@v2020.07.09-beta.0 (#39)
+- [ad221aa2](https://github.com/kubedb/percona-xtradb/commit/ad221aa2) include Makefile.env
+- [841ec855](https://github.com/kubedb/percona-xtradb/commit/841ec855) Allow customizing chart registry (#38)
+- [bb608980](https://github.com/kubedb/percona-xtradb/commit/bb608980) Update License (#37)
+- [cf8cd2fa](https://github.com/kubedb/percona-xtradb/commit/cf8cd2fa) Update for release Stash@v2020.07.08-beta.0 (#36)
+- [7b28c4b9](https://github.com/kubedb/percona-xtradb/commit/7b28c4b9) Update to Kubernetes v1.18.3 (#35)
+- [848ff94a](https://github.com/kubedb/percona-xtradb/commit/848ff94a) Update ci.yml
+- [d124dd6a](https://github.com/kubedb/percona-xtradb/commit/d124dd6a) Load stash version from .env file for make (#34)
+- [1de40e1d](https://github.com/kubedb/percona-xtradb/commit/1de40e1d) Update update-release-tracker.sh
+- [7a4503be](https://github.com/kubedb/percona-xtradb/commit/7a4503be) Update update-release-tracker.sh
+- [ad0dfaf8](https://github.com/kubedb/percona-xtradb/commit/ad0dfaf8) Add script to update release tracker on pr merge (#33)
+- [aaca6bd9](https://github.com/kubedb/percona-xtradb/commit/aaca6bd9) Update .kodiak.toml
+- [9a495724](https://github.com/kubedb/percona-xtradb/commit/9a495724) Various fixes (#32)
+- [9b6c9a53](https://github.com/kubedb/percona-xtradb/commit/9b6c9a53) Update to Kubernetes v1.18.3 (#31)
+- [67912547](https://github.com/kubedb/percona-xtradb/commit/67912547) Update to Kubernetes v1.18.3
+- [fc8ce4cc](https://github.com/kubedb/percona-xtradb/commit/fc8ce4cc) Create .kodiak.toml
+- [8aba5ef2](https://github.com/kubedb/percona-xtradb/commit/8aba5ef2) Use CRD v1 for Kubernetes >= 1.16 (#30)
+- [e81d2b4c](https://github.com/kubedb/percona-xtradb/commit/e81d2b4c) Update to Kubernetes v1.18.3 (#29)
+- [2a32730a](https://github.com/kubedb/percona-xtradb/commit/2a32730a) Fix e2e tests (#28)
+- [a79626d9](https://github.com/kubedb/percona-xtradb/commit/a79626d9) Update stash install commands
+- [52fc2059](https://github.com/kubedb/percona-xtradb/commit/52fc2059) Use recommended kubernetes app labels (#27)
+- [93dc10ec](https://github.com/kubedb/percona-xtradb/commit/93dc10ec) Update crazy-max/ghaction-docker-buildx flag
+- [ce5717e2](https://github.com/kubedb/percona-xtradb/commit/ce5717e2) Revendor kubedb.dev/apimachinery@master (#26)
+- [c1ca649d](https://github.com/kubedb/percona-xtradb/commit/c1ca649d) Pass annotations from CRD to AppBinding (#25)
+- [f327cc01](https://github.com/kubedb/percona-xtradb/commit/f327cc01) Trigger the workflow on push or pull request
+- [02432393](https://github.com/kubedb/percona-xtradb/commit/02432393) Update CHANGELOG.md
+- [a89dbc55](https://github.com/kubedb/percona-xtradb/commit/a89dbc55) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#24)
+- [e69742de](https://github.com/kubedb/percona-xtradb/commit/e69742de) Update for percona-xtradb standalone restoresession (#23)
+- [958877a1](https://github.com/kubedb/percona-xtradb/commit/958877a1) Various fixes (#21)
+- [fb0d7a35](https://github.com/kubedb/percona-xtradb/commit/fb0d7a35) Update kubernetes client-go to 1.16.3 (#20)
+- [293fe9a4](https://github.com/kubedb/percona-xtradb/commit/293fe9a4) Fix default make command
+- [39358e3b](https://github.com/kubedb/percona-xtradb/commit/39358e3b) Use charts to install operator (#19)
+- [6c5b3395](https://github.com/kubedb/percona-xtradb/commit/6c5b3395) Several fixes and update tests (#18)
+- [84ff139f](https://github.com/kubedb/percona-xtradb/commit/84ff139f) Various Makefile improvements (#16)
+- [e2737f65](https://github.com/kubedb/percona-xtradb/commit/e2737f65) Remove EnableStatusSubresource (#17)
+- [fb886b07](https://github.com/kubedb/percona-xtradb/commit/fb886b07) Run e2e tests using GitHub actions (#12)
+- [35b155d9](https://github.com/kubedb/percona-xtradb/commit/35b155d9) Validate DBVersionSpecs and fixed broken build (#15)
+- [67794bd9](https://github.com/kubedb/percona-xtradb/commit/67794bd9) Update go.yml
+- [f7666354](https://github.com/kubedb/percona-xtradb/commit/f7666354) Various changes for Percona XtraDB (#13)
+- [ceb7ba67](https://github.com/kubedb/percona-xtradb/commit/ceb7ba67) Enable GitHub actions
+- [f5a112af](https://github.com/kubedb/percona-xtradb/commit/f5a112af) Refactor for ProxySQL Integration (#11)
+- [26602049](https://github.com/kubedb/percona-xtradb/commit/26602049) Revendor
+- [71957d40](https://github.com/kubedb/percona-xtradb/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10)
+- [b526ccd8](https://github.com/kubedb/percona-xtradb/commit/b526ccd8) Set database version in AppBinding (#7)
+- [336e7203](https://github.com/kubedb/percona-xtradb/commit/336e7203) Percona XtraDB Cluster support (#9)
+- [71a42f7a](https://github.com/kubedb/percona-xtradb/commit/71a42f7a) Don't set annotation to AppBinding (#8)
+- [282298cb](https://github.com/kubedb/percona-xtradb/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4)
+- [2ab9dddf](https://github.com/kubedb/percona-xtradb/commit/2ab9dddf) Add license header to Makefiles (#6)
+- [df135c08](https://github.com/kubedb/percona-xtradb/commit/df135c08) Add install, uninstall and purge command in Makefile (#3)
+- [73d3a845](https://github.com/kubedb/percona-xtradb/commit/73d3a845) Update .gitignore
+- [59a4e754](https://github.com/kubedb/percona-xtradb/commit/59a4e754) Add Makefile (#2)
+- [f3551ddc](https://github.com/kubedb/percona-xtradb/commit/f3551ddc) Rename package path (#1)
+- [56a241d6](https://github.com/kubedb/percona-xtradb/commit/56a241d6) Use explicit IP whitelist instead of automatic IP whitelist (#151)
+- [9f0b5ca3](https://github.com/kubedb/percona-xtradb/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147)
+- [73ad7c30](https://github.com/kubedb/percona-xtradb/commit/73ad7c30) Update changelog
+- [ccc36b5c](https://github.com/kubedb/percona-xtradb/commit/ccc36b5c) Update README.md
+- [9769e8e1](https://github.com/kubedb/percona-xtradb/commit/9769e8e1) Start next dev cycle
+- [a3fa468a](https://github.com/kubedb/percona-xtradb/commit/a3fa468a) Prepare release 0.5.0
+- [6d8862de](https://github.com/kubedb/percona-xtradb/commit/6d8862de) Mysql Group Replication tests (#146)
+- [49544e55](https://github.com/kubedb/percona-xtradb/commit/49544e55) Mysql Group Replication (#144)
+- [a85d4b44](https://github.com/kubedb/percona-xtradb/commit/a85d4b44) Revendor dependencies
+- [9c538460](https://github.com/kubedb/percona-xtradb/commit/9c538460) Changed Role to exclude psp without name (#143)
+- [6cace93b](https://github.com/kubedb/percona-xtradb/commit/6cace93b) Modify mutator validator names (#142)
+- [da0c19b9](https://github.com/kubedb/percona-xtradb/commit/da0c19b9) Update changelog
+- [b79c80d6](https://github.com/kubedb/percona-xtradb/commit/b79c80d6) Start next dev cycle
+- [838d9459](https://github.com/kubedb/percona-xtradb/commit/838d9459) Prepare release 0.4.0
+- [bf0f2c14](https://github.com/kubedb/percona-xtradb/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141)
+- [3d227570](https://github.com/kubedb/percona-xtradb/commit/3d227570) Added PSP support for mySQL (#137)
+- [7b766657](https://github.com/kubedb/percona-xtradb/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140)
+- [29e23470](https://github.com/kubedb/percona-xtradb/commit/29e23470) Support for init container (#139)
+- [3e1556f6](https://github.com/kubedb/percona-xtradb/commit/3e1556f6) Add role label to stats service (#138)
+- [ee078af9](https://github.com/kubedb/percona-xtradb/commit/ee078af9) Update changelog
+- [978f1139](https://github.com/kubedb/percona-xtradb/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136)
+- [821f23d1](https://github.com/kubedb/percona-xtradb/commit/821f23d1) Start next dev cycle
+- [678b26aa](https://github.com/kubedb/percona-xtradb/commit/678b26aa) Prepare release 0.3.0
+- [40ad7a23](https://github.com/kubedb/percona-xtradb/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134)
+- [98f03387](https://github.com/kubedb/percona-xtradb/commit/98f03387) Revendor dependencies (#135)
+- [dfe92615](https://github.com/kubedb/percona-xtradb/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133)
+- [71f8a350](https://github.com/kubedb/percona-xtradb/commit/71f8a350) Added ephemeral StorageType support (#132)
+- [0a6b6e46](https://github.com/kubedb/percona-xtradb/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131)
+- [99e57a9e](https://github.com/kubedb/percona-xtradb/commit/99e57a9e) Use PVC spec from snapshot if provided (#130)
+- [61497be6](https://github.com/kubedb/percona-xtradb/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129)
+- [7eafe088](https://github.com/kubedb/percona-xtradb/commit/7eafe088) Add certificate health checker (#128)
+- [973ec416](https://github.com/kubedb/percona-xtradb/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127)
+- [339975ff](https://github.com/kubedb/percona-xtradb/commit/339975ff) Fix AppBinding (#126)
+- [62050a72](https://github.com/kubedb/percona-xtradb/commit/62050a72) Update changelog
+- [2d454043](https://github.com/kubedb/percona-xtradb/commit/2d454043) Prepare release 0.2.0
+- [6941ea59](https://github.com/kubedb/percona-xtradb/commit/6941ea59) Reuse event recorder (#125)
+- [b77e66c4](https://github.com/kubedb/percona-xtradb/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123)
+- [c9228086](https://github.com/kubedb/percona-xtradb/commit/c9228086) Revendor dependencies (#124)
+- [97837120](https://github.com/kubedb/percona-xtradb/commit/97837120) Test for faulty snapshot (#122)
+- [c3e995b6](https://github.com/kubedb/percona-xtradb/commit/c3e995b6) Start next dev cycle
+- [8a4f3b13](https://github.com/kubedb/percona-xtradb/commit/8a4f3b13) Prepare release 0.2.0-rc.2
+- [79942191](https://github.com/kubedb/percona-xtradb/commit/79942191) Upgrade database secret keys (#121)
+- [1747fdf5](https://github.com/kubedb/percona-xtradb/commit/1747fdf5) Ignore mutation of fields to default values during update (#120)
+- [d902d588](https://github.com/kubedb/percona-xtradb/commit/d902d588) Support configuration options for exporter sidecar (#119)
+- [dd7c3f44](https://github.com/kubedb/percona-xtradb/commit/dd7c3f44) Use flags.DumpAll (#118)
+- [bc1ef05b](https://github.com/kubedb/percona-xtradb/commit/bc1ef05b) Start next dev cycle
+- [9d33c1a0](https://github.com/kubedb/percona-xtradb/commit/9d33c1a0) Prepare release 0.2.0-rc.1
+- [b076e141](https://github.com/kubedb/percona-xtradb/commit/b076e141) Apply cleanup (#117)
+- [7dc5641f](https://github.com/kubedb/percona-xtradb/commit/7dc5641f) Set periodic analytics (#116)
+- [90ea6acc](https://github.com/kubedb/percona-xtradb/commit/90ea6acc) Introduce AppBinding support (#115)
+- [a882d76a](https://github.com/kubedb/percona-xtradb/commit/a882d76a) Fix Analytics (#114)
+- [0961009c](https://github.com/kubedb/percona-xtradb/commit/0961009c) Error out from cron job for deprecated dbversion (#113)
+- [da1f4e27](https://github.com/kubedb/percona-xtradb/commit/da1f4e27) Add CRDs without observation when operator starts (#112)
+- [0a754d2f](https://github.com/kubedb/percona-xtradb/commit/0a754d2f) Update changelog
+- [b09bc6e1](https://github.com/kubedb/percona-xtradb/commit/b09bc6e1) Start next dev cycle
+- [0d467ccb](https://github.com/kubedb/percona-xtradb/commit/0d467ccb) Prepare release 0.2.0-rc.0
+- [c757007a](https://github.com/kubedb/percona-xtradb/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09'
+- [ddfe4be1](https://github.com/kubedb/percona-xtradb/commit/ddfe4be1) Support custom user password for backup (#111)
+- [8c84ba20](https://github.com/kubedb/percona-xtradb/commit/8c84ba20) Support providing resources for monitoring container (#110)
+- [7bcfbc48](https://github.com/kubedb/percona-xtradb/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109)
+- [145bba2b](https://github.com/kubedb/percona-xtradb/commit/145bba2b) Add validation webhook xray (#108)
+- [6da1887f](https://github.com/kubedb/percona-xtradb/commit/6da1887f) Various Fixes (#107)
+- [111519e9](https://github.com/kubedb/percona-xtradb/commit/111519e9) Merge ports from service template (#105)
+- [38147ef1](https://github.com/kubedb/percona-xtradb/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104)
+- [e28ebc47](https://github.com/kubedb/percona-xtradb/commit/e28ebc47) Pass resources to NamespaceValidator (#103)
+- [aed12bf5](https://github.com/kubedb/percona-xtradb/commit/aed12bf5) Various fixes (#102)
+- [3d372ef6](https://github.com/kubedb/percona-xtradb/commit/3d372ef6) Support Lifecycle hook and container probes (#101)
+- [b6ef6887](https://github.com/kubedb/percona-xtradb/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100)
+- [d89e7783](https://github.com/kubedb/percona-xtradb/commit/d89e7783) Update package alias (#99)
+- [f0b44b3a](https://github.com/kubedb/percona-xtradb/commit/f0b44b3a) Start next dev cycle
+- [a79ff03b](https://github.com/kubedb/percona-xtradb/commit/a79ff03b) Prepare release 0.2.0-beta.1
+- [0d8d3cca](https://github.com/kubedb/percona-xtradb/commit/0d8d3cca) Revendor api (#98)
+- [2f850243](https://github.com/kubedb/percona-xtradb/commit/2f850243) Fix tests (#97)
+- [4ced0bfe](https://github.com/kubedb/percona-xtradb/commit/4ced0bfe) Revendor api for catalog apigroup (#96)
+- [e7695400](https://github.com/kubedb/percona-xtradb/commit/e7695400) Update changelog
+- [8e358aea](https://github.com/kubedb/percona-xtradb/commit/8e358aea) Use --pull flag with docker build (#20) (#95)
+- [d2a97d90](https://github.com/kubedb/percona-xtradb/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4'
+- [d1fe8a8a](https://github.com/kubedb/percona-xtradb/commit/d1fe8a8a) Start next dev cycle
+- [04eb9bb5](https://github.com/kubedb/percona-xtradb/commit/04eb9bb5) Prepare release 0.2.0-beta.0
+- [9dfea960](https://github.com/kubedb/percona-xtradb/commit/9dfea960) Pass extra args to tools.sh (#93)
+- [47dd3cad](https://github.com/kubedb/percona-xtradb/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92)
+- [9c4d485b](https://github.com/kubedb/percona-xtradb/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91)
+- [be72f784](https://github.com/kubedb/percona-xtradb/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90)
+- [05c8f14d](https://github.com/kubedb/percona-xtradb/commit/05c8f14d) Search used secrets within same namespace of DB object (#89)
+- [0d94c946](https://github.com/kubedb/percona-xtradb/commit/0d94c946) Support Termination Policy (#88)
+- [8775ddf7](https://github.com/kubedb/percona-xtradb/commit/8775ddf7) Update builddeps.sh
+- [796c93da](https://github.com/kubedb/percona-xtradb/commit/796c93da) Revendor k8s.io/apiserver (#87)
+- [5a1e3f57](https://github.com/kubedb/percona-xtradb/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86)
+- [809a3c49](https://github.com/kubedb/percona-xtradb/commit/809a3c49) Support UpdateStrategy (#84)
+- [372c52ef](https://github.com/kubedb/percona-xtradb/commit/372c52ef) Add TerminationPolicy for databases (#83)
+- [c01b55e8](https://github.com/kubedb/percona-xtradb/commit/c01b55e8) Revendor api (#82)
+- [5e196b95](https://github.com/kubedb/percona-xtradb/commit/5e196b95) Use IntHash as status.observedGeneration (#81)
+- [2da3bb1b](https://github.com/kubedb/percona-xtradb/commit/2da3bb1b) fix github status (#80)
+- [121d0a98](https://github.com/kubedb/percona-xtradb/commit/121d0a98) Update pipeline (#79)
[532e3137](https://github.com/kubedb/percona-xtradb/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/percona-xtradb/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/percona-xtradb/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/percona-xtradb/commit/0b997855) Use official exporter image (#74) +- [702d5736](https://github.com/kubedb/percona-xtradb/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/percona-xtradb/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/percona-xtradb/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/percona-xtradb/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/percona-xtradb/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/percona-xtradb/commit/6a41d9bc) Fix uninstall for concourse (#69) +- [f1af09db](https://github.com/kubedb/percona-xtradb/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/percona-xtradb/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/percona-xtradb/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/percona-xtradb/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/percona-xtradb/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/percona-xtradb/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/percona-xtradb/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/percona-xtradb/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/percona-xtradb/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/percona-xtradb/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/percona-xtradb/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/percona-xtradb/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/percona-xtradb/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- [2fd02169](https://github.com/kubedb/percona-xtradb/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/percona-xtradb/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/percona-xtradb/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/percona-xtradb/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/percona-xtradb/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/percona-xtradb/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/percona-xtradb/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/percona-xtradb/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- [696135d5](https://github.com/kubedb/percona-xtradb/commit/696135d5) Fix travis 
build (#48) +- [c519ef89](https://github.com/kubedb/percona-xtradb/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/percona-xtradb/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/percona-xtradb/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/percona-xtradb/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/percona-xtradb/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/percona-xtradb/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/percona-xtradb/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/percona-xtradb/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/percona-xtradb/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/percona-xtradb/commit/ca811a2e) Fix release script (#43) +- [b79541f6](https://github.com/kubedb/percona-xtradb/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/percona-xtradb/commit/a2d13c82) Concourse (#41) +- [95b2186e](https://github.com/kubedb/percona-xtradb/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/percona-xtradb/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/percona-xtradb/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/percona-xtradb/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/percona-xtradb/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/percona-xtradb/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/percona-xtradb/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/percona-xtradb/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/percona-xtradb/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/percona-xtradb/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/percona-xtradb/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- [bbcd48d6](https://github.com/kubedb/percona-xtradb/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/percona-xtradb/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/percona-xtradb/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/percona-xtradb/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/percona-xtradb/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/percona-xtradb/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/percona-xtradb/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/percona-xtradb/commit/b9ec877e) Add travis yaml (#25) 
+- [ade3571c](https://github.com/kubedb/percona-xtradb/commit/ade3571c) Start next dev cycle +- [b4b749df](https://github.com/kubedb/percona-xtradb/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/percona-xtradb/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/percona-xtradb/commit/5ee1ac8c) Update validation (#22) +- [dd023c50](https://github.com/kubedb/percona-xtradb/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/percona-xtradb/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/percona-xtradb/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/percona-xtradb/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/percona-xtradb/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) +- [a484e0e5](https://github.com/kubedb/percona-xtradb/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/percona-xtradb/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/percona-xtradb/commit/078098c8) Review Mysql docker images and Fixed monitoring (#14) +- [6877108a](https://github.com/kubedb/percona-xtradb/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/percona-xtradb/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/percona-xtradb/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/percona-xtradb/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/percona-xtradb/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/percona-xtradb/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/percona-xtradb/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/percona-xtradb/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/percona-xtradb/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/percona-xtradb/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/percona-xtradb/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/percona-xtradb/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/percona-xtradb/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/percona-xtradb/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/percona-xtradb/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/percona-xtradb/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/percona-xtradb/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/percona-xtradb/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/percona-xtradb/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/percona-xtradb/commit/0aa81cdf) Use client-go 5.x +- 
[3de10d7f](https://github.com/kubedb/percona-xtradb/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/percona-xtradb/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/percona-xtradb/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-rc.1](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-rc.1) + + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-rc.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-rc.1) + +- [b77fa7c8](https://github.com/kubedb/pgbouncer/commit/b77fa7c8) Prepare for release v0.1.0-rc.1 (#92) +- [e82f1017](https://github.com/kubedb/pgbouncer/commit/e82f1017) Prepare for release v0.1.0-beta.6 (#91) +- [8d2fa953](https://github.com/kubedb/pgbouncer/commit/8d2fa953) Create SRV records for governing service (#90) +- [96144773](https://github.com/kubedb/pgbouncer/commit/96144773) Prepare for release v0.1.0-beta.5 (#89) +- [bb574108](https://github.com/kubedb/pgbouncer/commit/bb574108) Create separate governing service for each database (#88) +- [28f29e3c](https://github.com/kubedb/pgbouncer/commit/28f29e3c) Update KubeDB api (#87) +- [79a3e3f7](https://github.com/kubedb/pgbouncer/commit/79a3e3f7) Update readme +- [f42d28f9](https://github.com/kubedb/pgbouncer/commit/f42d28f9) Update repository config (#86) +- [4c292933](https://github.com/kubedb/pgbouncer/commit/4c292933) Prepare for release v0.1.0-beta.4 (#85) +- [c3daaa90](https://github.com/kubedb/pgbouncer/commit/c3daaa90) Update KubeDB api (#84) +- [19784f7a](https://github.com/kubedb/pgbouncer/commit/19784f7a) Update Kubernetes v1.18.9 dependencies (#83) +- [a7ea74e4](https://github.com/kubedb/pgbouncer/commit/a7ea74e4) Update KubeDB api (#82) +- [49391b30](https://github.com/kubedb/pgbouncer/commit/49391b30) Update KubeDB api (#81) +- [2ad0016d](https://github.com/kubedb/pgbouncer/commit/2ad0016d) Update KubeDB api (#80) +- [e0169139](https://github.com/kubedb/pgbouncer/commit/e0169139) Update Kubernetes v1.18.9 dependencies (#79) +- [ade8edf9](https://github.com/kubedb/pgbouncer/commit/ade8edf9) Update KubeDB api (#78) +- [86387966](https://github.com/kubedb/pgbouncer/commit/86387966) Update KubeDB api (#77) +- [d5fa2ce7](https://github.com/kubedb/pgbouncer/commit/d5fa2ce7) Update KubeDB api (#76) +- [938d61f6](https://github.com/kubedb/pgbouncer/commit/938d61f6) Update KubeDB api (#75) +- [89ceecb1](https://github.com/kubedb/pgbouncer/commit/89ceecb1) Update Kubernetes v1.18.9 dependencies (#74) +- [3b8fc849](https://github.com/kubedb/pgbouncer/commit/3b8fc849) Update KubeDB api (#73) +- [89ed5bf0](https://github.com/kubedb/pgbouncer/commit/89ed5bf0) Update KubeDB api (#72) +- [187eaff5](https://github.com/kubedb/pgbouncer/commit/187eaff5) Update KubeDB api (#71) +- [1222c935](https://github.com/kubedb/pgbouncer/commit/1222c935) Update repository config (#70) +- [f9c72f8c](https://github.com/kubedb/pgbouncer/commit/f9c72f8c) Update repository config (#69) +- [a55e0a9f](https://github.com/kubedb/pgbouncer/commit/a55e0a9f) Update repository config (#68) +- [20f01c3b](https://github.com/kubedb/pgbouncer/commit/20f01c3b) Update KubeDB api (#67) +- [ea907c2f](https://github.com/kubedb/pgbouncer/commit/ea907c2f) Update Kubernetes v1.18.9 dependencies (#66) +- [86f92e64](https://github.com/kubedb/pgbouncer/commit/86f92e64) Publish docker images to ghcr.io (#65) +- 
[189ab8b8](https://github.com/kubedb/pgbouncer/commit/189ab8b8) Update KubeDB api (#64) +- [d30a59c2](https://github.com/kubedb/pgbouncer/commit/d30a59c2) Update KubeDB api (#63) +- [545ee043](https://github.com/kubedb/pgbouncer/commit/545ee043) Update KubeDB api (#62) +- [cc01e1ca](https://github.com/kubedb/pgbouncer/commit/cc01e1ca) Update KubeDB api (#61) +- [40bc916f](https://github.com/kubedb/pgbouncer/commit/40bc916f) Update repository config (#60) +- [00313b21](https://github.com/kubedb/pgbouncer/commit/00313b21) Update Kubernetes v1.18.9 dependencies (#59) +- [080b77f3](https://github.com/kubedb/pgbouncer/commit/080b77f3) Update KubeDB api (#56) +- [fa479841](https://github.com/kubedb/pgbouncer/commit/fa479841) Update Kubernetes v1.18.9 dependencies (#57) +- [559d7421](https://github.com/kubedb/pgbouncer/commit/559d7421) Update Kubernetes v1.18.9 dependencies (#55) +- [1bfe4067](https://github.com/kubedb/pgbouncer/commit/1bfe4067) Update repository config (#54) +- [5ac28f25](https://github.com/kubedb/pgbouncer/commit/5ac28f25) Update repository config (#53) +- [162034f0](https://github.com/kubedb/pgbouncer/commit/162034f0) Update Kubernetes v1.18.9 dependencies (#52) +- [71697842](https://github.com/kubedb/pgbouncer/commit/71697842) Update Kubernetes v1.18.3 dependencies (#51) +- [3a868c6d](https://github.com/kubedb/pgbouncer/commit/3a868c6d) Prepare for release v0.1.0-beta.3 (#50) +- [72745988](https://github.com/kubedb/pgbouncer/commit/72745988) Add license verifier (#49) +- [36e16b55](https://github.com/kubedb/pgbouncer/commit/36e16b55) Use AppsCode Trial license (#48) +- [d3917d72](https://github.com/kubedb/pgbouncer/commit/d3917d72) Update Kubernetes v1.18.3 dependencies (#47) +- [c5fb3b0e](https://github.com/kubedb/pgbouncer/commit/c5fb3b0e) Update Kubernetes v1.18.3 dependencies (#46) +- [64f27a21](https://github.com/kubedb/pgbouncer/commit/64f27a21) Update Kubernetes v1.18.3 dependencies (#44) +- [817891a9](https://github.com/kubedb/pgbouncer/commit/817891a9) Use AppsCode Community License (#43) +- [11826ae7](https://github.com/kubedb/pgbouncer/commit/11826ae7) Update Kubernetes v1.18.3 dependencies (#42) +- [e083d550](https://github.com/kubedb/pgbouncer/commit/e083d550) Prepare for release v0.1.0-beta.2 (#41) +- [fe847905](https://github.com/kubedb/pgbouncer/commit/fe847905) Update release.yml +- [ddf5a857](https://github.com/kubedb/pgbouncer/commit/ddf5a857) Use updated certificate spec (#35) +- [d5cd5bfd](https://github.com/kubedb/pgbouncer/commit/d5cd5bfd) Update Kubernetes v1.18.3 dependencies (#39) +- [21693c76](https://github.com/kubedb/pgbouncer/commit/21693c76) Update Kubernetes v1.18.3 dependencies (#38) +- [39ad48db](https://github.com/kubedb/pgbouncer/commit/39ad48db) Update Kubernetes v1.18.3 dependencies (#37) +- [7f1ecc77](https://github.com/kubedb/pgbouncer/commit/7f1ecc77) Update Kubernetes v1.18.3 dependencies (#36) +- [8d9d379a](https://github.com/kubedb/pgbouncer/commit/8d9d379a) Update Kubernetes v1.18.3 dependencies (#34) +- [c9b8300c](https://github.com/kubedb/pgbouncer/commit/c9b8300c) Update Kubernetes v1.18.3 dependencies (#33) +- [66c72a40](https://github.com/kubedb/pgbouncer/commit/66c72a40) Remove dependency on enterprise operator (#32) +- [757dc104](https://github.com/kubedb/pgbouncer/commit/757dc104) Update to cert-manager v0.16.0 (#30) +- [0a183d15](https://github.com/kubedb/pgbouncer/commit/0a183d15) Build images in e2e workflow (#29) +- [ca61e88c](https://github.com/kubedb/pgbouncer/commit/ca61e88c) Allow configuring k8s & db version in e2e 
tests (#28) +- [a87278b1](https://github.com/kubedb/pgbouncer/commit/a87278b1) Update to Kubernetes v1.18.3 (#27) +- [5abe86f3](https://github.com/kubedb/pgbouncer/commit/5abe86f3) Fix formatting +- [845f7a35](https://github.com/kubedb/pgbouncer/commit/845f7a35) Trigger e2e tests on /ok-to-test command (#26) +- [2cc23c03](https://github.com/kubedb/pgbouncer/commit/2cc23c03) Fix cert-manager integration for PgBouncer (#25) +- [2a148c26](https://github.com/kubedb/pgbouncer/commit/2a148c26) Update to Kubernetes v1.18.3 (#24) +- [f6eb8120](https://github.com/kubedb/pgbouncer/commit/f6eb8120) Update Makefile.env +- [bbf810c5](https://github.com/kubedb/pgbouncer/commit/bbf810c5) Prepare for release v0.1.0-beta.1 (#23) +- [5a6e361a](https://github.com/kubedb/pgbouncer/commit/5a6e361a) include Makefile.env (#22) +- [2d52d66e](https://github.com/kubedb/pgbouncer/commit/2d52d66e) Update License (#21) +- [33305d5f](https://github.com/kubedb/pgbouncer/commit/33305d5f) Update to Kubernetes v1.18.3 (#20) +- [b443a550](https://github.com/kubedb/pgbouncer/commit/b443a550) Update ci.yml +- [d3bedc9b](https://github.com/kubedb/pgbouncer/commit/d3bedc9b) Update update-release-tracker.sh +- [d9100ecc](https://github.com/kubedb/pgbouncer/commit/d9100ecc) Update update-release-tracker.sh +- [9b86bdaa](https://github.com/kubedb/pgbouncer/commit/9b86bdaa) Add script to update release tracker on pr merge (#19) +- [3362cef7](https://github.com/kubedb/pgbouncer/commit/3362cef7) Update .kodiak.toml +- [11ebebda](https://github.com/kubedb/pgbouncer/commit/11ebebda) Use POSTGRES_TAG v0.14.0-alpha.0 +- [dbe95b54](https://github.com/kubedb/pgbouncer/commit/dbe95b54) Various fixes (#18) +- [c50c65de](https://github.com/kubedb/pgbouncer/commit/c50c65de) Update to Kubernetes v1.18.3 (#17) +- [483fa438](https://github.com/kubedb/pgbouncer/commit/483fa438) Update to Kubernetes v1.18.3 +- [c0fa8e49](https://github.com/kubedb/pgbouncer/commit/c0fa8e49) Create .kodiak.toml +- [5e338016](https://github.com/kubedb/pgbouncer/commit/5e338016) Use CRD v1 for Kubernetes >= 1.16 (#16) +- [ef7fe475](https://github.com/kubedb/pgbouncer/commit/ef7fe475) Update to Kubernetes v1.18.3 (#15) +- [063339fc](https://github.com/kubedb/pgbouncer/commit/063339fc) Fix e2e tests (#14) +- [7cd92ba4](https://github.com/kubedb/pgbouncer/commit/7cd92ba4) Update crazy-max/ghaction-docker-buildx flag +- [e7a47a50](https://github.com/kubedb/pgbouncer/commit/e7a47a50) Revendor kubedb.dev/apimachinery@master (#13) +- [9d009160](https://github.com/kubedb/pgbouncer/commit/9d009160) Use updated operator labels in e2e tests (#12) +- [778924af](https://github.com/kubedb/pgbouncer/commit/778924af) Trigger the workflow on push or pull request +- [77be6b9e](https://github.com/kubedb/pgbouncer/commit/77be6b9e) Update CHANGELOG.md +- [a9decb98](https://github.com/kubedb/pgbouncer/commit/a9decb98) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#11) +- [cd4d2721](https://github.com/kubedb/pgbouncer/commit/cd4d2721) Fix build +- [b21b1a11](https://github.com/kubedb/pgbouncer/commit/b21b1a11) Revendor and update enterprise sidecar image (#10) +- [463f7bc0](https://github.com/kubedb/pgbouncer/commit/463f7bc0) Update enterprise operator tag (#9) +- [6e015884](https://github.com/kubedb/pgbouncer/commit/6e015884) Use kubedb/installer master branch in CI +- [88b98a49](https://github.com/kubedb/pgbouncer/commit/88b98a49) Update pgbouncer controller (#8) +- [a6b71bc3](https://github.com/kubedb/pgbouncer/commit/a6b71bc3) Update variable names +- 
[1a6794b7](https://github.com/kubedb/pgbouncer/commit/1a6794b7) Fix plain text secret in exporter container of StatefulSet (#5) +- [ab104a9f](https://github.com/kubedb/pgbouncer/commit/ab104a9f) Update client-go to kubernetes-1.16.3 (#7) +- [68dbb142](https://github.com/kubedb/pgbouncer/commit/68dbb142) Use charts to install operator (#6) +- [30e3e729](https://github.com/kubedb/pgbouncer/commit/30e3e729) Add add-license make target +- [6c1a78a0](https://github.com/kubedb/pgbouncer/commit/6c1a78a0) Enable e2e tests in GitHub actions (#4) +- [0960f805](https://github.com/kubedb/pgbouncer/commit/0960f805) Initial implementation (#2) +- [a8a9b1db](https://github.com/kubedb/pgbouncer/commit/a8a9b1db) Update go.yml +- [bc3b2624](https://github.com/kubedb/pgbouncer/commit/bc3b2624) Enable GitHub actions +- [2e33db2b](https://github.com/kubedb/pgbouncer/commit/2e33db2b) Clone kubedb/postgres repo (#1) +- [45a7cace](https://github.com/kubedb/pgbouncer/commit/45a7cace) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.0-rc.1](https://github.com/kubedb/postgres/releases/tag/v0.14.0-rc.1) + +- [c1ea472a](https://github.com/kubedb/postgres/commit/c1ea472a) Prepare for release v0.14.0-rc.1 (#405) +- [9e1a642e](https://github.com/kubedb/postgres/commit/9e1a642e) Prepare for release v0.14.0-beta.6 (#404) +- [8b869c02](https://github.com/kubedb/postgres/commit/8b869c02) Create SRV records for governing service (#402) +- [c6e802a7](https://github.com/kubedb/postgres/commit/c6e802a7) Prepare for release v0.14.0-beta.5 (#401) +- [4da12584](https://github.com/kubedb/postgres/commit/4da12584) Simplify port assignment (#400) +- [71420f2b](https://github.com/kubedb/postgres/commit/71420f2b) Create separate governing service for each database (#399) +- [49792ddb](https://github.com/kubedb/postgres/commit/49792ddb) Update KubeDB api (#398) +- [721f5e16](https://github.com/kubedb/postgres/commit/721f5e16) Update readme +- [c036ee15](https://github.com/kubedb/postgres/commit/c036ee15) Update Kubernetes v1.18.9 dependencies (#397) +- [ed9a22ac](https://github.com/kubedb/postgres/commit/ed9a22ac) Prepare for release v0.14.0-beta.4 (#396) +- [e6b37365](https://github.com/kubedb/postgres/commit/e6b37365) Update KubeDB api (#395) +- [825f55c3](https://github.com/kubedb/postgres/commit/825f55c3) Update Kubernetes v1.18.9 dependencies (#394) +- [c879e7e8](https://github.com/kubedb/postgres/commit/c879e7e8) Update KubeDB api (#393) +- [c90ad84e](https://github.com/kubedb/postgres/commit/c90ad84e) Update for release Stash@v2020.10.21 (#392) +- [9db225c0](https://github.com/kubedb/postgres/commit/9db225c0) Fix init validator (#390) +- [e56e5ae6](https://github.com/kubedb/postgres/commit/e56e5ae6) Update KubeDB api (#391) +- [5da16a5c](https://github.com/kubedb/postgres/commit/5da16a5c) Update KubeDB api (#389) +- [221eb7cf](https://github.com/kubedb/postgres/commit/221eb7cf) Update Kubernetes v1.18.9 dependencies (#388) +- [261aaaf3](https://github.com/kubedb/postgres/commit/261aaaf3) Update KubeDB api (#387) +- [6d8efe23](https://github.com/kubedb/postgres/commit/6d8efe23) Update KubeDB api (#386) +- [0df8a375](https://github.com/kubedb/postgres/commit/0df8a375) Update KubeDB api (#385) +- [b0b4f7e7](https://github.com/kubedb/postgres/commit/b0b4f7e7) Update KubeDB api (#384) +- [c10ff311](https://github.com/kubedb/postgres/commit/c10ff311) Update Kubernetes v1.18.9 dependencies (#383) +- 
[4f237fc0](https://github.com/kubedb/postgres/commit/4f237fc0) Update KubeDB api (#382) +- [b31defb8](https://github.com/kubedb/postgres/commit/b31defb8) Update KubeDB api (#381) +- [667a4ec8](https://github.com/kubedb/postgres/commit/667a4ec8) Update KubeDB api (#379) +- [da86f8d7](https://github.com/kubedb/postgres/commit/da86f8d7) Update repository config (#378) +- [1da3afb9](https://github.com/kubedb/postgres/commit/1da3afb9) Update repository config (#377) +- [29b8a231](https://github.com/kubedb/postgres/commit/29b8a231) Update repository config (#376) +- [22612534](https://github.com/kubedb/postgres/commit/22612534) Initialize statefulset watcher from cmd/server/options.go (#375) +- [bfd6eae7](https://github.com/kubedb/postgres/commit/bfd6eae7) Update KubeDB api (#374) +- [10566771](https://github.com/kubedb/postgres/commit/10566771) Update Kubernetes v1.18.9 dependencies (#373) +- [1eb7c29b](https://github.com/kubedb/postgres/commit/1eb7c29b) Publish docker images to ghcr.io (#372) +- [49dd7946](https://github.com/kubedb/postgres/commit/49dd7946) Only keep username/password keys in Postgres secret +- [f1131a2c](https://github.com/kubedb/postgres/commit/f1131a2c) Update KubeDB api (#371) +- [ccadf274](https://github.com/kubedb/postgres/commit/ccadf274) Update KubeDB api (#370) +- [bddd6692](https://github.com/kubedb/postgres/commit/bddd6692) Update KubeDB api (#369) +- [d76bbe3d](https://github.com/kubedb/postgres/commit/d76bbe3d) Don't add secretTransformation in AppBinding section by default (#316) +- [ae29ba5e](https://github.com/kubedb/postgres/commit/ae29ba5e) Update KubeDB api (#368) +- [4bb1c171](https://github.com/kubedb/postgres/commit/4bb1c171) Update repository config (#367) +- [a7b1138f](https://github.com/kubedb/postgres/commit/a7b1138f) Use conditions to handle initialization (#365) +- [126e20f1](https://github.com/kubedb/postgres/commit/126e20f1) Update Kubernetes v1.18.9 dependencies (#366) +- [29a99b8d](https://github.com/kubedb/postgres/commit/29a99b8d) Update for release Stash@v2020.09.29 (#364) +- [b097b330](https://github.com/kubedb/postgres/commit/b097b330) Update Kubernetes v1.18.9 dependencies (#363) +- [26e2f90c](https://github.com/kubedb/postgres/commit/26e2f90c) Update Kubernetes v1.18.9 dependencies (#361) +- [67c6d618](https://github.com/kubedb/postgres/commit/67c6d618) Update repository config (#360) +- [6fc5fbce](https://github.com/kubedb/postgres/commit/6fc5fbce) Update repository config (#359) +- [4e566391](https://github.com/kubedb/postgres/commit/4e566391) Update Kubernetes v1.18.9 dependencies (#358) +- [7236b6e1](https://github.com/kubedb/postgres/commit/7236b6e1) Use common event recorder (#357) +- [d1293558](https://github.com/kubedb/postgres/commit/d1293558) Update Kubernetes v1.18.3 dependencies (#356) +- [0dd8903e](https://github.com/kubedb/postgres/commit/0dd8903e) Prepare for release v0.14.0-beta.3 (#355) +- [8f59199a](https://github.com/kubedb/postgres/commit/8f59199a) Use new `spec.init` section (#354) +- [32305e6d](https://github.com/kubedb/postgres/commit/32305e6d) Update Kubernetes v1.18.3 dependencies (#353) +- [e65ecdf3](https://github.com/kubedb/postgres/commit/e65ecdf3) Add license verifier (#352) +- [55b2f61e](https://github.com/kubedb/postgres/commit/55b2f61e) Update for release Stash@v2020.09.16 (#351) +- [66f45a55](https://github.com/kubedb/postgres/commit/66f45a55) Update Kubernetes v1.18.3 dependencies (#350) +- [80f3cc3b](https://github.com/kubedb/postgres/commit/80f3cc3b) Use background deletion policy +- 
[63119dba](https://github.com/kubedb/postgres/commit/63119dba) Update Kubernetes v1.18.3 dependencies (#348) +- [ac48cf6a](https://github.com/kubedb/postgres/commit/ac48cf6a) Use AppsCode Community License (#347) +- [03449359](https://github.com/kubedb/postgres/commit/03449359) Update Kubernetes v1.18.3 dependencies (#346) +- [6e6fe6fe](https://github.com/kubedb/postgres/commit/6e6fe6fe) Prepare for release v0.14.0-beta.2 (#345) +- [5ee33bb8](https://github.com/kubedb/postgres/commit/5ee33bb8) Update release.yml +- [9208f754](https://github.com/kubedb/postgres/commit/9208f754) Always use OnDelete update strategy +- [74367d01](https://github.com/kubedb/postgres/commit/74367d01) Update Kubernetes v1.18.3 dependencies (#344) +- [01843533](https://github.com/kubedb/postgres/commit/01843533) Update Kubernetes v1.18.3 dependencies (#343) +- [34a3a460](https://github.com/kubedb/postgres/commit/34a3a460) Update Kubernetes v1.18.3 dependencies (#338) +- [455bf56a](https://github.com/kubedb/postgres/commit/455bf56a) Update Kubernetes v1.18.3 dependencies (#337) +- [960d1efa](https://github.com/kubedb/postgres/commit/960d1efa) Update Kubernetes v1.18.3 dependencies (#336) +- [9b428745](https://github.com/kubedb/postgres/commit/9b428745) Update Kubernetes v1.18.3 dependencies (#335) +- [cc95c5f5](https://github.com/kubedb/postgres/commit/cc95c5f5) Update Kubernetes v1.18.3 dependencies (#334) +- [c0694d83](https://github.com/kubedb/postgres/commit/c0694d83) Update Kubernetes v1.18.3 dependencies (#333) +- [8d0977d3](https://github.com/kubedb/postgres/commit/8d0977d3) Remove dependency on enterprise operator (#332) +- [daa5b77c](https://github.com/kubedb/postgres/commit/daa5b77c) Build images in e2e workflow (#331) +- [197f1b2b](https://github.com/kubedb/postgres/commit/197f1b2b) Update to Kubernetes v1.18.3 (#329) +- [e732d319](https://github.com/kubedb/postgres/commit/e732d319) Allow configuring k8s & db version in e2e tests (#330) +- [f37180ec](https://github.com/kubedb/postgres/commit/f37180ec) Trigger e2e tests on /ok-to-test command (#328) +- [becb3e2c](https://github.com/kubedb/postgres/commit/becb3e2c) Update to Kubernetes v1.18.3 (#327) +- [91bf7440](https://github.com/kubedb/postgres/commit/91bf7440) Update to Kubernetes v1.18.3 (#326) +- [3848a43e](https://github.com/kubedb/postgres/commit/3848a43e) Prepare for release v0.14.0-beta.1 (#325) +- [d4ea0ba7](https://github.com/kubedb/postgres/commit/d4ea0ba7) Update for release Stash@v2020.07.09-beta.0 (#323) +- [6974afda](https://github.com/kubedb/postgres/commit/6974afda) Allow customizing kube namespace for Stash +- [d7d79ea1](https://github.com/kubedb/postgres/commit/d7d79ea1) Allow customizing chart registry (#322) +- [ba0423ac](https://github.com/kubedb/postgres/commit/ba0423ac) Update for release Stash@v2020.07.08-beta.0 (#321) +- [7e855763](https://github.com/kubedb/postgres/commit/7e855763) Update License +- [7bea404a](https://github.com/kubedb/postgres/commit/7bea404a) Update to Kubernetes v1.18.3 (#320) +- [eab0e83f](https://github.com/kubedb/postgres/commit/eab0e83f) Update ci.yml +- [4949f76e](https://github.com/kubedb/postgres/commit/4949f76e) Load stash version from .env file for make (#319) +- [79e9d8d9](https://github.com/kubedb/postgres/commit/79e9d8d9) Update update-release-tracker.sh +- [ca966b7b](https://github.com/kubedb/postgres/commit/ca966b7b) Update update-release-tracker.sh +- [31bbecfe](https://github.com/kubedb/postgres/commit/31bbecfe) Add script to update release tracker on pr merge (#318) +- 
[540d977f](https://github.com/kubedb/postgres/commit/540d977f) Update .kodiak.toml +- [3e7514a7](https://github.com/kubedb/postgres/commit/3e7514a7) Various fixes (#317) +- [1a5df17c](https://github.com/kubedb/postgres/commit/1a5df17c) Update to Kubernetes v1.18.3 (#315) +- [717cfb3f](https://github.com/kubedb/postgres/commit/717cfb3f) Update to Kubernetes v1.18.3 +- [95537169](https://github.com/kubedb/postgres/commit/95537169) Create .kodiak.toml +- [02579005](https://github.com/kubedb/postgres/commit/02579005) Use CRD v1 for Kubernetes >= 1.16 (#314) +- [6ce6deb1](https://github.com/kubedb/postgres/commit/6ce6deb1) Update to Kubernetes v1.18.3 (#313) +- [97f25ba0](https://github.com/kubedb/postgres/commit/97f25ba0) Fix e2e tests (#312) +- [a989c377](https://github.com/kubedb/postgres/commit/a989c377) Update stash install commands +- [6af12596](https://github.com/kubedb/postgres/commit/6af12596) Revendor kubedb.dev/apimachinery@master (#311) +- [9969b064](https://github.com/kubedb/postgres/commit/9969b064) Update crazy-max/ghaction-docker-buildx flag +- [e3360119](https://github.com/kubedb/postgres/commit/e3360119) Use updated operator labels in e2e tests (#309) +- [c183007c](https://github.com/kubedb/postgres/commit/c183007c) Pass annotations from CRD to AppBinding (#310) +- [55581f79](https://github.com/kubedb/postgres/commit/55581f79) Trigger the workflow on push or pull request +- [931b88cf](https://github.com/kubedb/postgres/commit/931b88cf) Update CHANGELOG.md +- [6f481749](https://github.com/kubedb/postgres/commit/6f481749) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#308) +- [15f0611d](https://github.com/kubedb/postgres/commit/15f0611d) Fix error msg to reject halt when termination policy is 'DoNotTerminate' +- [18aba058](https://github.com/kubedb/postgres/commit/18aba058) Change Pause to Halt (#307) +- [7e9b1c69](https://github.com/kubedb/postgres/commit/7e9b1c69) feat: allow changes to nodeSelector (#298) +- [a602faa1](https://github.com/kubedb/postgres/commit/a602faa1) Introduce spec.halted and removed dormant and snapshot crd (#305) +- [cdd384d7](https://github.com/kubedb/postgres/commit/cdd384d7) Moved leader election to kubedb/pg-leader-election (#304) +- [32c41db6](https://github.com/kubedb/postgres/commit/32c41db6) Use stash@v0.9.0-rc.4 release (#306) +- [fa55b472](https://github.com/kubedb/postgres/commit/fa55b472) Make e2e tests stable in github actions (#303) +- [afdc5fda](https://github.com/kubedb/postgres/commit/afdc5fda) Update client-go to kubernetes-1.16.3 (#301) +- [d28eb55a](https://github.com/kubedb/postgres/commit/d28eb55a) Take out postgres docker images and Matrix test (#297) +- [13fee32d](https://github.com/kubedb/postgres/commit/13fee32d) Fix default make command +- [55dfb368](https://github.com/kubedb/postgres/commit/55dfb368) Update catalog values for make install command +- [25f5b79c](https://github.com/kubedb/postgres/commit/25f5b79c) Use charts to install operator (#302) +- [c5a4ed77](https://github.com/kubedb/postgres/commit/c5a4ed77) Add add-license make target +- [aa1d98d0](https://github.com/kubedb/postgres/commit/aa1d98d0) Add license header to files (#296) +- [fd356006](https://github.com/kubedb/postgres/commit/fd356006) Fix E2E testing for github actions (#295) +- [6a3443a7](https://github.com/kubedb/postgres/commit/6a3443a7) Minio and S3 compatible storage fixes (#292) +- [5150cf34](https://github.com/kubedb/postgres/commit/5150cf34) Run e2e tests using GitHub actions (#293) +- 
[a4a3785b](https://github.com/kubedb/postgres/commit/a4a3785b) Validate DBVersionSpecs and fixed broken build (#294) +- [b171a244](https://github.com/kubedb/postgres/commit/b171a244) Update go.yml +- [1a61bf29](https://github.com/kubedb/postgres/commit/1a61bf29) Enable GitHub actions +- [6b869b15](https://github.com/kubedb/postgres/commit/6b869b15) Update changelog + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-rc.1](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-rc.1) + +- [e3f4999c](https://github.com/kubedb/proxysql/commit/e3f4999c) Prepare for release v0.1.0-rc.1 (#101) +- [d01512de](https://github.com/kubedb/proxysql/commit/d01512de) Prepare for release v0.1.0-beta.6 (#100) +- [6a0d52ff](https://github.com/kubedb/proxysql/commit/6a0d52ff) Create SRV records for governing service (#99) +- [4269db9c](https://github.com/kubedb/proxysql/commit/4269db9c) Prepare for release v0.1.0-beta.5 (#98) +- [e48bd006](https://github.com/kubedb/proxysql/commit/e48bd006) Create separate governing service for each database (#97) +- [23f1c6de](https://github.com/kubedb/proxysql/commit/23f1c6de) Update KubeDB api (#96) +- [13abe9ff](https://github.com/kubedb/proxysql/commit/13abe9ff) Update readme +- [78ef0d29](https://github.com/kubedb/proxysql/commit/78ef0d29) Update repository config (#95) +- [d344e43f](https://github.com/kubedb/proxysql/commit/d344e43f) Prepare for release v0.1.0-beta.4 (#94) +- [15deb4df](https://github.com/kubedb/proxysql/commit/15deb4df) Update KubeDB api (#93) +- [dc59184c](https://github.com/kubedb/proxysql/commit/dc59184c) Update Kubernetes v1.18.9 dependencies (#92) +- [b2b11084](https://github.com/kubedb/proxysql/commit/b2b11084) Update KubeDB api (#91) +- [535820ff](https://github.com/kubedb/proxysql/commit/535820ff) Update for release Stash@v2020.10.21 (#90) +- [c00f0b6a](https://github.com/kubedb/proxysql/commit/c00f0b6a) Update KubeDB api (#89) +- [af8ab91c](https://github.com/kubedb/proxysql/commit/af8ab91c) Update KubeDB api (#88) +- [154fff60](https://github.com/kubedb/proxysql/commit/154fff60) Update Kubernetes v1.18.9 dependencies (#87) +- [608ca467](https://github.com/kubedb/proxysql/commit/608ca467) Update KubeDB api (#86) +- [c0b1286b](https://github.com/kubedb/proxysql/commit/c0b1286b) Update KubeDB api (#85) +- [d2f326c7](https://github.com/kubedb/proxysql/commit/d2f326c7) Update KubeDB api (#84) +- [01ea3c3c](https://github.com/kubedb/proxysql/commit/01ea3c3c) Update KubeDB api (#83) +- [4ae700ed](https://github.com/kubedb/proxysql/commit/4ae700ed) Update Kubernetes v1.18.9 dependencies (#82) +- [d0ad0b70](https://github.com/kubedb/proxysql/commit/d0ad0b70) Update KubeDB api (#81) +- [8f1e0d51](https://github.com/kubedb/proxysql/commit/8f1e0d51) Update KubeDB api (#80) +- [7b02bebb](https://github.com/kubedb/proxysql/commit/7b02bebb) Update KubeDB api (#79) +- [4f95e854](https://github.com/kubedb/proxysql/commit/4f95e854) Update repository config (#78) +- [c229a939](https://github.com/kubedb/proxysql/commit/c229a939) Update repository config (#77) +- [89dbb47f](https://github.com/kubedb/proxysql/commit/89dbb47f) Update repository config (#76) +- [d28494ab](https://github.com/kubedb/proxysql/commit/d28494ab) Update KubeDB api (#75) +- [b25cb7db](https://github.com/kubedb/proxysql/commit/b25cb7db) Update Kubernetes v1.18.9 dependencies (#74) +- [d4b026a4](https://github.com/kubedb/proxysql/commit/d4b026a4) Publish docker images to ghcr.io (#73) +- [e263f9c3](https://github.com/kubedb/proxysql/commit/e263f9c3) Update KubeDB 
api (#72) +- [07ea3acb](https://github.com/kubedb/proxysql/commit/07ea3acb) Update KubeDB api (#71) +- [946e292b](https://github.com/kubedb/proxysql/commit/946e292b) Update KubeDB api (#70) +- [66eb2156](https://github.com/kubedb/proxysql/commit/66eb2156) Update KubeDB api (#69) +- [d3fe09ae](https://github.com/kubedb/proxysql/commit/d3fe09ae) Update repository config (#68) +- [10c7cde0](https://github.com/kubedb/proxysql/commit/10c7cde0) Update Kubernetes v1.18.9 dependencies (#67) +- [ed5d24a9](https://github.com/kubedb/proxysql/commit/ed5d24a9) Update KubeDB api (#65) +- [a4f6dd4c](https://github.com/kubedb/proxysql/commit/a4f6dd4c) Update KubeDB api (#62) +- [2956b1bd](https://github.com/kubedb/proxysql/commit/2956b1bd) Update for release Stash@v2020.09.29 (#64) +- [9cbd0244](https://github.com/kubedb/proxysql/commit/9cbd0244) Update Kubernetes v1.18.9 dependencies (#63) +- [4cd9bb02](https://github.com/kubedb/proxysql/commit/4cd9bb02) Update Kubernetes v1.18.9 dependencies (#61) +- [a9a9caf0](https://github.com/kubedb/proxysql/commit/a9a9caf0) Update repository config (#60) +- [af3a2a68](https://github.com/kubedb/proxysql/commit/af3a2a68) Update repository config (#59) +- [25f47ff4](https://github.com/kubedb/proxysql/commit/25f47ff4) Update Kubernetes v1.18.9 dependencies (#58) +- [05e57476](https://github.com/kubedb/proxysql/commit/05e57476) Update Kubernetes v1.18.3 dependencies (#57) +- [8b0af94b](https://github.com/kubedb/proxysql/commit/8b0af94b) Prepare for release v0.1.0-beta.3 (#56) +- [f2a98806](https://github.com/kubedb/proxysql/commit/f2a98806) Update Makefile +- [f59b73a1](https://github.com/kubedb/proxysql/commit/f59b73a1) Use AppsCode Trial license (#55) +- [2ae32d3c](https://github.com/kubedb/proxysql/commit/2ae32d3c) Update Kubernetes v1.18.3 dependencies (#54) +- [724b9829](https://github.com/kubedb/proxysql/commit/724b9829) Add license verifier (#53) +- [8a2aafb5](https://github.com/kubedb/proxysql/commit/8a2aafb5) Update for release Stash@v2020.09.16 (#52) +- [4759525b](https://github.com/kubedb/proxysql/commit/4759525b) Update Kubernetes v1.18.3 dependencies (#51) +- [f55b1402](https://github.com/kubedb/proxysql/commit/f55b1402) Update Kubernetes v1.18.3 dependencies (#49) +- [f7036236](https://github.com/kubedb/proxysql/commit/f7036236) Use AppsCode Community License (#48) +- [d922196f](https://github.com/kubedb/proxysql/commit/d922196f) Update Kubernetes v1.18.3 dependencies (#47) +- [f86bb6cd](https://github.com/kubedb/proxysql/commit/f86bb6cd) Prepare for release v0.1.0-beta.2 (#46) +- [e74f3803](https://github.com/kubedb/proxysql/commit/e74f3803) Update release.yml +- [7f5349cc](https://github.com/kubedb/proxysql/commit/7f5349cc) Use updated apis (#45) +- [27faefef](https://github.com/kubedb/proxysql/commit/27faefef) Update for release Stash@v2020.08.27 (#43) +- [65bc5bca](https://github.com/kubedb/proxysql/commit/65bc5bca) Update for release Stash@v2020.08.27-rc.0 (#42) +- [833ac78b](https://github.com/kubedb/proxysql/commit/833ac78b) Update for release Stash@v2020.08.26-rc.1 (#41) +- [fe13ce42](https://github.com/kubedb/proxysql/commit/fe13ce42) Update for release Stash@v2020.08.26-rc.0 (#40) +- [b1a72843](https://github.com/kubedb/proxysql/commit/b1a72843) Update Kubernetes v1.18.3 dependencies (#39) +- [a9c40618](https://github.com/kubedb/proxysql/commit/a9c40618) Update Kubernetes v1.18.3 dependencies (#38) +- [664c974a](https://github.com/kubedb/proxysql/commit/664c974a) Update Kubernetes v1.18.3 dependencies (#37) +- 
[69ed46d5](https://github.com/kubedb/proxysql/commit/69ed46d5) Update Kubernetes v1.18.3 dependencies (#36) +- [a93d80d4](https://github.com/kubedb/proxysql/commit/a93d80d4) Update Kubernetes v1.18.3 dependencies (#35) +- [84fc9e37](https://github.com/kubedb/proxysql/commit/84fc9e37) Update Kubernetes v1.18.3 dependencies (#34) +- [b09f89d0](https://github.com/kubedb/proxysql/commit/b09f89d0) Remove dependency on enterprise operator (#33) +- [78ad5a88](https://github.com/kubedb/proxysql/commit/78ad5a88) Build images in e2e workflow (#32) +- [6644058e](https://github.com/kubedb/proxysql/commit/6644058e) Update to Kubernetes v1.18.3 (#30) +- [2c03dadd](https://github.com/kubedb/proxysql/commit/2c03dadd) Allow configuring k8s & db version in e2e tests (#31) +- [2c6e04bc](https://github.com/kubedb/proxysql/commit/2c6e04bc) Trigger e2e tests on /ok-to-test command (#29) +- [c7830af8](https://github.com/kubedb/proxysql/commit/c7830af8) Update to Kubernetes v1.18.3 (#28) +- [f2da8746](https://github.com/kubedb/proxysql/commit/f2da8746) Update to Kubernetes v1.18.3 (#27) +- [2ed7d0e8](https://github.com/kubedb/proxysql/commit/2ed7d0e8) Prepare for release v0.1.0-beta.1 (#26) +- [3b5ee481](https://github.com/kubedb/proxysql/commit/3b5ee481) Update for release Stash@v2020.07.09-beta.0 (#25) +- [92b04b33](https://github.com/kubedb/proxysql/commit/92b04b33) include Makefile.env (#24) +- [eace7e26](https://github.com/kubedb/proxysql/commit/eace7e26) Update for release Stash@v2020.07.08-beta.0 (#23) +- [0c647c01](https://github.com/kubedb/proxysql/commit/0c647c01) Update License (#22) +- [3c1b41be](https://github.com/kubedb/proxysql/commit/3c1b41be) Update to Kubernetes v1.18.3 (#21) +- [dfa95bb8](https://github.com/kubedb/proxysql/commit/dfa95bb8) Update ci.yml +- [87390932](https://github.com/kubedb/proxysql/commit/87390932) Update update-release-tracker.sh +- [772a0c6a](https://github.com/kubedb/proxysql/commit/772a0c6a) Update update-release-tracker.sh +- [a3b2ae92](https://github.com/kubedb/proxysql/commit/a3b2ae92) Add script to update release tracker on pr merge (#20) +- [7578cae3](https://github.com/kubedb/proxysql/commit/7578cae3) Update .kodiak.toml +- [4ba876bc](https://github.com/kubedb/proxysql/commit/4ba876bc) Update operator tags +- [399aa60b](https://github.com/kubedb/proxysql/commit/399aa60b) Various fixes (#19) +- [7235b0c5](https://github.com/kubedb/proxysql/commit/7235b0c5) Update to Kubernetes v1.18.3 (#18) +- [427c1f21](https://github.com/kubedb/proxysql/commit/427c1f21) Update to Kubernetes v1.18.3 +- [1ac8da55](https://github.com/kubedb/proxysql/commit/1ac8da55) Create .kodiak.toml +- [3243d446](https://github.com/kubedb/proxysql/commit/3243d446) Use CRD v1 for Kubernetes >= 1.16 (#17) +- [4f5bea8d](https://github.com/kubedb/proxysql/commit/4f5bea8d) Update to Kubernetes v1.18.3 (#16) +- [a0d2611a](https://github.com/kubedb/proxysql/commit/a0d2611a) Fix e2e tests (#15) +- [987fbf60](https://github.com/kubedb/proxysql/commit/987fbf60) Update crazy-max/ghaction-docker-buildx flag +- [c2fad78e](https://github.com/kubedb/proxysql/commit/c2fad78e) Use updated operator labels in e2e tests (#14) +- [c5a01db8](https://github.com/kubedb/proxysql/commit/c5a01db8) Revendor kubedb.dev/apimachinery@master (#13) +- [756c8f8f](https://github.com/kubedb/proxysql/commit/756c8f8f) Trigger the workflow on push or pull request +- [fdf84e27](https://github.com/kubedb/proxysql/commit/fdf84e27) Update CHANGELOG.md +- [9075b453](https://github.com/kubedb/proxysql/commit/9075b453) Use 
stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12) +- [f4d1c024](https://github.com/kubedb/proxysql/commit/f4d1c024) Matrix Tests on Github Actions (#11) +- [4e021072](https://github.com/kubedb/proxysql/commit/4e021072) Update mount path for custom config (#8) +- [b0922173](https://github.com/kubedb/proxysql/commit/b0922173) Enable ProxySQL monitoring (#6) +- [70be4e67](https://github.com/kubedb/proxysql/commit/70be4e67) ProxySQL test for MySQL (#4) +- [0a444b9e](https://github.com/kubedb/proxysql/commit/0a444b9e) Use charts to install operator (#7) +- [a51fbb51](https://github.com/kubedb/proxysql/commit/a51fbb51) ProxySQL operator for MySQL databases (#2) +- [883fa437](https://github.com/kubedb/proxysql/commit/883fa437) Update go.yml +- [2c0cf51c](https://github.com/kubedb/proxysql/commit/2c0cf51c) Enable GitHub actions +- [52e15cd2](https://github.com/kubedb/proxysql/commit/52e15cd2) percona-xtradb -> proxysql (#1) +- [dc71bffe](https://github.com/kubedb/proxysql/commit/dc71bffe) Revendor +- [71957d40](https://github.com/kubedb/proxysql/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10) +- [b526ccd8](https://github.com/kubedb/proxysql/commit/b526ccd8) Set database version in AppBinding (#7) +- [336e7203](https://github.com/kubedb/proxysql/commit/336e7203) Percona XtraDB Cluster support (#9) +- [71a42f7a](https://github.com/kubedb/proxysql/commit/71a42f7a) Don't set annotation to AppBinding (#8) +- [282298cb](https://github.com/kubedb/proxysql/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4) +- [2ab9dddf](https://github.com/kubedb/proxysql/commit/2ab9dddf) Add license header to Makefiles (#6) +- [df135c08](https://github.com/kubedb/proxysql/commit/df135c08) Add install, uninstall and purge command in Makefile (#3) +- [73d3a845](https://github.com/kubedb/proxysql/commit/73d3a845) Update .gitignore +- [59a4e754](https://github.com/kubedb/proxysql/commit/59a4e754) Add Makefile (#2) +- [f3551ddc](https://github.com/kubedb/proxysql/commit/f3551ddc) Rename package path (#1) +- [56a241d6](https://github.com/kubedb/proxysql/commit/56a241d6) Use explicit IP whitelist instead of automatic IP whitelist (#151) +- [9f0b5ca3](https://github.com/kubedb/proxysql/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147) +- [73ad7c30](https://github.com/kubedb/proxysql/commit/73ad7c30) Update changelog +- [ccc36b5c](https://github.com/kubedb/proxysql/commit/ccc36b5c) Update README.md +- [9769e8e1](https://github.com/kubedb/proxysql/commit/9769e8e1) Start next dev cycle +- [a3fa468a](https://github.com/kubedb/proxysql/commit/a3fa468a) Prepare release 0.5.0 +- [6d8862de](https://github.com/kubedb/proxysql/commit/6d8862de) Mysql Group Replication tests (#146) +- [49544e55](https://github.com/kubedb/proxysql/commit/49544e55) Mysql Group Replication (#144) +- [a85d4b44](https://github.com/kubedb/proxysql/commit/a85d4b44) Revendor dependencies +- [9c538460](https://github.com/kubedb/proxysql/commit/9c538460) Changed Role to exclude psp without name (#143) +- [6cace93b](https://github.com/kubedb/proxysql/commit/6cace93b) Modify mutator validator names (#142) +- [da0c19b9](https://github.com/kubedb/proxysql/commit/da0c19b9) Update changelog +- [b79c80d6](https://github.com/kubedb/proxysql/commit/b79c80d6) Start next dev cycle +- [838d9459](https://github.com/kubedb/proxysql/commit/838d9459) Prepare release 0.4.0 +- [bf0f2c14](https://github.com/kubedb/proxysql/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141) +- 
[3d227570](https://github.com/kubedb/proxysql/commit/3d227570) Added PSP support for mySQL (#137) +- [7b766657](https://github.com/kubedb/proxysql/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) +- [29e23470](https://github.com/kubedb/proxysql/commit/29e23470) Support for init container (#139) +- [3e1556f6](https://github.com/kubedb/proxysql/commit/3e1556f6) Add role label to stats service (#138) +- [ee078af9](https://github.com/kubedb/proxysql/commit/ee078af9) Update changelog +- [978f1139](https://github.com/kubedb/proxysql/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136) +- [821f23d1](https://github.com/kubedb/proxysql/commit/821f23d1) Start next dev cycle +- [678b26aa](https://github.com/kubedb/proxysql/commit/678b26aa) Prepare release 0.3.0 +- [40ad7a23](https://github.com/kubedb/proxysql/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134) +- [98f03387](https://github.com/kubedb/proxysql/commit/98f03387) Revendor dependencies (#135) +- [dfe92615](https://github.com/kubedb/proxysql/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) +- [71f8a350](https://github.com/kubedb/proxysql/commit/71f8a350) Added ephemeral StorageType support (#132) +- [0a6b6e46](https://github.com/kubedb/proxysql/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131) +- [99e57a9e](https://github.com/kubedb/proxysql/commit/99e57a9e) Use PVC spec from snapshot if provided (#130) +- [61497be6](https://github.com/kubedb/proxysql/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) +- [7eafe088](https://github.com/kubedb/proxysql/commit/7eafe088) Add certificate health checker (#128) +- [973ec416](https://github.com/kubedb/proxysql/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127) +- [339975ff](https://github.com/kubedb/proxysql/commit/339975ff) Fix AppBinding (#126) +- [62050a72](https://github.com/kubedb/proxysql/commit/62050a72) Update changelog +- [2d454043](https://github.com/kubedb/proxysql/commit/2d454043) Prepare release 0.2.0 +- [6941ea59](https://github.com/kubedb/proxysql/commit/6941ea59) Reuse event recorder (#125) +- [b77e66c4](https://github.com/kubedb/proxysql/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123) +- [c9228086](https://github.com/kubedb/proxysql/commit/c9228086) Revendor dependencies (#124) +- [97837120](https://github.com/kubedb/proxysql/commit/97837120) Test for faulty snapshot (#122) +- [c3e995b6](https://github.com/kubedb/proxysql/commit/c3e995b6) Start next dev cycle +- [8a4f3b13](https://github.com/kubedb/proxysql/commit/8a4f3b13) Prepare release 0.2.0-rc.2 +- [79942191](https://github.com/kubedb/proxysql/commit/79942191) Upgrade database secret keys (#121) +- [1747fdf5](https://github.com/kubedb/proxysql/commit/1747fdf5) Ignore mutation of fields to default values during update (#120) +- [d902d588](https://github.com/kubedb/proxysql/commit/d902d588) Support configuration options for exporter sidecar (#119) +- [dd7c3f44](https://github.com/kubedb/proxysql/commit/dd7c3f44) Use flags.DumpAll (#118) +- [bc1ef05b](https://github.com/kubedb/proxysql/commit/bc1ef05b) Start next dev cycle +- [9d33c1a0](https://github.com/kubedb/proxysql/commit/9d33c1a0) Prepare release 0.2.0-rc.1 +- [b076e141](https://github.com/kubedb/proxysql/commit/b076e141) Apply cleanup (#117) +- [7dc5641f](https://github.com/kubedb/proxysql/commit/7dc5641f) Set periodic analytics (#116) +- 
[90ea6acc](https://github.com/kubedb/proxysql/commit/90ea6acc) Introduce AppBinding support (#115) +- [a882d76a](https://github.com/kubedb/proxysql/commit/a882d76a) Fix Analytics (#114) +- [0961009c](https://github.com/kubedb/proxysql/commit/0961009c) Error out from cron job for deprecated dbversion (#113) +- [da1f4e27](https://github.com/kubedb/proxysql/commit/da1f4e27) Add CRDs without observation when operator starts (#112) +- [0a754d2f](https://github.com/kubedb/proxysql/commit/0a754d2f) Update changelog +- [b09bc6e1](https://github.com/kubedb/proxysql/commit/b09bc6e1) Start next dev cycle +- [0d467ccb](https://github.com/kubedb/proxysql/commit/0d467ccb) Prepare release 0.2.0-rc.0 +- [c757007a](https://github.com/kubedb/proxysql/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' +- [ddfe4be1](https://github.com/kubedb/proxysql/commit/ddfe4be1) Support custom user password for backup (#111) +- [8c84ba20](https://github.com/kubedb/proxysql/commit/8c84ba20) Support providing resources for monitoring container (#110) +- [7bcfbc48](https://github.com/kubedb/proxysql/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109) +- [145bba2b](https://github.com/kubedb/proxysql/commit/145bba2b) Add validation webhook xray (#108) +- [6da1887f](https://github.com/kubedb/proxysql/commit/6da1887f) Various Fixes (#107) +- [111519e9](https://github.com/kubedb/proxysql/commit/111519e9) Merge ports from service template (#105) +- [38147ef1](https://github.com/kubedb/proxysql/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) +- [e28ebc47](https://github.com/kubedb/proxysql/commit/e28ebc47) Pass resources to NamespaceValidator (#103) +- [aed12bf5](https://github.com/kubedb/proxysql/commit/aed12bf5) Various fixes (#102) +- [3d372ef6](https://github.com/kubedb/proxysql/commit/3d372ef6) Support Lifecycle hook and container probes (#101) +- [b6ef6887](https://github.com/kubedb/proxysql/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100) +- [d89e7783](https://github.com/kubedb/proxysql/commit/d89e7783) Update package alias (#99) +- [f0b44b3a](https://github.com/kubedb/proxysql/commit/f0b44b3a) Start next dev cycle +- [a79ff03b](https://github.com/kubedb/proxysql/commit/a79ff03b) Prepare release 0.2.0-beta.1 +- [0d8d3cca](https://github.com/kubedb/proxysql/commit/0d8d3cca) Revendor api (#98) +- [2f850243](https://github.com/kubedb/proxysql/commit/2f850243) Fix tests (#97) +- [4ced0bfe](https://github.com/kubedb/proxysql/commit/4ced0bfe) Revendor api for catalog apigroup (#96) +- [e7695400](https://github.com/kubedb/proxysql/commit/e7695400) Update changelog +- [8e358aea](https://github.com/kubedb/proxysql/commit/8e358aea) Use --pull flag with docker build (#20) (#95) +- [d2a97d90](https://github.com/kubedb/proxysql/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' +- [d1fe8a8a](https://github.com/kubedb/proxysql/commit/d1fe8a8a) Start next dev cycle +- [04eb9bb5](https://github.com/kubedb/proxysql/commit/04eb9bb5) Prepare release 0.2.0-beta.0 +- [9dfea960](https://github.com/kubedb/proxysql/commit/9dfea960) Pass extra args to tools.sh (#93) +- [47dd3cad](https://github.com/kubedb/proxysql/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92) +- [9c4d485b](https://github.com/kubedb/proxysql/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91) +- [be72f784](https://github.com/kubedb/proxysql/commit/be72f784) Use suffix for updated DBImage & Stop 
working for deprecated *Versions (#90) +- [05c8f14d](https://github.com/kubedb/proxysql/commit/05c8f14d) Search used secrets within same namespace of DB object (#89) +- [0d94c946](https://github.com/kubedb/proxysql/commit/0d94c946) Support Termination Policy (#88) +- [8775ddf7](https://github.com/kubedb/proxysql/commit/8775ddf7) Update builddeps.sh +- [796c93da](https://github.com/kubedb/proxysql/commit/796c93da) Revendor k8s.io/apiserver (#87) +- [5a1e3f57](https://github.com/kubedb/proxysql/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86) +- [809a3c49](https://github.com/kubedb/proxysql/commit/809a3c49) Support UpdateStrategy (#84) +- [372c52ef](https://github.com/kubedb/proxysql/commit/372c52ef) Add TerminationPolicy for databases (#83) +- [c01b55e8](https://github.com/kubedb/proxysql/commit/c01b55e8) Revendor api (#82) +- [5e196b95](https://github.com/kubedb/proxysql/commit/5e196b95) Use IntHash as status.observedGeneration (#81) +- [2da3bb1b](https://github.com/kubedb/proxysql/commit/2da3bb1b) fix github status (#80) +- [121d0a98](https://github.com/kubedb/proxysql/commit/121d0a98) Update pipeline (#79) +- [532e3137](https://github.com/kubedb/proxysql/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/proxysql/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/proxysql/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/proxysql/commit/0b997855) Use official exporter image (#74) +- [702d5736](https://github.com/kubedb/proxysql/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/proxysql/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/proxysql/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/proxysql/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/proxysql/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/proxysql/commit/6a41d9bc) Fix uninstall for concourse (#69) +- [f1af09db](https://github.com/kubedb/proxysql/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/proxysql/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/proxysql/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/proxysql/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/proxysql/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/proxysql/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/proxysql/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/proxysql/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/proxysql/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/proxysql/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/proxysql/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/proxysql/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/proxysql/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- 
[2fd02169](https://github.com/kubedb/proxysql/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/proxysql/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/proxysql/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/proxysql/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/proxysql/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/proxysql/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/proxysql/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/proxysql/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- [696135d5](https://github.com/kubedb/proxysql/commit/696135d5) Fix travis build (#48) +- [c519ef89](https://github.com/kubedb/proxysql/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/proxysql/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/proxysql/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/proxysql/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/proxysql/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/proxysql/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/proxysql/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/proxysql/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/proxysql/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/proxysql/commit/ca811a2e) Fix release script (#43) +- [b79541f6](https://github.com/kubedb/proxysql/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/proxysql/commit/a2d13c82) Concourse (#41) +- [95b2186e](https://github.com/kubedb/proxysql/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/proxysql/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/proxysql/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/proxysql/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/proxysql/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/proxysql/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/proxysql/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/proxysql/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/proxysql/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/proxysql/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/proxysql/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- 
[bbcd48d6](https://github.com/kubedb/proxysql/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/proxysql/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/proxysql/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/proxysql/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/proxysql/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/proxysql/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/proxysql/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/proxysql/commit/b9ec877e) Add travis yaml (#25) +- [ade3571c](https://github.com/kubedb/proxysql/commit/ade3571c) Start next dev cycle +- [b4b749df](https://github.com/kubedb/proxysql/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/proxysql/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/proxysql/commit/5ee1ac8c) Update validation (#22) +- [dd023c50](https://github.com/kubedb/proxysql/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/proxysql/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/proxysql/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/proxysql/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/proxysql/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) +- [a484e0e5](https://github.com/kubedb/proxysql/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/proxysql/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/proxysql/commit/078098c8) Review Mysql docker images and Fixed monitoring (#14) +- [6877108a](https://github.com/kubedb/proxysql/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/proxysql/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/proxysql/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/proxysql/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/proxysql/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/proxysql/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/proxysql/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/proxysql/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/proxysql/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/proxysql/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/proxysql/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/proxysql/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/proxysql/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/proxysql/commit/a7595613) Add Snapshot Backup, 
Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/proxysql/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/proxysql/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/proxysql/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/proxysql/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/proxysql/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/proxysql/commit/0aa81cdf) Use client-go 5.x +- [3de10d7f](https://github.com/kubedb/proxysql/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/proxysql/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/proxysql/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-rc.1](https://github.com/kubedb/redis/releases/tag/v0.7.0-rc.1) + +- [b9e54a66](https://github.com/kubedb/redis/commit/b9e54a66) Prepare for release v0.7.0-rc.1 (#246) +- [50f709bf](https://github.com/kubedb/redis/commit/50f709bf) Prepare for release v0.7.0-beta.6 (#245) +- [d4aaaf38](https://github.com/kubedb/redis/commit/d4aaaf38) Create SRV records for governing service (#244) +- [57743070](https://github.com/kubedb/redis/commit/57743070) Prepare for release v0.7.0-beta.5 (#243) +- [5e8f1a25](https://github.com/kubedb/redis/commit/5e8f1a25) Create separate governing service for each database (#242) +- [ebeda2c7](https://github.com/kubedb/redis/commit/ebeda2c7) Update KubeDB api (#241) +- [b0a39a3c](https://github.com/kubedb/redis/commit/b0a39a3c) Update readme +- [d31b919a](https://github.com/kubedb/redis/commit/d31b919a) Prepare for release v0.7.0-beta.4 (#240) +- [bfecc0c5](https://github.com/kubedb/redis/commit/bfecc0c5) Update KubeDB api (#239) +- [307efbef](https://github.com/kubedb/redis/commit/307efbef) Update Kubernetes v1.18.9 dependencies (#238) +- [34b09d4c](https://github.com/kubedb/redis/commit/34b09d4c) Update KubeDB api (#237) +- [4aefb939](https://github.com/kubedb/redis/commit/4aefb939) Fix init validator (#236) +- [4ea47108](https://github.com/kubedb/redis/commit/4ea47108) Update KubeDB api (#235) +- [8c4c8a54](https://github.com/kubedb/redis/commit/8c4c8a54) Update KubeDB api (#234) +- [cbee9597](https://github.com/kubedb/redis/commit/cbee9597) Update Kubernetes v1.18.9 dependencies (#233) +- [9fb1b23c](https://github.com/kubedb/redis/commit/9fb1b23c) Update KubeDB api (#232) +- [c5fb9a6d](https://github.com/kubedb/redis/commit/c5fb9a6d) Update KubeDB api (#230) +- [2e2f2d7b](https://github.com/kubedb/redis/commit/2e2f2d7b) Update KubeDB api (#229) +- [3c8e6c6d](https://github.com/kubedb/redis/commit/3c8e6c6d) Update KubeDB api (#228) +- [8467464d](https://github.com/kubedb/redis/commit/8467464d) Update Kubernetes v1.18.9 dependencies (#227) +- [5febd393](https://github.com/kubedb/redis/commit/5febd393) Update KubeDB api (#226) +- [d8024e4d](https://github.com/kubedb/redis/commit/d8024e4d) Update KubeDB api (#225) +- [12d112de](https://github.com/kubedb/redis/commit/12d112de) Update KubeDB api (#223) +- [8a9f5398](https://github.com/kubedb/redis/commit/8a9f5398) Update repository config (#222) +- [b3b48a91](https://github.com/kubedb/redis/commit/b3b48a91) Update repository config (#221) +- [2fa45230](https://github.com/kubedb/redis/commit/2fa45230) Update repository config (#220) +- 
[552f1f80](https://github.com/kubedb/redis/commit/552f1f80) Initialize statefulset watcher from cmd/server/options.go (#219) +- [446b4b55](https://github.com/kubedb/redis/commit/446b4b55) Update KubeDB api (#218) +- [f6203009](https://github.com/kubedb/redis/commit/f6203009) Update Kubernetes v1.18.9 dependencies (#217) +- [b7172fb8](https://github.com/kubedb/redis/commit/b7172fb8) Publish docker images to ghcr.io (#216) +- [9897bab9](https://github.com/kubedb/redis/commit/9897bab9) Update KubeDB api (#215) +- [00f07b4f](https://github.com/kubedb/redis/commit/00f07b4f) Update KubeDB api (#214) +- [f2133f26](https://github.com/kubedb/redis/commit/f2133f26) Update KubeDB api (#213) +- [b1f3b76a](https://github.com/kubedb/redis/commit/b1f3b76a) Update KubeDB api (#212) +- [a3144e30](https://github.com/kubedb/redis/commit/a3144e30) Update repository config (#211) +- [8472ff88](https://github.com/kubedb/redis/commit/8472ff88) Add support to initialize Redis using Stash (#188) +- [20ba04a7](https://github.com/kubedb/redis/commit/20ba04a7) Update Kubernetes v1.18.9 dependencies (#210) +- [457611a1](https://github.com/kubedb/redis/commit/457611a1) Update Kubernetes v1.18.9 dependencies (#209) +- [2bd8b281](https://github.com/kubedb/redis/commit/2bd8b281) Update Kubernetes v1.18.9 dependencies (#207) +- [8779c7ea](https://github.com/kubedb/redis/commit/8779c7ea) Update repository config (#206) +- [db9280b7](https://github.com/kubedb/redis/commit/db9280b7) Update repository config (#205) +- [ada18bca](https://github.com/kubedb/redis/commit/ada18bca) Update Kubernetes v1.18.9 dependencies (#204) +- [17a55147](https://github.com/kubedb/redis/commit/17a55147) Use common event recorder (#203) +- [71a34b6a](https://github.com/kubedb/redis/commit/71a34b6a) Update Kubernetes v1.18.3 dependencies (#202) +- [32dadab6](https://github.com/kubedb/redis/commit/32dadab6) Prepare for release v0.7.0-beta.3 (#201) +- [e41222a1](https://github.com/kubedb/redis/commit/e41222a1) Update Kubernetes v1.18.3 dependencies (#200) +- [41172908](https://github.com/kubedb/redis/commit/41172908) Add license verifier (#199) +- [d46d0dbd](https://github.com/kubedb/redis/commit/d46d0dbd) Update Kubernetes v1.18.3 dependencies (#198) +- [283c2777](https://github.com/kubedb/redis/commit/283c2777) Use background deletion policy +- [5ee6470d](https://github.com/kubedb/redis/commit/5ee6470d) Update Kubernetes v1.18.3 dependencies (#195) +- [e391f0d6](https://github.com/kubedb/redis/commit/e391f0d6) Use AppsCode Community License (#194) +- [12211e40](https://github.com/kubedb/redis/commit/12211e40) Update Kubernetes v1.18.3 dependencies (#193) +- [73cf267e](https://github.com/kubedb/redis/commit/73cf267e) Prepare for release v0.7.0-beta.2 (#192) +- [d2911ea9](https://github.com/kubedb/redis/commit/d2911ea9) Update release.yml +- [c76ee46e](https://github.com/kubedb/redis/commit/c76ee46e) Update dependencies (#191) +- [0b030534](https://github.com/kubedb/redis/commit/0b030534) Fix build +- [408216ab](https://github.com/kubedb/redis/commit/408216ab) Add support for Redis v6.0.6 and TLS (#180) +- [944327df](https://github.com/kubedb/redis/commit/944327df) Update Kubernetes v1.18.3 dependencies (#187) +- [40b7cde6](https://github.com/kubedb/redis/commit/40b7cde6) Update Kubernetes v1.18.3 dependencies (#186) +- [f2bf110d](https://github.com/kubedb/redis/commit/f2bf110d) Update Kubernetes v1.18.3 dependencies (#184) +- [61485cfa](https://github.com/kubedb/redis/commit/61485cfa) Update Kubernetes v1.18.3 dependencies (#183) +- 
[184ae35d](https://github.com/kubedb/redis/commit/184ae35d) Update Kubernetes v1.18.3 dependencies (#182) +- [bc72b51b](https://github.com/kubedb/redis/commit/bc72b51b) Update Kubernetes v1.18.3 dependencies (#181) +- [ca540560](https://github.com/kubedb/redis/commit/ca540560) Remove dependency on enterprise operator (#179) +- [09bade2e](https://github.com/kubedb/redis/commit/09bade2e) Allow configuring k8s & db version in e2e tests (#178) +- [2bafb114](https://github.com/kubedb/redis/commit/2bafb114) Update to Kubernetes v1.18.3 (#177) +- [b2fe59ef](https://github.com/kubedb/redis/commit/b2fe59ef) Trigger e2e tests on /ok-to-test command (#176) +- [df5131e1](https://github.com/kubedb/redis/commit/df5131e1) Update to Kubernetes v1.18.3 (#175) +- [a404ae08](https://github.com/kubedb/redis/commit/a404ae08) Update to Kubernetes v1.18.3 (#174) +- [768962f4](https://github.com/kubedb/redis/commit/768962f4) Prepare for release v0.7.0-beta.1 (#173) +- [9efbb8e4](https://github.com/kubedb/redis/commit/9efbb8e4) include Makefile.env (#171) +- [b343c559](https://github.com/kubedb/redis/commit/b343c559) Update License (#170) +- [d666ac18](https://github.com/kubedb/redis/commit/d666ac18) Update to Kubernetes v1.18.3 (#169) +- [602354f6](https://github.com/kubedb/redis/commit/602354f6) Update ci.yml +- [59f2d238](https://github.com/kubedb/redis/commit/59f2d238) Update update-release-tracker.sh +- [64c96db5](https://github.com/kubedb/redis/commit/64c96db5) Update update-release-tracker.sh +- [49cd15a9](https://github.com/kubedb/redis/commit/49cd15a9) Add script to update release tracker on pr merge (#167) +- [c711be8f](https://github.com/kubedb/redis/commit/c711be8f) chore: replica alert typo (#166) +- [2d752316](https://github.com/kubedb/redis/commit/2d752316) Update .kodiak.toml +- [ea3b206d](https://github.com/kubedb/redis/commit/ea3b206d) Various fixes (#165) +- [e441809c](https://github.com/kubedb/redis/commit/e441809c) Update to Kubernetes v1.18.3 (#164) +- [1e5ecfb7](https://github.com/kubedb/redis/commit/1e5ecfb7) Update to Kubernetes v1.18.3 +- [742679dd](https://github.com/kubedb/redis/commit/742679dd) Create .kodiak.toml +- [2eb77b80](https://github.com/kubedb/redis/commit/2eb77b80) Update apis (#163) +- [7cf9e7d3](https://github.com/kubedb/redis/commit/7cf9e7d3) Use CRD v1 for Kubernetes >= 1.16 (#162) +- [bf072134](https://github.com/kubedb/redis/commit/bf072134) Update kind command +- [cb2a748d](https://github.com/kubedb/redis/commit/cb2a748d) Update dependencies +- [a30cd6eb](https://github.com/kubedb/redis/commit/a30cd6eb) Update to Kubernetes v1.18.3 (#161) +- [9cdac95f](https://github.com/kubedb/redis/commit/9cdac95f) Fix e2e tests (#160) +- [429141b4](https://github.com/kubedb/redis/commit/429141b4) Revendor kubedb.dev/apimachinery@master (#159) +- [664c086b](https://github.com/kubedb/redis/commit/664c086b) Use recommended kubernetes app labels +- [2e6a2f03](https://github.com/kubedb/redis/commit/2e6a2f03) Update crazy-max/ghaction-docker-buildx flag +- [88417e86](https://github.com/kubedb/redis/commit/88417e86) Pass annotations from CRD to AppBinding (#158) +- [84167d7a](https://github.com/kubedb/redis/commit/84167d7a) Trigger the workflow on push or pull request +- [2f43dd9a](https://github.com/kubedb/redis/commit/2f43dd9a) Use helm --wait +- [36399173](https://github.com/kubedb/redis/commit/36399173) Use updated operator labels in e2e tests (#156) +- [c6582491](https://github.com/kubedb/redis/commit/c6582491) Update CHANGELOG.md +- 
[197b4973](https://github.com/kubedb/redis/commit/197b4973) Support PodAffinity Templating (#155) +- [cdfbb77d](https://github.com/kubedb/redis/commit/cdfbb77d) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#154) +- [c1db4c43](https://github.com/kubedb/redis/commit/c1db4c43) Version update to resolve security issue in github.com/apache/th… (#153) +- [7acc502b](https://github.com/kubedb/redis/commit/7acc502b) Use rancher/local-path-provisioner@v0.0.12 (#152) +- [d00f765e](https://github.com/kubedb/redis/commit/d00f765e) Introduce spec.halted and removed dormant crd (#151) +- [9ed1d97e](https://github.com/kubedb/redis/commit/9ed1d97e) Add `Pause` Feature (#150) +- [39ed60c4](https://github.com/kubedb/redis/commit/39ed60c4) Refactor CI pipeline to build once (#149) +- [1707e0c7](https://github.com/kubedb/redis/commit/1707e0c7) Update kubernetes client-go to 1.16.3 (#148) +- [dcbb4be4](https://github.com/kubedb/redis/commit/dcbb4be4) Update catalog values for make install command +- [9fa3ef1c](https://github.com/kubedb/redis/commit/9fa3ef1c) Update catalog values for make install command (#147) +- [44538409](https://github.com/kubedb/redis/commit/44538409) Use charts to install operator (#146) +- [05e3b95a](https://github.com/kubedb/redis/commit/05e3b95a) Matrix test for github actions (#145) +- [e76f96f6](https://github.com/kubedb/redis/commit/e76f96f6) Add add-license make target +- [6ccd651c](https://github.com/kubedb/redis/commit/6ccd651c) Update Makefile +- [2a56f27f](https://github.com/kubedb/redis/commit/2a56f27f) Add license header to files (#144) +- [5ce5e5e0](https://github.com/kubedb/redis/commit/5ce5e5e0) Run e2e tests in parallel (#142) +- [77012ddf](https://github.com/kubedb/redis/commit/77012ddf) Use log.Fatal instead of Must() (#143) +- [aa7f1673](https://github.com/kubedb/redis/commit/aa7f1673) Enable make ci (#141) +- [abd6a605](https://github.com/kubedb/redis/commit/abd6a605) Remove EnableStatusSubresource (#140) +- [08cfe0ca](https://github.com/kubedb/redis/commit/08cfe0ca) Fix tests for github actions (#139) +- [09e72f63](https://github.com/kubedb/redis/commit/09e72f63) Prepend redis.conf to args list (#136) +- [101afa35](https://github.com/kubedb/redis/commit/101afa35) Run e2e tests using GitHub actions (#137) +- [bbf5cb9f](https://github.com/kubedb/redis/commit/bbf5cb9f) Validate DBVersionSpecs and fixed broken build (#138) +- [26f0c88b](https://github.com/kubedb/redis/commit/26f0c88b) Update go.yml +- [9dab8c06](https://github.com/kubedb/redis/commit/9dab8c06) Enable GitHub actions +- [6a722f20](https://github.com/kubedb/redis/commit/6a722f20) Update changelog + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.2.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.2.md new file mode 100644 index 0000000000..772e1104fd --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.2.md @@ -0,0 +1,159 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.27-rc.2 + name: Changelog-v2020.10.27-rc.2 + parent: welcome + weight: 20201027 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.27-rc.2/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.27-rc.2/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB 
v2020.10.27-rc.2 (2020-10-28) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0-rc.2](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0-rc.2) + +- [c2e95f74](https://github.com/appscode/kubedb-enterprise/commit/c2e95f74) Prepare for release v0.1.0-rc.2 (#84) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0-rc.2](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0-rc.2) + +- [dbc93cda](https://github.com/kubedb/apimachinery/commit/dbc93cda) Add dnsConfig and dnsPolicy to podTemplate (#636) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0-rc.2](https://github.com/kubedb/cli/releases/tag/v0.14.0-rc.2) + +- [b00a2123](https://github.com/kubedb/cli/commit/b00a2123) Prepare for release v0.14.0-rc.2 (#533) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0-rc.2](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0-rc.2) + +- [846ea8ee](https://github.com/kubedb/elasticsearch/commit/846ea8ee) Prepare for release v0.14.0-rc.2 (#396) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.0-rc.2](https://github.com/kubedb/installer/releases/tag/v0.14.0-rc.2) + +- [262439e](https://github.com/kubedb/installer/commit/262439e) Prepare for release v0.14.0-rc.2 (#176) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.0-rc.2](https://github.com/kubedb/memcached/releases/tag/v0.7.0-rc.2) + +- [a63e015b](https://github.com/kubedb/memcached/commit/a63e015b) Prepare for release v0.7.0-rc.2 (#228) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.0-rc.2](https://github.com/kubedb/mongodb/releases/tag/v0.7.0-rc.2) + +- [d87c55bb](https://github.com/kubedb/mongodb/commit/d87c55bb) Prepare for release v0.7.0-rc.2 (#298) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.0-rc.2](https://github.com/kubedb/mysql/releases/tag/v0.7.0-rc.2) + +- [921327d1](https://github.com/kubedb/mysql/commit/921327d1) Prepare for release v0.7.0-rc.2 (#288) + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.0-rc.2](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0-rc.2) + +- [b49e098](https://github.com/kubedb/mysql-replication-mode-detector/commit/b49e098) Prepare for release v0.1.0-rc.2 (#76) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.0-rc.2](https://github.com/kubedb/operator/releases/tag/v0.14.0-rc.2) + +- [a06c98d1](https://github.com/kubedb/operator/commit/a06c98d1) Prepare for release v0.14.0-rc.2 (#336) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.0-rc.2](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0-rc.2) + +- [ae82716f](https://github.com/kubedb/percona-xtradb/commit/ae82716f) Prepare for release v0.1.0-rc.2 (#120) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.0-rc.2](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0-rc.2) + + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.0-rc.2](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0-rc.2) + +- [c4083972](https://github.com/kubedb/pgbouncer/commit/c4083972) Prepare for release v0.1.0-rc.2 (#93) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### 
[v0.14.0-rc.2](https://github.com/kubedb/postgres/releases/tag/v0.14.0-rc.2) + +- [2ed7a29c](https://github.com/kubedb/postgres/commit/2ed7a29c) Prepare for release v0.14.0-rc.2 (#406) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.0-rc.2](https://github.com/kubedb/proxysql/releases/tag/v0.1.0-rc.2) + +- [8a5443d9](https://github.com/kubedb/proxysql/commit/8a5443d9) Prepare for release v0.1.0-rc.2 (#102) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.0-rc.2](https://github.com/kubedb/redis/releases/tag/v0.7.0-rc.2) + +- [ac0d5b08](https://github.com/kubedb/redis/commit/ac0d5b08) Prepare for release v0.7.0-rc.2 (#247) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.10.28.md b/content/docs/v2024.1.31/CHANGELOG-v2020.10.28.md new file mode 100644 index 0000000000..d22c637030 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.10.28.md @@ -0,0 +1,2215 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.10.28 + name: Changelog-v2020.10.28 + parent: welcome + weight: 20201028 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.10.28/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.10.28/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.10.28 (2020-10-29) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.0) + +- [3c20bdae](https://github.com/appscode/kubedb-enterprise/commit/3c20bdae) Prepare for release v0.1.0 (#88) +- [9bd093fc](https://github.com/appscode/kubedb-enterprise/commit/9bd093fc) Change selector to podNames (#86) +- [c2e95f74](https://github.com/appscode/kubedb-enterprise/commit/c2e95f74) Prepare for release v0.1.0-rc.2 (#84) +- [095e631c](https://github.com/appscode/kubedb-enterprise/commit/095e631c) Prepare for release v0.1.0-rc.1 (#83) +- [5df3d1e9](https://github.com/appscode/kubedb-enterprise/commit/5df3d1e9) Prepare for release v0.1.0-beta.6 (#82) +- [c7bf3943](https://github.com/appscode/kubedb-enterprise/commit/c7bf3943) Prepare for release v0.1.0-beta.5 (#81) +- [1bf37b01](https://github.com/appscode/kubedb-enterprise/commit/1bf37b01) Update KubeDB api (#80) +- [a99c4e9f](https://github.com/appscode/kubedb-enterprise/commit/a99c4e9f) Update readme +- [2ad24272](https://github.com/appscode/kubedb-enterprise/commit/2ad24272) Update repository config (#79) +- [d045bd2d](https://github.com/appscode/kubedb-enterprise/commit/d045bd2d) Prepare for release v0.1.0-beta.4 (#78) +- [5fbe4b48](https://github.com/appscode/kubedb-enterprise/commit/5fbe4b48) Update KubeDB api (#73) +- [00db6203](https://github.com/appscode/kubedb-enterprise/commit/00db6203) Replace getConditions with kmapi.NewCondition (#71) +- [aea1f64a](https://github.com/appscode/kubedb-enterprise/commit/aea1f64a) Update License header (#70) +- [1c15c2b8](https://github.com/appscode/kubedb-enterprise/commit/1c15c2b8) Add RedisOpsRequest Controller (#28) +- [5cedb8fd](https://github.com/appscode/kubedb-enterprise/commit/5cedb8fd) Add MySQL OpsRequest Controller (#14) +- [f0f282c0](https://github.com/appscode/kubedb-enterprise/commit/f0f282c0) Add Reconfigure TLS (#69) +- 
[cea85618](https://github.com/appscode/kubedb-enterprise/commit/cea85618) Add Restart Operation, Readiness Criteria and Remove Configuration (#59) +- [68cd3dcc](https://github.com/appscode/kubedb-enterprise/commit/68cd3dcc) Update repository config (#66) +- [feef09ab](https://github.com/appscode/kubedb-enterprise/commit/feef09ab) Publish docker images to ghcr.io (#65) +- [199d4bd2](https://github.com/appscode/kubedb-enterprise/commit/199d4bd2) Update repository config (#60) +- [2ae29633](https://github.com/appscode/kubedb-enterprise/commit/2ae29633) Reconfigure MongoDB with Vertical Scaling (#57) +- [9a98fc29](https://github.com/appscode/kubedb-enterprise/commit/9a98fc29) Fix MongoDB Upgrade (#51) +- [9a1a792a](https://github.com/appscode/kubedb-enterprise/commit/9a1a792a) Integrate cert-manager for Elasticsearch (#56) +- [b02cda77](https://github.com/appscode/kubedb-enterprise/commit/b02cda77) Update repository config (#54) +- [947c33e2](https://github.com/appscode/kubedb-enterprise/commit/947c33e2) Update repository config (#52) +- [12edf6f1](https://github.com/appscode/kubedb-enterprise/commit/12edf6f1) Update Kubernetes v1.18.9 dependencies (#49) +- [08f6a4ac](https://github.com/appscode/kubedb-enterprise/commit/08f6a4ac) Add license verifier (#50) +- [30ceb1a5](https://github.com/appscode/kubedb-enterprise/commit/30ceb1a5) Add MongoDBOpsRequest Controller (#20) +- [164ed838](https://github.com/appscode/kubedb-enterprise/commit/164ed838) Use cert-manager v1 api (#47) +- [7612ec19](https://github.com/appscode/kubedb-enterprise/commit/7612ec19) Update apis (#45) +- [00550fe0](https://github.com/appscode/kubedb-enterprise/commit/00550fe0) Dynamically Generate Cluster Domain (#43) +- [e1c3193f](https://github.com/appscode/kubedb-enterprise/commit/e1c3193f) Use updated certstore & blobfs (#42) +- [0d5d05bb](https://github.com/appscode/kubedb-enterprise/commit/0d5d05bb) Add TLS support for redis (#35) +- [bb53fc86](https://github.com/appscode/kubedb-enterprise/commit/bb53fc86) Various fixes (#41) +- [023c5dfd](https://github.com/appscode/kubedb-enterprise/commit/023c5dfd) Add TLS/SSL configuration using Cert Manager for MySQL (#34) +- [e1795b97](https://github.com/appscode/kubedb-enterprise/commit/e1795b97) Update certificate spec for MongoDB and PgBouncer (#40) +- [5e82443d](https://github.com/appscode/kubedb-enterprise/commit/5e82443d) Update new Subject spec for certificates (#38) +- [099abfb8](https://github.com/appscode/kubedb-enterprise/commit/099abfb8) Update to cert-manager v0.16.0 (#37) +- [b14346d3](https://github.com/appscode/kubedb-enterprise/commit/b14346d3) Update to Kubernetes v1.18.3 (#36) +- [c569a8eb](https://github.com/appscode/kubedb-enterprise/commit/c569a8eb) Fix cert-manager integration for PgBouncer (#32) +- [28548950](https://github.com/appscode/kubedb-enterprise/commit/28548950) Update to Kubernetes v1.18.3 (#31) +- [1ba9573e](https://github.com/appscode/kubedb-enterprise/commit/1ba9573e) Include Makefile.env (#30) +- [54133b44](https://github.com/appscode/kubedb-enterprise/commit/54133b44) Disable e2e tests (#29) +- [3939ece7](https://github.com/appscode/kubedb-enterprise/commit/3939ece7) Update to Kubernetes v1.18.3 (#27) +- [95c6b535](https://github.com/appscode/kubedb-enterprise/commit/95c6b535) Update .kodiak.toml +- [a88032cd](https://github.com/appscode/kubedb-enterprise/commit/a88032cd) Add script to update release tracker on pr merge (#26) +- [a90f68e7](https://github.com/appscode/kubedb-enterprise/commit/a90f68e7) Rename docker image to kubedb-enterprise
+- [ccb9967f](https://github.com/appscode/kubedb-enterprise/commit/ccb9967f) Create .kodiak.toml +- [fb6222ab](https://github.com/appscode/kubedb-enterprise/commit/fb6222ab) Format CI files +- [93756db8](https://github.com/appscode/kubedb-enterprise/commit/93756db8) Fix e2e tests (#25) +- [48ada32b](https://github.com/appscode/kubedb-enterprise/commit/48ada32b) Fix e2e tests using self-hosted GitHub action runners (#23) +- [12b15d00](https://github.com/appscode/kubedb-enterprise/commit/12b15d00) Update to kubedb.dev/apimachinery@v0.14.0-alpha.6 (#24) +- [9f32ab11](https://github.com/appscode/kubedb-enterprise/commit/9f32ab11) Update to Kubernetes v1.18.3 (#21) +- [cd3422a7](https://github.com/appscode/kubedb-enterprise/commit/cd3422a7) Use CRD v1 for Kubernetes >= 1.16 (#19) +- [4cc2f714](https://github.com/appscode/kubedb-enterprise/commit/4cc2f714) Update to Kubernetes v1.18.3 (#18) +- [7fb86dfb](https://github.com/appscode/kubedb-enterprise/commit/7fb86dfb) Update cert-manager util +- [1c8e1e32](https://github.com/appscode/kubedb-enterprise/commit/1c8e1e32) Configure GCR Docker credential helper in release pipeline +- [cd74a0c2](https://github.com/appscode/kubedb-enterprise/commit/cd74a0c2) Vendor kubedb.dev/apimachinery@v0.14.0-beta.0 +- [5522f7ef](https://github.com/appscode/kubedb-enterprise/commit/5522f7ef) Revendor kubedb.dev/apimachinery@master +- [e52cecfb](https://github.com/appscode/kubedb-enterprise/commit/e52cecfb) Update crazy-max/ghaction-docker-buildx flag +- [9ce414ca](https://github.com/appscode/kubedb-enterprise/commit/9ce414ca) Merge pull request #17 from appscode/x7 +- [1938de61](https://github.com/appscode/kubedb-enterprise/commit/1938de61) Remove existing cluster +- [262dae05](https://github.com/appscode/kubedb-enterprise/commit/262dae05) Remove support for k8s 1.11 +- [a00f342c](https://github.com/appscode/kubedb-enterprise/commit/a00f342c) Run e2e tests on GitHub actions +- [b615b1ac](https://github.com/appscode/kubedb-enterprise/commit/b615b1ac) Use GCR_SERVICE_ACCOUNT_JSON_KEY env in CI +- [41668265](https://github.com/appscode/kubedb-enterprise/commit/41668265) Use gcr.io/appscode as docker registry (#16) +- [2e5df236](https://github.com/appscode/kubedb-enterprise/commit/2e5df236) Run on self-hosted hosts +- [3da6adef](https://github.com/appscode/kubedb-enterprise/commit/3da6adef) Store enterprise images in `gcr.io/appscode` (#15) +- [bd4a8eb1](https://github.com/appscode/kubedb-enterprise/commit/bd4a8eb1) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12) +- [c5436b50](https://github.com/appscode/kubedb-enterprise/commit/c5436b50) Don't handle deleted objects. 
(#11) +- [ee5eea66](https://github.com/appscode/kubedb-enterprise/commit/ee5eea66) Fix MongoDB cert-manager integration (#10) +- [105f08b8](https://github.com/appscode/kubedb-enterprise/commit/105f08b8) Add cert-manager integration for MongoDB (#9) +- [b2a3af53](https://github.com/appscode/kubedb-enterprise/commit/b2a3af53) Refactor PgBouncer controller into its pkg (#8) +- [b0e90f75](https://github.com/appscode/kubedb-enterprise/commit/b0e90f75) Use SecretInformer from apimachinery (#5) +- [8dabbb1b](https://github.com/appscode/kubedb-enterprise/commit/8dabbb1b) Use non-deprecated Exporter fields (#4) +- [de22842e](https://github.com/appscode/kubedb-enterprise/commit/de22842e) Cert-Manager support for PgBouncer [Client TLS] (#2) +- [1a6794b7](https://github.com/appscode/kubedb-enterprise/commit/1a6794b7) Fix plain text secret in exporter container of StatefulSet (#5) +- [ab104a9f](https://github.com/appscode/kubedb-enterprise/commit/ab104a9f) Update client-go to kubernetes-1.16.3 (#7) +- [68dbb142](https://github.com/appscode/kubedb-enterprise/commit/68dbb142) Use charts to install operator (#6) +- [30e3e729](https://github.com/appscode/kubedb-enterprise/commit/30e3e729) Add add-license make target +- [6c1a78a0](https://github.com/appscode/kubedb-enterprise/commit/6c1a78a0) Enable e2e tests in GitHub actions (#4) +- [0960f805](https://github.com/appscode/kubedb-enterprise/commit/0960f805) Initial implementation (#2) +- [a8a9b1db](https://github.com/appscode/kubedb-enterprise/commit/a8a9b1db) Update go.yml +- [bc3b2624](https://github.com/appscode/kubedb-enterprise/commit/bc3b2624) Enable GitHub actions +- [2e33db2b](https://github.com/appscode/kubedb-enterprise/commit/2e33db2b) Clone kubedb/postgres repo (#1) +- [45a7cace](https://github.com/appscode/kubedb-enterprise/commit/45a7cace) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild' + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.0](https://github.com/kubedb/apimachinery/releases/tag/v0.14.0) + +- [dbc93cda](https://github.com/kubedb/apimachinery/commit/dbc93cda) Add dnsConfig and dnsPolicy to podTemplate (#636) +- [57468b4d](https://github.com/kubedb/apimachinery/commit/57468b4d) Add docker badge +- [cd358dda](https://github.com/kubedb/apimachinery/commit/cd358dda) Update MergeServicePort and PatchServicePort apis +- [b72968d5](https://github.com/kubedb/apimachinery/commit/b72968d5) Add port constants (#635) +- [6ce39fbe](https://github.com/kubedb/apimachinery/commit/6ce39fbe) Create separate governing service for each database (#634) +- [ecfb5d85](https://github.com/kubedb/apimachinery/commit/ecfb5d85) Update readme +- [61b26532](https://github.com/kubedb/apimachinery/commit/61b26532) Add MySQL constants (#633) +- [42888647](https://github.com/kubedb/apimachinery/commit/42888647) Update Kubernetes v1.18.9 dependencies (#632) +- [a57a7df5](https://github.com/kubedb/apimachinery/commit/a57a7df5) Set prx as ProxySQL short code (#631) +- [282992ea](https://github.com/kubedb/apimachinery/commit/282992ea) Update for release Stash@v2020.10.21 (#630) +- [5f17e1b4](https://github.com/kubedb/apimachinery/commit/5f17e1b4) Set default CA secret name even if the SSL is disabled. 
(#624) +- [c3710b61](https://github.com/kubedb/apimachinery/commit/c3710b61) Add host functions for different components of MongoDB (#625) +- [028d939d](https://github.com/kubedb/apimachinery/commit/028d939d) Refine api (#629) +- [4f4cfb3b](https://github.com/kubedb/apimachinery/commit/4f4cfb3b) Update Kubernetes v1.18.9 dependencies (#626) +- [47eaa486](https://github.com/kubedb/apimachinery/commit/47eaa486) Add MongoDBCustomConfigFile constant +- [5201c39b](https://github.com/kubedb/apimachinery/commit/5201c39b) Update MySQL ops request custom config api (#623) +- [06c2076f](https://github.com/kubedb/apimachinery/commit/06c2076f) Rename redis ConfigMapName to ConfigSecretName +- [0d4040b4](https://github.com/kubedb/apimachinery/commit/0d4040b4) API refinement (#622) +- [2eabe4c2](https://github.com/kubedb/apimachinery/commit/2eabe4c2) Update Kubernetes v1.18.9 dependencies (#621) +- [ac3ff1a6](https://github.com/kubedb/apimachinery/commit/ac3ff1a6) Handle halted condition (#620) +- [8ed26973](https://github.com/kubedb/apimachinery/commit/8ed26973) Update constants for Elasticsearch conditions (#618) +- [97c32f71](https://github.com/kubedb/apimachinery/commit/97c32f71) Use core/v1 ConditionStatus (#619) +- [304c48b8](https://github.com/kubedb/apimachinery/commit/304c48b8) Update Kubernetes v1.18.9 dependencies (#617) +- [a841401e](https://github.com/kubedb/apimachinery/commit/a841401e) Fix StatefulSet controller (#616) +- [517285ea](https://github.com/kubedb/apimachinery/commit/517285ea) Add spec.init.initialized field (#615) +- [057d3aef](https://github.com/kubedb/apimachinery/commit/057d3aef) Implement ReplicasAreReady (#614) +- [32105113](https://github.com/kubedb/apimachinery/commit/32105113) Update appcatalog dependency +- [34bf142e](https://github.com/kubedb/apimachinery/commit/34bf142e) Update swagger.json +- [7d9095af](https://github.com/kubedb/apimachinery/commit/7d9095af) Fix build (#613) +- [ad7988a8](https://github.com/kubedb/apimachinery/commit/ad7988a8) Fix build +- [0cf6469d](https://github.com/kubedb/apimachinery/commit/0cf6469d) Switch kubedb apiVersion to v1alpha2 (#612) +- [fd3131cd](https://github.com/kubedb/apimachinery/commit/fd3131cd) Add Volume Expansion and Configuration for MySQL OpsRequest (#607) +- [fd285012](https://github.com/kubedb/apimachinery/commit/fd285012) Add `alias` in the name of MongoDB server certificates (#611) +- [e562def9](https://github.com/kubedb/apimachinery/commit/e562def9) Remove GetMonitoringVendor method +- [a71f9b7e](https://github.com/kubedb/apimachinery/commit/a71f9b7e) Fix build +- [c97abe0d](https://github.com/kubedb/apimachinery/commit/c97abe0d) Update monitoring api dependency (#610) +- [d6070fc7](https://github.com/kubedb/apimachinery/commit/d6070fc7) Remove deprecated fields for monitoring (#609) +- [8d2f606a](https://github.com/kubedb/apimachinery/commit/8d2f606a) Add framework support for conditions (#608) +- [a74ea7a4](https://github.com/kubedb/apimachinery/commit/a74ea7a4) Bring back mysql ops spec StatefulSetOrdinal field +- [bda2d85a](https://github.com/kubedb/apimachinery/commit/bda2d85a) Add VerticalAutoscaler type (#606) +- [b9b22a35](https://github.com/kubedb/apimachinery/commit/b9b22a35) Add MySQL constant (#604) +- [2b887957](https://github.com/kubedb/apimachinery/commit/2b887957) Fix typo +- [c31cd2fd](https://github.com/kubedb/apimachinery/commit/c31cd2fd) Update ops request enumerations +- [41083a9d](https://github.com/kubedb/apimachinery/commit/41083a9d) Revise ops request apis (#603) +- 
[acfb1564](https://github.com/kubedb/apimachinery/commit/acfb1564) Revise api conditions (#602) +- [5c12de3a](https://github.com/kubedb/apimachinery/commit/5c12de3a) Update DB condition types and phases (#598) +- [f27cb720](https://github.com/kubedb/apimachinery/commit/f27cb720) Write data restore completion event using dynamic client (#601) +- [60ada14c](https://github.com/kubedb/apimachinery/commit/60ada14c) Update Kubernetes v1.18.9 dependencies (#600) +- [5779a5d7](https://github.com/kubedb/apimachinery/commit/5779a5d7) Update for release Stash@v2020.09.29 (#599) +- [86121dad](https://github.com/kubedb/apimachinery/commit/86121dad) Update Kubernetes v1.18.9 dependencies (#597) +- [da9fbe59](https://github.com/kubedb/apimachinery/commit/da9fbe59) Add DB conditions +- [7399d13f](https://github.com/kubedb/apimachinery/commit/7399d13f) Rename ES root-cert to ca-cert (#594) +- [1cd75609](https://github.com/kubedb/apimachinery/commit/1cd75609) Remove spec.paused & deprecated fields DB crds (#596) +- [9c85f9f1](https://github.com/kubedb/apimachinery/commit/9c85f9f1) Use `status.conditions` to handle database initialization (#593) +- [87e8e58b](https://github.com/kubedb/apimachinery/commit/87e8e58b) Update Kubernetes v1.18.9 dependencies (#595) +- [32206db2](https://github.com/kubedb/apimachinery/commit/32206db2) Add helper methods for MySQL (#592) +- [10aca81a](https://github.com/kubedb/apimachinery/commit/10aca81a) Rename client node to ingest node (#583) +- [d8bbd5ec](https://github.com/kubedb/apimachinery/commit/d8bbd5ec) Update repository config (#591) +- [4d51a066](https://github.com/kubedb/apimachinery/commit/4d51a066) Update repository config (#590) +- [5905c2cb](https://github.com/kubedb/apimachinery/commit/5905c2cb) Update Kubernetes v1.18.9 dependencies (#589) +- [3dc3d970](https://github.com/kubedb/apimachinery/commit/3dc3d970) Update Kubernetes v1.18.3 dependencies (#588) +- [53b42277](https://github.com/kubedb/apimachinery/commit/53b42277) Add event recorder in controller struct (#587) +- [ec58309a](https://github.com/kubedb/apimachinery/commit/ec58309a) Update Kubernetes v1.18.3 dependencies (#586) +- [38050bae](https://github.com/kubedb/apimachinery/commit/38050bae) Initialize db from stash restoresession/restoreBatch (#567) +- [ec3efa91](https://github.com/kubedb/apimachinery/commit/ec3efa91) Update for release Stash@v2020.09.16 (#585) +- [5ddfd53a](https://github.com/kubedb/apimachinery/commit/5ddfd53a) Update Kubernetes v1.18.3 dependencies (#584) +- [24398515](https://github.com/kubedb/apimachinery/commit/24398515) Add some `MongoDB` and `MongoDBOpsRequest` Constants (#582) +- [584a4bf6](https://github.com/kubedb/apimachinery/commit/584a4bf6) Add primary and secondary role constant for MySQL (#581) +- [82299808](https://github.com/kubedb/apimachinery/commit/82299808) Update Kubernetes v1.18.3 dependencies (#580) +- [ecd1d17f](https://github.com/kubedb/apimachinery/commit/ecd1d17f) Add Functions to get Default Probes (#579) +- [76ac9bc0](https://github.com/kubedb/apimachinery/commit/76ac9bc0) Remove CertManagerClient client +- [b99048f4](https://github.com/kubedb/apimachinery/commit/b99048f4) Remove unused constants for ProxySQL +- [152cef57](https://github.com/kubedb/apimachinery/commit/152cef57) Update Kubernetes v1.18.3 dependencies (#578) +- [24c5e829](https://github.com/kubedb/apimachinery/commit/24c5e829) Update redis constants (#575) +- [7075b38d](https://github.com/kubedb/apimachinery/commit/7075b38d) Remove spec.updateStrategy field (#577) +- 
[dfd11955](https://github.com/kubedb/apimachinery/commit/dfd11955) Remove description from CRD yamls (#576) +- [2d1b5878](https://github.com/kubedb/apimachinery/commit/2d1b5878) Add autoscaling crds (#554) +- [68ed8127](https://github.com/kubedb/apimachinery/commit/68ed8127) Fix build +- [63d18f0d](https://github.com/kubedb/apimachinery/commit/63d18f0d) Rename PgBouncer archiver to client +- [a219c251](https://github.com/kubedb/apimachinery/commit/a219c251) Handle shard scenario for MongoDB cert names (#574) +- [d2c80e55](https://github.com/kubedb/apimachinery/commit/d2c80e55) Add MongoDB Custom Config Spec (#562) +- [1e69fb02](https://github.com/kubedb/apimachinery/commit/1e69fb02) Support multiple certificates per DB (#555) +- [9bbed3d1](https://github.com/kubedb/apimachinery/commit/9bbed3d1) Update Kubernetes v1.18.3 dependencies (#573) +- [7df78c7a](https://github.com/kubedb/apimachinery/commit/7df78c7a) Update CRD yamls +- [406d895d](https://github.com/kubedb/apimachinery/commit/406d895d) Implement ServiceMonitorAdditionalLabels method (#572) +- [cfe4374a](https://github.com/kubedb/apimachinery/commit/cfe4374a) Make ServiceMonitor name same as stats service (#563) +- [d2ed6b4a](https://github.com/kubedb/apimachinery/commit/d2ed6b4a) Update for release Stash@v2020.08.27 (#571) +- [749b9084](https://github.com/kubedb/apimachinery/commit/749b9084) Update for release Stash@v2020.08.27-rc.0 (#570) +- [5d8bf42c](https://github.com/kubedb/apimachinery/commit/5d8bf42c) Update for release Stash@v2020.08.26-rc.1 (#569) +- [6edc4782](https://github.com/kubedb/apimachinery/commit/6edc4782) Update for release Stash@v2020.08.26-rc.0 (#568) +- [c451ff3a](https://github.com/kubedb/apimachinery/commit/c451ff3a) Update Kubernetes v1.18.3 dependencies (#565) +- [fdc6e2d6](https://github.com/kubedb/apimachinery/commit/fdc6e2d6) Update Kubernetes v1.18.3 dependencies (#564) +- [2f509c26](https://github.com/kubedb/apimachinery/commit/2f509c26) Update Kubernetes v1.18.3 dependencies (#561) +- [da655afe](https://github.com/kubedb/apimachinery/commit/da655afe) Update Kubernetes v1.18.3 dependencies (#560) +- [9c2c06a9](https://github.com/kubedb/apimachinery/commit/9c2c06a9) Fix MySQL enterprise condition's constant (#559) +- [81ed2724](https://github.com/kubedb/apimachinery/commit/81ed2724) Update Kubernetes v1.18.3 dependencies (#558) +- [738b7ade](https://github.com/kubedb/apimachinery/commit/738b7ade) Update Kubernetes v1.18.3 dependencies (#557) +- [93f0af4b](https://github.com/kubedb/apimachinery/commit/93f0af4b) Add MySQL Constants (#553) +- [6049554d](https://github.com/kubedb/apimachinery/commit/6049554d) Add {Horizontal,Vertical}ScalingSpec for Redis (#534) +- [28552272](https://github.com/kubedb/apimachinery/commit/28552272) Enable TLS for Redis (#546) +- [68e00844](https://github.com/kubedb/apimachinery/commit/68e00844) Add Spec for MongoDB Volume Expansion (#548) +- [759a800a](https://github.com/kubedb/apimachinery/commit/759a800a) Add Subject spec for Certificate (#552) +- [b1552628](https://github.com/kubedb/apimachinery/commit/b1552628) Add email SANs for certificate (#551) +- [fdfad57e](https://github.com/kubedb/apimachinery/commit/fdfad57e) Update to cert-manager@v0.16.0 (#550) +- [3b5e9ece](https://github.com/kubedb/apimachinery/commit/3b5e9ece) Update to Kubernetes v1.18.3 (#549) +- [0c5a1e9b](https://github.com/kubedb/apimachinery/commit/0c5a1e9b) Make ElasticsearchVersion spec.tools optional (#526) +- [01a0b4b3](https://github.com/kubedb/apimachinery/commit/01a0b4b3) Add Conditions 
Constant for MongoDBOpsRequest (#535) +- [34a9ed61](https://github.com/kubedb/apimachinery/commit/34a9ed61) Update to Kubernetes v1.18.3 (#547) +- [6392f19e](https://github.com/kubedb/apimachinery/commit/6392f19e) Add Storage Engine Support for Percona Server MongoDB (#538) +- [02d205bc](https://github.com/kubedb/apimachinery/commit/02d205bc) Remove extra - from prefix/suffix (#543) +- [06158f51](https://github.com/kubedb/apimachinery/commit/06158f51) Update to Kubernetes v1.18.3 (#542) +- [157a8724](https://github.com/kubedb/apimachinery/commit/157a8724) Update for release Stash@v2020.07.09-beta.0 (#541) +- [0e86bdbd](https://github.com/kubedb/apimachinery/commit/0e86bdbd) Update for release Stash@v2020.07.08-beta.0 (#540) +- [f4a22d0c](https://github.com/kubedb/apimachinery/commit/f4a22d0c) Update License notice (#539) +- [3c598500](https://github.com/kubedb/apimachinery/commit/3c598500) Use Allowlist and Denylist in MySQLVersion (#537) +- [3c58c062](https://github.com/kubedb/apimachinery/commit/3c58c062) Update to Kubernetes v1.18.3 (#536) +- [e1f3d603](https://github.com/kubedb/apimachinery/commit/e1f3d603) Update update-release-tracker.sh +- [0cf4a01f](https://github.com/kubedb/apimachinery/commit/0cf4a01f) Update update-release-tracker.sh +- [bfbd1f8d](https://github.com/kubedb/apimachinery/commit/bfbd1f8d) Add script to update release tracker on pr merge (#533) +- [b817d87c](https://github.com/kubedb/apimachinery/commit/b817d87c) Update .kodiak.toml +- [772e8d2f](https://github.com/kubedb/apimachinery/commit/772e8d2f) Add Ops Request const (#529) +- [453d67ca](https://github.com/kubedb/apimachinery/commit/453d67ca) Add constants for mutator & validator group names (#532) +- [69f997b5](https://github.com/kubedb/apimachinery/commit/69f997b5) Unwrap top level api folder (#531) +- [a8ccec51](https://github.com/kubedb/apimachinery/commit/a8ccec51) Make RedisOpsRequest Namespaced (#530) +- [8a076bfb](https://github.com/kubedb/apimachinery/commit/8a076bfb) Update .kodiak.toml +- [6a8e51b9](https://github.com/kubedb/apimachinery/commit/6a8e51b9) Update to Kubernetes v1.18.3 (#527) +- [2ef41962](https://github.com/kubedb/apimachinery/commit/2ef41962) Create .kodiak.toml +- [8e596d4e](https://github.com/kubedb/apimachinery/commit/8e596d4e) Update to Kubernetes v1.18.3 +- [31f72200](https://github.com/kubedb/apimachinery/commit/31f72200) Update comments +- [27bc9265](https://github.com/kubedb/apimachinery/commit/27bc9265) Use CRD v1 for Kubernetes >= 1.16 (#525) +- [d1be7d1d](https://github.com/kubedb/apimachinery/commit/d1be7d1d) Remove defaults from CRD v1beta1 +- [5c73d507](https://github.com/kubedb/apimachinery/commit/5c73d507) Use crd.Interface in Controller (#524) +- [27763544](https://github.com/kubedb/apimachinery/commit/27763544) Generate both v1beta1 and v1 CRD YAML (#523) +- [5a0f0a93](https://github.com/kubedb/apimachinery/commit/5a0f0a93) Update to Kubernetes v1.18.3 (#520) +- [25008c1a](https://github.com/kubedb/apimachinery/commit/25008c1a) Change MySQL `[]ContainerResources` to `core.ResourceRequirements` (#522) +- [abc99620](https://github.com/kubedb/apimachinery/commit/abc99620) Merge pull request #521 from kubedb/mongo-vertical +- [f38a109c](https://github.com/kubedb/apimachinery/commit/f38a109c) Change `[]ContainerResources` to `core.ResourceRequirements` +- [e3058f85](https://github.com/kubedb/apimachinery/commit/e3058f85) Update `modification request` to `ops request` (#519) +- [bd3c7d01](https://github.com/kubedb/apimachinery/commit/bd3c7d01) Fix linter warnings +- 
[d70848d7](https://github.com/kubedb/apimachinery/commit/d70848d7) Rename api group to ops.kubedb.com (#518) +- [745f2438](https://github.com/kubedb/apimachinery/commit/745f2438) Merge pull request #511 from pohly/memcached-pmem +- [75c949aa](https://github.com/kubedb/apimachinery/commit/75c949aa) memcached: add dataVolume +- [3e5cdc03](https://github.com/kubedb/apimachinery/commit/3e5cdc03) Merge pull request #517 from kubedb/mg-scaling +- [0c9e2b4f](https://github.com/kubedb/apimachinery/commit/0c9e2b4f) Flatten api structure +- [9c98fbc1](https://github.com/kubedb/apimachinery/commit/9c98fbc1) Add MongoDBModificationRequest Scaling Spec +- [22b199b6](https://github.com/kubedb/apimachinery/commit/22b199b6) Update comment for UpgradeSpec +- [c66fda4b](https://github.com/kubedb/apimachinery/commit/c66fda4b) Review DBA crds (#516) +- [bc1e13f7](https://github.com/kubedb/apimachinery/commit/bc1e13f7) Merge pull request #509 from kubedb/mysql-upgrade +- [2c9ae147](https://github.com/kubedb/apimachinery/commit/2c9ae147) Fix type names and definition +- [4c7c5074](https://github.com/kubedb/apimachinery/commit/4c7c5074) Update MySQLModificationRequest CRD +- [4096642c](https://github.com/kubedb/apimachinery/commit/4096642c) Merge pull request #501 from kubedb/redis-modification +- [3d683e58](https://github.com/kubedb/apimachinery/commit/3d683e58) Use standard condition from kmodules +- [7be4a3dd](https://github.com/kubedb/apimachinery/commit/7be4a3dd) Update RedisModificationRequest CRD +- [a594bdb9](https://github.com/kubedb/apimachinery/commit/a594bdb9) Merge pull request #503 from kubedb/elastic-upgrade +- [ee0eada4](https://github.com/kubedb/apimachinery/commit/ee0eada4) Use standard conditions from kmodules +- [22cb24f6](https://github.com/kubedb/apimachinery/commit/22cb24f6) Update dba api for elasticsearchModificationRequest +- [a2768752](https://github.com/kubedb/apimachinery/commit/a2768752) Merge pull request #499 from kubedb/mongodb-modification +- [be5dde87](https://github.com/kubedb/apimachinery/commit/be5dde87) Use standard conditions from kmodules +- [9bf2c80e](https://github.com/kubedb/apimachinery/commit/9bf2c80e) Add MongoDBModificationRequest Spec +- [9ee80efd](https://github.com/kubedb/apimachinery/commit/9ee80efd) Fix Update***Status helpers (#515) +- [2c75e77d](https://github.com/kubedb/apimachinery/commit/2c75e77d) Merge pull request #512 from kubedb/prestop-mongos +- [e13d73c5](https://github.com/kubedb/apimachinery/commit/e13d73c5) Use recommended kubernetes app labels (#514) +- [50856267](https://github.com/kubedb/apimachinery/commit/50856267) Add Enum markers to api types +- [95e00c8e](https://github.com/kubedb/apimachinery/commit/95e00c8e) Add Default PreStop Hook for Mongos +- [d99a1001](https://github.com/kubedb/apimachinery/commit/d99a1001) Trigger the workflow on push or pull request +- [b8047fc0](https://github.com/kubedb/apimachinery/commit/b8047fc0) Regenerate api types +- [83c8e40a](https://github.com/kubedb/apimachinery/commit/83c8e40a) Update CHANGELOG.md +- [ddb1f266](https://github.com/kubedb/apimachinery/commit/ddb1f266) Add requireSSL field to MySQL crd (#506) +- [c0c293bd](https://github.com/kubedb/apimachinery/commit/c0c293bd) Rename Elasticsearch NODE_ROLE constant +- [9bfe7f2c](https://github.com/kubedb/apimachinery/commit/9bfe7f2c) Rename Mongo SHARD_INDEX constant +- [e6f72c37](https://github.com/kubedb/apimachinery/commit/e6f72c37) Add default affinity rules for Redis (#508) +- [ab738acf](https://github.com/kubedb/apimachinery/commit/ab738acf) Set 
default affinity if not provided for Elasticsearch (#507) +- [3ea77524](https://github.com/kubedb/apimachinery/commit/3ea77524) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#504) +- [d39a1db6](https://github.com/kubedb/apimachinery/commit/d39a1db6) Default replica count for etcd +- [5b2fb5e2](https://github.com/kubedb/apimachinery/commit/5b2fb5e2) Rename CA key and cert file name constants +- [116ebb65](https://github.com/kubedb/apimachinery/commit/116ebb65) Add constants for MongoDB cert-manager integration +- [46525cfa](https://github.com/kubedb/apimachinery/commit/46525cfa) Generate keyFile for sendX509 auth mode in Mongo (#502) +- [fef64435](https://github.com/kubedb/apimachinery/commit/fef64435) Update MongoDB keyFile comments +- [cf53627d](https://github.com/kubedb/apimachinery/commit/cf53627d) Redesign keyFile secrets for replicaset and sharing MongoDBs (#500) +- [2d144e0d](https://github.com/kubedb/apimachinery/commit/2d144e0d) Ensure that statefulset PDB allows at least 1 unavailable pod +- [518ce3c4](https://github.com/kubedb/apimachinery/commit/518ce3c4) Add labeler field into mysqlversions CRD (#497) +- [b5069f30](https://github.com/kubedb/apimachinery/commit/b5069f30) TLS config for mongoDB, Issuer ref for mysql,percona,postgres,proxysql (#496) +- [a02d04eb](https://github.com/kubedb/apimachinery/commit/a02d04eb) Update for percona-xtradb standalone restore (#491) +- [0e1d5e0d](https://github.com/kubedb/apimachinery/commit/0e1d5e0d) Add PgBouncer constants +- [1267de93](https://github.com/kubedb/apimachinery/commit/1267de93) Add secret informer and lister to controller (#495) +- [b88239d7](https://github.com/kubedb/apimachinery/commit/b88239d7) Update default affinity rules +- [b32bf152](https://github.com/kubedb/apimachinery/commit/b32bf152) validate prometheus agent spec (#493) +- [c046f673](https://github.com/kubedb/apimachinery/commit/c046f673) Change termination policy to Halt from Pause (#492) +- [118d374d](https://github.com/kubedb/apimachinery/commit/118d374d) Affinity defaulting for mongodb (#490) +- [de5af664](https://github.com/kubedb/apimachinery/commit/de5af664) Add method for ES governing service name (#487) +- [8763e5ef](https://github.com/kubedb/apimachinery/commit/8763e5ef) Cleanup DB helper defaulting functions (#489) +- [3c552237](https://github.com/kubedb/apimachinery/commit/3c552237) Delete dormant and snapshot CRD (#486) +- [e4285d11](https://github.com/kubedb/apimachinery/commit/e4285d11) Revendor stash@v0.9.0-rc.4 +- [722f0b21](https://github.com/kubedb/apimachinery/commit/722f0b21) Update cron and stow libraries (#485) +- [75099611](https://github.com/kubedb/apimachinery/commit/75099611) Vendor stash@v0.9.0-rc.3 (#484) +- [a483a070](https://github.com/kubedb/apimachinery/commit/a483a070) Fix the const value for the max possible base-server id for MySQ… (#483) +- [5dd537e5](https://github.com/kubedb/apimachinery/commit/5dd537e5) Fix linter errors. +- [1d4b76df](https://github.com/kubedb/apimachinery/commit/1d4b76df) Fix typo. +- [95749a7f](https://github.com/kubedb/apimachinery/commit/95749a7f) Add spec.halted as a replacement for DormantDatabase crd (#482) +- [17bd64fb](https://github.com/kubedb/apimachinery/commit/17bd64fb) Bring back support for k8s 1.11 (#481) +- [8e913f55](https://github.com/kubedb/apimachinery/commit/8e913f55) Enable Status subresource for MySQL crd. 
(#480) +- [3f83b38c](https://github.com/kubedb/apimachinery/commit/3f83b38c) Update mount path for custom config for ProxySQL & PerconaXtraDB (#478) +- [0daeb688](https://github.com/kubedb/apimachinery/commit/0daeb688) Deprecate Pause termination policy (#479) +- [5176ab93](https://github.com/kubedb/apimachinery/commit/5176ab93) Cert-Manager for PgBouncer (#467) +- [b9ec93d3](https://github.com/kubedb/apimachinery/commit/b9ec93d3) API Review (#476) +- [29ed98ef](https://github.com/kubedb/apimachinery/commit/29ed98ef) Fix MySQL base server id data type (#475) +- [1c9ad3d0](https://github.com/kubedb/apimachinery/commit/1c9ad3d0) Fixed mongodb ssl args (#471) +- [67931063](https://github.com/kubedb/apimachinery/commit/67931063) Introduce KubeDB DBA resources (#470) +- [c6f7c72d](https://github.com/kubedb/apimachinery/commit/c6f7c72d) Update client-go to kubernetes-1.16.3 (#468) +- [eeb91084](https://github.com/kubedb/apimachinery/commit/eeb91084) Always create RBAC roles (#474) +- [bd389625](https://github.com/kubedb/apimachinery/commit/bd389625) Use 2-way merge patch in Patch helpers (#472) +- [2929bb25](https://github.com/kubedb/apimachinery/commit/2929bb25) Show AUTH_PLUGIN for ESVersion (#469) +- [2ff472ad](https://github.com/kubedb/apimachinery/commit/2ff472ad) Add helper library for CRDs (#466) +- [530d9124](https://github.com/kubedb/apimachinery/commit/530d9124) Use kubebuilder generated CRD yamls (#465) +- [b4b5db7e](https://github.com/kubedb/apimachinery/commit/b4b5db7e) Use controller-tools@v0.2.2 to generate structural schema (#464) +- [6788ee91](https://github.com/kubedb/apimachinery/commit/6788ee91) Generate protobuf files for api types (#463) +- [b043ee97](https://github.com/kubedb/apimachinery/commit/b043ee97) Add add-license make target +- [73931d22](https://github.com/kubedb/apimachinery/commit/73931d22) Add license header to files (#462) +- [0b18633a](https://github.com/kubedb/apimachinery/commit/0b18633a) Verify Go modules in ci (#461) +- [bed5a9c0](https://github.com/kubedb/apimachinery/commit/bed5a9c0) Use stash@v0.9.0-rc.2 (#460) +- [6fa677b3](https://github.com/kubedb/apimachinery/commit/6fa677b3) Update badge +- [40cc465c](https://github.com/kubedb/apimachinery/commit/40cc465c) Show diff when files `make verify` fails (#459) +- [ef58dbd0](https://github.com/kubedb/apimachinery/commit/ef58dbd0) Split imports into 3 blocks (#458) +- [03dbc5bf](https://github.com/kubedb/apimachinery/commit/03dbc5bf) Remove travis.yml +- [1480eaea](https://github.com/kubedb/apimachinery/commit/1480eaea) Fix make ci command +- [41a05a48](https://github.com/kubedb/apimachinery/commit/41a05a48) Remove EnableStatusSubresource (#457) +- [e2aca118](https://github.com/kubedb/apimachinery/commit/e2aca118) Update linter command +- [c95171ef](https://github.com/kubedb/apimachinery/commit/c95171ef) Run fmt before verify-gen +- [a75ba832](https://github.com/kubedb/apimachinery/commit/a75ba832) Fix linter issues (#456) +- [42521d29](https://github.com/kubedb/apimachinery/commit/42521d29) Verify generated files are up to date (#454) +- [948c9096](https://github.com/kubedb/apimachinery/commit/948c9096) Rename workflow pipeline +- [fc97357b](https://github.com/kubedb/apimachinery/commit/fc97357b) Use utilruntime.Must to check for errors (#453) +- [bac255d3](https://github.com/kubedb/apimachinery/commit/bac255d3) Prepare v0.13.0-rc.1 release (#452) +- [d4f718b4](https://github.com/kubedb/apimachinery/commit/d4f718b4) Update Makefile +- [d3ef169d](https://github.com/kubedb/apimachinery/commit/d3ef169d) 
Generate swagger.json (#451) +- [b862382f](https://github.com/kubedb/apimachinery/commit/b862382f) Remove client dir from linter +- [7c01ece1](https://github.com/kubedb/apimachinery/commit/7c01ece1) Add ProxySQLVersion types (#442) +- [7fdba5b2](https://github.com/kubedb/apimachinery/commit/7fdba5b2) Add ProxySQL to Load Balance MySQL Query Requests (#439) +- [86289539](https://github.com/kubedb/apimachinery/commit/86289539) Update dependencies +- [78b79df1](https://github.com/kubedb/apimachinery/commit/78b79df1) Add Readiness probe for PerconaXtraDB (#448) +- [a7283390](https://github.com/kubedb/apimachinery/commit/a7283390) Added helper functions to dbVersions to check for valid specs (#450) +- [630714c2](https://github.com/kubedb/apimachinery/commit/630714c2) Fix linter errors (#449) +- [1b2a09bf](https://github.com/kubedb/apimachinery/commit/1b2a09bf) Add PgBouncer types and clientset (#424) +- [9b4868f1](https://github.com/kubedb/apimachinery/commit/9b4868f1) Added authPlugin field in Version Catalog spec (#447) +- [f828d922](https://github.com/kubedb/apimachinery/commit/f828d922) Remove additional print column PROXYSQL_IMAGE from PerconaXtraDBVersion (#446) +- [262aab46](https://github.com/kubedb/apimachinery/commit/262aab46) Run make ci +- [e5fd19f8](https://github.com/kubedb/apimachinery/commit/e5fd19f8) Add GitHub actions file +- [20e376b0](https://github.com/kubedb/apimachinery/commit/20e376b0) Add helper methods to configure proxysql for group replication (#441) +- [74daad34](https://github.com/kubedb/apimachinery/commit/74daad34) Change to configure proxysql for group replication (#443) +- [f617c0f6](https://github.com/kubedb/apimachinery/commit/f617c0f6) Change default termination policy as Delete. (#444) +- [5cffe0c3](https://github.com/kubedb/apimachinery/commit/5cffe0c3) Remove ProxySQL image field from PerconaXtraDBVersion type (#445) +- [d28870c9](https://github.com/kubedb/apimachinery/commit/d28870c9) Use authentication for readiness and liveness probes (#440) +- [93079ac4](https://github.com/kubedb/apimachinery/commit/93079ac4) Update changelog +- [f61829ac](https://github.com/kubedb/apimachinery/commit/f61829ac) Update github.com/kmodules/apiserver fork +- [598495bf](https://github.com/kubedb/apimachinery/commit/598495bf) Update galera arbitrator helper methods and constants (#438) +- [ee40af4f](https://github.com/kubedb/apimachinery/commit/ee40af4f) Use github.com/Azure/go-autorest/autorest@v0.7.0 (#437) +- [a796cc43](https://github.com/kubedb/apimachinery/commit/a796cc43) Add logic to get the PerconaXtraDB object name (#436) +- [9d4e953d](https://github.com/kubedb/apimachinery/commit/9d4e953d) Add galera arbitrator config (#420) +- [c0f65020](https://github.com/kubedb/apimachinery/commit/c0f65020) Delete hack/codegen.sh +- [264d09f6](https://github.com/kubedb/apimachinery/commit/264d09f6) Use github.com/golang/protobuf@v1.2.0 (#435) +- [3ee2a599](https://github.com/kubedb/apimachinery/commit/3ee2a599) Fix defaulting ClusterAuthMode (#426) +- [b750710e](https://github.com/kubedb/apimachinery/commit/b750710e) Update tls file paths in default probes (#425) +- [6539c67a](https://github.com/kubedb/apimachinery/commit/6539c67a) Bring back MongoDBConfiguration (#434) +- [9549fe9c](https://github.com/kubedb/apimachinery/commit/9549fe9c) Delete authorization and config api types (#433) +- [51b8c03e](https://github.com/kubedb/apimachinery/commit/51b8c03e) Rename from Percona to PerconaXtraDB (#432) +- [270ef77d](https://github.com/kubedb/apimachinery/commit/270ef77d) Remove 
hack/gencrd folder from Makefile +- [78fa42a7](https://github.com/kubedb/apimachinery/commit/78fa42a7) Update Makefile +- [19156024](https://github.com/kubedb/apimachinery/commit/19156024) Apply label to CRD yamls (#430) +- [7fedf00b](https://github.com/kubedb/apimachinery/commit/7fedf00b) Add naming patterns for CRDs (#429) +- [e86ef0f5](https://github.com/kubedb/apimachinery/commit/e86ef0f5) Use kubebuilder to generate crd yamls (#427) +- [3d76f0f8](https://github.com/kubedb/apimachinery/commit/3d76f0f8) Delete Report types (#428) +- [6973f90f](https://github.com/kubedb/apimachinery/commit/6973f90f) Fix travis build (#423) +- [db6703a7](https://github.com/kubedb/apimachinery/commit/db6703a7) Change package path to kubedb.dev/apimachinery (#422) +- [5f6c0b43](https://github.com/kubedb/apimachinery/commit/5f6c0b43) Update azure-sdk-for-go to v31.1.0 +- [337d6f57](https://github.com/kubedb/apimachinery/commit/337d6f57) Mongodb config parameter for stash-mongodb integration (#421) +- [fae41781](https://github.com/kubedb/apimachinery/commit/fae41781) API for SSL support in mongodb (#400) +- [4c1abd1d](https://github.com/kubedb/apimachinery/commit/4c1abd1d) Add util functions for Percona (#409) +- [d3391c78](https://github.com/kubedb/apimachinery/commit/d3391c78) Update dormant for percona and mariadb (#408) +- [6dd3b303](https://github.com/kubedb/apimachinery/commit/6dd3b303) Add percona version api, client (#407) +- [aca334d6](https://github.com/kubedb/apimachinery/commit/aca334d6) Add license header to make files (#419) +- [8e981d5f](https://github.com/kubedb/apimachinery/commit/8e981d5f) Use robfig/cron@v3 (#418) +- [484c1e50](https://github.com/kubedb/apimachinery/commit/484c1e50) Add MaxUnavailable to ElasticsearchNode +- [c9bc03fe](https://github.com/kubedb/apimachinery/commit/c9bc03fe) Add Maxunavaiable for Elasticsearch PDB support (#414) +- [7517175a](https://github.com/kubedb/apimachinery/commit/7517175a) Use stopch to cancel BlockOnStashOperator (#413) +- [0e7a57f2](https://github.com/kubedb/apimachinery/commit/0e7a57f2) Integrate stash/restic with kubedb (#398) +- [48e4bab6](https://github.com/kubedb/apimachinery/commit/48e4bab6) Add Makefile (#412) +- [3e0be30e](https://github.com/kubedb/apimachinery/commit/3e0be30e) Using PDB createOrPatch (#411) +- [ab265d62](https://github.com/kubedb/apimachinery/commit/ab265d62) PDB creator for StatefulSets and Deployments (#410) +- [dd30b7c2](https://github.com/kubedb/apimachinery/commit/dd30b7c2) Add Service Account name to Database CRDs (#404) +- [276e07f3](https://github.com/kubedb/apimachinery/commit/276e07f3) Add Percona api, client (#405) +- [26e83bdb](https://github.com/kubedb/apimachinery/commit/26e83bdb) Add MariaDB api, client (#406) +- [5ba91534](https://github.com/kubedb/apimachinery/commit/5ba91534) Update to k8s 1.14.0 client libraries using go.mod (#403) +- [d0115fa0](https://github.com/kubedb/apimachinery/commit/d0115fa0) Update changelog + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.0](https://github.com/kubedb/cli/releases/tag/v0.14.0) + + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.0) + +- [14cb3a98](https://github.com/kubedb/elasticsearch/commit/14cb3a98) Prepare for release v0.14.0 (#397) +- [846ea8ee](https://github.com/kubedb/elasticsearch/commit/846ea8ee) Prepare for release v0.14.0-rc.2 (#396) +- [709ba7d2](https://github.com/kubedb/elasticsearch/commit/709ba7d2) Prepare for release 
v0.14.0-rc.1 (#395) +- [58dac2ba](https://github.com/kubedb/elasticsearch/commit/58dac2ba) Prepare for release v0.14.0-beta.6 (#394) +- [5d4ad40c](https://github.com/kubedb/elasticsearch/commit/5d4ad40c) Update MergeServicePort and PatchServicePort apis (#393) +- [992edb90](https://github.com/kubedb/elasticsearch/commit/992edb90) Always set protocol for service ports +- [0f408cbf](https://github.com/kubedb/elasticsearch/commit/0f408cbf) Create SRV records for governing service (#392) +- [97f34417](https://github.com/kubedb/elasticsearch/commit/97f34417) Prepare for release v0.14.0-beta.5 (#391) +- [a3e9a733](https://github.com/kubedb/elasticsearch/commit/a3e9a733) Create separate governing service for each database (#390) +- [ce8f80b5](https://github.com/kubedb/elasticsearch/commit/ce8f80b5) Update KubeDB api (#389) +- [0fe8d617](https://github.com/kubedb/elasticsearch/commit/0fe8d617) Update readme +- [657797fe](https://github.com/kubedb/elasticsearch/commit/657797fe) Update repository config (#388) +- [d6f5ae41](https://github.com/kubedb/elasticsearch/commit/d6f5ae41) Prepare for release v0.14.0-beta.4 (#387) +- [149314b5](https://github.com/kubedb/elasticsearch/commit/149314b5) Update KubeDB api (#386) +- [1de4b578](https://github.com/kubedb/elasticsearch/commit/1de4b578) Make database's phase NotReady as soon as the halted is removed (#375) +- [57704afa](https://github.com/kubedb/elasticsearch/commit/57704afa) Update Kubernetes v1.18.9 dependencies (#385) +- [16d37657](https://github.com/kubedb/elasticsearch/commit/16d37657) Update Kubernetes v1.18.9 dependencies (#383) +- [828f8ab8](https://github.com/kubedb/elasticsearch/commit/828f8ab8) Update KubeDB api (#382) +- [d70e68a8](https://github.com/kubedb/elasticsearch/commit/d70e68a8) Update for release Stash@v2020.10.21 (#381) +- [05a687bc](https://github.com/kubedb/elasticsearch/commit/05a687bc) Fix init validator (#379) +- [24d7f2c8](https://github.com/kubedb/elasticsearch/commit/24d7f2c8) Update KubeDB api (#380) +- [8c981e08](https://github.com/kubedb/elasticsearch/commit/8c981e08) Update KubeDB api (#378) +- [cf833e49](https://github.com/kubedb/elasticsearch/commit/cf833e49) Update Kubernetes v1.18.9 dependencies (#377) +- [fb335a43](https://github.com/kubedb/elasticsearch/commit/fb335a43) Update KubeDB api (#376) +- [e652a7ec](https://github.com/kubedb/elasticsearch/commit/e652a7ec) Update KubeDB api (#374) +- [c22b7f31](https://github.com/kubedb/elasticsearch/commit/c22b7f31) Update KubeDB api (#373) +- [a7d8e3b0](https://github.com/kubedb/elasticsearch/commit/a7d8e3b0) Integrate cert-manager and status.conditions (#357) +- [370f0df1](https://github.com/kubedb/elasticsearch/commit/370f0df1) Update repository config (#372) +- [78bdc59e](https://github.com/kubedb/elasticsearch/commit/78bdc59e) Update repository config (#371) +- [b8003d4b](https://github.com/kubedb/elasticsearch/commit/b8003d4b) Update repository config (#370) +- [d4ff1ac2](https://github.com/kubedb/elasticsearch/commit/d4ff1ac2) Publish docker images to ghcr.io (#369) +- [5f5ef393](https://github.com/kubedb/elasticsearch/commit/5f5ef393) Update repository config (#363) +- [e537ae40](https://github.com/kubedb/elasticsearch/commit/e537ae40) Update Kubernetes v1.18.9 dependencies (#362) +- [a5a5b084](https://github.com/kubedb/elasticsearch/commit/a5a5b084) Update for release Stash@v2020.09.29 (#361) +- [11eebe39](https://github.com/kubedb/elasticsearch/commit/11eebe39) Update Kubernetes v1.18.9 dependencies (#360) +- 
+- [a5b47b08](https://github.com/kubedb/elasticsearch/commit/a5b47b08) Update Kubernetes v1.18.9 dependencies (#358)
+- [91f1dc00](https://github.com/kubedb/elasticsearch/commit/91f1dc00) Rename client node to ingest node (#346)
+- [318a8b19](https://github.com/kubedb/elasticsearch/commit/318a8b19) Update repository config (#356)
+- [a8773921](https://github.com/kubedb/elasticsearch/commit/a8773921) Update repository config (#355)
+- [55bef891](https://github.com/kubedb/elasticsearch/commit/55bef891) Update Kubernetes v1.18.9 dependencies (#354)
+- [1a3e421a](https://github.com/kubedb/elasticsearch/commit/1a3e421a) Use common event recorder (#353)
+- [4df32f60](https://github.com/kubedb/elasticsearch/commit/4df32f60) Update Kubernetes v1.18.3 dependencies (#352)
+- [9fb43795](https://github.com/kubedb/elasticsearch/commit/9fb43795) Prepare for release v0.14.0-beta.3 (#351)
+- [a279a60c](https://github.com/kubedb/elasticsearch/commit/a279a60c) Use new `spec.init` section (#350)
+- [a1e2e2f6](https://github.com/kubedb/elasticsearch/commit/a1e2e2f6) Update Kubernetes v1.18.3 dependencies (#349)
+- [0aaf4530](https://github.com/kubedb/elasticsearch/commit/0aaf4530) Add license verifier (#348)
+- [bbacb00b](https://github.com/kubedb/elasticsearch/commit/bbacb00b) Update for release Stash@v2020.09.16 (#347)
+- [98c1ad83](https://github.com/kubedb/elasticsearch/commit/98c1ad83) Update Kubernetes v1.18.3 dependencies (#345)
+- [1ebf168d](https://github.com/kubedb/elasticsearch/commit/1ebf168d) Use background propagation policy
+- [9d7997df](https://github.com/kubedb/elasticsearch/commit/9d7997df) Update Kubernetes v1.18.3 dependencies (#343)
+- [42786958](https://github.com/kubedb/elasticsearch/commit/42786958) Use AppsCode Community License (#342)
+- [a96b0bd3](https://github.com/kubedb/elasticsearch/commit/a96b0bd3) Fix unit tests (#341)
+- [c9905966](https://github.com/kubedb/elasticsearch/commit/c9905966) Update Kubernetes v1.18.3 dependencies (#340)
+- [3b83c316](https://github.com/kubedb/elasticsearch/commit/3b83c316) Prepare for release v0.14.0-beta.2 (#339)
+- [662823ae](https://github.com/kubedb/elasticsearch/commit/662823ae) Update release.yml
+- [ada6c2d3](https://github.com/kubedb/elasticsearch/commit/ada6c2d3) Add support for Open-Distro-for-Elasticsearch (#303)
+- [a9c7ba33](https://github.com/kubedb/elasticsearch/commit/a9c7ba33) Update Kubernetes v1.18.3 dependencies (#333)
+- [c67b1290](https://github.com/kubedb/elasticsearch/commit/c67b1290) Update Kubernetes v1.18.3 dependencies (#332)
+- [aa1d64ad](https://github.com/kubedb/elasticsearch/commit/aa1d64ad) Update Kubernetes v1.18.3 dependencies (#331)
+- [3d6c3e91](https://github.com/kubedb/elasticsearch/commit/3d6c3e91) Update Kubernetes v1.18.3 dependencies (#330)
+- [bb318e74](https://github.com/kubedb/elasticsearch/commit/bb318e74) Update Kubernetes v1.18.3 dependencies (#329)
+- [6b6b4d2d](https://github.com/kubedb/elasticsearch/commit/6b6b4d2d) Update Kubernetes v1.18.3 dependencies (#328)
+- [06cef782](https://github.com/kubedb/elasticsearch/commit/06cef782) Remove dependency on enterprise operator (#327)
+- [20a2c7d4](https://github.com/kubedb/elasticsearch/commit/20a2c7d4) Update to cert-manager v0.16.0 (#326)
+- [e767c356](https://github.com/kubedb/elasticsearch/commit/e767c356) Build images in e2e workflow (#325)
+- [ae696dbe](https://github.com/kubedb/elasticsearch/commit/ae696dbe) Update to Kubernetes v1.18.3 (#324)
+- [a511d8d6](https://github.com/kubedb/elasticsearch/commit/a511d8d6) Allow configuring k8s & db version in e2e tests (#323)
+- [a50b503d](https://github.com/kubedb/elasticsearch/commit/a50b503d) Trigger e2e tests on /ok-to-test command (#322)
+- [107faff2](https://github.com/kubedb/elasticsearch/commit/107faff2) Update to Kubernetes v1.18.3 (#321)
+- [60fb6d9b](https://github.com/kubedb/elasticsearch/commit/60fb6d9b) Update to Kubernetes v1.18.3 (#320)
+- [9aae4782](https://github.com/kubedb/elasticsearch/commit/9aae4782) Prepare for release v0.14.0-beta.1 (#319)
+- [312e5682](https://github.com/kubedb/elasticsearch/commit/312e5682) Update for release Stash@v2020.07.09-beta.0 (#317)
+- [681f3e87](https://github.com/kubedb/elasticsearch/commit/681f3e87) Include Makefile.env
+- [e460af51](https://github.com/kubedb/elasticsearch/commit/e460af51) Allow customizing chart registry (#316)
+- [64e15a33](https://github.com/kubedb/elasticsearch/commit/64e15a33) Update for release Stash@v2020.07.08-beta.0 (#315)
+- [1f2ef7a6](https://github.com/kubedb/elasticsearch/commit/1f2ef7a6) Update License (#314)
+- [16ce6c90](https://github.com/kubedb/elasticsearch/commit/16ce6c90) Update to Kubernetes v1.18.3 (#313)
+- [3357faa3](https://github.com/kubedb/elasticsearch/commit/3357faa3) Update ci.yml
+- [cb44a1eb](https://github.com/kubedb/elasticsearch/commit/cb44a1eb) Load stash version from .env file for make (#312)
+- [cf212019](https://github.com/kubedb/elasticsearch/commit/cf212019) Update update-release-tracker.sh
+- [5127428e](https://github.com/kubedb/elasticsearch/commit/5127428e) Update update-release-tracker.sh
+- [7f790940](https://github.com/kubedb/elasticsearch/commit/7f790940) Add script to update release tracker on pr merge (#311)
+- [340b6112](https://github.com/kubedb/elasticsearch/commit/340b6112) Update .kodiak.toml
+- [e01c4eec](https://github.com/kubedb/elasticsearch/commit/e01c4eec) Various fixes (#310)
+- [11517f71](https://github.com/kubedb/elasticsearch/commit/11517f71) Update to Kubernetes v1.18.3 (#309)
+- [53d7b117](https://github.com/kubedb/elasticsearch/commit/53d7b117) Update to Kubernetes v1.18.3
+- [7eacc7dd](https://github.com/kubedb/elasticsearch/commit/7eacc7dd) Create .kodiak.toml
+- [b91b23d9](https://github.com/kubedb/elasticsearch/commit/b91b23d9) Use CRD v1 for Kubernetes >= 1.16 (#308)
+- [08c1d2a8](https://github.com/kubedb/elasticsearch/commit/08c1d2a8) Update to Kubernetes v1.18.3 (#307)
+- [32cdb8a4](https://github.com/kubedb/elasticsearch/commit/32cdb8a4) Fix e2e tests (#306)
+- [0bca1a04](https://github.com/kubedb/elasticsearch/commit/0bca1a04) Merge pull request #302 from kubedb/multi-region
+- [bf0c26ee](https://github.com/kubedb/elasticsearch/commit/bf0c26ee) Revendor kubedb.dev/apimachinery@v0.14.0-beta.0
+- [7c00c63c](https://github.com/kubedb/elasticsearch/commit/7c00c63c) Add support for multi-regional cluster
+- [363322df](https://github.com/kubedb/elasticsearch/commit/363322df) Update stash install commands
+- [a0138a36](https://github.com/kubedb/elasticsearch/commit/a0138a36) Update crazy-max/ghaction-docker-buildx flag
+- [3076eb46](https://github.com/kubedb/elasticsearch/commit/3076eb46) Use updated operator labels in e2e tests (#304)
+- [d537b91b](https://github.com/kubedb/elasticsearch/commit/d537b91b) Pass annotations from CRD to AppBinding (#305)
+- [48f9399c](https://github.com/kubedb/elasticsearch/commit/48f9399c) Trigger the workflow on push or pull request
+- [7b8d56cb](https://github.com/kubedb/elasticsearch/commit/7b8d56cb) Update CHANGELOG.md
+- [939f6882](https://github.com/kubedb/elasticsearch/commit/939f6882) Update labelSelector for statefulsets (#300)
+- [ed1c0553](https://github.com/kubedb/elasticsearch/commit/ed1c0553) Make master service headless & add rest-port to all db nodes (#299)
+- [b7e7c8d7](https://github.com/kubedb/elasticsearch/commit/b7e7c8d7) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#301)
+- [e51555d5](https://github.com/kubedb/elasticsearch/commit/e51555d5) Introduce spec.halted and removed dormant and snapshot crd (#296)
+- [8255276f](https://github.com/kubedb/elasticsearch/commit/8255276f) Add spec.selector fields to the governing service (#297)
+- [13bc760f](https://github.com/kubedb/elasticsearch/commit/13bc760f) Use stash@v0.9.0-rc.4 release (#298)
+- [6a21fb86](https://github.com/kubedb/elasticsearch/commit/6a21fb86) Add `Pause` feature (#295)
+- [1b25070c](https://github.com/kubedb/elasticsearch/commit/1b25070c) Refactor CI pipeline to build once (#294)
+- [ace3d779](https://github.com/kubedb/elasticsearch/commit/ace3d779) Fix e2e tests on GitHub actions (#292)
+- [7a7eb8d1](https://github.com/kubedb/elasticsearch/commit/7a7eb8d1) fix bug (#293)
+- [0641649e](https://github.com/kubedb/elasticsearch/commit/0641649e) Use Go 1.13 in CI (#291)
+- [97790e1e](https://github.com/kubedb/elasticsearch/commit/97790e1e) Take out elasticsearch docker images and Matrix test (#289)
+- [3a20c1db](https://github.com/kubedb/elasticsearch/commit/3a20c1db) Fix default make command
+- [ece073a2](https://github.com/kubedb/elasticsearch/commit/ece073a2) Update catalog values for make install command
+- [8df4697b](https://github.com/kubedb/elasticsearch/commit/8df4697b) Use charts to install operator (#290)
+- [5cbde391](https://github.com/kubedb/elasticsearch/commit/5cbde391) Add add-license make target
+- [b7012bc5](https://github.com/kubedb/elasticsearch/commit/b7012bc5) Skip libbuild folder from checking license
+- [d56db3a0](https://github.com/kubedb/elasticsearch/commit/d56db3a0) Add license header to files (#288)
+- [1d0c368a](https://github.com/kubedb/elasticsearch/commit/1d0c368a) Enable make ci (#287)
+- [2e835dff](https://github.com/kubedb/elasticsearch/commit/2e835dff) Remove EnableStatusSubresource (#286)
+- [bcd0ebd9](https://github.com/kubedb/elasticsearch/commit/bcd0ebd9) Fix E2E tests in github action (#285)
+- [865dc774](https://github.com/kubedb/elasticsearch/commit/865dc774) Prepare v0.13.0-rc.1 release (#284)
+- [9348ba60](https://github.com/kubedb/elasticsearch/commit/9348ba60) Run e2e tests using GitHub actions (#283)
+- [aea27214](https://github.com/kubedb/elasticsearch/commit/aea27214) Validate DBVersionSpecs and fixed broken build (#282)
+- [cb48734f](https://github.com/kubedb/elasticsearch/commit/cb48734f) Update elasticdump version for es7.3, 7.2 and 6.8 (#281)
+- [7bd05be9](https://github.com/kubedb/elasticsearch/commit/7bd05be9) Added Es7.3 (#280)
+- [ca0cc981](https://github.com/kubedb/elasticsearch/commit/ca0cc981) Added support for Xpack in es6.8 and es7.2 (#278)
+- [c5f54840](https://github.com/kubedb/elasticsearch/commit/c5f54840) Add support for 7.2.0 (#268)
+- [0531464e](https://github.com/kubedb/elasticsearch/commit/0531464e) Run e2e tests using GitHub actions (#279)
+- [67965e99](https://github.com/kubedb/elasticsearch/commit/67965e99) Update go.yml
+- [a5d848ca](https://github.com/kubedb/elasticsearch/commit/a5d848ca) Enable GitHub actions
+- [723383c6](https://github.com/kubedb/elasticsearch/commit/723383c6) Fixed snapshot for 6.8.0 (#276)
+- [c5421e61](https://github.com/kubedb/elasticsearch/commit/c5421e61) Add support for Elasticsearch 6.8.0 (#265)
+- [4b665fbb](https://github.com/kubedb/elasticsearch/commit/4b665fbb) Support configuration options for exporter sidecar (#275)
+- [04ac9729](https://github.com/kubedb/elasticsearch/commit/04ac9729) Update changelog
+- [d4b41b55](https://github.com/kubedb/elasticsearch/commit/d4b41b55) Remove linux/arm support
+- [f99f7e8f](https://github.com/kubedb/elasticsearch/commit/f99f7e8f) Revendor
+- [61e89f4b](https://github.com/kubedb/elasticsearch/commit/61e89f4b) Add e2e test commands to Makefile (#274)
+- [7e56a7c0](https://github.com/kubedb/elasticsearch/commit/7e56a7c0) Use docker buildx to build docker image
+- [3dd46b3f](https://github.com/kubedb/elasticsearch/commit/3dd46b3f) Update dependencies (#273)
+- [73edeb82](https://github.com/kubedb/elasticsearch/commit/73edeb82) Don't set annotation to AppBinding (#272)
+- [699d41ce](https://github.com/kubedb/elasticsearch/commit/699d41ce) Set database version in AppBinding (#271)
+- [cf3522a5](https://github.com/kubedb/elasticsearch/commit/cf3522a5) Fix travis yaml
+- [a410aed9](https://github.com/kubedb/elasticsearch/commit/a410aed9) Change package path to kubedb.dev/elasticsearch (#270)
+- [2cebf0eb](https://github.com/kubedb/elasticsearch/commit/2cebf0eb) Support initializing from stash restoresession. (#267)
+- [c363bf68](https://github.com/kubedb/elasticsearch/commit/c363bf68) Pod Disruption Budget for Elasticsearch (#262)
+- [27a27b04](https://github.com/kubedb/elasticsearch/commit/27a27b04) Fix UpsertDatabaseAnnotation() function (#266)
+- [780837b3](https://github.com/kubedb/elasticsearch/commit/780837b3) Add license header to Makefiles (#269)
+- [35e5b873](https://github.com/kubedb/elasticsearch/commit/35e5b873) Update Makefile
+- [00328da8](https://github.com/kubedb/elasticsearch/commit/00328da8) Add install, uninstall and purge command in makefile (#264)
+- [7445850e](https://github.com/kubedb/elasticsearch/commit/7445850e) Update .gitignore
+- [ba0f0140](https://github.com/kubedb/elasticsearch/commit/ba0f0140) Handling resource ownership (#261)
+- [59328b8d](https://github.com/kubedb/elasticsearch/commit/59328b8d) Add Makefile (#263)
+- [e8fbea4b](https://github.com/kubedb/elasticsearch/commit/e8fbea4b) Update to k8s 1.14.0 client libraries using go.mod (#260)
+- [8adbe567](https://github.com/kubedb/elasticsearch/commit/8adbe567) Update README.md
+- [3ca67679](https://github.com/kubedb/elasticsearch/commit/3ca67679) Start next dev cycle
+
+
+
+## [kubedb/installer](https://github.com/kubedb/installer)
+
+### [v0.14.0](https://github.com/kubedb/installer/releases/tag/v0.14.0)
+
+
+
+
+## [kubedb/memcached](https://github.com/kubedb/memcached)
+
+### [v0.7.0](https://github.com/kubedb/memcached/releases/tag/v0.7.0)
+
+- [e98bc3a2](https://github.com/kubedb/memcached/commit/e98bc3a2) Prepare for release v0.7.0 (#229)
+- [a63e015b](https://github.com/kubedb/memcached/commit/a63e015b) Prepare for release v0.7.0-rc.2 (#228)
+- [85527a82](https://github.com/kubedb/memcached/commit/85527a82) Prepare for release v0.7.0-rc.1 (#227)
+- [704cf9f2](https://github.com/kubedb/memcached/commit/704cf9f2) Prepare for release v0.7.0-beta.6 (#226)
+- [47039c68](https://github.com/kubedb/memcached/commit/47039c68) Create SRV records for governing service (#225)
+- [0fbfc766](https://github.com/kubedb/memcached/commit/0fbfc766) Prepare for release v0.7.0-beta.5 (#224)
+- [7a01e878](https://github.com/kubedb/memcached/commit/7a01e878) Create separate governing service for each database (#223)
+- [6cecdfec](https://github.com/kubedb/memcached/commit/6cecdfec) Update KubeDB api (#222)
+- [5942b1ff](https://github.com/kubedb/memcached/commit/5942b1ff) Update readme
+- [49da218c](https://github.com/kubedb/memcached/commit/49da218c) Prepare for release v0.7.0-beta.4 (#221)
+- [25677a68](https://github.com/kubedb/memcached/commit/25677a68) Update KubeDB api (#220)
+- [b4cd7a06](https://github.com/kubedb/memcached/commit/b4cd7a06) Update Kubernetes v1.18.9 dependencies (#219)
+- [553c98d4](https://github.com/kubedb/memcached/commit/553c98d4) Update KubeDB api (#218)
+- [2e9af5f1](https://github.com/kubedb/memcached/commit/2e9af5f1) Update KubeDB api (#217)
+- [86b20622](https://github.com/kubedb/memcached/commit/86b20622) Update KubeDB api (#216)
+- [8a46e900](https://github.com/kubedb/memcached/commit/8a46e900) Update Kubernetes v1.18.9 dependencies (#215)
+- [366531e0](https://github.com/kubedb/memcached/commit/366531e0) Update KubeDB api (#214)
+- [1a45a5d3](https://github.com/kubedb/memcached/commit/1a45a5d3) Update KubeDB api (#213)
+- [40afd78d](https://github.com/kubedb/memcached/commit/40afd78d) Update KubeDB api (#212)
+- [bee3d626](https://github.com/kubedb/memcached/commit/bee3d626) Update KubeDB api (#211)
+- [3a71917a](https://github.com/kubedb/memcached/commit/3a71917a) Update Kubernetes v1.18.9 dependencies (#210)
+- [efaeb8f1](https://github.com/kubedb/memcached/commit/efaeb8f1) Update KubeDB api (#209)
+- [f8bcc2ac](https://github.com/kubedb/memcached/commit/f8bcc2ac) Update KubeDB api (#208)
+- [de050491](https://github.com/kubedb/memcached/commit/de050491) Update KubeDB api (#207)
+- [f59d7b22](https://github.com/kubedb/memcached/commit/f59d7b22) Update repository config (#206)
+- [ef1b61d7](https://github.com/kubedb/memcached/commit/ef1b61d7) Update repository config (#205)
+- [2401e6a4](https://github.com/kubedb/memcached/commit/2401e6a4) Update repository config (#204)
+- [59b4a20b](https://github.com/kubedb/memcached/commit/59b4a20b) Update KubeDB api (#203)
+- [7ceab937](https://github.com/kubedb/memcached/commit/7ceab937) Update Kubernetes v1.18.9 dependencies (#202)
+- [22ed0d2f](https://github.com/kubedb/memcached/commit/22ed0d2f) Publish docker images to ghcr.io (#201)
+- [059535f1](https://github.com/kubedb/memcached/commit/059535f1) Update KubeDB api (#200)
+- [480c5281](https://github.com/kubedb/memcached/commit/480c5281) Update KubeDB api (#199)
+- [60980557](https://github.com/kubedb/memcached/commit/60980557) Update KubeDB api (#198)
+- [57091fac](https://github.com/kubedb/memcached/commit/57091fac) Update KubeDB api (#197)
+- [4fa3793d](https://github.com/kubedb/memcached/commit/4fa3793d) Update repository config (#196)
+- [9891c8e3](https://github.com/kubedb/memcached/commit/9891c8e3) Update Kubernetes v1.18.9 dependencies (#195)
+- [d4dbb4a6](https://github.com/kubedb/memcached/commit/d4dbb4a6) Update KubeDB api (#192)
+- [8e27b6ef](https://github.com/kubedb/memcached/commit/8e27b6ef) Update Kubernetes v1.18.9 dependencies (#193)
+- [f8fefd18](https://github.com/kubedb/memcached/commit/f8fefd18) Update Kubernetes v1.18.9 dependencies (#191)
+- [0c8250d9](https://github.com/kubedb/memcached/commit/0c8250d9) Update repository config (#190)
+- [08cd9670](https://github.com/kubedb/memcached/commit/08cd9670) Update repository config (#189)
+- [c15513f2](https://github.com/kubedb/memcached/commit/c15513f2) Update Kubernetes v1.18.9 dependencies (#188)
+- [f6115aaa](https://github.com/kubedb/memcached/commit/f6115aaa) Use common event recorder (#187)
+- [bbf717a9](https://github.com/kubedb/memcached/commit/bbf717a9) Update Kubernetes v1.18.3 dependencies (#186)
+- [a9edb56d](https://github.com/kubedb/memcached/commit/a9edb56d) Prepare for release v0.7.0-beta.3 (#185)
+- [ce99d040](https://github.com/kubedb/memcached/commit/ce99d040) Update Kubernetes v1.18.3 dependencies (#184)
+- [dd19f634](https://github.com/kubedb/memcached/commit/dd19f634) Add license verifier (#183)
+- [fc482bc2](https://github.com/kubedb/memcached/commit/fc482bc2) Update Kubernetes v1.18.3 dependencies (#182)
+- [b85b6742](https://github.com/kubedb/memcached/commit/b85b6742) Use background deletion policy
+- [d46f41f9](https://github.com/kubedb/memcached/commit/d46f41f9) Update Kubernetes v1.18.3 dependencies (#180)
+- [d958c241](https://github.com/kubedb/memcached/commit/d958c241) Use AppsCode Community License (#179)
+- [64410c11](https://github.com/kubedb/memcached/commit/64410c11) Update Kubernetes v1.18.3 dependencies (#178)
+- [b8fe927b](https://github.com/kubedb/memcached/commit/b8fe927b) Prepare for release v0.7.0-beta.2 (#177)
+- [0f5014d2](https://github.com/kubedb/memcached/commit/0f5014d2) Update release.yml
+- [1b627013](https://github.com/kubedb/memcached/commit/1b627013) Remove updateStrategy field (#176)
+- [66f008d6](https://github.com/kubedb/memcached/commit/66f008d6) Update Kubernetes v1.18.3 dependencies (#175)
+- [09ff8589](https://github.com/kubedb/memcached/commit/09ff8589) Update Kubernetes v1.18.3 dependencies (#174)
+- [92e344d8](https://github.com/kubedb/memcached/commit/92e344d8) Update Kubernetes v1.18.3 dependencies (#173)
+- [51e977f3](https://github.com/kubedb/memcached/commit/51e977f3) Update Kubernetes v1.18.3 dependencies (#172)
+- [f32d7e9c](https://github.com/kubedb/memcached/commit/f32d7e9c) Update Kubernetes v1.18.3 dependencies (#171)
+- [2cdba698](https://github.com/kubedb/memcached/commit/2cdba698) Update Kubernetes v1.18.3 dependencies (#170)
+- [9486876e](https://github.com/kubedb/memcached/commit/9486876e) Update Kubernetes v1.18.3 dependencies (#169)
+- [81648447](https://github.com/kubedb/memcached/commit/81648447) Update Kubernetes v1.18.3 dependencies (#168)
+- [e9c3f98d](https://github.com/kubedb/memcached/commit/e9c3f98d) Fix install target
+- [6dff8f7b](https://github.com/kubedb/memcached/commit/6dff8f7b) Remove dependency on enterprise operator (#167)
+- [707d4d83](https://github.com/kubedb/memcached/commit/707d4d83) Build images in e2e workflow (#166)
+- [ff1b144e](https://github.com/kubedb/memcached/commit/ff1b144e) Allow configuring k8s & db version in e2e tests (#165)
+- [0b1699d8](https://github.com/kubedb/memcached/commit/0b1699d8) Update to Kubernetes v1.18.3 (#164)
+- [b141122a](https://github.com/kubedb/memcached/commit/b141122a) Trigger e2e tests on /ok-to-test command (#163)
+- [36b03266](https://github.com/kubedb/memcached/commit/36b03266) Update to Kubernetes v1.18.3 (#162)
+- [3ede9dcc](https://github.com/kubedb/memcached/commit/3ede9dcc) Update to Kubernetes v1.18.3 (#161)
+- [3f7c1b90](https://github.com/kubedb/memcached/commit/3f7c1b90) Prepare for release v0.7.0-beta.1 (#160)
+- [1278cd57](https://github.com/kubedb/memcached/commit/1278cd57) include Makefile.env (#158)
+- [676222b7](https://github.com/kubedb/memcached/commit/676222b7) Update License (#157)
+- [216fdcd4](https://github.com/kubedb/memcached/commit/216fdcd4) Update to Kubernetes v1.18.3 (#156)
+- [dc59abf4](https://github.com/kubedb/memcached/commit/dc59abf4) Update ci.yml
+- [071589c5](https://github.com/kubedb/memcached/commit/071589c5) Update update-release-tracker.sh
+- [79bc96d8](https://github.com/kubedb/memcached/commit/79bc96d8) Update update-release-tracker.sh
+- [31f5fca6](https://github.com/kubedb/memcached/commit/31f5fca6) Add script to update release tracker on pr merge (#155)
+- [05d1d6ab](https://github.com/kubedb/memcached/commit/05d1d6ab) Update .kodiak.toml
+- [522b617f](https://github.com/kubedb/memcached/commit/522b617f) Various fixes (#154)
+- [2ed2c3a0](https://github.com/kubedb/memcached/commit/2ed2c3a0) Update to Kubernetes v1.18.3 (#152)
+- [10cea9ad](https://github.com/kubedb/memcached/commit/10cea9ad) Update to Kubernetes v1.18.3
+- [582177b0](https://github.com/kubedb/memcached/commit/582177b0) Create .kodiak.toml
+- [bf1900b6](https://github.com/kubedb/memcached/commit/bf1900b6) Run flaky e2e test (#151)
+- [aa09abfc](https://github.com/kubedb/memcached/commit/aa09abfc) Use CRD v1 for Kubernetes >= 1.16 (#150)
+- [b2586151](https://github.com/kubedb/memcached/commit/b2586151) Merge pull request #146 from pohly/pmem
+- [dbd5b2b0](https://github.com/kubedb/memcached/commit/dbd5b2b0) Fix build
+- [d0722c34](https://github.com/kubedb/memcached/commit/d0722c34) WIP: implement PMEM support
+- [f16b1198](https://github.com/kubedb/memcached/commit/f16b1198) Makefile: adapt to recent installer repo changes
+- [32f71c56](https://github.com/kubedb/memcached/commit/32f71c56) Makefile: support e2e testing with arbitrary KUBECONFIG file
+- [6ed07efc](https://github.com/kubedb/memcached/commit/6ed07efc) Update to Kubernetes v1.18.3 (#149)
+- [ce702669](https://github.com/kubedb/memcached/commit/ce702669) Fix e2e tests (#148)
+- [18917f8d](https://github.com/kubedb/memcached/commit/18917f8d) Revendor kubedb.dev/apimachinery@master (#147)
+- [e51d327c](https://github.com/kubedb/memcached/commit/e51d327c) Update crazy-max/ghaction-docker-buildx flag
+- [1202c059](https://github.com/kubedb/memcached/commit/1202c059) Use updated operator labels in e2e tests (#144)
+- [e02d42a4](https://github.com/kubedb/memcached/commit/e02d42a4) Pass annotations from CRD to AppBinding (#145)
+- [2c91d63b](https://github.com/kubedb/memcached/commit/2c91d63b) Trigger the workflow on push or pull request
+- [67c83a9a](https://github.com/kubedb/memcached/commit/67c83a9a) Update CHANGELOG.md
+- [85e3cf54](https://github.com/kubedb/memcached/commit/85e3cf54) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#143)
+- [e61dd2e6](https://github.com/kubedb/memcached/commit/e61dd2e6) Update error msg to reject halt when termination policy is 'DoNotTerminate'
+- [bc079b7b](https://github.com/kubedb/memcached/commit/bc079b7b) Introduce spec.halted and removed dormant crd (#142)
+- [f31610c3](https://github.com/kubedb/memcached/commit/f31610c3) Refactor CI pipeline to run build once (#141)
+- [f5eec5e4](https://github.com/kubedb/memcached/commit/f5eec5e4) Update kubernetes client-go to 1.16.3 (#140)
+- [f645174a](https://github.com/kubedb/memcached/commit/f645174a) Update catalog values for make install command
+- [2a297c89](https://github.com/kubedb/memcached/commit/2a297c89) Use charts to install operator (#139)
+- [83e2ba17](https://github.com/kubedb/memcached/commit/83e2ba17) Moved out docker files and added matrix github actions ci/cd (#138)
+- [97e3a5bd](https://github.com/kubedb/memcached/commit/97e3a5bd) Add add-license make target
+- [7b79fbfe](https://github.com/kubedb/memcached/commit/7b79fbfe) Add license header to files (#137)
+- [2afa406f](https://github.com/kubedb/memcached/commit/2afa406f) Enable make ci (#136)
+- [bab32534](https://github.com/kubedb/memcached/commit/bab32534) Remove EnableStatusSubresource (#135)
+- [b28d4b1e](https://github.com/kubedb/memcached/commit/b28d4b1e) Prepare v0.6.0-rc.1 release (#134)
+- [c7aad4dd](https://github.com/kubedb/memcached/commit/c7aad4dd) Run e2e tests using GitHub actions (#132)
+- [66efb2de](https://github.com/kubedb/memcached/commit/66efb2de) Validate DBVersionSpecs and fixed broken build (#133)
+- [c16091ba](https://github.com/kubedb/memcached/commit/c16091ba) Update go.yml
+- [69b2dc70](https://github.com/kubedb/memcached/commit/69b2dc70) Enable GitHub actions
+- [93edca95](https://github.com/kubedb/memcached/commit/93edca95) Update changelog
+- [d46cda1f](https://github.com/kubedb/memcached/commit/d46cda1f) Remove linux/arm support
+- [378ad5e1](https://github.com/kubedb/memcached/commit/378ad5e1) Revendor
+- [85f05095](https://github.com/kubedb/memcached/commit/85f05095) Improve test: Use installed memcachedversions (#131)
+- [47444708](https://github.com/kubedb/memcached/commit/47444708) Use docker buildx to build docker image xref: https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/getting-started-with-docker-for-arm-on-linux
+- [ee1082b4](https://github.com/kubedb/memcached/commit/ee1082b4) Update dependencies (#130)
+- [ac9391b5](https://github.com/kubedb/memcached/commit/ac9391b5) Don't set annotation to AppBinding (#129)
+- [4a2f42a1](https://github.com/kubedb/memcached/commit/4a2f42a1) Set database version in AppBinding (#128)
+- [dab1a2fc](https://github.com/kubedb/memcached/commit/dab1a2fc) Change package path to kubedb.dev/memcached (#127)
+- [3e92762f](https://github.com/kubedb/memcached/commit/3e92762f) Add license header to Makefiles (#126)
+- [50491fd3](https://github.com/kubedb/memcached/commit/50491fd3) Update Makefile
+- [66e955a2](https://github.com/kubedb/memcached/commit/66e955a2) Add install, uninstall and purge command in Makefile (#125)
+- [9f4b0865](https://github.com/kubedb/memcached/commit/9f4b0865) Update .gitignore
+- [9a760f7c](https://github.com/kubedb/memcached/commit/9a760f7c) Pod Disruption Budget for Memcached (#123)
+- [ced8e75c](https://github.com/kubedb/memcached/commit/ced8e75c) Handling resource ownership (#122)
+- [0a01c5ea](https://github.com/kubedb/memcached/commit/0a01c5ea) Add Makefile (#124)
+- [4a80b9af](https://github.com/kubedb/memcached/commit/4a80b9af) Update to k8s 1.14.0 client libraries using go.mod (#121)
+- [a7bbcf54](https://github.com/kubedb/memcached/commit/a7bbcf54) Update README.md
+- [44ddb0d6](https://github.com/kubedb/memcached/commit/44ddb0d6) Start next dev cycle
+
+
+
+## [kubedb/mongodb](https://github.com/kubedb/mongodb)
+
+### [v0.7.0](https://github.com/kubedb/mongodb/releases/tag/v0.7.0)
+
+- [eceea248](https://github.com/kubedb/mongodb/commit/eceea248) Prepare for release v0.7.0 (#299)
+- [d87c55bb](https://github.com/kubedb/mongodb/commit/d87c55bb) Prepare for release v0.7.0-rc.2 (#298)
+- [f428010d](https://github.com/kubedb/mongodb/commit/f428010d) Prepare for release v0.7.0-rc.1 (#297)
+- [0d32b697](https://github.com/kubedb/mongodb/commit/0d32b697) Prepare for release v0.7.0-beta.6 (#296)
+- [1f75de65](https://github.com/kubedb/mongodb/commit/1f75de65) Update MergeServicePort and PatchServicePort apis (#295)
+- [984fd7c2](https://github.com/kubedb/mongodb/commit/984fd7c2) Create SRV records for governing service (#294)
+- [fc973dd0](https://github.com/kubedb/mongodb/commit/fc973dd0) Make database's phase NotReady as soon as the halted is removed (#293)
+- [f1818bb1](https://github.com/kubedb/mongodb/commit/f1818bb1) Prepare for release v0.7.0-beta.5 (#292)
+- [7d1586f7](https://github.com/kubedb/mongodb/commit/7d1586f7) Create separate governing service for each database (#291)
+- [1e281abb](https://github.com/kubedb/mongodb/commit/1e281abb) Update KubeDB api (#290)
+- [23d8785f](https://github.com/kubedb/mongodb/commit/23d8785f) Update readme
+- [007e3ccd](https://github.com/kubedb/mongodb/commit/007e3ccd) Prepare for release v0.7.0-beta.4 (#289)
+- [11f6573e](https://github.com/kubedb/mongodb/commit/11f6573e) Update MongoDB Conditions (#280)
+- [a964af9b](https://github.com/kubedb/mongodb/commit/a964af9b) Update KubeDB api (#288)
+- [38fd31b3](https://github.com/kubedb/mongodb/commit/38fd31b3) Update Kubernetes v1.18.9 dependencies (#287)
+- [b0110bea](https://github.com/kubedb/mongodb/commit/b0110bea) Update KubeDB api (#286)
+- [bfad7e48](https://github.com/kubedb/mongodb/commit/bfad7e48) Update for release Stash@v2020.10.21 (#285)
+- [2eebd6eb](https://github.com/kubedb/mongodb/commit/2eebd6eb) Fix init validator (#283)
+- [7912e726](https://github.com/kubedb/mongodb/commit/7912e726) Update KubeDB api (#284)
+- [ebf85b6d](https://github.com/kubedb/mongodb/commit/ebf85b6d) Update KubeDB api (#282)
+- [7fa4958c](https://github.com/kubedb/mongodb/commit/7fa4958c) Update Kubernetes v1.18.9 dependencies (#281)
+- [705843b8](https://github.com/kubedb/mongodb/commit/705843b8) Use MongoDBCustomConfigFile constant
+- [dac6262d](https://github.com/kubedb/mongodb/commit/dac6262d) Update KubeDB api (#279)
+- [7e7a960e](https://github.com/kubedb/mongodb/commit/7e7a960e) Update KubeDB api (#278)
+- [aed9bd49](https://github.com/kubedb/mongodb/commit/aed9bd49) Update KubeDB api (#277)
+- [18ec2e99](https://github.com/kubedb/mongodb/commit/18ec2e99) Update Kubernetes v1.18.9 dependencies (#276)
+- [dbec1f66](https://github.com/kubedb/mongodb/commit/dbec1f66) Update KubeDB api (#275)
+- [ad028b51](https://github.com/kubedb/mongodb/commit/ad028b51) Update KubeDB api (#274)
+- [a21dfd6a](https://github.com/kubedb/mongodb/commit/a21dfd6a) Update KubeDB api (#272)
+- [932ac34b](https://github.com/kubedb/mongodb/commit/932ac34b) Update repository config (#271)
+- [3f52a364](https://github.com/kubedb/mongodb/commit/3f52a364) Update repository config (#270)
+- [d3bf87db](https://github.com/kubedb/mongodb/commit/d3bf87db) Initialize statefulset watcher from cmd/server/options.go (#269)
+- [e3e15b7f](https://github.com/kubedb/mongodb/commit/e3e15b7f) Update KubeDB api (#268)
+- [406ae5a2](https://github.com/kubedb/mongodb/commit/406ae5a2) Update Kubernetes v1.18.9 dependencies (#267)
+- [0339503d](https://github.com/kubedb/mongodb/commit/0339503d) Publish docker images to ghcr.io (#266)
+- [ffccdc3c](https://github.com/kubedb/mongodb/commit/ffccdc3c) Update KubeDB api (#265)
+- [05b7a0bd](https://github.com/kubedb/mongodb/commit/05b7a0bd) Update KubeDB api (#264)
+- [d6447024](https://github.com/kubedb/mongodb/commit/d6447024) Update KubeDB api (#263)
+- [e7c1e3a3](https://github.com/kubedb/mongodb/commit/e7c1e3a3) Update KubeDB api (#262)
+- [5647960a](https://github.com/kubedb/mongodb/commit/5647960a) Update repository config (#261)
+- [e7481d8d](https://github.com/kubedb/mongodb/commit/e7481d8d) Use conditions to handle initialization (#258)
+- [d406586a](https://github.com/kubedb/mongodb/commit/d406586a) Update Kubernetes v1.18.9 dependencies (#260)
+- [93708d02](https://github.com/kubedb/mongodb/commit/93708d02) Remove redundant volume mounts (#259)
+- [bf28af80](https://github.com/kubedb/mongodb/commit/bf28af80) Update for release Stash@v2020.09.29 (#257)
+- [b34e2326](https://github.com/kubedb/mongodb/commit/b34e2326) Update Kubernetes v1.18.9 dependencies (#256)
+- [86e84d48](https://github.com/kubedb/mongodb/commit/86e84d48) Remove bootstrap container (#248)
+- [0b66e225](https://github.com/kubedb/mongodb/commit/0b66e225) Update Kubernetes v1.18.9 dependencies (#254)
+- [1a06f223](https://github.com/kubedb/mongodb/commit/1a06f223) Update repository config (#253)
+- [c199b164](https://github.com/kubedb/mongodb/commit/c199b164) Update repository config (#252)
+- [1268868d](https://github.com/kubedb/mongodb/commit/1268868d) Update Kubernetes v1.18.9 dependencies (#251)
+- [de63158f](https://github.com/kubedb/mongodb/commit/de63158f) Use common event recorder (#249)
+- [2f96b75a](https://github.com/kubedb/mongodb/commit/2f96b75a) Update Kubernetes v1.18.3 dependencies (#250)
+- [2867a4ef](https://github.com/kubedb/mongodb/commit/2867a4ef) Prepare for release v0.7.0-beta.3 (#247)
+- [8e6c12e7](https://github.com/kubedb/mongodb/commit/8e6c12e7) Use new `spec.init` section (#246)
+- [96aefe31](https://github.com/kubedb/mongodb/commit/96aefe31) Update Kubernetes v1.18.3 dependencies (#245)
+- [59e2a89c](https://github.com/kubedb/mongodb/commit/59e2a89c) Add license verifier (#244)
+- [2824cb71](https://github.com/kubedb/mongodb/commit/2824cb71) Update for release Stash@v2020.09.16 (#243)
+- [3c626235](https://github.com/kubedb/mongodb/commit/3c626235) Update Kubernetes v1.18.3 dependencies (#242)
+- [86b205ef](https://github.com/kubedb/mongodb/commit/86b205ef) Update Constants (#241)
+- [1910e947](https://github.com/kubedb/mongodb/commit/1910e947) Use common constant across MongoDB Community and Enterprise operator (#240)
+- [05364676](https://github.com/kubedb/mongodb/commit/05364676) Run e2e tests from kubedb/tests repo (#238)
+- [80a78fe7](https://github.com/kubedb/mongodb/commit/80a78fe7) Set Delete Propagation Policy to Background (#237)
+- [9a9d101c](https://github.com/kubedb/mongodb/commit/9a9d101c) Update Kubernetes v1.18.3 dependencies (#236)
+- [d596ca68](https://github.com/kubedb/mongodb/commit/d596ca68) Use AppsCode Community License (#235)
+- [8fd389de](https://github.com/kubedb/mongodb/commit/8fd389de) Prepare for release v0.7.0-beta.2 (#234)
+- [3e4981ee](https://github.com/kubedb/mongodb/commit/3e4981ee) Update release.yml
+- [c1d5cdb8](https://github.com/kubedb/mongodb/commit/c1d5cdb8) Always use OnDelete UpdateStrategy (#233)
+- [a135b2c7](https://github.com/kubedb/mongodb/commit/a135b2c7) Fix build (#232)
+- [cfb1788b](https://github.com/kubedb/mongodb/commit/cfb1788b) Use updated certificate spec (#221)
+- [486e820a](https://github.com/kubedb/mongodb/commit/486e820a) Remove `storage` Validation Check (#231)
+- [12e621ed](https://github.com/kubedb/mongodb/commit/12e621ed) Update Kubernetes v1.18.3 dependencies (#225)
+- [0d7ea7d7](https://github.com/kubedb/mongodb/commit/0d7ea7d7) Update Kubernetes v1.18.3 dependencies (#224)
+- [e79d1dfe](https://github.com/kubedb/mongodb/commit/e79d1dfe) Update Kubernetes v1.18.3 dependencies (#223)
+- [d0ff5e1d](https://github.com/kubedb/mongodb/commit/d0ff5e1d) Update Kubernetes v1.18.3 dependencies (#222)
+- [d22ade32](https://github.com/kubedb/mongodb/commit/d22ade32) Add `inMemory` Storage Engine Support for Percona MongoDB Server (#205)
+- [90847996](https://github.com/kubedb/mongodb/commit/90847996) Update Kubernetes v1.18.3 dependencies (#220)
+- [1098974f](https://github.com/kubedb/mongodb/commit/1098974f) Update Kubernetes v1.18.3 dependencies (#219)
+- [e7d1407a](https://github.com/kubedb/mongodb/commit/e7d1407a) Fix install target
+- [a5742d11](https://github.com/kubedb/mongodb/commit/a5742d11) Remove dependency on enterprise operator (#218)
+- [1de4fbee](https://github.com/kubedb/mongodb/commit/1de4fbee) Build images in e2e workflow (#217)
+- [b736c57e](https://github.com/kubedb/mongodb/commit/b736c57e) Update to Kubernetes v1.18.3 (#216)
+- [180ae28d](https://github.com/kubedb/mongodb/commit/180ae28d) Allow configuring k8s & db version in e2e tests (#215)
+- [c2f09a6f](https://github.com/kubedb/mongodb/commit/c2f09a6f) Trigger e2e tests on /ok-to-test command (#214)
+- [c1c7fa39](https://github.com/kubedb/mongodb/commit/c1c7fa39) Update to Kubernetes v1.18.3 (#213)
+- [8fb6cf78](https://github.com/kubedb/mongodb/commit/8fb6cf78) Update to Kubernetes v1.18.3 (#212)
+- [b82a8fa7](https://github.com/kubedb/mongodb/commit/b82a8fa7) Prepare for release v0.7.0-beta.1 (#211)
+- [a63d53ae](https://github.com/kubedb/mongodb/commit/a63d53ae) Update for release Stash@v2020.07.09-beta.0 (#209)
+- [4e33e978](https://github.com/kubedb/mongodb/commit/4e33e978) include Makefile.env
+- [1aa81a18](https://github.com/kubedb/mongodb/commit/1aa81a18) Allow customizing chart registry (#208)
+- [05355e75](https://github.com/kubedb/mongodb/commit/05355e75) Update for release Stash@v2020.07.08-beta.0 (#207)
+- [4f6be7b4](https://github.com/kubedb/mongodb/commit/4f6be7b4) Update License (#206)
+- [cc54f7d3](https://github.com/kubedb/mongodb/commit/cc54f7d3) Update to Kubernetes v1.18.3 (#204)
+- [d1a51b8e](https://github.com/kubedb/mongodb/commit/d1a51b8e) Update ci.yml
+- [3a993329](https://github.com/kubedb/mongodb/commit/3a993329) Load stash version from .env file for make (#203)
+- [7180a98c](https://github.com/kubedb/mongodb/commit/7180a98c) Update update-release-tracker.sh
+- [745085fd](https://github.com/kubedb/mongodb/commit/745085fd) Update update-release-tracker.sh
+- [07d83ac0](https://github.com/kubedb/mongodb/commit/07d83ac0) Add script to update release tracker on pr merge (#202)
+- [bbe205bb](https://github.com/kubedb/mongodb/commit/bbe205bb) Update .kodiak.toml
+- [998e656e](https://github.com/kubedb/mongodb/commit/998e656e) Various fixes (#201)
+- [ca03db09](https://github.com/kubedb/mongodb/commit/ca03db09) Update to Kubernetes v1.18.3 (#200)
+- [975fc700](https://github.com/kubedb/mongodb/commit/975fc700) Update to Kubernetes v1.18.3
+- [52972dcf](https://github.com/kubedb/mongodb/commit/52972dcf) Create .kodiak.toml
+- [39168e53](https://github.com/kubedb/mongodb/commit/39168e53) Use CRD v1 for Kubernetes >= 1.16 (#199)
+- [d6d87e16](https://github.com/kubedb/mongodb/commit/d6d87e16) Update to Kubernetes v1.18.3 (#198)
+- [09cd5809](https://github.com/kubedb/mongodb/commit/09cd5809) Fix e2e tests (#197)
+- [f47c4846](https://github.com/kubedb/mongodb/commit/f47c4846) Update stash install commands
+- [010d0294](https://github.com/kubedb/mongodb/commit/010d0294) Revendor kubedb.dev/apimachinery@master (#196)
+- [31ef2632](https://github.com/kubedb/mongodb/commit/31ef2632) Pass annotations from CRD to AppBinding (#195)
+- [9594e92f](https://github.com/kubedb/mongodb/commit/9594e92f) Update crazy-max/ghaction-docker-buildx flag
+- [0693d7a0](https://github.com/kubedb/mongodb/commit/0693d7a0) Use updated operator labels in e2e tests (#193)
+- [5aaeeb90](https://github.com/kubedb/mongodb/commit/5aaeeb90) Trigger the workflow on push or pull request
+- [2af16e3c](https://github.com/kubedb/mongodb/commit/2af16e3c) Update CHANGELOG.md
+- [288c5d2f](https://github.com/kubedb/mongodb/commit/288c5d2f) Use SHARD_INDEX constant from apimachinery
+- [4482edf3](https://github.com/kubedb/mongodb/commit/4482edf3) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#191)
+- [0f20ff3a](https://github.com/kubedb/mongodb/commit/0f20ff3a) Manage SSL certificates using cert-manager (#190)
+- [6f0c1aef](https://github.com/kubedb/mongodb/commit/6f0c1aef) Use Minio storage for testing (#188)
+- [f8c56bac](https://github.com/kubedb/mongodb/commit/f8c56bac) Support affinity templating in mongodb-shard (#186)
+- [71283767](https://github.com/kubedb/mongodb/commit/71283767) Use stash@v0.9.0-rc.4 release (#185)
+- [f480de35](https://github.com/kubedb/mongodb/commit/f480de35) Fix `Pause` Logic (#184)
+- [263e1bac](https://github.com/kubedb/mongodb/commit/263e1bac) Refactor CI pipeline to build once (#182)
+- [e383f271](https://github.com/kubedb/mongodb/commit/e383f271) Add `Pause` Feature (#181)
+- [584ecde6](https://github.com/kubedb/mongodb/commit/584ecde6) Delete backupconfig before attempting restoresession. (#180)
+- [a78bc2a7](https://github.com/kubedb/mongodb/commit/a78bc2a7) Wipeout if custom databaseSecret has been deleted (#179)
+- [e90cd386](https://github.com/kubedb/mongodb/commit/e90cd386) Matrix test and Moved out mongo docker files (#178)
+- [c132db8f](https://github.com/kubedb/mongodb/commit/c132db8f) Add add-license makefile target
+- [cc545e04](https://github.com/kubedb/mongodb/commit/cc545e04) Update Makefile
+- [7a2eab2c](https://github.com/kubedb/mongodb/commit/7a2eab2c) Add license header to files (#177)
+- [eecdb2cb](https://github.com/kubedb/mongodb/commit/eecdb2cb) Fix E2E tests in github action (#176)
+- [dfe3b310](https://github.com/kubedb/mongodb/commit/dfe3b310) Run e2e tests using GitHub actions (#174)
+- [a322a894](https://github.com/kubedb/mongodb/commit/a322a894) Validate DBVersionSpecs and fixed broken build (#175)
+- [d7061739](https://github.com/kubedb/mongodb/commit/d7061739) Update go.yml
+- [5a9a0f13](https://github.com/kubedb/mongodb/commit/5a9a0f13) Enable GitHub actions
+- [5489feb7](https://github.com/kubedb/mongodb/commit/5489feb7) Fix unauthorized readiness and liveness probe (#173)
+- [f662df33](https://github.com/kubedb/mongodb/commit/f662df33) Update changelog
+- [3bc16064](https://github.com/kubedb/mongodb/commit/3bc16064) Remove linux/arm support
+- [43822991](https://github.com/kubedb/mongodb/commit/43822991) Revendor
+- [6966e187](https://github.com/kubedb/mongodb/commit/6966e187) Add e2e test commands to Makefile (#172)
+- [674e7504](https://github.com/kubedb/mongodb/commit/674e7504) Support more mongodb versions (#171)
+- [b25e93a2](https://github.com/kubedb/mongodb/commit/b25e93a2) Fix create database secret will end with dead loop (#170)
+- [172be98d](https://github.com/kubedb/mongodb/commit/172be98d) Ensure client.pem subject as root user (#166)
+- [3a2acacd](https://github.com/kubedb/mongodb/commit/3a2acacd) Update dependencies (#169)
+- [946208da](https://github.com/kubedb/mongodb/commit/946208da) Don't set annotation to AppBinding (#168)
+- [cc2e2026](https://github.com/kubedb/mongodb/commit/cc2e2026) Set database version in AppBinding (#167)
+- [66296595](https://github.com/kubedb/mongodb/commit/66296595) Fix travis build
+- [1f88cb94](https://github.com/kubedb/mongodb/commit/1f88cb94) Change package path to kubedb.dev/mongodb (#165)
+- [c8db7ec2](https://github.com/kubedb/mongodb/commit/c8db7ec2) SSL support in mongodb (#158)
+- [99b9df63](https://github.com/kubedb/mongodb/commit/99b9df63) Improve stash integration (#164)
+- [e7052c38](https://github.com/kubedb/mongodb/commit/e7052c38) Fix UpsertDatabaseAnnotation() function (#162)
+- [3eb07820](https://github.com/kubedb/mongodb/commit/3eb07820) Add license header to Makefiles (#163)
+- [016a7bf8](https://github.com/kubedb/mongodb/commit/016a7bf8) Update Makefile
+- [d6d28abe](https://github.com/kubedb/mongodb/commit/d6d28abe) Makefile install uninstall & purge command (#161)
+- [03ad552b](https://github.com/kubedb/mongodb/commit/03ad552b) Integrate stash/restic with mongodb (#157)
+- [a0633f4b](https://github.com/kubedb/mongodb/commit/a0633f4b) Cleanup ensureDatabaseRBAC
+- [0c7789f6](https://github.com/kubedb/mongodb/commit/0c7789f6) Handling resource ownership (#156)
+- [8cf3d3fc](https://github.com/kubedb/mongodb/commit/8cf3d3fc) Pod Disruption Budget for Mongo (#159)
+- [6cf2756c](https://github.com/kubedb/mongodb/commit/6cf2756c) Add Makefile (#160)
+- [f4167b84](https://github.com/kubedb/mongodb/commit/f4167b84) Update to k8s 1.14.0 client libraries using go.mod (#155)
+- [14fe6a0b](https://github.com/kubedb/mongodb/commit/14fe6a0b) Update changelog
+- [5cc20768](https://github.com/kubedb/mongodb/commit/5cc20768) Update README.md
+- [4de5dcb4](https://github.com/kubedb/mongodb/commit/4de5dcb4) Start next dev cycle
+
+
+
+## [kubedb/mysql](https://github.com/kubedb/mysql)
+
+### [v0.7.0](https://github.com/kubedb/mysql/releases/tag/v0.7.0)
+
+
+
+
+## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector)
+
+### [v0.1.0](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.0)
+
+- [4ee489b](https://github.com/kubedb/mysql-replication-mode-detector/commit/4ee489b) Prepare for release v0.1.0 (#77)
+- [b49e098](https://github.com/kubedb/mysql-replication-mode-detector/commit/b49e098) Prepare for release v0.1.0-rc.2 (#76)
+- [7c0b82f](https://github.com/kubedb/mysql-replication-mode-detector/commit/7c0b82f) Prepare for release v0.1.0-rc.1 (#75)
+- [67ec09b](https://github.com/kubedb/mysql-replication-mode-detector/commit/67ec09b) Prepare for release v0.1.0-beta.6 (#74)
+- [724eaa9](https://github.com/kubedb/mysql-replication-mode-detector/commit/724eaa9) Update KubeDB api (#73)
+- [a82a26e](https://github.com/kubedb/mysql-replication-mode-detector/commit/a82a26e) Fix sql query to find primary host for different version of MySQL (#66)
+- [e251fd6](https://github.com/kubedb/mysql-replication-mode-detector/commit/e251fd6) Prepare for release v0.1.0-beta.5 (#72)
+- [633ba00](https://github.com/kubedb/mysql-replication-mode-detector/commit/633ba00) Update KubeDB api (#71)
+- [557e8f7](https://github.com/kubedb/mysql-replication-mode-detector/commit/557e8f7) Prepare for release v0.1.0-beta.4 (#70)
+- [4dd885a](https://github.com/kubedb/mysql-replication-mode-detector/commit/4dd885a) Update KubeDB api (#69)
+- [dc0ed39](https://github.com/kubedb/mysql-replication-mode-detector/commit/dc0ed39) Update Kubernetes v1.18.9 dependencies (#68)
+- [f49a1d1](https://github.com/kubedb/mysql-replication-mode-detector/commit/f49a1d1) Update Kubernetes v1.18.9 dependencies (#65)
+- [306235a](https://github.com/kubedb/mysql-replication-mode-detector/commit/306235a) Update KubeDB api (#64)
[3c9e99a](https://github.com/kubedb/mysql-replication-mode-detector/commit/3c9e99a) Update KubeDB api (#63) +- [974a940](https://github.com/kubedb/mysql-replication-mode-detector/commit/974a940) Update KubeDB api (#62) +- [8521462](https://github.com/kubedb/mysql-replication-mode-detector/commit/8521462) Update Kubernetes v1.18.9 dependencies (#61) +- [38f7a4c](https://github.com/kubedb/mysql-replication-mode-detector/commit/38f7a4c) Update KubeDB api (#60) +- [a7b7c87](https://github.com/kubedb/mysql-replication-mode-detector/commit/a7b7c87) Update KubeDB api (#59) +- [daa02dd](https://github.com/kubedb/mysql-replication-mode-detector/commit/daa02dd) Update KubeDB api (#58) +- [341b6b6](https://github.com/kubedb/mysql-replication-mode-detector/commit/341b6b6) Add tls config (#40) +- [04161c8](https://github.com/kubedb/mysql-replication-mode-detector/commit/04161c8) Update KubeDB api (#57) +- [fdd705d](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdd705d) Update Kubernetes v1.18.9 dependencies (#56) +- [22cb410](https://github.com/kubedb/mysql-replication-mode-detector/commit/22cb410) Update KubeDB api (#55) +- [11b1758](https://github.com/kubedb/mysql-replication-mode-detector/commit/11b1758) Update KubeDB api (#54) +- [9df3045](https://github.com/kubedb/mysql-replication-mode-detector/commit/9df3045) Update KubeDB api (#53) +- [6557f92](https://github.com/kubedb/mysql-replication-mode-detector/commit/6557f92) Update KubeDB api (#52) +- [43c3694](https://github.com/kubedb/mysql-replication-mode-detector/commit/43c3694) Update Kubernetes v1.18.9 dependencies (#51) +- [511e974](https://github.com/kubedb/mysql-replication-mode-detector/commit/511e974) Publish docker images to ghcr.io (#50) +- [093a995](https://github.com/kubedb/mysql-replication-mode-detector/commit/093a995) Update KubeDB api (#49) +- [49c07e9](https://github.com/kubedb/mysql-replication-mode-detector/commit/49c07e9) Update KubeDB api (#48) +- [91ead1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/91ead1c) Update KubeDB api (#47) +- [45956b4](https://github.com/kubedb/mysql-replication-mode-detector/commit/45956b4) Update KubeDB api (#46) +- [a6c57a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/a6c57a7) Update KubeDB api (#45) +- [8a2fd20](https://github.com/kubedb/mysql-replication-mode-detector/commit/8a2fd20) Update KubeDB api (#44) +- [be63987](https://github.com/kubedb/mysql-replication-mode-detector/commit/be63987) Update KubeDB api (#43) +- [f33220a](https://github.com/kubedb/mysql-replication-mode-detector/commit/f33220a) Update KubeDB api (#42) +- [46b7d44](https://github.com/kubedb/mysql-replication-mode-detector/commit/46b7d44) Update KubeDB api (#41) +- [c151070](https://github.com/kubedb/mysql-replication-mode-detector/commit/c151070) Update KubeDB api (#38) +- [7a04763](https://github.com/kubedb/mysql-replication-mode-detector/commit/7a04763) Update KubeDB api (#37) +- [4367ef5](https://github.com/kubedb/mysql-replication-mode-detector/commit/4367ef5) Update KubeDB api (#36) +- [6bc4f1c](https://github.com/kubedb/mysql-replication-mode-detector/commit/6bc4f1c) Update Kubernetes v1.18.9 dependencies (#35) +- [fdaff01](https://github.com/kubedb/mysql-replication-mode-detector/commit/fdaff01) Update KubeDB api (#34) +- [087170a](https://github.com/kubedb/mysql-replication-mode-detector/commit/087170a) Update KubeDB api (#33) +- [127efe7](https://github.com/kubedb/mysql-replication-mode-detector/commit/127efe7) Update Kubernetes v1.18.9 dependencies (#32) 
+- [1df3573](https://github.com/kubedb/mysql-replication-mode-detector/commit/1df3573) Move constant to apimachinery repo (#24) +- [74b41b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/74b41b0) Update repository config (#31) +- [b0932a7](https://github.com/kubedb/mysql-replication-mode-detector/commit/b0932a7) Update repository config (#30) +- [8e9c235](https://github.com/kubedb/mysql-replication-mode-detector/commit/8e9c235) Update Kubernetes v1.18.9 dependencies (#29) +- [8f61ebc](https://github.com/kubedb/mysql-replication-mode-detector/commit/8f61ebc) Update Kubernetes v1.18.3 dependencies (#28) +- [eedb970](https://github.com/kubedb/mysql-replication-mode-detector/commit/eedb970) Prepare for release v0.1.0-beta.3 (#27) +- [e4c3962](https://github.com/kubedb/mysql-replication-mode-detector/commit/e4c3962) Update Kubernetes v1.18.3 dependencies (#26) +- [9c20bfb](https://github.com/kubedb/mysql-replication-mode-detector/commit/9c20bfb) Update Kubernetes v1.18.3 dependencies (#25) +- [a1f5dbd](https://github.com/kubedb/mysql-replication-mode-detector/commit/a1f5dbd) Update Kubernetes v1.18.3 dependencies (#23) +- [feedb97](https://github.com/kubedb/mysql-replication-mode-detector/commit/feedb97) Use AppsCode Community License (#22) +- [eb878dc](https://github.com/kubedb/mysql-replication-mode-detector/commit/eb878dc) Prepare for release v0.1.0-beta.2 (#21) +- [6c214b8](https://github.com/kubedb/mysql-replication-mode-detector/commit/6c214b8) Update Kubernetes v1.18.3 dependencies (#19) +- [00800e8](https://github.com/kubedb/mysql-replication-mode-detector/commit/00800e8) Update Kubernetes v1.18.3 dependencies (#18) +- [373ab6d](https://github.com/kubedb/mysql-replication-mode-detector/commit/373ab6d) Update Kubernetes v1.18.3 dependencies (#17) +- [8b61313](https://github.com/kubedb/mysql-replication-mode-detector/commit/8b61313) Update Kubernetes v1.18.3 dependencies (#16) +- [f2a68e3](https://github.com/kubedb/mysql-replication-mode-detector/commit/f2a68e3) Update Kubernetes v1.18.3 dependencies (#15) +- [3bce396](https://github.com/kubedb/mysql-replication-mode-detector/commit/3bce396) Update Kubernetes v1.18.3 dependencies (#14) +- [32603a2](https://github.com/kubedb/mysql-replication-mode-detector/commit/32603a2) Don't push binary with release +- [bb47e58](https://github.com/kubedb/mysql-replication-mode-detector/commit/bb47e58) Remove port-forwarding and Refactor Code (#13) +- [df73419](https://github.com/kubedb/mysql-replication-mode-detector/commit/df73419) Update to Kubernetes v1.18.3 (#12) +- [61fe2ea](https://github.com/kubedb/mysql-replication-mode-detector/commit/61fe2ea) Update to Kubernetes v1.18.3 (#11) +- [b7ccc85](https://github.com/kubedb/mysql-replication-mode-detector/commit/b7ccc85) Update to Kubernetes v1.18.3 (#10) +- [3e62838](https://github.com/kubedb/mysql-replication-mode-detector/commit/3e62838) Prepare for release v0.1.0-beta.1 (#9) +- [e54c4c0](https://github.com/kubedb/mysql-replication-mode-detector/commit/e54c4c0) Update License (#7) +- [e071b02](https://github.com/kubedb/mysql-replication-mode-detector/commit/e071b02) Update to Kubernetes v1.18.3 (#6) +- [8992bcb](https://github.com/kubedb/mysql-replication-mode-detector/commit/8992bcb) Update update-release-tracker.sh +- [acc1038](https://github.com/kubedb/mysql-replication-mode-detector/commit/acc1038) Add script to update release tracker on pr merge (#5) +- [706b5b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/706b5b0) Update .kodiak.toml +- 
[4e52c03](https://github.com/kubedb/mysql-replication-mode-detector/commit/4e52c03) Update to Kubernetes v1.18.3 (#4) +- [adb05ae](https://github.com/kubedb/mysql-replication-mode-detector/commit/adb05ae) Merge branch 'master' into gomod-refresher-1591418508 +- [3a99f80](https://github.com/kubedb/mysql-replication-mode-detector/commit/3a99f80) Create .kodiak.toml +- [6289807](https://github.com/kubedb/mysql-replication-mode-detector/commit/6289807) Update to Kubernetes v1.18.3 +- [1dd24be](https://github.com/kubedb/mysql-replication-mode-detector/commit/1dd24be) Update to Kubernetes v1.18.3 (#3) +- [6d02366](https://github.com/kubedb/mysql-replication-mode-detector/commit/6d02366) Update Makefile and CI configuration (#2) +- [fc95884](https://github.com/kubedb/mysql-replication-mode-detector/commit/fc95884) Add primary role labeler controller (#1) +- [99dfb12](https://github.com/kubedb/mysql-replication-mode-detector/commit/99dfb12) add readme.md + + + ## [kubedb/operator](https://github.com/kubedb/operator) + ### [v0.14.0](https://github.com/kubedb/operator/releases/tag/v0.14.0) + - [22ee7d88](https://github.com/kubedb/operator/commit/22ee7d88) Prepare for release v0.14.0 (#337) +- [cd4b5292](https://github.com/kubedb/operator/commit/cd4b5292) Update README.md +- [a06c98d1](https://github.com/kubedb/operator/commit/a06c98d1) Prepare for release v0.14.0-rc.2 (#336) +- [7a74c49c](https://github.com/kubedb/operator/commit/7a74c49c) Prepare for release v0.14.0-rc.1 (#335) +- [7c0e97a2](https://github.com/kubedb/operator/commit/7c0e97a2) Prepare for release v0.14.0-beta.6 (#334) +- [17b42fd3](https://github.com/kubedb/operator/commit/17b42fd3) Update KubeDB api (#333) +- [6dbde882](https://github.com/kubedb/operator/commit/6dbde882) Update Kubernetes v1.18.9 dependencies (#332) +- [ce62c61a](https://github.com/kubedb/operator/commit/ce62c61a) Use go.bytebuilders.dev/license-verifier v0.4.0 +- [bcada180](https://github.com/kubedb/operator/commit/bcada180) Prepare for release v0.14.0-beta.5 (#331) +- [07d63285](https://github.com/kubedb/operator/commit/07d63285) Enable PgBouncer & ProxySQL for enterprise license (#330) +- [35b75a05](https://github.com/kubedb/operator/commit/35b75a05) Update readme.md +- [14304e05](https://github.com/kubedb/operator/commit/14304e05) Update KubeDB api (#329) +- [df61aae3](https://github.com/kubedb/operator/commit/df61aae3) Update readme +- [c9882619](https://github.com/kubedb/operator/commit/c9882619) Format readme +- [73b725e3](https://github.com/kubedb/operator/commit/73b725e3) Update readme (#328) +- [541c2460](https://github.com/kubedb/operator/commit/541c2460) Update repository config (#327) +- [2145978d](https://github.com/kubedb/operator/commit/2145978d) Prepare for release v0.14.0-beta.4 (#326) +- [8fd3b682](https://github.com/kubedb/operator/commit/8fd3b682) Add --readiness-probe-interval flag (#325) +- [7bf0c3c5](https://github.com/kubedb/operator/commit/7bf0c3c5) Update KubeDB api (#324) +- [25c7dc21](https://github.com/kubedb/operator/commit/25c7dc21) Update Kubernetes v1.18.9 dependencies (#323) +- [bb7525d6](https://github.com/kubedb/operator/commit/bb7525d6) Update Kubernetes v1.18.9 dependencies (#321) +- [6db45b57](https://github.com/kubedb/operator/commit/6db45b57) Update KubeDB api (#320) +- [fa1438e3](https://github.com/kubedb/operator/commit/fa1438e3) Update KubeDB api (#319) +- [6be49e7e](https://github.com/kubedb/operator/commit/6be49e7e) Update KubeDB api (#318) +- [00bf9bec](https://github.com/kubedb/operator/commit/00bf9bec) Update
Kubernetes v1.18.9 dependencies (#317) +- [fd529403](https://github.com/kubedb/operator/commit/fd529403) Update KubeDB api (#316) +- [f03305e1](https://github.com/kubedb/operator/commit/f03305e1) Update KubeDB api (#315) +- [fb5e4873](https://github.com/kubedb/operator/commit/fb5e4873) Update KubeDB api (#312) +- [f3843a05](https://github.com/kubedb/operator/commit/f3843a05) Update repository config (#311) +- [18f29e73](https://github.com/kubedb/operator/commit/18f29e73) Update repository config (#310) +- [25405c38](https://github.com/kubedb/operator/commit/25405c38) Update repository config (#309) +- [e464d336](https://github.com/kubedb/operator/commit/e464d336) Update KubeDB api (#308) +- [eeccd59e](https://github.com/kubedb/operator/commit/eeccd59e) Update Kubernetes v1.18.9 dependencies (#307) +- [dd2f176f](https://github.com/kubedb/operator/commit/dd2f176f) Publish docker images to ghcr.io (#306) +- [d65d299f](https://github.com/kubedb/operator/commit/d65d299f) Update KubeDB api (#305) +- [3f681cef](https://github.com/kubedb/operator/commit/3f681cef) Update KubeDB api (#304) +- [bc58d3d7](https://github.com/kubedb/operator/commit/bc58d3d7) Refactor initializer code + Use common event recorder (#292) +- [952e1b33](https://github.com/kubedb/operator/commit/952e1b33) Update repository config (#301) +- [66bee9c3](https://github.com/kubedb/operator/commit/66bee9c3) Update Kubernetes v1.18.9 dependencies (#300) +- [4e508002](https://github.com/kubedb/operator/commit/4e508002) Update for release Stash@v2020.09.29 (#299) +- [b6a4caa4](https://github.com/kubedb/operator/commit/b6a4caa4) Update Kubernetes v1.18.9 dependencies (#298) +- [201aed32](https://github.com/kubedb/operator/commit/201aed32) Update Kubernetes v1.18.9 dependencies (#296) +- [36ed325d](https://github.com/kubedb/operator/commit/36ed325d) Update repository config (#295) +- [36ec3035](https://github.com/kubedb/operator/commit/36ec3035) Update repository config (#294) +- [32e61f43](https://github.com/kubedb/operator/commit/32e61f43) Update Kubernetes v1.18.9 dependencies (#293) +- [078e7062](https://github.com/kubedb/operator/commit/078e7062) Update Kubernetes v1.18.3 dependencies (#291) +- [900626dd](https://github.com/kubedb/operator/commit/900626dd) Update Kubernetes v1.18.3 dependencies (#290) +- [7bf1e16e](https://github.com/kubedb/operator/commit/7bf1e16e) Use AppsCode Community license (#289) +- [ba436a4b](https://github.com/kubedb/operator/commit/ba436a4b) Add license verifier (#288) +- [0a02a313](https://github.com/kubedb/operator/commit/0a02a313) Update for release Stash@v2020.09.16 (#287) +- [9ae202e1](https://github.com/kubedb/operator/commit/9ae202e1) Update Kubernetes v1.18.3 dependencies (#286) +- [5bea03b9](https://github.com/kubedb/operator/commit/5bea03b9) Update Kubernetes v1.18.3 dependencies (#284) +- [b1375565](https://github.com/kubedb/operator/commit/b1375565) Update Kubernetes v1.18.3 dependencies (#282) +- [a13ca48b](https://github.com/kubedb/operator/commit/a13ca48b) Prepare for release v0.14.0-beta.2 (#281) +- [fc6c1e9e](https://github.com/kubedb/operator/commit/fc6c1e9e) Update Kubernetes v1.18.3 dependencies (#280) +- [cd74716b](https://github.com/kubedb/operator/commit/cd74716b) Update Kubernetes v1.18.3 dependencies (#275) +- [5b3c76ed](https://github.com/kubedb/operator/commit/5b3c76ed) Update Kubernetes v1.18.3 dependencies (#274) +- [397a7e60](https://github.com/kubedb/operator/commit/397a7e60) Update Kubernetes v1.18.3 dependencies (#273) +- 
[616ea78d](https://github.com/kubedb/operator/commit/616ea78d) Update Kubernetes v1.18.3 dependencies (#272) +- [b7b0d2b9](https://github.com/kubedb/operator/commit/b7b0d2b9) Update Kubernetes v1.18.3 dependencies (#271) +- [3afadb7a](https://github.com/kubedb/operator/commit/3afadb7a) Update Kubernetes v1.18.3 dependencies (#270) +- [60b15632](https://github.com/kubedb/operator/commit/60b15632) Remove dependency on enterprise operator (#269) +- [b3648cde](https://github.com/kubedb/operator/commit/b3648cde) Build images in e2e workflow (#268) +- [73dee065](https://github.com/kubedb/operator/commit/73dee065) Update to Kubernetes v1.18.3 (#266) +- [a8a42ab8](https://github.com/kubedb/operator/commit/a8a42ab8) Allow configuring k8s in e2e tests (#267) +- [4b7d6ee3](https://github.com/kubedb/operator/commit/4b7d6ee3) Trigger e2e tests on /ok-to-test command (#265) +- [024fc40a](https://github.com/kubedb/operator/commit/024fc40a) Update to Kubernetes v1.18.3 (#264) +- [bd1da662](https://github.com/kubedb/operator/commit/bd1da662) Update to Kubernetes v1.18.3 (#263) +- [a2bba612](https://github.com/kubedb/operator/commit/a2bba612) Prepare for release v0.14.0-beta.1 (#262) +- [22bc85ec](https://github.com/kubedb/operator/commit/22bc85ec) Allow customizing chart registry (#261) +- [52cc1dc7](https://github.com/kubedb/operator/commit/52cc1dc7) Update for release Stash@v2020.07.09-beta.0 (#260) +- [2e8b709f](https://github.com/kubedb/operator/commit/2e8b709f) Update for release Stash@v2020.07.08-beta.0 (#259) +- [7b58b548](https://github.com/kubedb/operator/commit/7b58b548) Update License (#258) +- [d4cd1a93](https://github.com/kubedb/operator/commit/d4cd1a93) Update to Kubernetes v1.18.3 (#256) +- [f6091845](https://github.com/kubedb/operator/commit/f6091845) Update ci.yml +- [5324d2b6](https://github.com/kubedb/operator/commit/5324d2b6) Update ci.yml +- [c888d7fd](https://github.com/kubedb/operator/commit/c888d7fd) Add workflow to update docs (#255) +- [ba843e17](https://github.com/kubedb/operator/commit/ba843e17) Update update-release-tracker.sh +- [b93c5ab4](https://github.com/kubedb/operator/commit/b93c5ab4) Update update-release-tracker.sh +- [6b8d2149](https://github.com/kubedb/operator/commit/6b8d2149) Add script to update release tracker on pr merge (#254) +- [bb1290dc](https://github.com/kubedb/operator/commit/bb1290dc) Update .kodiak.toml +- [9bb85c3b](https://github.com/kubedb/operator/commit/9bb85c3b) Register validator & mutators for all supported dbs (#253) +- [1a524d9c](https://github.com/kubedb/operator/commit/1a524d9c) Various fixes (#252) +- [4860f2a7](https://github.com/kubedb/operator/commit/4860f2a7) Update to Kubernetes v1.18.3 (#251) +- [1a163c6a](https://github.com/kubedb/operator/commit/1a163c6a) Create .kodiak.toml +- [1eda36b9](https://github.com/kubedb/operator/commit/1eda36b9) Update to Kubernetes v1.18.3 (#247) +- [77b8b858](https://github.com/kubedb/operator/commit/77b8b858) Update enterprise operator tag (#246) +- [96ca876e](https://github.com/kubedb/operator/commit/96ca876e) Revendor kubedb.dev/apimachinery@master (#245) +- [43a3a7f1](https://github.com/kubedb/operator/commit/43a3a7f1) Use recommended kubernetes app labels +- [1ae7045f](https://github.com/kubedb/operator/commit/1ae7045f) Update crazy-max/ghaction-docker-buildx flag +- [f25034ef](https://github.com/kubedb/operator/commit/f25034ef) Trigger the workflow on push or pull request +- [ba486319](https://github.com/kubedb/operator/commit/ba486319) Update readme (#244) +- 
[5f7191f4](https://github.com/kubedb/operator/commit/5f7191f4) Update CHANGELOG.md +- [5b14af4b](https://github.com/kubedb/operator/commit/5b14af4b) Add license scan report and status (#241) +- [9848932b](https://github.com/kubedb/operator/commit/9848932b) Pass the topology object to common controller +- [90d1c873](https://github.com/kubedb/operator/commit/90d1c873) Initialize topology for MongoDB webhooks (#243) +- [8ecb87c8](https://github.com/kubedb/operator/commit/8ecb87c8) Fix nil pointer exception (#242) +- [b12c3392](https://github.com/kubedb/operator/commit/b12c3392) Update operator dependencies (#237) +- [f714bb1b](https://github.com/kubedb/operator/commit/f714bb1b) Always create RBAC resources (#238) +- [f43a588e](https://github.com/kubedb/operator/commit/f43a588e) Use Go 1.13 in CI +- [e8ab3580](https://github.com/kubedb/operator/commit/e8ab3580) Update client-go to kubernetes-1.16.3 (#239) +- [1dc84a67](https://github.com/kubedb/operator/commit/1dc84a67) Update CI badge +- [d9d1cc0a](https://github.com/kubedb/operator/commit/d9d1cc0a) Bundle PgBouncer operator (#236) +- [720303c1](https://github.com/kubedb/operator/commit/720303c1) Fix linter errors (#235) +- [4c53a71f](https://github.com/kubedb/operator/commit/4c53a71f) Update go.yml +- [e65fc457](https://github.com/kubedb/operator/commit/e65fc457) Enable GitHub actions +- [2dcb0d6d](https://github.com/kubedb/operator/commit/2dcb0d6d) Update changelog +- [1e407192](https://github.com/kubedb/operator/commit/1e407192) Remove linux/arm support +- [b97a2028](https://github.com/kubedb/operator/commit/b97a2028) RestoreSession watcher added (#233) +- [2cfbbb15](https://github.com/kubedb/operator/commit/2cfbbb15) Fix dev deployment script for operator (part-2) (#231) +- [b673c6cc](https://github.com/kubedb/operator/commit/b673c6cc) Fix dev deployment script for operator (#228) +- [ed7e2eb1](https://github.com/kubedb/operator/commit/ed7e2eb1) Fix build (#230) +- [83123ce6](https://github.com/kubedb/operator/commit/83123ce6) Fix travis build +- [cd2fb26c](https://github.com/kubedb/operator/commit/cd2fb26c) Change package path to kubedb.dev/operator (#229) +- [375c1f2b](https://github.com/kubedb/operator/commit/375c1f2b) Fix #596 validating and mutating yaml file missing when run oper… (#227) +- [c833a4f5](https://github.com/kubedb/operator/commit/c833a4f5) Update .gitignore +- [b0de0a8f](https://github.com/kubedb/operator/commit/b0de0a8f) Fix calling `deploy/kubedb.sh` (#226) +- [e528ace1](https://github.com/kubedb/operator/commit/e528ace1) Add make install, uninstall, purge commands (#225) +- [1ce21404](https://github.com/kubedb/operator/commit/1ce21404) Add Makefile (#224) +- [872801c0](https://github.com/kubedb/operator/commit/872801c0) Update to k8s 1.14.0 client libraries using go.mod (#223) +- [95c8d2ee](https://github.com/kubedb/operator/commit/95c8d2ee) Start next dev cycle + + + ## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + ### [v0.1.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.0) + - [3b2593ce](https://github.com/kubedb/percona-xtradb/commit/3b2593ce) Prepare for release v0.1.0 (#121) +- [ae82716f](https://github.com/kubedb/percona-xtradb/commit/ae82716f) Prepare for release v0.1.0-rc.2 (#120) +- [4ac07f08](https://github.com/kubedb/percona-xtradb/commit/4ac07f08) Prepare for release v0.1.0-rc.1 (#119) +- [397607a3](https://github.com/kubedb/percona-xtradb/commit/397607a3) Prepare for release v0.1.0-beta.6 (#118) +-
[a3b7642d](https://github.com/kubedb/percona-xtradb/commit/a3b7642d) Create SRV records for governing service (#117) +- [9866a420](https://github.com/kubedb/percona-xtradb/commit/9866a420) Prepare for release v0.1.0-beta.5 (#116) +- [f92081d1](https://github.com/kubedb/percona-xtradb/commit/f92081d1) Create separate governing service for each database (#115) +- [6010b189](https://github.com/kubedb/percona-xtradb/commit/6010b189) Update KubeDB api (#114) +- [95b57c72](https://github.com/kubedb/percona-xtradb/commit/95b57c72) Update readme +- [14b2f1b2](https://github.com/kubedb/percona-xtradb/commit/14b2f1b2) Prepare for release v0.1.0-beta.4 (#113) +- [eff1d265](https://github.com/kubedb/percona-xtradb/commit/eff1d265) Update KubeDB api (#112) +- [a2878d4a](https://github.com/kubedb/percona-xtradb/commit/a2878d4a) Update Kubernetes v1.18.9 dependencies (#111) +- [51f0d104](https://github.com/kubedb/percona-xtradb/commit/51f0d104) Update KubeDB api (#110) +- [fcf5343b](https://github.com/kubedb/percona-xtradb/commit/fcf5343b) Update for release Stash@v2020.10.21 (#109) +- [9fe68d43](https://github.com/kubedb/percona-xtradb/commit/9fe68d43) Fix init validator (#107) +- [1c528cff](https://github.com/kubedb/percona-xtradb/commit/1c528cff) Update KubeDB api (#108) +- [99d23f3d](https://github.com/kubedb/percona-xtradb/commit/99d23f3d) Update KubeDB api (#106) +- [d0807640](https://github.com/kubedb/percona-xtradb/commit/d0807640) Update Kubernetes v1.18.9 dependencies (#105) +- [bac7705b](https://github.com/kubedb/percona-xtradb/commit/bac7705b) Update KubeDB api (#104) +- [475aabd5](https://github.com/kubedb/percona-xtradb/commit/475aabd5) Update KubeDB api (#103) +- [60f7e5a9](https://github.com/kubedb/percona-xtradb/commit/60f7e5a9) Update KubeDB api (#102) +- [84a97ced](https://github.com/kubedb/percona-xtradb/commit/84a97ced) Update KubeDB api (#101) +- [d4a7b7c5](https://github.com/kubedb/percona-xtradb/commit/d4a7b7c5) Update Kubernetes v1.18.9 dependencies (#100) +- [b818a4c5](https://github.com/kubedb/percona-xtradb/commit/b818a4c5) Update KubeDB api (#99) +- [03df7739](https://github.com/kubedb/percona-xtradb/commit/03df7739) Update KubeDB api (#98) +- [2f3ce0e6](https://github.com/kubedb/percona-xtradb/commit/2f3ce0e6) Update KubeDB api (#96) +- [94e009e8](https://github.com/kubedb/percona-xtradb/commit/94e009e8) Update repository config (#95) +- [fc61d440](https://github.com/kubedb/percona-xtradb/commit/fc61d440) Update repository config (#94) +- [35f5b2bb](https://github.com/kubedb/percona-xtradb/commit/35f5b2bb) Update repository config (#93) +- [d01e39dd](https://github.com/kubedb/percona-xtradb/commit/d01e39dd) Initialize statefulset watcher from cmd/server/options.go (#92) +- [41bf932f](https://github.com/kubedb/percona-xtradb/commit/41bf932f) Update KubeDB api (#91) +- [da92a1f3](https://github.com/kubedb/percona-xtradb/commit/da92a1f3) Update Kubernetes v1.18.9 dependencies (#90) +- [554beafb](https://github.com/kubedb/percona-xtradb/commit/554beafb) Publish docker images to ghcr.io (#89) +- [4c7031e1](https://github.com/kubedb/percona-xtradb/commit/4c7031e1) Update KubeDB api (#88) +- [418c767a](https://github.com/kubedb/percona-xtradb/commit/418c767a) Update KubeDB api (#87) +- [94eef91e](https://github.com/kubedb/percona-xtradb/commit/94eef91e) Update KubeDB api (#86) +- [f3c2a360](https://github.com/kubedb/percona-xtradb/commit/f3c2a360) Update KubeDB api (#85) +- [107bb6a6](https://github.com/kubedb/percona-xtradb/commit/107bb6a6) Update repository config (#84) +- 
[938e64bc](https://github.com/kubedb/percona-xtradb/commit/938e64bc) Cleanup monitoring spec api (#83) +- [deeaad8f](https://github.com/kubedb/percona-xtradb/commit/deeaad8f) Use conditions to handle database initialization (#80) +- [798c3ddc](https://github.com/kubedb/percona-xtradb/commit/798c3ddc) Update Kubernetes v1.18.9 dependencies (#82) +- [16c72ba6](https://github.com/kubedb/percona-xtradb/commit/16c72ba6) Updated the exporter port and service (#81) +- [9314faf1](https://github.com/kubedb/percona-xtradb/commit/9314faf1) Update for release Stash@v2020.09.29 (#79) +- [6cb53efc](https://github.com/kubedb/percona-xtradb/commit/6cb53efc) Update Kubernetes v1.18.9 dependencies (#78) +- [fd2b8cdd](https://github.com/kubedb/percona-xtradb/commit/fd2b8cdd) Update Kubernetes v1.18.9 dependencies (#76) +- [9d1038db](https://github.com/kubedb/percona-xtradb/commit/9d1038db) Update repository config (#75) +- [41a05a44](https://github.com/kubedb/percona-xtradb/commit/41a05a44) Update repository config (#74) +- [eccd2acd](https://github.com/kubedb/percona-xtradb/commit/eccd2acd) Update Kubernetes v1.18.9 dependencies (#73) +- [27635f1c](https://github.com/kubedb/percona-xtradb/commit/27635f1c) Update Kubernetes v1.18.3 dependencies (#72) +- [792326c7](https://github.com/kubedb/percona-xtradb/commit/792326c7) Use common event recorder (#71) +- [0ff583b8](https://github.com/kubedb/percona-xtradb/commit/0ff583b8) Prepare for release v0.1.0-beta.3 (#70) +- [627bc039](https://github.com/kubedb/percona-xtradb/commit/627bc039) Use new `spec.init` section (#69) +- [f79e4771](https://github.com/kubedb/percona-xtradb/commit/f79e4771) Update Kubernetes v1.18.3 dependencies (#68) +- [257954c2](https://github.com/kubedb/percona-xtradb/commit/257954c2) Add license verifier (#67) +- [e06eec6b](https://github.com/kubedb/percona-xtradb/commit/e06eec6b) Update for release Stash@v2020.09.16 (#66) +- [29901348](https://github.com/kubedb/percona-xtradb/commit/29901348) Update Kubernetes v1.18.3 dependencies (#65) +- [02d5bfde](https://github.com/kubedb/percona-xtradb/commit/02d5bfde) Use background deletion policy +- [6e6d8b5b](https://github.com/kubedb/percona-xtradb/commit/6e6d8b5b) Update Kubernetes v1.18.3 dependencies (#63) +- [7601a237](https://github.com/kubedb/percona-xtradb/commit/7601a237) Use AppsCode Community License (#62) +- [4d1a2424](https://github.com/kubedb/percona-xtradb/commit/4d1a2424) Update Kubernetes v1.18.3 dependencies (#61) +- [471b6def](https://github.com/kubedb/percona-xtradb/commit/471b6def) Prepare for release v0.1.0-beta.2 (#60) +- [9423a70f](https://github.com/kubedb/percona-xtradb/commit/9423a70f) Update release.yml +- [85d1d036](https://github.com/kubedb/percona-xtradb/commit/85d1d036) Use updated apis (#59) +- [6811b8dc](https://github.com/kubedb/percona-xtradb/commit/6811b8dc) Update Kubernetes v1.18.3 dependencies (#53) +- [4212d2a0](https://github.com/kubedb/percona-xtradb/commit/4212d2a0) Update Kubernetes v1.18.3 dependencies (#52) +- [659d646c](https://github.com/kubedb/percona-xtradb/commit/659d646c) Update Kubernetes v1.18.3 dependencies (#51) +- [a868e0c3](https://github.com/kubedb/percona-xtradb/commit/a868e0c3) Update Kubernetes v1.18.3 dependencies (#50) +- [162e6ca4](https://github.com/kubedb/percona-xtradb/commit/162e6ca4) Update Kubernetes v1.18.3 dependencies (#49) +- [a7fa1fbf](https://github.com/kubedb/percona-xtradb/commit/a7fa1fbf) Update Kubernetes v1.18.3 dependencies (#48) +- [b6a4583f](https://github.com/kubedb/percona-xtradb/commit/b6a4583f) Remove 
dependency on enterprise operator (#47) +- [a8909b38](https://github.com/kubedb/percona-xtradb/commit/a8909b38) Allow configuring k8s & db version in e2e tests (#46) +- [4d79d26e](https://github.com/kubedb/percona-xtradb/commit/4d79d26e) Update to Kubernetes v1.18.3 (#45) +- [189f3212](https://github.com/kubedb/percona-xtradb/commit/189f3212) Trigger e2e tests on /ok-to-test command (#44) +- [a037bd03](https://github.com/kubedb/percona-xtradb/commit/a037bd03) Update to Kubernetes v1.18.3 (#43) +- [33cabdf3](https://github.com/kubedb/percona-xtradb/commit/33cabdf3) Update to Kubernetes v1.18.3 (#42) +- [28b9fc0f](https://github.com/kubedb/percona-xtradb/commit/28b9fc0f) Prepare for release v0.1.0-beta.1 (#41) +- [fb4f5444](https://github.com/kubedb/percona-xtradb/commit/fb4f5444) Update for release Stash@v2020.07.09-beta.0 (#39) +- [ad221aa2](https://github.com/kubedb/percona-xtradb/commit/ad221aa2) include Makefile.env +- [841ec855](https://github.com/kubedb/percona-xtradb/commit/841ec855) Allow customizing chart registry (#38) +- [bb608980](https://github.com/kubedb/percona-xtradb/commit/bb608980) Update License (#37) +- [cf8cd2fa](https://github.com/kubedb/percona-xtradb/commit/cf8cd2fa) Update for release Stash@v2020.07.08-beta.0 (#36) +- [7b28c4b9](https://github.com/kubedb/percona-xtradb/commit/7b28c4b9) Update to Kubernetes v1.18.3 (#35) +- [848ff94a](https://github.com/kubedb/percona-xtradb/commit/848ff94a) Update ci.yml +- [d124dd6a](https://github.com/kubedb/percona-xtradb/commit/d124dd6a) Load stash version from .env file for make (#34) +- [1de40e1d](https://github.com/kubedb/percona-xtradb/commit/1de40e1d) Update update-release-tracker.sh +- [7a4503be](https://github.com/kubedb/percona-xtradb/commit/7a4503be) Update update-release-tracker.sh +- [ad0dfaf8](https://github.com/kubedb/percona-xtradb/commit/ad0dfaf8) Add script to update release tracker on pr merge (#33) +- [aaca6bd9](https://github.com/kubedb/percona-xtradb/commit/aaca6bd9) Update .kodiak.toml +- [9a495724](https://github.com/kubedb/percona-xtradb/commit/9a495724) Various fixes (#32) +- [9b6c9a53](https://github.com/kubedb/percona-xtradb/commit/9b6c9a53) Update to Kubernetes v1.18.3 (#31) +- [67912547](https://github.com/kubedb/percona-xtradb/commit/67912547) Update to Kubernetes v1.18.3 +- [fc8ce4cc](https://github.com/kubedb/percona-xtradb/commit/fc8ce4cc) Create .kodiak.toml +- [8aba5ef2](https://github.com/kubedb/percona-xtradb/commit/8aba5ef2) Use CRD v1 for Kubernetes >= 1.16 (#30) +- [e81d2b4c](https://github.com/kubedb/percona-xtradb/commit/e81d2b4c) Update to Kubernetes v1.18.3 (#29) +- [2a32730a](https://github.com/kubedb/percona-xtradb/commit/2a32730a) Fix e2e tests (#28) +- [a79626d9](https://github.com/kubedb/percona-xtradb/commit/a79626d9) Update stash install commands +- [52fc2059](https://github.com/kubedb/percona-xtradb/commit/52fc2059) Use recommended kubernetes app labels (#27) +- [93dc10ec](https://github.com/kubedb/percona-xtradb/commit/93dc10ec) Update crazy-max/ghaction-docker-buildx flag +- [ce5717e2](https://github.com/kubedb/percona-xtradb/commit/ce5717e2) Revendor kubedb.dev/apimachinery@master (#26) +- [c1ca649d](https://github.com/kubedb/percona-xtradb/commit/c1ca649d) Pass annotations from CRD to AppBinding (#25) +- [f327cc01](https://github.com/kubedb/percona-xtradb/commit/f327cc01) Trigger the workflow on push or pull request +- [02432393](https://github.com/kubedb/percona-xtradb/commit/02432393) Update CHANGELOG.md +- 
[a89dbc55](https://github.com/kubedb/percona-xtradb/commit/a89dbc55) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#24) +- [e69742de](https://github.com/kubedb/percona-xtradb/commit/e69742de) Update for percona-xtradb standalone restoresession (#23) +- [958877a1](https://github.com/kubedb/percona-xtradb/commit/958877a1) Various fixes (#21) +- [fb0d7a35](https://github.com/kubedb/percona-xtradb/commit/fb0d7a35) Update kubernetes client-go to 1.16.3 (#20) +- [293fe9a4](https://github.com/kubedb/percona-xtradb/commit/293fe9a4) Fix default make command +- [39358e3b](https://github.com/kubedb/percona-xtradb/commit/39358e3b) Use charts to install operator (#19) +- [6c5b3395](https://github.com/kubedb/percona-xtradb/commit/6c5b3395) Several fixes and update tests (#18) +- [84ff139f](https://github.com/kubedb/percona-xtradb/commit/84ff139f) Various Makefile improvements (#16) +- [e2737f65](https://github.com/kubedb/percona-xtradb/commit/e2737f65) Remove EnableStatusSubresource (#17) +- [fb886b07](https://github.com/kubedb/percona-xtradb/commit/fb886b07) Run e2e tests using GitHub actions (#12) +- [35b155d9](https://github.com/kubedb/percona-xtradb/commit/35b155d9) Validate DBVersionSpecs and fixed broken build (#15) +- [67794bd9](https://github.com/kubedb/percona-xtradb/commit/67794bd9) Update go.yml +- [f7666354](https://github.com/kubedb/percona-xtradb/commit/f7666354) Various changes for Percona XtraDB (#13) +- [ceb7ba67](https://github.com/kubedb/percona-xtradb/commit/ceb7ba67) Enable GitHub actions +- [f5a112af](https://github.com/kubedb/percona-xtradb/commit/f5a112af) Refactor for ProxySQL Integration (#11) +- [26602049](https://github.com/kubedb/percona-xtradb/commit/26602049) Revendor +- [71957d40](https://github.com/kubedb/percona-xtradb/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10) +- [b526ccd8](https://github.com/kubedb/percona-xtradb/commit/b526ccd8) Set database version in AppBinding (#7) +- [336e7203](https://github.com/kubedb/percona-xtradb/commit/336e7203) Percona XtraDB Cluster support (#9) +- [71a42f7a](https://github.com/kubedb/percona-xtradb/commit/71a42f7a) Don't set annotation to AppBinding (#8) +- [282298cb](https://github.com/kubedb/percona-xtradb/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4) +- [2ab9dddf](https://github.com/kubedb/percona-xtradb/commit/2ab9dddf) Add license header to Makefiles (#6) +- [df135c08](https://github.com/kubedb/percona-xtradb/commit/df135c08) Add install, uninstall and purge command in Makefile (#3) +- [73d3a845](https://github.com/kubedb/percona-xtradb/commit/73d3a845) Update .gitignore +- [59a4e754](https://github.com/kubedb/percona-xtradb/commit/59a4e754) Add Makefile (#2) +- [f3551ddc](https://github.com/kubedb/percona-xtradb/commit/f3551ddc) Rename package path (#1) +- [56a241d6](https://github.com/kubedb/percona-xtradb/commit/56a241d6) Use explicit IP whitelist instead of automatic IP whitelist (#151) +- [9f0b5ca3](https://github.com/kubedb/percona-xtradb/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147) +- [73ad7c30](https://github.com/kubedb/percona-xtradb/commit/73ad7c30) Update changelog +- [ccc36b5c](https://github.com/kubedb/percona-xtradb/commit/ccc36b5c) Update README.md +- [9769e8e1](https://github.com/kubedb/percona-xtradb/commit/9769e8e1) Start next dev cycle +- [a3fa468a](https://github.com/kubedb/percona-xtradb/commit/a3fa468a) Prepare release 0.5.0 +- [6d8862de](https://github.com/kubedb/percona-xtradb/commit/6d8862de) Mysql Group Replication tests (#146) +- 
[49544e55](https://github.com/kubedb/percona-xtradb/commit/49544e55) Mysql Group Replication (#144) +- [a85d4b44](https://github.com/kubedb/percona-xtradb/commit/a85d4b44) Revendor dependencies +- [9c538460](https://github.com/kubedb/percona-xtradb/commit/9c538460) Changed Role to exclude psp without name (#143) +- [6cace93b](https://github.com/kubedb/percona-xtradb/commit/6cace93b) Modify mutator validator names (#142) +- [da0c19b9](https://github.com/kubedb/percona-xtradb/commit/da0c19b9) Update changelog +- [b79c80d6](https://github.com/kubedb/percona-xtradb/commit/b79c80d6) Start next dev cycle +- [838d9459](https://github.com/kubedb/percona-xtradb/commit/838d9459) Prepare release 0.4.0 +- [bf0f2c14](https://github.com/kubedb/percona-xtradb/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141) +- [3d227570](https://github.com/kubedb/percona-xtradb/commit/3d227570) Added PSP support for mySQL (#137) +- [7b766657](https://github.com/kubedb/percona-xtradb/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) +- [29e23470](https://github.com/kubedb/percona-xtradb/commit/29e23470) Support for init container (#139) +- [3e1556f6](https://github.com/kubedb/percona-xtradb/commit/3e1556f6) Add role label to stats service (#138) +- [ee078af9](https://github.com/kubedb/percona-xtradb/commit/ee078af9) Update changelog +- [978f1139](https://github.com/kubedb/percona-xtradb/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136) +- [821f23d1](https://github.com/kubedb/percona-xtradb/commit/821f23d1) Start next dev cycle +- [678b26aa](https://github.com/kubedb/percona-xtradb/commit/678b26aa) Prepare release 0.3.0 +- [40ad7a23](https://github.com/kubedb/percona-xtradb/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134) +- [98f03387](https://github.com/kubedb/percona-xtradb/commit/98f03387) Revendor dependencies (#135) +- [dfe92615](https://github.com/kubedb/percona-xtradb/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) +- [71f8a350](https://github.com/kubedb/percona-xtradb/commit/71f8a350) Added ephemeral StorageType support (#132) +- [0a6b6e46](https://github.com/kubedb/percona-xtradb/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131) +- [99e57a9e](https://github.com/kubedb/percona-xtradb/commit/99e57a9e) Use PVC spec from snapshot if provided (#130) +- [61497be6](https://github.com/kubedb/percona-xtradb/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) +- [7eafe088](https://github.com/kubedb/percona-xtradb/commit/7eafe088) Add certificate health checker (#128) +- [973ec416](https://github.com/kubedb/percona-xtradb/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127) +- [339975ff](https://github.com/kubedb/percona-xtradb/commit/339975ff) Fix AppBinding (#126) +- [62050a72](https://github.com/kubedb/percona-xtradb/commit/62050a72) Update changelog +- [2d454043](https://github.com/kubedb/percona-xtradb/commit/2d454043) Prepare release 0.2.0 +- [6941ea59](https://github.com/kubedb/percona-xtradb/commit/6941ea59) Reuse event recorder (#125) +- [b77e66c4](https://github.com/kubedb/percona-xtradb/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123) +- [c9228086](https://github.com/kubedb/percona-xtradb/commit/c9228086) Revendor dependencies (#124) +- [97837120](https://github.com/kubedb/percona-xtradb/commit/97837120) Test for faulty snapshot (#122) +- 
[c3e995b6](https://github.com/kubedb/percona-xtradb/commit/c3e995b6) Start next dev cycle +- [8a4f3b13](https://github.com/kubedb/percona-xtradb/commit/8a4f3b13) Prepare release 0.2.0-rc.2 +- [79942191](https://github.com/kubedb/percona-xtradb/commit/79942191) Upgrade database secret keys (#121) +- [1747fdf5](https://github.com/kubedb/percona-xtradb/commit/1747fdf5) Ignore mutation of fields to default values during update (#120) +- [d902d588](https://github.com/kubedb/percona-xtradb/commit/d902d588) Support configuration options for exporter sidecar (#119) +- [dd7c3f44](https://github.com/kubedb/percona-xtradb/commit/dd7c3f44) Use flags.DumpAll (#118) +- [bc1ef05b](https://github.com/kubedb/percona-xtradb/commit/bc1ef05b) Start next dev cycle +- [9d33c1a0](https://github.com/kubedb/percona-xtradb/commit/9d33c1a0) Prepare release 0.2.0-rc.1 +- [b076e141](https://github.com/kubedb/percona-xtradb/commit/b076e141) Apply cleanup (#117) +- [7dc5641f](https://github.com/kubedb/percona-xtradb/commit/7dc5641f) Set periodic analytics (#116) +- [90ea6acc](https://github.com/kubedb/percona-xtradb/commit/90ea6acc) Introduce AppBinding support (#115) +- [a882d76a](https://github.com/kubedb/percona-xtradb/commit/a882d76a) Fix Analytics (#114) +- [0961009c](https://github.com/kubedb/percona-xtradb/commit/0961009c) Error out from cron job for deprecated dbversion (#113) +- [da1f4e27](https://github.com/kubedb/percona-xtradb/commit/da1f4e27) Add CRDs without observation when operator starts (#112) +- [0a754d2f](https://github.com/kubedb/percona-xtradb/commit/0a754d2f) Update changelog +- [b09bc6e1](https://github.com/kubedb/percona-xtradb/commit/b09bc6e1) Start next dev cycle +- [0d467ccb](https://github.com/kubedb/percona-xtradb/commit/0d467ccb) Prepare release 0.2.0-rc.0 +- [c757007a](https://github.com/kubedb/percona-xtradb/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' +- [ddfe4be1](https://github.com/kubedb/percona-xtradb/commit/ddfe4be1) Support custom user password for backup (#111) +- [8c84ba20](https://github.com/kubedb/percona-xtradb/commit/8c84ba20) Support providing resources for monitoring container (#110) +- [7bcfbc48](https://github.com/kubedb/percona-xtradb/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109) +- [145bba2b](https://github.com/kubedb/percona-xtradb/commit/145bba2b) Add validation webhook xray (#108) +- [6da1887f](https://github.com/kubedb/percona-xtradb/commit/6da1887f) Various Fixes (#107) +- [111519e9](https://github.com/kubedb/percona-xtradb/commit/111519e9) Merge ports from service template (#105) +- [38147ef1](https://github.com/kubedb/percona-xtradb/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) +- [e28ebc47](https://github.com/kubedb/percona-xtradb/commit/e28ebc47) Pass resources to NamespaceValidator (#103) +- [aed12bf5](https://github.com/kubedb/percona-xtradb/commit/aed12bf5) Various fixes (#102) +- [3d372ef6](https://github.com/kubedb/percona-xtradb/commit/3d372ef6) Support Lifecycle hook and container probes (#101) +- [b6ef6887](https://github.com/kubedb/percona-xtradb/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100) +- [d89e7783](https://github.com/kubedb/percona-xtradb/commit/d89e7783) Update package alias (#99) +- [f0b44b3a](https://github.com/kubedb/percona-xtradb/commit/f0b44b3a) Start next dev cycle +- [a79ff03b](https://github.com/kubedb/percona-xtradb/commit/a79ff03b) Prepare release 0.2.0-beta.1 +-
[0d8d3cca](https://github.com/kubedb/percona-xtradb/commit/0d8d3cca) Revendor api (#98) +- [2f850243](https://github.com/kubedb/percona-xtradb/commit/2f850243) Fix tests (#97) +- [4ced0bfe](https://github.com/kubedb/percona-xtradb/commit/4ced0bfe) Revendor api for catalog apigroup (#96) +- [e7695400](https://github.com/kubedb/percona-xtradb/commit/e7695400) Update changelog +- [8e358aea](https://github.com/kubedb/percona-xtradb/commit/8e358aea) Use --pull flag with docker build (#20) (#95) +- [d2a97d90](https://github.com/kubedb/percona-xtradb/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' +- [d1fe8a8a](https://github.com/kubedb/percona-xtradb/commit/d1fe8a8a) Start next dev cycle +- [04eb9bb5](https://github.com/kubedb/percona-xtradb/commit/04eb9bb5) Prepare release 0.2.0-beta.0 +- [9dfea960](https://github.com/kubedb/percona-xtradb/commit/9dfea960) Pass extra args to tools.sh (#93) +- [47dd3cad](https://github.com/kubedb/percona-xtradb/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92) +- [9c4d485b](https://github.com/kubedb/percona-xtradb/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91) +- [be72f784](https://github.com/kubedb/percona-xtradb/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) +- [05c8f14d](https://github.com/kubedb/percona-xtradb/commit/05c8f14d) Search used secrets within same namespace of DB object (#89) +- [0d94c946](https://github.com/kubedb/percona-xtradb/commit/0d94c946) Support Termination Policy (#88) +- [8775ddf7](https://github.com/kubedb/percona-xtradb/commit/8775ddf7) Update builddeps.sh +- [796c93da](https://github.com/kubedb/percona-xtradb/commit/796c93da) Revendor k8s.io/apiserver (#87) +- [5a1e3f57](https://github.com/kubedb/percona-xtradb/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86) +- [809a3c49](https://github.com/kubedb/percona-xtradb/commit/809a3c49) Support UpdateStrategy (#84) +- [372c52ef](https://github.com/kubedb/percona-xtradb/commit/372c52ef) Add TerminationPolicy for databases (#83) +- [c01b55e8](https://github.com/kubedb/percona-xtradb/commit/c01b55e8) Revendor api (#82) +- [5e196b95](https://github.com/kubedb/percona-xtradb/commit/5e196b95) Use IntHash as status.observedGeneration (#81) +- [2da3bb1b](https://github.com/kubedb/percona-xtradb/commit/2da3bb1b) fix github status (#80) +- [121d0a98](https://github.com/kubedb/percona-xtradb/commit/121d0a98) Update pipeline (#79) +- [532e3137](https://github.com/kubedb/percona-xtradb/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/percona-xtradb/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/percona-xtradb/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/percona-xtradb/commit/0b997855) Use official exporter image (#74) +- [702d5736](https://github.com/kubedb/percona-xtradb/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/percona-xtradb/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/percona-xtradb/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/percona-xtradb/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/percona-xtradb/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/percona-xtradb/commit/6a41d9bc) Fix uninstall for
concourse (#69) +- [f1af09db](https://github.com/kubedb/percona-xtradb/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/percona-xtradb/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/percona-xtradb/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/percona-xtradb/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/percona-xtradb/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/percona-xtradb/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/percona-xtradb/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/percona-xtradb/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/percona-xtradb/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/percona-xtradb/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/percona-xtradb/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/percona-xtradb/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/percona-xtradb/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- [2fd02169](https://github.com/kubedb/percona-xtradb/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/percona-xtradb/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/percona-xtradb/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/percona-xtradb/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/percona-xtradb/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/percona-xtradb/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/percona-xtradb/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/percona-xtradb/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- [696135d5](https://github.com/kubedb/percona-xtradb/commit/696135d5) Fix travis build (#48) +- [c519ef89](https://github.com/kubedb/percona-xtradb/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/percona-xtradb/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/percona-xtradb/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/percona-xtradb/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/percona-xtradb/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/percona-xtradb/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/percona-xtradb/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/percona-xtradb/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/percona-xtradb/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/percona-xtradb/commit/ca811a2e) Fix release script (#43) 
+- [b79541f6](https://github.com/kubedb/percona-xtradb/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/percona-xtradb/commit/a2d13c82) Concourse (#41) +- [95b2186e](https://github.com/kubedb/percona-xtradb/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/percona-xtradb/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/percona-xtradb/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/percona-xtradb/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/percona-xtradb/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/percona-xtradb/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/percona-xtradb/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/percona-xtradb/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/percona-xtradb/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/percona-xtradb/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/percona-xtradb/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- [bbcd48d6](https://github.com/kubedb/percona-xtradb/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/percona-xtradb/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/percona-xtradb/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/percona-xtradb/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/percona-xtradb/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/percona-xtradb/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/percona-xtradb/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/percona-xtradb/commit/b9ec877e) Add travis yaml (#25) +- [ade3571c](https://github.com/kubedb/percona-xtradb/commit/ade3571c) Start next dev cycle +- [b4b749df](https://github.com/kubedb/percona-xtradb/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/percona-xtradb/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/percona-xtradb/commit/5ee1ac8c) Update validation (#22) +- [dd023c50](https://github.com/kubedb/percona-xtradb/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/percona-xtradb/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/percona-xtradb/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/percona-xtradb/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/percona-xtradb/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved
Tests (#17) +- [a484e0e5](https://github.com/kubedb/percona-xtradb/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/percona-xtradb/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/percona-xtradb/commit/078098c8) Review Mysql docker images and Fixed monitoring (#14) +- [6877108a](https://github.com/kubedb/percona-xtradb/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/percona-xtradb/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/percona-xtradb/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/percona-xtradb/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/percona-xtradb/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/percona-xtradb/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/percona-xtradb/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/percona-xtradb/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/percona-xtradb/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/percona-xtradb/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/percona-xtradb/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/percona-xtradb/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/percona-xtradb/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/percona-xtradb/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/percona-xtradb/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/percona-xtradb/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/percona-xtradb/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/percona-xtradb/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/percona-xtradb/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/percona-xtradb/commit/0aa81cdf) Use client-go 5.x +- [3de10d7f](https://github.com/kubedb/percona-xtradb/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/percona-xtradb/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/percona-xtradb/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + ## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + ### [v0.2.0](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.0) + + + + ## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + ### [v0.1.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.0) + - [464cc54a](https://github.com/kubedb/pgbouncer/commit/464cc54a) Prepare for release v0.1.0 (#94) +- [c4083972](https://github.com/kubedb/pgbouncer/commit/c4083972) Prepare for release v0.1.0-rc.2 (#93) +- [b77fa7c8](https://github.com/kubedb/pgbouncer/commit/b77fa7c8) Prepare for release v0.1.0-rc.1 (#92) +- [e82f1017](https://github.com/kubedb/pgbouncer/commit/e82f1017) Prepare for release v0.1.0-beta.6 (#91) +-
[8d2fa953](https://github.com/kubedb/pgbouncer/commit/8d2fa953) Create SRV records for governing service (#90) +- [96144773](https://github.com/kubedb/pgbouncer/commit/96144773) Prepare for release v0.1.0-beta.5 (#89) +- [bb574108](https://github.com/kubedb/pgbouncer/commit/bb574108) Create separate governing service for each database (#88) +- [28f29e3c](https://github.com/kubedb/pgbouncer/commit/28f29e3c) Update KubeDB api (#87) +- [79a3e3f7](https://github.com/kubedb/pgbouncer/commit/79a3e3f7) Update readme +- [f42d28f9](https://github.com/kubedb/pgbouncer/commit/f42d28f9) Update repository config (#86) +- [4c292933](https://github.com/kubedb/pgbouncer/commit/4c292933) Prepare for release v0.1.0-beta.4 (#85) +- [c3daaa90](https://github.com/kubedb/pgbouncer/commit/c3daaa90) Update KubeDB api (#84) +- [19784f7a](https://github.com/kubedb/pgbouncer/commit/19784f7a) Update Kubernetes v1.18.9 dependencies (#83) +- [a7ea74e4](https://github.com/kubedb/pgbouncer/commit/a7ea74e4) Update KubeDB api (#82) +- [49391b30](https://github.com/kubedb/pgbouncer/commit/49391b30) Update KubeDB api (#81) +- [2ad0016d](https://github.com/kubedb/pgbouncer/commit/2ad0016d) Update KubeDB api (#80) +- [e0169139](https://github.com/kubedb/pgbouncer/commit/e0169139) Update Kubernetes v1.18.9 dependencies (#79) +- [ade8edf9](https://github.com/kubedb/pgbouncer/commit/ade8edf9) Update KubeDB api (#78) +- [86387966](https://github.com/kubedb/pgbouncer/commit/86387966) Update KubeDB api (#77) +- [d5fa2ce7](https://github.com/kubedb/pgbouncer/commit/d5fa2ce7) Update KubeDB api (#76) +- [938d61f6](https://github.com/kubedb/pgbouncer/commit/938d61f6) Update KubeDB api (#75) +- [89ceecb1](https://github.com/kubedb/pgbouncer/commit/89ceecb1) Update Kubernetes v1.18.9 dependencies (#74) +- [3b8fc849](https://github.com/kubedb/pgbouncer/commit/3b8fc849) Update KubeDB api (#73) +- [89ed5bf0](https://github.com/kubedb/pgbouncer/commit/89ed5bf0) Update KubeDB api (#72) +- [187eaff5](https://github.com/kubedb/pgbouncer/commit/187eaff5) Update KubeDB api (#71) +- [1222c935](https://github.com/kubedb/pgbouncer/commit/1222c935) Update repository config (#70) +- [f9c72f8c](https://github.com/kubedb/pgbouncer/commit/f9c72f8c) Update repository config (#69) +- [a55e0a9f](https://github.com/kubedb/pgbouncer/commit/a55e0a9f) Update repository config (#68) +- [20f01c3b](https://github.com/kubedb/pgbouncer/commit/20f01c3b) Update KubeDB api (#67) +- [ea907c2f](https://github.com/kubedb/pgbouncer/commit/ea907c2f) Update Kubernetes v1.18.9 dependencies (#66) +- [86f92e64](https://github.com/kubedb/pgbouncer/commit/86f92e64) Publish docker images to ghcr.io (#65) +- [189ab8b8](https://github.com/kubedb/pgbouncer/commit/189ab8b8) Update KubeDB api (#64) +- [d30a59c2](https://github.com/kubedb/pgbouncer/commit/d30a59c2) Update KubeDB api (#63) +- [545ee043](https://github.com/kubedb/pgbouncer/commit/545ee043) Update KubeDB api (#62) +- [cc01e1ca](https://github.com/kubedb/pgbouncer/commit/cc01e1ca) Update KubeDB api (#61) +- [40bc916f](https://github.com/kubedb/pgbouncer/commit/40bc916f) Update repository config (#60) +- [00313b21](https://github.com/kubedb/pgbouncer/commit/00313b21) Update Kubernetes v1.18.9 dependencies (#59) +- [080b77f3](https://github.com/kubedb/pgbouncer/commit/080b77f3) Update KubeDB api (#56) +- [fa479841](https://github.com/kubedb/pgbouncer/commit/fa479841) Update Kubernetes v1.18.9 dependencies (#57) +- [559d7421](https://github.com/kubedb/pgbouncer/commit/559d7421) Update Kubernetes v1.18.9 dependencies (#55) +- 
+- [1bfe4067](https://github.com/kubedb/pgbouncer/commit/1bfe4067) Update repository config (#54)
+- [5ac28f25](https://github.com/kubedb/pgbouncer/commit/5ac28f25) Update repository config (#53)
+- [162034f0](https://github.com/kubedb/pgbouncer/commit/162034f0) Update Kubernetes v1.18.9 dependencies (#52)
+- [71697842](https://github.com/kubedb/pgbouncer/commit/71697842) Update Kubernetes v1.18.3 dependencies (#51)
+- [3a868c6d](https://github.com/kubedb/pgbouncer/commit/3a868c6d) Prepare for release v0.1.0-beta.3 (#50)
+- [72745988](https://github.com/kubedb/pgbouncer/commit/72745988) Add license verifier (#49)
+- [36e16b55](https://github.com/kubedb/pgbouncer/commit/36e16b55) Use AppsCode Trial license (#48)
+- [d3917d72](https://github.com/kubedb/pgbouncer/commit/d3917d72) Update Kubernetes v1.18.3 dependencies (#47)
+- [c5fb3b0e](https://github.com/kubedb/pgbouncer/commit/c5fb3b0e) Update Kubernetes v1.18.3 dependencies (#46)
+- [64f27a21](https://github.com/kubedb/pgbouncer/commit/64f27a21) Update Kubernetes v1.18.3 dependencies (#44)
+- [817891a9](https://github.com/kubedb/pgbouncer/commit/817891a9) Use AppsCode Community License (#43)
+- [11826ae7](https://github.com/kubedb/pgbouncer/commit/11826ae7) Update Kubernetes v1.18.3 dependencies (#42)
+- [e083d550](https://github.com/kubedb/pgbouncer/commit/e083d550) Prepare for release v0.1.0-beta.2 (#41)
+- [fe847905](https://github.com/kubedb/pgbouncer/commit/fe847905) Update release.yml
+- [ddf5a857](https://github.com/kubedb/pgbouncer/commit/ddf5a857) Use updated certificate spec (#35)
+- [d5cd5bfd](https://github.com/kubedb/pgbouncer/commit/d5cd5bfd) Update Kubernetes v1.18.3 dependencies (#39)
+- [21693c76](https://github.com/kubedb/pgbouncer/commit/21693c76) Update Kubernetes v1.18.3 dependencies (#38)
+- [39ad48db](https://github.com/kubedb/pgbouncer/commit/39ad48db) Update Kubernetes v1.18.3 dependencies (#37)
+- [7f1ecc77](https://github.com/kubedb/pgbouncer/commit/7f1ecc77) Update Kubernetes v1.18.3 dependencies (#36)
+- [8d9d379a](https://github.com/kubedb/pgbouncer/commit/8d9d379a) Update Kubernetes v1.18.3 dependencies (#34)
+- [c9b8300c](https://github.com/kubedb/pgbouncer/commit/c9b8300c) Update Kubernetes v1.18.3 dependencies (#33)
+- [66c72a40](https://github.com/kubedb/pgbouncer/commit/66c72a40) Remove dependency on enterprise operator (#32)
+- [757dc104](https://github.com/kubedb/pgbouncer/commit/757dc104) Update to cert-manager v0.16.0 (#30)
+- [0a183d15](https://github.com/kubedb/pgbouncer/commit/0a183d15) Build images in e2e workflow (#29)
+- [ca61e88c](https://github.com/kubedb/pgbouncer/commit/ca61e88c) Allow configuring k8s & db version in e2e tests (#28)
+- [a87278b1](https://github.com/kubedb/pgbouncer/commit/a87278b1) Update to Kubernetes v1.18.3 (#27)
+- [5abe86f3](https://github.com/kubedb/pgbouncer/commit/5abe86f3) Fix formatting
+- [845f7a35](https://github.com/kubedb/pgbouncer/commit/845f7a35) Trigger e2e tests on /ok-to-test command (#26)
+- [2cc23c03](https://github.com/kubedb/pgbouncer/commit/2cc23c03) Fix cert-manager integration for PgBouncer (#25)
+- [2a148c26](https://github.com/kubedb/pgbouncer/commit/2a148c26) Update to Kubernetes v1.18.3 (#24)
+- [f6eb8120](https://github.com/kubedb/pgbouncer/commit/f6eb8120) Update Makefile.env
+- [bbf810c5](https://github.com/kubedb/pgbouncer/commit/bbf810c5) Prepare for release v0.1.0-beta.1 (#23)
+- [5a6e361a](https://github.com/kubedb/pgbouncer/commit/5a6e361a) include Makefile.env (#22)
+- [2d52d66e](https://github.com/kubedb/pgbouncer/commit/2d52d66e) Update License (#21)
+- [33305d5f](https://github.com/kubedb/pgbouncer/commit/33305d5f) Update to Kubernetes v1.18.3 (#20)
+- [b443a550](https://github.com/kubedb/pgbouncer/commit/b443a550) Update ci.yml
+- [d3bedc9b](https://github.com/kubedb/pgbouncer/commit/d3bedc9b) Update update-release-tracker.sh
+- [d9100ecc](https://github.com/kubedb/pgbouncer/commit/d9100ecc) Update update-release-tracker.sh
+- [9b86bdaa](https://github.com/kubedb/pgbouncer/commit/9b86bdaa) Add script to update release tracker on pr merge (#19)
+- [3362cef7](https://github.com/kubedb/pgbouncer/commit/3362cef7) Update .kodiak.toml
+- [11ebebda](https://github.com/kubedb/pgbouncer/commit/11ebebda) Use POSTGRES_TAG v0.14.0-alpha.0
+- [dbe95b54](https://github.com/kubedb/pgbouncer/commit/dbe95b54) Various fixes (#18)
+- [c50c65de](https://github.com/kubedb/pgbouncer/commit/c50c65de) Update to Kubernetes v1.18.3 (#17)
+- [483fa438](https://github.com/kubedb/pgbouncer/commit/483fa438) Update to Kubernetes v1.18.3
+- [c0fa8e49](https://github.com/kubedb/pgbouncer/commit/c0fa8e49) Create .kodiak.toml
+- [5e338016](https://github.com/kubedb/pgbouncer/commit/5e338016) Use CRD v1 for Kubernetes >= 1.16 (#16)
+- [ef7fe475](https://github.com/kubedb/pgbouncer/commit/ef7fe475) Update to Kubernetes v1.18.3 (#15)
+- [063339fc](https://github.com/kubedb/pgbouncer/commit/063339fc) Fix e2e tests (#14)
+- [7cd92ba4](https://github.com/kubedb/pgbouncer/commit/7cd92ba4) Update crazy-max/ghaction-docker-buildx flag
+- [e7a47a50](https://github.com/kubedb/pgbouncer/commit/e7a47a50) Revendor kubedb.dev/apimachinery@master (#13)
+- [9d009160](https://github.com/kubedb/pgbouncer/commit/9d009160) Use updated operator labels in e2e tests (#12)
+- [778924af](https://github.com/kubedb/pgbouncer/commit/778924af) Trigger the workflow on push or pull request
+- [77be6b9e](https://github.com/kubedb/pgbouncer/commit/77be6b9e) Update CHANGELOG.md
+- [a9decb98](https://github.com/kubedb/pgbouncer/commit/a9decb98) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#11)
+- [cd4d2721](https://github.com/kubedb/pgbouncer/commit/cd4d2721) Fix build
+- [b21b1a11](https://github.com/kubedb/pgbouncer/commit/b21b1a11) Revendor and update enterprise sidecar image (#10)
+- [463f7bc0](https://github.com/kubedb/pgbouncer/commit/463f7bc0) Update enterprise operator tag (#9)
+- [6e015884](https://github.com/kubedb/pgbouncer/commit/6e015884) Use kubedb/installer master branch in CI
+- [88b98a49](https://github.com/kubedb/pgbouncer/commit/88b98a49) Update pgbouncer controller (#8)
+- [a6b71bc3](https://github.com/kubedb/pgbouncer/commit/a6b71bc3) Update variable names
+- [1a6794b7](https://github.com/kubedb/pgbouncer/commit/1a6794b7) Fix plain text secret in exporter container of StatefulSet (#5)
+- [ab104a9f](https://github.com/kubedb/pgbouncer/commit/ab104a9f) Update client-go to kubernetes-1.16.3 (#7)
+- [68dbb142](https://github.com/kubedb/pgbouncer/commit/68dbb142) Use charts to install operator (#6)
+- [30e3e729](https://github.com/kubedb/pgbouncer/commit/30e3e729) Add add-license make target
+- [6c1a78a0](https://github.com/kubedb/pgbouncer/commit/6c1a78a0) Enable e2e tests in GitHub actions (#4)
+- [0960f805](https://github.com/kubedb/pgbouncer/commit/0960f805) Initial implementation (#2)
+- [a8a9b1db](https://github.com/kubedb/pgbouncer/commit/a8a9b1db) Update go.yml
+- [bc3b2624](https://github.com/kubedb/pgbouncer/commit/bc3b2624) Enable GitHub actions
+- [2e33db2b](https://github.com/kubedb/pgbouncer/commit/2e33db2b) Clone kubedb/postgres repo (#1)
+- [45a7cace](https://github.com/kubedb/pgbouncer/commit/45a7cace) Merge commit 'f78de886ed657650438f99574c3b002dd3607497' as 'hack/libbuild'
+
+
+
+## [kubedb/postgres](https://github.com/kubedb/postgres)
+
+### [v0.14.0](https://github.com/kubedb/postgres/releases/tag/v0.14.0)
+
+- [50fb579a](https://github.com/kubedb/postgres/commit/50fb579a) Prepare for release v0.14.0 (#407)
+- [2ed7a29c](https://github.com/kubedb/postgres/commit/2ed7a29c) Prepare for release v0.14.0-rc.2 (#406)
+- [c1ea472a](https://github.com/kubedb/postgres/commit/c1ea472a) Prepare for release v0.14.0-rc.1 (#405)
+- [9e1a642e](https://github.com/kubedb/postgres/commit/9e1a642e) Prepare for release v0.14.0-beta.6 (#404)
+- [8b869c02](https://github.com/kubedb/postgres/commit/8b869c02) Create SRV records for governing service (#402)
+- [c6e802a7](https://github.com/kubedb/postgres/commit/c6e802a7) Prepare for release v0.14.0-beta.5 (#401)
+- [4da12584](https://github.com/kubedb/postgres/commit/4da12584) Simplify port assignment (#400)
+- [71420f2b](https://github.com/kubedb/postgres/commit/71420f2b) Create separate governing service for each database (#399)
+- [49792ddb](https://github.com/kubedb/postgres/commit/49792ddb) Update KubeDB api (#398)
+- [721f5e16](https://github.com/kubedb/postgres/commit/721f5e16) Update readme
+- [c036ee15](https://github.com/kubedb/postgres/commit/c036ee15) Update Kubernetes v1.18.9 dependencies (#397)
+- [ed9a22ac](https://github.com/kubedb/postgres/commit/ed9a22ac) Prepare for release v0.14.0-beta.4 (#396)
+- [e6b37365](https://github.com/kubedb/postgres/commit/e6b37365) Update KubeDB api (#395)
+- [825f55c3](https://github.com/kubedb/postgres/commit/825f55c3) Update Kubernetes v1.18.9 dependencies (#394)
+- [c879e7e8](https://github.com/kubedb/postgres/commit/c879e7e8) Update KubeDB api (#393)
+- [c90ad84e](https://github.com/kubedb/postgres/commit/c90ad84e) Update for release Stash@v2020.10.21 (#392)
+- [9db225c0](https://github.com/kubedb/postgres/commit/9db225c0) Fix init validator (#390)
+- [e56e5ae6](https://github.com/kubedb/postgres/commit/e56e5ae6) Update KubeDB api (#391)
+- [5da16a5c](https://github.com/kubedb/postgres/commit/5da16a5c) Update KubeDB api (#389)
+- [221eb7cf](https://github.com/kubedb/postgres/commit/221eb7cf) Update Kubernetes v1.18.9 dependencies (#388)
+- [261aaaf3](https://github.com/kubedb/postgres/commit/261aaaf3) Update KubeDB api (#387)
+- [6d8efe23](https://github.com/kubedb/postgres/commit/6d8efe23) Update KubeDB api (#386)
+- [0df8a375](https://github.com/kubedb/postgres/commit/0df8a375) Update KubeDB api (#385)
+- [b0b4f7e7](https://github.com/kubedb/postgres/commit/b0b4f7e7) Update KubeDB api (#384)
+- [c10ff311](https://github.com/kubedb/postgres/commit/c10ff311) Update Kubernetes v1.18.9 dependencies (#383)
+- [4f237fc0](https://github.com/kubedb/postgres/commit/4f237fc0) Update KubeDB api (#382)
+- [b31defb8](https://github.com/kubedb/postgres/commit/b31defb8) Update KubeDB api (#381)
+- [667a4ec8](https://github.com/kubedb/postgres/commit/667a4ec8) Update KubeDB api (#379)
+- [da86f8d7](https://github.com/kubedb/postgres/commit/da86f8d7) Update repository config (#378)
+- [1da3afb9](https://github.com/kubedb/postgres/commit/1da3afb9) Update repository config (#377)
+- [29b8a231](https://github.com/kubedb/postgres/commit/29b8a231) Update repository config (#376)
+- [22612534](https://github.com/kubedb/postgres/commit/22612534) Initialize statefulset watcher from cmd/server/options.go (#375)
+- [bfd6eae7](https://github.com/kubedb/postgres/commit/bfd6eae7) Update KubeDB api (#374)
+- [10566771](https://github.com/kubedb/postgres/commit/10566771) Update Kubernetes v1.18.9 dependencies (#373)
+- [1eb7c29b](https://github.com/kubedb/postgres/commit/1eb7c29b) Publish docker images to ghcr.io (#372)
+- [49dd7946](https://github.com/kubedb/postgres/commit/49dd7946) Only keep username/password keys in Postgres secret
+- [f1131a2c](https://github.com/kubedb/postgres/commit/f1131a2c) Update KubeDB api (#371)
+- [ccadf274](https://github.com/kubedb/postgres/commit/ccadf274) Update KubeDB api (#370)
+- [bddd6692](https://github.com/kubedb/postgres/commit/bddd6692) Update KubeDB api (#369)
+- [d76bbe3d](https://github.com/kubedb/postgres/commit/d76bbe3d) Don't add secretTransformation in AppBinding section by default (#316)
+- [ae29ba5e](https://github.com/kubedb/postgres/commit/ae29ba5e) Update KubeDB api (#368)
+- [4bb1c171](https://github.com/kubedb/postgres/commit/4bb1c171) Update repository config (#367)
+- [a7b1138f](https://github.com/kubedb/postgres/commit/a7b1138f) Use conditions to handle initialization (#365)
+- [126e20f1](https://github.com/kubedb/postgres/commit/126e20f1) Update Kubernetes v1.18.9 dependencies (#366)
+- [29a99b8d](https://github.com/kubedb/postgres/commit/29a99b8d) Update for release Stash@v2020.09.29 (#364)
+- [b097b330](https://github.com/kubedb/postgres/commit/b097b330) Update Kubernetes v1.18.9 dependencies (#363)
+- [26e2f90c](https://github.com/kubedb/postgres/commit/26e2f90c) Update Kubernetes v1.18.9 dependencies (#361)
+- [67c6d618](https://github.com/kubedb/postgres/commit/67c6d618) Update repository config (#360)
+- [6fc5fbce](https://github.com/kubedb/postgres/commit/6fc5fbce) Update repository config (#359)
+- [4e566391](https://github.com/kubedb/postgres/commit/4e566391) Update Kubernetes v1.18.9 dependencies (#358)
+- [7236b6e1](https://github.com/kubedb/postgres/commit/7236b6e1) Use common event recorder (#357)
+- [d1293558](https://github.com/kubedb/postgres/commit/d1293558) Update Kubernetes v1.18.3 dependencies (#356)
+- [0dd8903e](https://github.com/kubedb/postgres/commit/0dd8903e) Prepare for release v0.14.0-beta.3 (#355)
+- [8f59199a](https://github.com/kubedb/postgres/commit/8f59199a) Use new `sepc.init` section (#354)
+- [32305e6d](https://github.com/kubedb/postgres/commit/32305e6d) Update Kubernetes v1.18.3 dependencies (#353)
+- [e65ecdf3](https://github.com/kubedb/postgres/commit/e65ecdf3) Add license verifier (#352)
+- [55b2f61e](https://github.com/kubedb/postgres/commit/55b2f61e) Update for release Stash@v2020.09.16 (#351)
+- [66f45a55](https://github.com/kubedb/postgres/commit/66f45a55) Update Kubernetes v1.18.3 dependencies (#350)
+- [80f3cc3b](https://github.com/kubedb/postgres/commit/80f3cc3b) Use background deletion policy
+- [63119dba](https://github.com/kubedb/postgres/commit/63119dba) Update Kubernetes v1.18.3 dependencies (#348)
+- [ac48cf6a](https://github.com/kubedb/postgres/commit/ac48cf6a) Use AppsCode Community License (#347)
+- [03449359](https://github.com/kubedb/postgres/commit/03449359) Update Kubernetes v1.18.3 dependencies (#346)
+- [6e6fe6fe](https://github.com/kubedb/postgres/commit/6e6fe6fe) Prepare for release v0.14.0-beta.2 (#345)
+- [5ee33bb8](https://github.com/kubedb/postgres/commit/5ee33bb8) Update release.yml
+- [9208f754](https://github.com/kubedb/postgres/commit/9208f754) Always use OnDelete update strategy
+- [74367d01](https://github.com/kubedb/postgres/commit/74367d01) Update Kubernetes v1.18.3 dependencies (#344)
+- [01843533](https://github.com/kubedb/postgres/commit/01843533) Update Kubernetes v1.18.3 dependencies (#343)
+- [34a3a460](https://github.com/kubedb/postgres/commit/34a3a460) Update Kubernetes v1.18.3 dependencies (#338)
+- [455bf56a](https://github.com/kubedb/postgres/commit/455bf56a) Update Kubernetes v1.18.3 dependencies (#337)
+- [960d1efa](https://github.com/kubedb/postgres/commit/960d1efa) Update Kubernetes v1.18.3 dependencies (#336)
+- [9b428745](https://github.com/kubedb/postgres/commit/9b428745) Update Kubernetes v1.18.3 dependencies (#335)
+- [cc95c5f5](https://github.com/kubedb/postgres/commit/cc95c5f5) Update Kubernetes v1.18.3 dependencies (#334)
+- [c0694d83](https://github.com/kubedb/postgres/commit/c0694d83) Update Kubernetes v1.18.3 dependencies (#333)
+- [8d0977d3](https://github.com/kubedb/postgres/commit/8d0977d3) Remove dependency on enterprise operator (#332)
+- [daa5b77c](https://github.com/kubedb/postgres/commit/daa5b77c) Build images in e2e workflow (#331)
+- [197f1b2b](https://github.com/kubedb/postgres/commit/197f1b2b) Update to Kubernetes v1.18.3 (#329)
+- [e732d319](https://github.com/kubedb/postgres/commit/e732d319) Allow configuring k8s & db version in e2e tests (#330)
+- [f37180ec](https://github.com/kubedb/postgres/commit/f37180ec) Trigger e2e tests on /ok-to-test command (#328)
+- [becb3e2c](https://github.com/kubedb/postgres/commit/becb3e2c) Update to Kubernetes v1.18.3 (#327)
+- [91bf7440](https://github.com/kubedb/postgres/commit/91bf7440) Update to Kubernetes v1.18.3 (#326)
+- [3848a43e](https://github.com/kubedb/postgres/commit/3848a43e) Prepare for release v0.14.0-beta.1 (#325)
+- [d4ea0ba7](https://github.com/kubedb/postgres/commit/d4ea0ba7) Update for release Stash@v2020.07.09-beta.0 (#323)
+- [6974afda](https://github.com/kubedb/postgres/commit/6974afda) Allow customizing kube namespace for Stash
+- [d7d79ea1](https://github.com/kubedb/postgres/commit/d7d79ea1) Allow customizing chart registry (#322)
+- [ba0423ac](https://github.com/kubedb/postgres/commit/ba0423ac) Update for release Stash@v2020.07.08-beta.0 (#321)
+- [7e855763](https://github.com/kubedb/postgres/commit/7e855763) Update License
+- [7bea404a](https://github.com/kubedb/postgres/commit/7bea404a) Update to Kubernetes v1.18.3 (#320)
+- [eab0e83f](https://github.com/kubedb/postgres/commit/eab0e83f) Update ci.yml
+- [4949f76e](https://github.com/kubedb/postgres/commit/4949f76e) Load stash version from .env file for make (#319)
+- [79e9d8d9](https://github.com/kubedb/postgres/commit/79e9d8d9) Update update-release-tracker.sh
+- [ca966b7b](https://github.com/kubedb/postgres/commit/ca966b7b) Update update-release-tracker.sh
+- [31bbecfe](https://github.com/kubedb/postgres/commit/31bbecfe) Add script to update release tracker on pr merge (#318)
+- [540d977f](https://github.com/kubedb/postgres/commit/540d977f) Update .kodiak.toml
+- [3e7514a7](https://github.com/kubedb/postgres/commit/3e7514a7) Various fixes (#317)
+- [1a5df17c](https://github.com/kubedb/postgres/commit/1a5df17c) Update to Kubernetes v1.18.3 (#315)
+- [717cfb3f](https://github.com/kubedb/postgres/commit/717cfb3f) Update to Kubernetes v1.18.3
+- [95537169](https://github.com/kubedb/postgres/commit/95537169) Create .kodiak.toml
+- [02579005](https://github.com/kubedb/postgres/commit/02579005) Use CRD v1 for Kubernetes >= 1.16 (#314)
+- [6ce6deb1](https://github.com/kubedb/postgres/commit/6ce6deb1) Update to Kubernetes v1.18.3 (#313)
+- [97f25ba0](https://github.com/kubedb/postgres/commit/97f25ba0) Fix e2e tests (#312)
+- [a989c377](https://github.com/kubedb/postgres/commit/a989c377) Update stash install commands
+- [6af12596](https://github.com/kubedb/postgres/commit/6af12596) Revendor kubedb.dev/apimachinery@master (#311)
+- [9969b064](https://github.com/kubedb/postgres/commit/9969b064) Update crazy-max/ghaction-docker-buildx flag
+- [e3360119](https://github.com/kubedb/postgres/commit/e3360119) Use updated operator labels in e2e tests (#309)
+- [c183007c](https://github.com/kubedb/postgres/commit/c183007c) Pass annotations from CRD to AppBinding (#310)
+- [55581f79](https://github.com/kubedb/postgres/commit/55581f79) Trigger the workflow on push or pull request
+- [931b88cf](https://github.com/kubedb/postgres/commit/931b88cf) Update CHANGELOG.md
+- [6f481749](https://github.com/kubedb/postgres/commit/6f481749) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#308)
+- [15f0611d](https://github.com/kubedb/postgres/commit/15f0611d) Fix error msg to reject halt when termination policy is 'DoNotTerminate'
+- [18aba058](https://github.com/kubedb/postgres/commit/18aba058) Change Pause to Halt (#307)
+- [7e9b1c69](https://github.com/kubedb/postgres/commit/7e9b1c69) feat: allow changes to nodeSelector (#298)
+- [a602faa1](https://github.com/kubedb/postgres/commit/a602faa1) Introduce spec.halted and removed dormant and snapshot crd (#305)
+- [cdd384d7](https://github.com/kubedb/postgres/commit/cdd384d7) Moved leader election to kubedb/pg-leader-election (#304)
+- [32c41db6](https://github.com/kubedb/postgres/commit/32c41db6) Use stash@v0.9.0-rc.4 release (#306)
+- [fa55b472](https://github.com/kubedb/postgres/commit/fa55b472) Make e2e tests stable in github actions (#303)
+- [afdc5fda](https://github.com/kubedb/postgres/commit/afdc5fda) Update client-go to kubernetes-1.16.3 (#301)
+- [d28eb55a](https://github.com/kubedb/postgres/commit/d28eb55a) Take out postgres docker images and Matrix test (#297)
+- [13fee32d](https://github.com/kubedb/postgres/commit/13fee32d) Fix default make command
+- [55dfb368](https://github.com/kubedb/postgres/commit/55dfb368) Update catalog values for make install command
+- [25f5b79c](https://github.com/kubedb/postgres/commit/25f5b79c) Use charts to install operator (#302)
+- [c5a4ed77](https://github.com/kubedb/postgres/commit/c5a4ed77) Add add-license make target
+- [aa1d98d0](https://github.com/kubedb/postgres/commit/aa1d98d0) Add license header to files (#296)
+- [fd356006](https://github.com/kubedb/postgres/commit/fd356006) Fix E2E testing for github actions (#295)
+- [6a3443a7](https://github.com/kubedb/postgres/commit/6a3443a7) Minio and S3 compatible storage fixes (#292)
+- [5150cf34](https://github.com/kubedb/postgres/commit/5150cf34) Run e2e tests using GitHub actions (#293)
+- [a4a3785b](https://github.com/kubedb/postgres/commit/a4a3785b) Validate DBVersionSpecs and fixed broken build (#294)
+- [b171a244](https://github.com/kubedb/postgres/commit/b171a244) Update go.yml
+- [1a61bf29](https://github.com/kubedb/postgres/commit/1a61bf29) Enable GitHub actions
+- [6b869b15](https://github.com/kubedb/postgres/commit/6b869b15) Update changelog
+- [87e67898](https://github.com/kubedb/postgres/commit/87e67898) Remove linux/arm support
+- [1ca28812](https://github.com/kubedb/postgres/commit/1ca28812) Revendor
+- [05b7ef5a](https://github.com/kubedb/postgres/commit/05b7ef5a) Implement proper shutdown procedure for postgres (#284)
+- [b82a36b8](https://github.com/kubedb/postgres/commit/b82a36b8) Use Go 1.12.9
+- [b4a50eaa](https://github.com/kubedb/postgres/commit/b4a50eaa) Delete builddeps.sh
+- [b3b6b855](https://github.com/kubedb/postgres/commit/b3b6b855) Add e2e test commands to Makefile (#291)
+- [af93201a](https://github.com/kubedb/postgres/commit/af93201a) Update dependencies (#290)
+- [be2abd85](https://github.com/kubedb/postgres/commit/be2abd85) Don't set annotation to AppBinding (#289)
+- [de42e37a](https://github.com/kubedb/postgres/commit/de42e37a) Set database version in AppBinding (#288)
+- [8ed3f84d](https://github.com/kubedb/postgres/commit/8ed3f84d) Change package path to kubedb.dev/postgres (#287)
+- [b69cfbdc](https://github.com/kubedb/postgres/commit/b69cfbdc) Add shared memory /dev/shm volume (#269)
+- [30937fd1](https://github.com/kubedb/postgres/commit/30937fd1) Fix UpsertDatabaseAnnotation() function (#283)
+- [140ea546](https://github.com/kubedb/postgres/commit/140ea546) Add license header to Makefiles (#285)
+- [a4511f97](https://github.com/kubedb/postgres/commit/a4511f97) Update Makefile
+- [6f458d8e](https://github.com/kubedb/postgres/commit/6f458d8e) Add install, uninstall and purge command in Makefile (#281)
+- [13e06b3d](https://github.com/kubedb/postgres/commit/13e06b3d) Integrate stash/restic with postgres (#273)
+- [119bdd0a](https://github.com/kubedb/postgres/commit/119bdd0a) Provide role and rolebinding for existing sa managed by kubedb (#280)
+- [b5ff93c8](https://github.com/kubedb/postgres/commit/b5ff93c8) Pod Disruption Budget for Postgres (#278)
+- [8355cbb6](https://github.com/kubedb/postgres/commit/8355cbb6) Handling resource ownership (#276)
+- [4542d6e4](https://github.com/kubedb/postgres/commit/4542d6e4) Added ARM64 support to the install script and manifest
+- [22305620](https://github.com/kubedb/postgres/commit/22305620) Add Makefile (#279)
+- [7e69d665](https://github.com/kubedb/postgres/commit/7e69d665) Update to k8s 1.14.0 client libraries using go.mod (#277)
+- [a314b60d](https://github.com/kubedb/postgres/commit/a314b60d) Update README.md
+- [d040d227](https://github.com/kubedb/postgres/commit/d040d227) Start next dev cycle
+
+
+
+## [kubedb/proxysql](https://github.com/kubedb/proxysql)
+
+### [v0.1.0](https://github.com/kubedb/proxysql/releases/tag/v0.1.0)
+
+- [ee27f53b](https://github.com/kubedb/proxysql/commit/ee27f53b) Prepare for release v0.1.0 (#103)
+- [8a5443d9](https://github.com/kubedb/proxysql/commit/8a5443d9) Prepare for release v0.1.0-rc.2 (#102)
+- [e3f4999c](https://github.com/kubedb/proxysql/commit/e3f4999c) Prepare for release v0.1.0-rc.1 (#101)
+- [d01512de](https://github.com/kubedb/proxysql/commit/d01512de) Prepare for release v0.1.0-beta.6 (#100)
+- [6a0d52ff](https://github.com/kubedb/proxysql/commit/6a0d52ff) Create SRV records for governing service (#99)
+- [4269db9c](https://github.com/kubedb/proxysql/commit/4269db9c) Prepare for release v0.1.0-beta.5 (#98)
+- [e48bd006](https://github.com/kubedb/proxysql/commit/e48bd006) Create separate governing service for each database (#97)
+- [23f1c6de](https://github.com/kubedb/proxysql/commit/23f1c6de) Update KubeDB api (#96)
+- [13abe9ff](https://github.com/kubedb/proxysql/commit/13abe9ff) Update readme
+- [78ef0d29](https://github.com/kubedb/proxysql/commit/78ef0d29) Update repository config (#95)
+- [d344e43f](https://github.com/kubedb/proxysql/commit/d344e43f) Prepare for release v0.1.0-beta.4 (#94)
+- [15deb4df](https://github.com/kubedb/proxysql/commit/15deb4df) Update KubeDB api (#93)
+- [dc59184c](https://github.com/kubedb/proxysql/commit/dc59184c) Update Kubernetes v1.18.9 dependencies (#92)
+- [b2b11084](https://github.com/kubedb/proxysql/commit/b2b11084) Update KubeDB api (#91)
+- [535820ff](https://github.com/kubedb/proxysql/commit/535820ff) Update for release Stash@v2020.10.21 (#90)
+- [c00f0b6a](https://github.com/kubedb/proxysql/commit/c00f0b6a) Update KubeDB api (#89)
+- [af8ab91c](https://github.com/kubedb/proxysql/commit/af8ab91c) Update KubeDB api (#88)
+- [154fff60](https://github.com/kubedb/proxysql/commit/154fff60) Update Kubernetes v1.18.9 dependencies (#87)
+- [608ca467](https://github.com/kubedb/proxysql/commit/608ca467) Update KubeDB api (#86)
+- [c0b1286b](https://github.com/kubedb/proxysql/commit/c0b1286b) Update KubeDB api (#85)
+- [d2f326c7](https://github.com/kubedb/proxysql/commit/d2f326c7) Update KubeDB api (#84)
+- [01ea3c3c](https://github.com/kubedb/proxysql/commit/01ea3c3c) Update KubeDB api (#83)
+- [4ae700ed](https://github.com/kubedb/proxysql/commit/4ae700ed) Update Kubernetes v1.18.9 dependencies (#82)
+- [d0ad0b70](https://github.com/kubedb/proxysql/commit/d0ad0b70) Update KubeDB api (#81)
+- [8f1e0d51](https://github.com/kubedb/proxysql/commit/8f1e0d51) Update KubeDB api (#80)
+- [7b02bebb](https://github.com/kubedb/proxysql/commit/7b02bebb) Update KubeDB api (#79)
+- [4f95e854](https://github.com/kubedb/proxysql/commit/4f95e854) Update repository config (#78)
+- [c229a939](https://github.com/kubedb/proxysql/commit/c229a939) Update repository config (#77)
+- [89dbb47f](https://github.com/kubedb/proxysql/commit/89dbb47f) Update repository config (#76)
+- [d28494ab](https://github.com/kubedb/proxysql/commit/d28494ab) Update KubeDB api (#75)
+- [b25cb7db](https://github.com/kubedb/proxysql/commit/b25cb7db) Update Kubernetes v1.18.9 dependencies (#74)
+- [d4b026a4](https://github.com/kubedb/proxysql/commit/d4b026a4) Publish docker images to ghcr.io (#73)
+- [e263f9c3](https://github.com/kubedb/proxysql/commit/e263f9c3) Update KubeDB api (#72)
+- [07ea3acb](https://github.com/kubedb/proxysql/commit/07ea3acb) Update KubeDB api (#71)
+- [946e292b](https://github.com/kubedb/proxysql/commit/946e292b) Update KubeDB api (#70)
+- [66eb2156](https://github.com/kubedb/proxysql/commit/66eb2156) Update KubeDB api (#69)
+- [d3fe09ae](https://github.com/kubedb/proxysql/commit/d3fe09ae) Update repository config (#68)
+- [10c7cde0](https://github.com/kubedb/proxysql/commit/10c7cde0) Update Kubernetes v1.18.9 dependencies (#67)
+- [ed5d24a9](https://github.com/kubedb/proxysql/commit/ed5d24a9) Update KubeDB api (#65)
+- [a4f6dd4c](https://github.com/kubedb/proxysql/commit/a4f6dd4c) Update KubeDB api (#62)
+- [2956b1bd](https://github.com/kubedb/proxysql/commit/2956b1bd) Update for release Stash@v2020.09.29 (#64)
+- [9cbd0244](https://github.com/kubedb/proxysql/commit/9cbd0244) Update Kubernetes v1.18.9 dependencies (#63)
+- [4cd9bb02](https://github.com/kubedb/proxysql/commit/4cd9bb02) Update Kubernetes v1.18.9 dependencies (#61)
+- [a9a9caf0](https://github.com/kubedb/proxysql/commit/a9a9caf0) Update repository config (#60)
+- [af3a2a68](https://github.com/kubedb/proxysql/commit/af3a2a68) Update repository config (#59)
+- [25f47ff4](https://github.com/kubedb/proxysql/commit/25f47ff4) Update Kubernetes v1.18.9 dependencies (#58)
+- [05e57476](https://github.com/kubedb/proxysql/commit/05e57476) Update Kubernetes v1.18.3 dependencies (#57)
+- [8b0af94b](https://github.com/kubedb/proxysql/commit/8b0af94b) Prepare for release v0.1.0-beta.3 (#56)
+- [f2a98806](https://github.com/kubedb/proxysql/commit/f2a98806) Update Makefile
+- [f59b73a1](https://github.com/kubedb/proxysql/commit/f59b73a1) Use AppsCode Trial license (#55)
+- [2ae32d3c](https://github.com/kubedb/proxysql/commit/2ae32d3c) Update Kubernetes v1.18.3 dependencies (#54)
+- [724b9829](https://github.com/kubedb/proxysql/commit/724b9829) Add license verifier (#53)
+- [8a2aafb5](https://github.com/kubedb/proxysql/commit/8a2aafb5) Update for release Stash@v2020.09.16 (#52)
+- [4759525b](https://github.com/kubedb/proxysql/commit/4759525b) Update Kubernetes v1.18.3 dependencies (#51)
+- [f55b1402](https://github.com/kubedb/proxysql/commit/f55b1402) Update Kubernetes v1.18.3 dependencies (#49)
+- [f7036236](https://github.com/kubedb/proxysql/commit/f7036236) Use AppsCode Community License (#48)
+- [d922196f](https://github.com/kubedb/proxysql/commit/d922196f) Update Kubernetes v1.18.3 dependencies (#47)
+- [f86bb6cd](https://github.com/kubedb/proxysql/commit/f86bb6cd) Prepare for release v0.1.0-beta.2 (#46)
+- [e74f3803](https://github.com/kubedb/proxysql/commit/e74f3803) Update release.yml
+- [7f5349cc](https://github.com/kubedb/proxysql/commit/7f5349cc) Use updated apis (#45)
+- [27faefef](https://github.com/kubedb/proxysql/commit/27faefef) Update for release Stash@v2020.08.27 (#43)
+- [65bc5bca](https://github.com/kubedb/proxysql/commit/65bc5bca) Update for release Stash@v2020.08.27-rc.0 (#42)
+- [833ac78b](https://github.com/kubedb/proxysql/commit/833ac78b) Update for release Stash@v2020.08.26-rc.1 (#41)
+- [fe13ce42](https://github.com/kubedb/proxysql/commit/fe13ce42) Update for release Stash@v2020.08.26-rc.0 (#40)
+- [b1a72843](https://github.com/kubedb/proxysql/commit/b1a72843) Update Kubernetes v1.18.3 dependencies (#39)
+- [a9c40618](https://github.com/kubedb/proxysql/commit/a9c40618) Update Kubernetes v1.18.3 dependencies (#38)
+- [664c974a](https://github.com/kubedb/proxysql/commit/664c974a) Update Kubernetes v1.18.3 dependencies (#37)
+- [69ed46d5](https://github.com/kubedb/proxysql/commit/69ed46d5) Update Kubernetes v1.18.3 dependencies (#36)
+- [a93d80d4](https://github.com/kubedb/proxysql/commit/a93d80d4) Update Kubernetes v1.18.3 dependencies (#35)
+- [84fc9e37](https://github.com/kubedb/proxysql/commit/84fc9e37) Update Kubernetes v1.18.3 dependencies (#34)
+- [b09f89d0](https://github.com/kubedb/proxysql/commit/b09f89d0) Remove dependency on enterprise operator (#33)
+- [78ad5a88](https://github.com/kubedb/proxysql/commit/78ad5a88) Build images in e2e workflow (#32)
+- [6644058e](https://github.com/kubedb/proxysql/commit/6644058e) Update to Kubernetes v1.18.3 (#30)
+- [2c03dadd](https://github.com/kubedb/proxysql/commit/2c03dadd) Allow configuring k8s & db version in e2e tests (#31)
+- [2c6e04bc](https://github.com/kubedb/proxysql/commit/2c6e04bc) Trigger e2e tests on /ok-to-test command (#29)
+- [c7830af8](https://github.com/kubedb/proxysql/commit/c7830af8) Update to Kubernetes v1.18.3 (#28)
+- [f2da8746](https://github.com/kubedb/proxysql/commit/f2da8746) Update to Kubernetes v1.18.3 (#27)
+- [2ed7d0e8](https://github.com/kubedb/proxysql/commit/2ed7d0e8) Prepare for release v0.1.0-beta.1 (#26)
+- [3b5ee481](https://github.com/kubedb/proxysql/commit/3b5ee481) Update for release Stash@v2020.07.09-beta.0 (#25)
+- [92b04b33](https://github.com/kubedb/proxysql/commit/92b04b33) include Makefile.env (#24)
+- [eace7e26](https://github.com/kubedb/proxysql/commit/eace7e26) Update for release Stash@v2020.07.08-beta.0 (#23)
+- [0c647c01](https://github.com/kubedb/proxysql/commit/0c647c01) Update License (#22)
+- [3c1b41be](https://github.com/kubedb/proxysql/commit/3c1b41be) Update to Kubernetes v1.18.3 (#21)
+- [dfa95bb8](https://github.com/kubedb/proxysql/commit/dfa95bb8) Update ci.yml
+- [87390932](https://github.com/kubedb/proxysql/commit/87390932) Update update-release-tracker.sh
+- [772a0c6a](https://github.com/kubedb/proxysql/commit/772a0c6a) Update update-release-tracker.sh
+- [a3b2ae92](https://github.com/kubedb/proxysql/commit/a3b2ae92) Add script to update release tracker on pr merge (#20)
+- [7578cae3](https://github.com/kubedb/proxysql/commit/7578cae3) Update .kodiak.toml
+- [4ba876bc](https://github.com/kubedb/proxysql/commit/4ba876bc) Update operator tags
+- [399aa60b](https://github.com/kubedb/proxysql/commit/399aa60b) Various fixes (#19)
+- [7235b0c5](https://github.com/kubedb/proxysql/commit/7235b0c5) Update to Kubernetes v1.18.3 (#18)
+- [427c1f21](https://github.com/kubedb/proxysql/commit/427c1f21) Update to Kubernetes v1.18.3
+- [1ac8da55](https://github.com/kubedb/proxysql/commit/1ac8da55) Create .kodiak.toml
+- [3243d446](https://github.com/kubedb/proxysql/commit/3243d446) Use CRD v1 for Kubernetes >= 1.16 (#17)
+- [4f5bea8d](https://github.com/kubedb/proxysql/commit/4f5bea8d) Update to Kubernetes v1.18.3 (#16)
+- [a0d2611a](https://github.com/kubedb/proxysql/commit/a0d2611a) Fix e2e tests (#15)
+- [987fbf60](https://github.com/kubedb/proxysql/commit/987fbf60) Update crazy-max/ghaction-docker-buildx flag
+- [c2fad78e](https://github.com/kubedb/proxysql/commit/c2fad78e) Use updated operator labels in e2e tests (#14)
+- [c5a01db8](https://github.com/kubedb/proxysql/commit/c5a01db8) Revendor kubedb.dev/apimachinery@master (#13)
+- [756c8f8f](https://github.com/kubedb/proxysql/commit/756c8f8f) Trigger the workflow on push or pull request
+- [fdf84e27](https://github.com/kubedb/proxysql/commit/fdf84e27) Update CHANGELOG.md
+- [9075b453](https://github.com/kubedb/proxysql/commit/9075b453) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#12)
+- [f4d1c024](https://github.com/kubedb/proxysql/commit/f4d1c024) Matrix Tests on Github Actions (#11)
+- [4e021072](https://github.com/kubedb/proxysql/commit/4e021072) Update mount path for custom config (#8)
+- [b0922173](https://github.com/kubedb/proxysql/commit/b0922173) Enable ProxySQL monitoring (#6)
+- [70be4e67](https://github.com/kubedb/proxysql/commit/70be4e67) ProxySQL test for MySQL (#4)
+- [0a444b9e](https://github.com/kubedb/proxysql/commit/0a444b9e) Use charts to install operator (#7)
+- [a51fbb51](https://github.com/kubedb/proxysql/commit/a51fbb51) ProxySQL operator for MySQL databases (#2)
+- [883fa437](https://github.com/kubedb/proxysql/commit/883fa437) Update go.yml
+- [2c0cf51c](https://github.com/kubedb/proxysql/commit/2c0cf51c) Enable GitHub actions
+- [52e15cd2](https://github.com/kubedb/proxysql/commit/52e15cd2) percona-xtradb -> proxysql (#1)
+- [dc71bffe](https://github.com/kubedb/proxysql/commit/dc71bffe) Revendor
+- [71957d40](https://github.com/kubedb/proxysql/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10)
+- [b526ccd8](https://github.com/kubedb/proxysql/commit/b526ccd8) Set database version in AppBinding (#7)
+- [336e7203](https://github.com/kubedb/proxysql/commit/336e7203) Percona XtraDB Cluster support (#9)
+- [71a42f7a](https://github.com/kubedb/proxysql/commit/71a42f7a) Don't set annotation to AppBinding (#8)
+- [282298cb](https://github.com/kubedb/proxysql/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4)
+- [2ab9dddf](https://github.com/kubedb/proxysql/commit/2ab9dddf) Add license header to Makefiles (#6)
+- [df135c08](https://github.com/kubedb/proxysql/commit/df135c08) Add install, uninstall and purge command in Makefile (#3)
+- [73d3a845](https://github.com/kubedb/proxysql/commit/73d3a845) Update .gitignore
+- [59a4e754](https://github.com/kubedb/proxysql/commit/59a4e754) Add Makefile (#2)
+- [f3551ddc](https://github.com/kubedb/proxysql/commit/f3551ddc) Rename package path (#1)
+- [56a241d6](https://github.com/kubedb/proxysql/commit/56a241d6) Use explicit IP whitelist instead of automatic IP whitelist (#151)
+- [9f0b5ca3](https://github.com/kubedb/proxysql/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147)
+- [73ad7c30](https://github.com/kubedb/proxysql/commit/73ad7c30) Update changelog
+- [ccc36b5c](https://github.com/kubedb/proxysql/commit/ccc36b5c) Update README.md
+- [9769e8e1](https://github.com/kubedb/proxysql/commit/9769e8e1) Start next dev cycle
+- [a3fa468a](https://github.com/kubedb/proxysql/commit/a3fa468a) Prepare release 0.5.0
+- [6d8862de](https://github.com/kubedb/proxysql/commit/6d8862de) Mysql Group Replication tests (#146)
+- [49544e55](https://github.com/kubedb/proxysql/commit/49544e55) Mysql Group Replication (#144)
+- [a85d4b44](https://github.com/kubedb/proxysql/commit/a85d4b44) Revendor dependencies
+- [9c538460](https://github.com/kubedb/proxysql/commit/9c538460) Changed Role to exclude psp without name (#143)
+- [6cace93b](https://github.com/kubedb/proxysql/commit/6cace93b) Modify mutator validator names (#142)
+- [da0c19b9](https://github.com/kubedb/proxysql/commit/da0c19b9) Update changelog
+- [b79c80d6](https://github.com/kubedb/proxysql/commit/b79c80d6) Start next dev cycle
+- [838d9459](https://github.com/kubedb/proxysql/commit/838d9459) Prepare release 0.4.0
+- [bf0f2c14](https://github.com/kubedb/proxysql/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141)
+- [3d227570](https://github.com/kubedb/proxysql/commit/3d227570) Added PSP support for mySQL (#137)
+- [7b766657](https://github.com/kubedb/proxysql/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140)
+- [29e23470](https://github.com/kubedb/proxysql/commit/29e23470) Support for init container (#139)
+- [3e1556f6](https://github.com/kubedb/proxysql/commit/3e1556f6) Add role label to stats service (#138)
+- [ee078af9](https://github.com/kubedb/proxysql/commit/ee078af9) Update changelog
+- [978f1139](https://github.com/kubedb/proxysql/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136)
+- [821f23d1](https://github.com/kubedb/proxysql/commit/821f23d1) Start next dev cycle
+- [678b26aa](https://github.com/kubedb/proxysql/commit/678b26aa) Prepare release 0.3.0
+- [40ad7a23](https://github.com/kubedb/proxysql/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134)
+- [98f03387](https://github.com/kubedb/proxysql/commit/98f03387) Revendor dependencies (#135)
+- [dfe92615](https://github.com/kubedb/proxysql/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133)
+- [71f8a350](https://github.com/kubedb/proxysql/commit/71f8a350) Added ephemeral StorageType support (#132)
+- [0a6b6e46](https://github.com/kubedb/proxysql/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131)
+- [99e57a9e](https://github.com/kubedb/proxysql/commit/99e57a9e) Use PVC spec from snapshot if provided (#130)
+- [61497be6](https://github.com/kubedb/proxysql/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129)
+- [7eafe088](https://github.com/kubedb/proxysql/commit/7eafe088) Add certificate health checker (#128)
+- [973ec416](https://github.com/kubedb/proxysql/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127)
+- [339975ff](https://github.com/kubedb/proxysql/commit/339975ff) Fix AppBinding (#126)
+- [62050a72](https://github.com/kubedb/proxysql/commit/62050a72) Update changelog
+- [2d454043](https://github.com/kubedb/proxysql/commit/2d454043) Prepare release 0.2.0
+- [6941ea59](https://github.com/kubedb/proxysql/commit/6941ea59) Reuse event recorder (#125)
+- [b77e66c4](https://github.com/kubedb/proxysql/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123)
+- [c9228086](https://github.com/kubedb/proxysql/commit/c9228086) Revendor dependencies (#124)
+- [97837120](https://github.com/kubedb/proxysql/commit/97837120) Test for faulty snapshot (#122)
+- [c3e995b6](https://github.com/kubedb/proxysql/commit/c3e995b6) Start next dev cycle
+- [8a4f3b13](https://github.com/kubedb/proxysql/commit/8a4f3b13) Prepare release 0.2.0-rc.2
+- [79942191](https://github.com/kubedb/proxysql/commit/79942191) Upgrade database secret keys (#121)
+- [1747fdf5](https://github.com/kubedb/proxysql/commit/1747fdf5) Ignore mutation of fields to default values during update (#120)
+- [d902d588](https://github.com/kubedb/proxysql/commit/d902d588) Support configuration options for exporter sidecar (#119)
+- [dd7c3f44](https://github.com/kubedb/proxysql/commit/dd7c3f44) Use flags.DumpAll (#118)
+- [bc1ef05b](https://github.com/kubedb/proxysql/commit/bc1ef05b) Start next dev cycle
+- [9d33c1a0](https://github.com/kubedb/proxysql/commit/9d33c1a0) Prepare release 0.2.0-rc.1
+- [b076e141](https://github.com/kubedb/proxysql/commit/b076e141) Apply cleanup (#117)
+- [7dc5641f](https://github.com/kubedb/proxysql/commit/7dc5641f) Set periodic analytics (#116)
+- [90ea6acc](https://github.com/kubedb/proxysql/commit/90ea6acc) Introduce AppBinding support (#115)
+- [a882d76a](https://github.com/kubedb/proxysql/commit/a882d76a) Fix Analytics (#114)
+- [0961009c](https://github.com/kubedb/proxysql/commit/0961009c) Error out from cron job for deprecated dbversion (#113)
+- [da1f4e27](https://github.com/kubedb/proxysql/commit/da1f4e27) Add CRDs without observation when operator starts (#112)
+- [0a754d2f](https://github.com/kubedb/proxysql/commit/0a754d2f) Update changelog
+- [b09bc6e1](https://github.com/kubedb/proxysql/commit/b09bc6e1) Start next dev cycle
+- [0d467ccb](https://github.com/kubedb/proxysql/commit/0d467ccb) Prepare release 0.2.0-rc.0
+- [c757007a](https://github.com/kubedb/proxysql/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09'
+- [ddfe4be1](https://github.com/kubedb/proxysql/commit/ddfe4be1) Support custom user passowrd for backup (#111)
+- [8c84ba20](https://github.com/kubedb/proxysql/commit/8c84ba20) Support providing resources for monitoring container (#110)
+- [7bcfbc48](https://github.com/kubedb/proxysql/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109)
+- [145bba2b](https://github.com/kubedb/proxysql/commit/145bba2b) Add validation webhook xray (#108)
+- [6da1887f](https://github.com/kubedb/proxysql/commit/6da1887f) Various Fixes (#107)
+- [111519e9](https://github.com/kubedb/proxysql/commit/111519e9) Merge ports from service template (#105)
+- [38147ef1](https://github.com/kubedb/proxysql/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104)
+- [e28ebc47](https://github.com/kubedb/proxysql/commit/e28ebc47) Pass resources to NamespaceValidator (#103)
+- [aed12bf5](https://github.com/kubedb/proxysql/commit/aed12bf5) Various fixes (#102)
+- [3d372ef6](https://github.com/kubedb/proxysql/commit/3d372ef6) Support Livecycle hook and container probes (#101)
+- [b6ef6887](https://github.com/kubedb/proxysql/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100)
+- [d89e7783](https://github.com/kubedb/proxysql/commit/d89e7783) Update package alias (#99)
+- [f0b44b3a](https://github.com/kubedb/proxysql/commit/f0b44b3a) Start next dev cycle
+- [a79ff03b](https://github.com/kubedb/proxysql/commit/a79ff03b) Prepare release 0.2.0-beta.1
+- [0d8d3cca](https://github.com/kubedb/proxysql/commit/0d8d3cca) Revendor api (#98)
+- [2f850243](https://github.com/kubedb/proxysql/commit/2f850243) Fix tests (#97)
+- [4ced0bfe](https://github.com/kubedb/proxysql/commit/4ced0bfe) Revendor api for catalog apigroup (#96)
+- [e7695400](https://github.com/kubedb/proxysql/commit/e7695400) Update chanelog
+- [8e358aea](https://github.com/kubedb/proxysql/commit/8e358aea) Use --pull flag with docker build (#20) (#95)
+- [d2a97d90](https://github.com/kubedb/proxysql/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4'
+- [d1fe8a8a](https://github.com/kubedb/proxysql/commit/d1fe8a8a) Start next dev cycle
+- [04eb9bb5](https://github.com/kubedb/proxysql/commit/04eb9bb5) Prepare release 0.2.0-beta.0
+- [9dfea960](https://github.com/kubedb/proxysql/commit/9dfea960) Pass extra args to tools.sh (#93)
+- [47dd3cad](https://github.com/kubedb/proxysql/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92)
+- [9c4d485b](https://github.com/kubedb/proxysql/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91)
+- [be72f784](https://github.com/kubedb/proxysql/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90)
+- [05c8f14d](https://github.com/kubedb/proxysql/commit/05c8f14d) Search used secrets within same namespace of DB object (#89)
+- [0d94c946](https://github.com/kubedb/proxysql/commit/0d94c946) Support Termination Policy (#88)
+- [8775ddf7](https://github.com/kubedb/proxysql/commit/8775ddf7) Update builddeps.sh
+- [796c93da](https://github.com/kubedb/proxysql/commit/796c93da) Revendor k8s.io/apiserver (#87)
+- [5a1e3f57](https://github.com/kubedb/proxysql/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86)
+- [809a3c49](https://github.com/kubedb/proxysql/commit/809a3c49) Support UpdateStrategy (#84)
+- [372c52ef](https://github.com/kubedb/proxysql/commit/372c52ef) Add TerminationPolicy for databases (#83)
+- [c01b55e8](https://github.com/kubedb/proxysql/commit/c01b55e8) Revendor api (#82)
+- [5e196b95](https://github.com/kubedb/proxysql/commit/5e196b95) Use IntHash as status.observedGeneration (#81)
+- [2da3bb1b](https://github.com/kubedb/proxysql/commit/2da3bb1b) fix github status (#80)
+- [121d0a98](https://github.com/kubedb/proxysql/commit/121d0a98) Update pipeline (#79)
+- [532e3137](https://github.com/kubedb/proxysql/commit/532e3137) Fix E2E test for minikube (#78)
+- [0f107815](https://github.com/kubedb/proxysql/commit/0f107815) Update pipeline (#77)
+- [851679e2](https://github.com/kubedb/proxysql/commit/851679e2) Migrate MySQL (#75)
+- [0b997855](https://github.com/kubedb/proxysql/commit/0b997855) Use official exporter image (#74)
+- [702d5736](https://github.com/kubedb/proxysql/commit/702d5736) Fix uninstall for concourse (#70)
+- [9ee88bd2](https://github.com/kubedb/proxysql/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73)
+- [559cdb6a](https://github.com/kubedb/proxysql/commit/559cdb6a) Keep track of ObservedGenerationHash (#72)
+- [61c8b898](https://github.com/kubedb/proxysql/commit/61c8b898) Use NewObservableHandler (#71)
+- [421274dc](https://github.com/kubedb/proxysql/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b'
+- [6a41d9bc](https://github.com/kubedb/proxysql/commit/6a41d9bc) Fix uninstall for concourse (#69)
+- [f1af09db](https://github.com/kubedb/proxysql/commit/f1af09db) Update README.md
+- [bf3f1823](https://github.com/kubedb/proxysql/commit/bf3f1823) Revise immutable spec fields (#68)
+- [26adec3b](https://github.com/kubedb/proxysql/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417'
+- [31a97820](https://github.com/kubedb/proxysql/commit/31a97820) Support passing args via PodTemplate (#67)
+- [60f4ee23](https://github.com/kubedb/proxysql/commit/60f4ee23) Introduce storageType : ephemeral (#66)
+- [bfd3fcd6](https://github.com/kubedb/proxysql/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63)
+- [fba47b19](https://github.com/kubedb/proxysql/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78'
+- [6be96ce0](https://github.com/kubedb/proxysql/commit/6be96ce0) Revendor api (#65)
+- [0f629ab3](https://github.com/kubedb/proxysql/commit/0f629ab3) Keep track of observedGeneration in status (#64)
+- [c9a9596f](https://github.com/kubedb/proxysql/commit/c9a9596f) Separate StatsService for monitoring (#62)
+- [62854641](https://github.com/kubedb/proxysql/commit/62854641) Use MySQLVersion for MySQL images (#61)
+- [3c170c56](https://github.com/kubedb/proxysql/commit/3c170c56) Use updated crd spec (#60)
+- [873c285e](https://github.com/kubedb/proxysql/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59)
+- [2fd02169](https://github.com/kubedb/proxysql/commit/2fd02169) Revendor api (#58)
+- [a127d6cd](https://github.com/kubedb/proxysql/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57)
+- [2f79a038](https://github.com/kubedb/proxysql/commit/2f79a038) Support custom configuration (#52)
+- [49c67f00](https://github.com/kubedb/proxysql/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64'
+- [fb28bc6c](https://github.com/kubedb/proxysql/commit/fb28bc6c) Refactor concourse scripts (#56)
+- [4de4ced1](https://github.com/kubedb/proxysql/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55)
+- [3082123e](https://github.com/kubedb/proxysql/commit/3082123e) Set generated binary name to my-operator (#54)
+- [5698f314](https://github.com/kubedb/proxysql/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53)
+- [696135d5](https://github.com/kubedb/proxysql/commit/696135d5) Fix travis build (#48)
+- [c519ef89](https://github.com/kubedb/proxysql/commit/c519ef89) Format shell script (#51)
+- [c93e2f40](https://github.com/kubedb/proxysql/commit/c93e2f40) Enable status subresource for crds (#50)
+- [edd951ca](https://github.com/kubedb/proxysql/commit/edd951ca) Update client-go to v8.0.0 (#49)
+- [520597a6](https://github.com/kubedb/proxysql/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea'
+- [f1549e95](https://github.com/kubedb/proxysql/commit/f1549e95) Support ENV variables in CRDs (#46)
+- [67f37780](https://github.com/kubedb/proxysql/commit/67f37780) Updated osm version to 0.7.1 (#47)
+- [10e309c0](https://github.com/kubedb/proxysql/commit/10e309c0) Prepare release 0.1.0
+- [62a8fbbd](https://github.com/kubedb/proxysql/commit/62a8fbbd) Fixed missing error return (#45)
+- [8c05bb83](https://github.com/kubedb/proxysql/commit/8c05bb83) Revendor dependencies (#44)
+- [ca811a2e](https://github.com/kubedb/proxysql/commit/ca811a2e) Fix release script (#43)
+- [b79541f6](https://github.com/kubedb/proxysql/commit/b79541f6) Add changelog (#42)
+- [a2d13c82](https://github.com/kubedb/proxysql/commit/a2d13c82) Concourse (#41)
+- [95b2186e](https://github.com/kubedb/proxysql/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40)
+- [37762093](https://github.com/kubedb/proxysql/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38)
+- [b6fe72ca](https://github.com/kubedb/proxysql/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39)
+- [18ebb959](https://github.com/kubedb/proxysql/commit/18ebb959) Skip delete requests for empty resources (#37)
+- [eeb7add0](https://github.com/kubedb/proxysql/commit/eeb7add0) Don't panic if admission options is nil (#36)
+- [ccb59db0](https://github.com/kubedb/proxysql/commit/ccb59db0) Disable admission controllers for webhook server (#35)
+- [b1c6c149](https://github.com/kubedb/proxysql/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34)
+- [b1890f7c](https://github.com/kubedb/proxysql/commit/b1890f7c) Update client-go to 7.0.0 (#33)
+- [08c81726](https://github.com/kubedb/proxysql/commit/08c81726) Added update script for mysql-tools:8 (#32)
+- [4bbe6c9f](https://github.com/kubedb/proxysql/commit/4bbe6c9f) Added support of mysql:5.7 (#31)
+- [e657f512](https://github.com/kubedb/proxysql/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dromantDB and Job (#30)
+- [bbcd48d6](https://github.com/kubedb/proxysql/commit/bbcd48d6) Use metrics from kube apiserver (#29)
+- [1687e197](https://github.com/kubedb/proxysql/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28)
+- [cd0efc00](https://github.com/kubedb/proxysql/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27)
+- [46065e18](https://github.com/kubedb/proxysql/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26)
+- [1b73529f](https://github.com/kubedb/proxysql/commit/1b73529f) Update README.md
+- [62eaa397](https://github.com/kubedb/proxysql/commit/62eaa397) Update README.md
+- [c53704c7](https://github.com/kubedb/proxysql/commit/c53704c7) Remove Docker pull count
+- [b9ec877e](https://github.com/kubedb/proxysql/commit/b9ec877e) Add travis yaml (#25)
+- [ade3571c](https://github.com/kubedb/proxysql/commit/ade3571c) Start next dev cycle
+- [b4b749df](https://github.com/kubedb/proxysql/commit/b4b749df) Prepare release 0.1.0-beta.2
+- [4d46d95d](https://github.com/kubedb/proxysql/commit/4d46d95d) Migrating to apps/v1 (#23)
+- [5ee1ac8c](https://github.com/kubedb/proxysql/commit/5ee1ac8c) Update validation (#22)
+- [dd023c50](https://github.com/kubedb/proxysql/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21)
+- [37a1e4fd](https://github.com/kubedb/proxysql/commit/37a1e4fd) Use official code generator scripts (#20)
+- [485d3d7c](https://github.com/kubedb/proxysql/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19)
+- [6db2ae8d](https://github.com/kubedb/proxysql/commit/6db2ae8d) Prepare release 0.1.0-beta.1
+- [ebbfec2f](https://github.com/kubedb/proxysql/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17)
+- [a484e0e5](https://github.com/kubedb/proxysql/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16)
+- [7aa2d1d2](https://github.com/kubedb/proxysql/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15)
+- [078098c8](https://github.com/kubedb/proxysql/commit/078098c8) Review Mysql docker images and Fixed monitring (#14)
+- [6877108a](https://github.com/kubedb/proxysql/commit/6877108a) Update README.md
+- [1f84a5da](https://github.com/kubedb/proxysql/commit/1f84a5da) Start next dev cycle
+- [2f1e4b7d](https://github.com/kubedb/proxysql/commit/2f1e4b7d) Prepare release 0.1.0-beta.0
+- [dce1e88e](https://github.com/kubedb/proxysql/commit/dce1e88e) Add release script
+- [60ed55cb](https://github.com/kubedb/proxysql/commit/60ed55cb) Rename ms-operator to my-operator (#13)
+- [5451d166](https://github.com/kubedb/proxysql/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12)
+- [788ae178](https://github.com/kubedb/proxysql/commit/788ae178) update docker image validation (#11)
+- [c966efd5](https://github.com/kubedb/proxysql/commit/c966efd5) Add docker-registry and WorkQueue (#10)
+- [be340103](https://github.com/kubedb/proxysql/commit/be340103) Set client id for analytics (#9)
+- [ca11f683](https://github.com/kubedb/proxysql/commit/ca11f683) Fix CRD Registration (#8)
+- [2f95c13d](https://github.com/kubedb/proxysql/commit/2f95c13d) Update issue repo link
+- [6fffa713](https://github.com/kubedb/proxysql/commit/6fffa713) Update pkg paths to kubedb org (#7)
+- [2d4d5c44](https://github.com/kubedb/proxysql/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6)
+- [a7595613](https://github.com/kubedb/proxysql/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4)
+- [17a782c6](https://github.com/kubedb/proxysql/commit/17a782c6) Update Dockerfile
+- [e92bfec9](https://github.com/kubedb/proxysql/commit/e92bfec9) Add mysql-util docker image (#5)
+- [2a4b25ac](https://github.com/kubedb/proxysql/commit/2a4b25ac) Mysql db - Inititalizing (#2)
+- [cbfbc878](https://github.com/kubedb/proxysql/commit/cbfbc878) Update README.md
+- [01cab651](https://github.com/kubedb/proxysql/commit/01cab651) Update README.md
+- [0aa81cdf](https://github.com/kubedb/proxysql/commit/0aa81cdf) Use client-go 5.x
+- [3de10d7f](https://github.com/kubedb/proxysql/commit/3de10d7f) Update ./hack folder (#3)
+- [46f05b1f](https://github.com/kubedb/proxysql/commit/46f05b1f) Add skeleton for mysql (#1)
+- [73147dba](https://github.com/kubedb/proxysql/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild'
+
+
+
+## [kubedb/redis](https://github.com/kubedb/redis)
+
+### [v0.7.0](https://github.com/kubedb/redis/releases/tag/v0.7.0)
+
+- [978d89ef](https://github.com/kubedb/redis/commit/978d89ef) Prepare for release v0.7.0 (#248)
+- [ac0d5b08](https://github.com/kubedb/redis/commit/ac0d5b08) Prepare for release v0.7.0-rc.2 (#247)
+- [b9e54a66](https://github.com/kubedb/redis/commit/b9e54a66) Prepare for release v0.7.0-rc.1 (#246)
+- [50f709bf](https://github.com/kubedb/redis/commit/50f709bf) Prepare for release v0.7.0-beta.6 (#245)
+- [d4aaaf38](https://github.com/kubedb/redis/commit/d4aaaf38) Create SRV records for governing service (#244)
+- [57743070](https://github.com/kubedb/redis/commit/57743070) Prepare for release v0.7.0-beta.5 (#243)
+- [5e8f1a25](https://github.com/kubedb/redis/commit/5e8f1a25) Create separate governing service for each database (#242)
+- [ebeda2c7](https://github.com/kubedb/redis/commit/ebeda2c7) Update KubeDB api (#241)
+- [b0a39a3c](https://github.com/kubedb/redis/commit/b0a39a3c) Update readme
+- [d31b919a](https://github.com/kubedb/redis/commit/d31b919a) Prepare for release v0.7.0-beta.4 (#240)
+- [bfecc0c5](https://github.com/kubedb/redis/commit/bfecc0c5) Update KubeDB api (#239)
+- [307efbef](https://github.com/kubedb/redis/commit/307efbef) Update Kubernetes v1.18.9 dependencies (#238)
+- [34b09d4c](https://github.com/kubedb/redis/commit/34b09d4c) Update KubeDB api (#237)
+- [4aefb939](https://github.com/kubedb/redis/commit/4aefb939) Fix init validator (#236)
+- [4ea47108](https://github.com/kubedb/redis/commit/4ea47108) Update KubeDB api (#235)
+- [8c4c8a54](https://github.com/kubedb/redis/commit/8c4c8a54) Update KubeDB api (#234)
+- [cbee9597](https://github.com/kubedb/redis/commit/cbee9597) Update Kubernetes v1.18.9 dependencies (#233)
+- [9fb1b23c](https://github.com/kubedb/redis/commit/9fb1b23c) Update KubeDB api (#232)
+- [c5fb9a6d](https://github.com/kubedb/redis/commit/c5fb9a6d) Update KubeDB api (#230)
+- [2e2f2d7b](https://github.com/kubedb/redis/commit/2e2f2d7b) Update KubeDB api (#229)
+- [3c8e6c6d](https://github.com/kubedb/redis/commit/3c8e6c6d) Update KubeDB api (#228)
+- [8467464d](https://github.com/kubedb/redis/commit/8467464d) Update Kubernetes v1.18.9 dependencies (#227)
+- [5febd393](https://github.com/kubedb/redis/commit/5febd393) Update KubeDB api (#226)
+- [d8024e4d](https://github.com/kubedb/redis/commit/d8024e4d) Update KubeDB api (#225)
+- [12d112de](https://github.com/kubedb/redis/commit/12d112de) Update KubeDB api (#223)
+- [8a9f5398](https://github.com/kubedb/redis/commit/8a9f5398) Update repository config (#222)
+- [b3b48a91](https://github.com/kubedb/redis/commit/b3b48a91) Update repository config (#221)
+- [2fa45230](https://github.com/kubedb/redis/commit/2fa45230) Update repository config (#220)
+- [552f1f80](https://github.com/kubedb/redis/commit/552f1f80) Initialize statefulset watcher from cmd/server/options.go (#219)
+- [446b4b55](https://github.com/kubedb/redis/commit/446b4b55) Update KubeDB api (#218)
+- [f6203009](https://github.com/kubedb/redis/commit/f6203009) Update Kubernetes v1.18.9 dependencies (#217)
+- [b7172fb8](https://github.com/kubedb/redis/commit/b7172fb8) Publish docker images to ghcr.io (#216)
+- [9897bab9](https://github.com/kubedb/redis/commit/9897bab9) Update KubeDB api (#215)
+- [00f07b4f](https://github.com/kubedb/redis/commit/00f07b4f) Update KubeDB api (#214)
+- [f2133f26](https://github.com/kubedb/redis/commit/f2133f26) Update KubeDB api (#213)
+- [b1f3b76a](https://github.com/kubedb/redis/commit/b1f3b76a) Update KubeDB api (#212)
+- [a3144e30](https://github.com/kubedb/redis/commit/a3144e30) Update repository config (#211)
+- [8472ff88](https://github.com/kubedb/redis/commit/8472ff88) Add support to initialize Redis using Stash (#188)
+- [20ba04a7](https://github.com/kubedb/redis/commit/20ba04a7) Update Kubernetes v1.18.9 dependencies (#210)
+- [457611a1](https://github.com/kubedb/redis/commit/457611a1) Update Kubernetes v1.18.9 dependencies (#209)
+- [2bd8b281](https://github.com/kubedb/redis/commit/2bd8b281) Update Kubernetes v1.18.9 dependencies (#207)
+- [8779c7ea](https://github.com/kubedb/redis/commit/8779c7ea) Update repository config (#206)
+- [db9280b7](https://github.com/kubedb/redis/commit/db9280b7) Update repository config (#205)
Update repository config (#205) +- [ada18bca](https://github.com/kubedb/redis/commit/ada18bca) Update Kubernetes v1.18.9 dependencies (#204) +- [17a55147](https://github.com/kubedb/redis/commit/17a55147) Use common event recorder (#203) +- [71a34b6a](https://github.com/kubedb/redis/commit/71a34b6a) Update Kubernetes v1.18.3 dependencies (#202) +- [32dadab6](https://github.com/kubedb/redis/commit/32dadab6) Prepare for release v0.7.0-beta.3 (#201) +- [e41222a1](https://github.com/kubedb/redis/commit/e41222a1) Update Kubernetes v1.18.3 dependencies (#200) +- [41172908](https://github.com/kubedb/redis/commit/41172908) Add license verifier (#199) +- [d46d0dbd](https://github.com/kubedb/redis/commit/d46d0dbd) Update Kubernetes v1.18.3 dependencies (#198) +- [283c2777](https://github.com/kubedb/redis/commit/283c2777) Use background deletion policy +- [5ee6470d](https://github.com/kubedb/redis/commit/5ee6470d) Update Kubernetes v1.18.3 dependencies (#195) +- [e391f0d6](https://github.com/kubedb/redis/commit/e391f0d6) Use AppsCode Community License (#194) +- [12211e40](https://github.com/kubedb/redis/commit/12211e40) Update Kubernetes v1.18.3 dependencies (#193) +- [73cf267e](https://github.com/kubedb/redis/commit/73cf267e) Prepare for release v0.7.0-beta.2 (#192) +- [d2911ea9](https://github.com/kubedb/redis/commit/d2911ea9) Update release.yml +- [c76ee46e](https://github.com/kubedb/redis/commit/c76ee46e) Update dependencies (#191) +- [0b030534](https://github.com/kubedb/redis/commit/0b030534) Fix build +- [408216ab](https://github.com/kubedb/redis/commit/408216ab) Add support for Redis v6.0.6 and TLS (#180) +- [944327df](https://github.com/kubedb/redis/commit/944327df) Update Kubernetes v1.18.3 dependencies (#187) +- [40b7cde6](https://github.com/kubedb/redis/commit/40b7cde6) Update Kubernetes v1.18.3 dependencies (#186) +- [f2bf110d](https://github.com/kubedb/redis/commit/f2bf110d) Update Kubernetes v1.18.3 dependencies (#184) +- [61485cfa](https://github.com/kubedb/redis/commit/61485cfa) Update Kubernetes v1.18.3 dependencies (#183) +- [184ae35d](https://github.com/kubedb/redis/commit/184ae35d) Update Kubernetes v1.18.3 dependencies (#182) +- [bc72b51b](https://github.com/kubedb/redis/commit/bc72b51b) Update Kubernetes v1.18.3 dependencies (#181) +- [ca540560](https://github.com/kubedb/redis/commit/ca540560) Remove dependency on enterprise operator (#179) +- [09bade2e](https://github.com/kubedb/redis/commit/09bade2e) Allow configuring k8s & db version in e2e tests (#178) +- [2bafb114](https://github.com/kubedb/redis/commit/2bafb114) Update to Kubernetes v1.18.3 (#177) +- [b2fe59ef](https://github.com/kubedb/redis/commit/b2fe59ef) Trigger e2e tests on /ok-to-test command (#176) +- [df5131e1](https://github.com/kubedb/redis/commit/df5131e1) Update to Kubernetes v1.18.3 (#175) +- [a404ae08](https://github.com/kubedb/redis/commit/a404ae08) Update to Kubernetes v1.18.3 (#174) +- [768962f4](https://github.com/kubedb/redis/commit/768962f4) Prepare for release v0.7.0-beta.1 (#173) +- [9efbb8e4](https://github.com/kubedb/redis/commit/9efbb8e4) include Makefile.env (#171) +- [b343c559](https://github.com/kubedb/redis/commit/b343c559) Update License (#170) +- [d666ac18](https://github.com/kubedb/redis/commit/d666ac18) Update to Kubernetes v1.18.3 (#169) +- [602354f6](https://github.com/kubedb/redis/commit/602354f6) Update ci.yml +- [59f2d238](https://github.com/kubedb/redis/commit/59f2d238) Update update-release-tracker.sh +- [64c96db5](https://github.com/kubedb/redis/commit/64c96db5) Update 
update-release-tracker.sh +- [49cd15a9](https://github.com/kubedb/redis/commit/49cd15a9) Add script to update release tracker on pr merge (#167) +- [c711be8f](https://github.com/kubedb/redis/commit/c711be8f) chore: replica alert typo (#166) +- [2d752316](https://github.com/kubedb/redis/commit/2d752316) Update .kodiak.toml +- [ea3b206d](https://github.com/kubedb/redis/commit/ea3b206d) Various fixes (#165) +- [e441809c](https://github.com/kubedb/redis/commit/e441809c) Update to Kubernetes v1.18.3 (#164) +- [1e5ecfb7](https://github.com/kubedb/redis/commit/1e5ecfb7) Update to Kubernetes v1.18.3 +- [742679dd](https://github.com/kubedb/redis/commit/742679dd) Create .kodiak.toml +- [2eb77b80](https://github.com/kubedb/redis/commit/2eb77b80) Update apis (#163) +- [7cf9e7d3](https://github.com/kubedb/redis/commit/7cf9e7d3) Use CRD v1 for Kubernetes >= 1.16 (#162) +- [bf072134](https://github.com/kubedb/redis/commit/bf072134) Update kind command +- [cb2a748d](https://github.com/kubedb/redis/commit/cb2a748d) Update dependencies +- [a30cd6eb](https://github.com/kubedb/redis/commit/a30cd6eb) Update to Kubernetes v1.18.3 (#161) +- [9cdac95f](https://github.com/kubedb/redis/commit/9cdac95f) Fix e2e tests (#160) +- [429141b4](https://github.com/kubedb/redis/commit/429141b4) Revendor kubedb.dev/apimachinery@master (#159) +- [664c086b](https://github.com/kubedb/redis/commit/664c086b) Use recommended kubernetes app labels +- [2e6a2f03](https://github.com/kubedb/redis/commit/2e6a2f03) Update crazy-max/ghaction-docker-buildx flag +- [88417e86](https://github.com/kubedb/redis/commit/88417e86) Pass annotations from CRD to AppBinding (#158) +- [84167d7a](https://github.com/kubedb/redis/commit/84167d7a) Trigger the workflow on push or pull request +- [2f43dd9a](https://github.com/kubedb/redis/commit/2f43dd9a) Use helm --wait +- [36399173](https://github.com/kubedb/redis/commit/36399173) Use updated operator labels in e2e tests (#156) +- [c6582491](https://github.com/kubedb/redis/commit/c6582491) Update CHANGELOG.md +- [197b4973](https://github.com/kubedb/redis/commit/197b4973) Support PodAffinity Templating (#155) +- [cdfbb77d](https://github.com/kubedb/redis/commit/cdfbb77d) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#154) +- [c1db4c43](https://github.com/kubedb/redis/commit/c1db4c43) Version update to resolve security issue in github.com/apache/th… (#153) +- [7acc502b](https://github.com/kubedb/redis/commit/7acc502b) Use rancher/local-path-provisioner@v0.0.12 (#152) +- [d00f765e](https://github.com/kubedb/redis/commit/d00f765e) Introduce spec.halted and removed dormant crd (#151) +- [9ed1d97e](https://github.com/kubedb/redis/commit/9ed1d97e) Add `Pause` Feature (#150) +- [39ed60c4](https://github.com/kubedb/redis/commit/39ed60c4) Refactor CI pipeline to build once (#149) +- [1707e0c7](https://github.com/kubedb/redis/commit/1707e0c7) Update kubernetes client-go to 1.16.3 (#148) +- [dcbb4be4](https://github.com/kubedb/redis/commit/dcbb4be4) Update catalog values for make install command +- [9fa3ef1c](https://github.com/kubedb/redis/commit/9fa3ef1c) Update catalog values for make install command (#147) +- [44538409](https://github.com/kubedb/redis/commit/44538409) Use charts to install operator (#146) +- [05e3b95a](https://github.com/kubedb/redis/commit/05e3b95a) Matrix test for github actions (#145) +- [e76f96f6](https://github.com/kubedb/redis/commit/e76f96f6) Add add-license make target +- [6ccd651c](https://github.com/kubedb/redis/commit/6ccd651c) Update Makefile +- 
[2a56f27f](https://github.com/kubedb/redis/commit/2a56f27f) Add license header to files (#144) +- [5ce5e5e0](https://github.com/kubedb/redis/commit/5ce5e5e0) Run e2e tests in parallel (#142) +- [77012ddf](https://github.com/kubedb/redis/commit/77012ddf) Use log.Fatal instead of Must() (#143) +- [aa7f1673](https://github.com/kubedb/redis/commit/aa7f1673) Enable make ci (#141) +- [abd6a605](https://github.com/kubedb/redis/commit/abd6a605) Remove EnableStatusSubresource (#140) +- [08cfe0ca](https://github.com/kubedb/redis/commit/08cfe0ca) Fix tests for github actions (#139) +- [09e72f63](https://github.com/kubedb/redis/commit/09e72f63) Prepend redis.conf to args list (#136) +- [101afa35](https://github.com/kubedb/redis/commit/101afa35) Run e2e tests using GitHub actions (#137) +- [bbf5cb9f](https://github.com/kubedb/redis/commit/bbf5cb9f) Validate DBVersionSpecs and fixed broken build (#138) +- [26f0c88b](https://github.com/kubedb/redis/commit/26f0c88b) Update go.yml +- [9dab8c06](https://github.com/kubedb/redis/commit/9dab8c06) Enable GitHub actions +- [6a722f20](https://github.com/kubedb/redis/commit/6a722f20) Update changelog +- [0bbe3319](https://github.com/kubedb/redis/commit/0bbe3319) Remove linux/arm support +- [50d8a79e](https://github.com/kubedb/redis/commit/50d8a79e) Revendor +- [29606494](https://github.com/kubedb/redis/commit/29606494) Improve test: Use installed redisversions (#135) +- [2428be41](https://github.com/kubedb/redis/commit/2428be41) Use docker buildx to build docker image xref: https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/getting-started-with-docker-for-arm-on-linux +- [82a1d8de](https://github.com/kubedb/redis/commit/82a1d8de) Update dependencies (#134) +- [55a971e4](https://github.com/kubedb/redis/commit/55a971e4) Don't set annotation to AppBinding (#133) +- [1d684981](https://github.com/kubedb/redis/commit/1d684981) Set database version in AppBinding (#132) +- [c046a975](https://github.com/kubedb/redis/commit/c046a975) Change package path to kubedb.dev/redis (#131) +- [61417d3f](https://github.com/kubedb/redis/commit/61417d3f) Add license header to Makefiles (#130) +- [56e4a2b6](https://github.com/kubedb/redis/commit/56e4a2b6) Update Makefile +- [7b0594cd](https://github.com/kubedb/redis/commit/7b0594cd) Add install, uninstall and purge command in Makefile (#129) +- [4c8ff160](https://github.com/kubedb/redis/commit/4c8ff160) Update .gitignore +- [95d791b0](https://github.com/kubedb/redis/commit/95d791b0) Pod Disruption Budget for Redis (#127) +- [a3ca9ce8](https://github.com/kubedb/redis/commit/a3ca9ce8) Handling resource ownership (#126) +- [201493ca](https://github.com/kubedb/redis/commit/201493ca) Update .travis.yml +- [fa84a9f5](https://github.com/kubedb/redis/commit/fa84a9f5) Add Makefile (#128) +- [3699dfb2](https://github.com/kubedb/redis/commit/3699dfb2) Update to k8s 1.14.0 client libraries using go.mod (#125) +- [d92eb4ff](https://github.com/kubedb/redis/commit/d92eb4ff) Update README.md +- [c6121adf](https://github.com/kubedb/redis/commit/c6121adf) Start next dev cycle + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.11.08.md b/content/docs/v2024.1.31/CHANGELOG-v2020.11.08.md new file mode 100644 index 0000000000..64c7c43eea --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.11.08.md @@ -0,0 +1,311 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.11.08 + name: Changelog-v2020.11.08 + parent: welcome + weight: 20201108 
+product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.11.08/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.11.08/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.11.08 (2020-11-09) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.1.1](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.1.1) + +- [99f6b7cf](https://github.com/appscode/kubedb-enterprise/commit/99f6b7cf) Prepare for release v0.1.1 (#94) +- [ef26ef08](https://github.com/appscode/kubedb-enterprise/commit/ef26ef08) Fix elasticsearch cert-manager integration (#87) +- [775a4b1c](https://github.com/appscode/kubedb-enterprise/commit/775a4b1c) Update license +- [424dcade](https://github.com/appscode/kubedb-enterprise/commit/424dcade) Update Kubernetes v1.18.9 dependencies (#93) +- [4b3a348b](https://github.com/appscode/kubedb-enterprise/commit/4b3a348b) Update Kubernetes v1.18.9 dependencies (#92) +- [ec81a2c9](https://github.com/appscode/kubedb-enterprise/commit/ec81a2c9) Update Kubernetes v1.18.9 dependencies (#85) +- [fcdf8afe](https://github.com/appscode/kubedb-enterprise/commit/fcdf8afe) Update KubeDB api (#89) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.14.1](https://github.com/kubedb/apimachinery/releases/tag/v0.14.1) + +- [52b04a20](https://github.com/kubedb/apimachinery/commit/52b04a20) Update Kubernetes v1.18.9 dependencies (#643) +- [1c8d3ff8](https://github.com/kubedb/apimachinery/commit/1c8d3ff8) Update for release Stash@v2020.11.06 (#642) +- [750e5762](https://github.com/kubedb/apimachinery/commit/750e5762) Update Kubernetes v1.18.9 dependencies (#641) +- [f74effb9](https://github.com/kubedb/apimachinery/commit/f74effb9) Use modified UpdateStatus & Invoker utils (#640) +- [92af33bd](https://github.com/kubedb/apimachinery/commit/92af33bd) Update for release Stash@v2020.10.30 (#638) +- [a0cc0f91](https://github.com/kubedb/apimachinery/commit/a0cc0f91) Update for release Stash@v2020.10.29 (#637) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.14.1](https://github.com/kubedb/cli/releases/tag/v0.14.1) + +- [bfd7e528](https://github.com/kubedb/cli/commit/bfd7e528) Prepare for release v0.14.1 (#545) +- [594d1972](https://github.com/kubedb/cli/commit/594d1972) Update KubeDB api (#544) +- [feece297](https://github.com/kubedb/cli/commit/feece297) Update Kubernetes v1.18.9 dependencies (#543) +- [ce8bfd0d](https://github.com/kubedb/cli/commit/ce8bfd0d) Update KubeDB api (#542) +- [3b9ee772](https://github.com/kubedb/cli/commit/3b9ee772) Update for release Stash@v2020.11.06 (#541) +- [8b2623a6](https://github.com/kubedb/cli/commit/8b2623a6) Update Kubernetes v1.18.9 dependencies (#540) +- [4d348dbd](https://github.com/kubedb/cli/commit/4d348dbd) Replace appscode/go with gomodules.xyz/x +- [aeaa8c4f](https://github.com/kubedb/cli/commit/aeaa8c4f) Update KubeDB api (#539) +- [1b4beb54](https://github.com/kubedb/cli/commit/1b4beb54) Update KubeDB api (#538) +- [5f42fa04](https://github.com/kubedb/cli/commit/5f42fa04) Update for release Stash@v2020.10.30 (#537) +- [af3f537d](https://github.com/kubedb/cli/commit/af3f537d) Update KubeDB api (#536) +- [ca9941f5](https://github.com/kubedb/cli/commit/ca9941f5) Update for release Stash@v2020.10.29 (#535) + + + 
+## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.14.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.14.1) + +- [833f9a84](https://github.com/kubedb/elasticsearch/commit/833f9a84) Prepare for release v0.14.1 (#410) +- [3451691f](https://github.com/kubedb/elasticsearch/commit/3451691f) Update Kubernetes v1.18.9 dependencies (#409) +- [0106be92](https://github.com/kubedb/elasticsearch/commit/0106be92) Update KubeDB api (#408) +- [04047c59](https://github.com/kubedb/elasticsearch/commit/04047c59) Update Kubernetes v1.18.9 dependencies (#407) +- [237b74b8](https://github.com/kubedb/elasticsearch/commit/237b74b8) Update KubeDB api (#406) +- [a607ca4f](https://github.com/kubedb/elasticsearch/commit/a607ca4f) Update for release Stash@v2020.11.06 (#405) +- [8a34c770](https://github.com/kubedb/elasticsearch/commit/8a34c770) Update Kubernetes v1.18.9 dependencies (#404) +- [6cad9476](https://github.com/kubedb/elasticsearch/commit/6cad9476) Update KubeDB api (#403) +- [a8694072](https://github.com/kubedb/elasticsearch/commit/a8694072) Update KubeDB api (#401) +- [5d89e4e4](https://github.com/kubedb/elasticsearch/commit/5d89e4e4) Update for release Stash@v2020.10.30 (#400) +- [389007fa](https://github.com/kubedb/elasticsearch/commit/389007fa) Update KubeDB api (#399) +- [7214b539](https://github.com/kubedb/elasticsearch/commit/7214b539) Update for release Stash@v2020.10.29 (#398) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.14.1](https://github.com/kubedb/installer/releases/tag/v0.14.1) + +- [0554b96](https://github.com/kubedb/installer/commit/0554b96) Prepare for release v0.14.1 (#200) +- [abb9984](https://github.com/kubedb/installer/commit/abb9984) Update Kubernetes v1.18.9 dependencies (#199) +- [cd8a787](https://github.com/kubedb/installer/commit/cd8a787) Update Kubernetes v1.18.9 dependencies (#198) +- [06394de](https://github.com/kubedb/installer/commit/06394de) Replace appscode/go with gomodules.xyz/x (#197) +- [94aef68](https://github.com/kubedb/installer/commit/94aef68) Update KubeDB api (#196) +- [881688a](https://github.com/kubedb/installer/commit/881688a) Update Kubernetes v1.18.9 dependencies (#195) +- [933af3d](https://github.com/kubedb/installer/commit/933af3d) Update KubeDB api (#194) +- [377d367](https://github.com/kubedb/installer/commit/377d367) Update Kubernetes v1.18.9 dependencies (#193) +- [a8f33f4](https://github.com/kubedb/installer/commit/a8f33f4) Update KubeDB api (#192) +- [3120c97](https://github.com/kubedb/installer/commit/3120c97) Update Kubernetes v1.18.9 dependencies (#191) +- [abd119e](https://github.com/kubedb/installer/commit/abd119e) Update KubeDB api (#190) +- [a63008c](https://github.com/kubedb/installer/commit/a63008c) Update Kubernetes v1.18.9 dependencies (#189) +- [ae97c7e](https://github.com/kubedb/installer/commit/ae97c7e) Update KubeDB api (#188) +- [e698985](https://github.com/kubedb/installer/commit/e698985) Update Kubernetes v1.18.9 dependencies (#187) +- [42b8da0](https://github.com/kubedb/installer/commit/42b8da0) Update KubeDB api (#186) +- [51b229c](https://github.com/kubedb/installer/commit/51b229c) Update Kubernetes v1.18.9 dependencies (#185) +- [0bf41c1](https://github.com/kubedb/installer/commit/0bf41c1) Update KubeDB api (#184) +- [cd8e61e](https://github.com/kubedb/installer/commit/cd8e61e) Update Kubernetes v1.18.9 dependencies (#183) +- [57da8dc](https://github.com/kubedb/installer/commit/57da8dc) Update KubeDB api (#182) +- 
[c087bab](https://github.com/kubedb/installer/commit/c087bab) Update Kubernetes v1.18.9 dependencies (#181) +- [209a567](https://github.com/kubedb/installer/commit/209a567) Update KubeDB api (#180) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.7.1](https://github.com/kubedb/memcached/releases/tag/v0.7.1) + +- [f54d5d75](https://github.com/kubedb/memcached/commit/f54d5d75) Prepare for release v0.7.1 (#239) +- [75a22c4b](https://github.com/kubedb/memcached/commit/75a22c4b) Update Kubernetes v1.18.9 dependencies (#238) +- [8b891213](https://github.com/kubedb/memcached/commit/8b891213) Update KubeDB api (#237) +- [6f712c71](https://github.com/kubedb/memcached/commit/6f712c71) Update Kubernetes v1.18.9 dependencies (#236) +- [78dfd6e4](https://github.com/kubedb/memcached/commit/78dfd6e4) Update KubeDB api (#235) +- [07c3ef2c](https://github.com/kubedb/memcached/commit/07c3ef2c) Update Kubernetes v1.18.9 dependencies (#234) +- [d9093f08](https://github.com/kubedb/memcached/commit/d9093f08) Update KubeDB api (#232) +- [7ba3816d](https://github.com/kubedb/memcached/commit/7ba3816d) Update KubeDB api (#231) +- [bdd430ea](https://github.com/kubedb/memcached/commit/bdd430ea) Update KubeDB api (#230) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.7.1](https://github.com/kubedb/mongodb/releases/tag/v0.7.1) + +- [0ab674f8](https://github.com/kubedb/mongodb/commit/0ab674f8) Prepare for release v0.7.1 (#313) +- [9ef6fd1f](https://github.com/kubedb/mongodb/commit/9ef6fd1f) Update Kubernetes v1.18.9 dependencies (#312) +- [6a943688](https://github.com/kubedb/mongodb/commit/6a943688) Update KubeDB api (#311) +- [15cbc825](https://github.com/kubedb/mongodb/commit/15cbc825) Update Kubernetes v1.18.9 dependencies (#310) +- [92b6f366](https://github.com/kubedb/mongodb/commit/92b6f366) Update KubeDB api (#309) +- [218a9ff5](https://github.com/kubedb/mongodb/commit/218a9ff5) Update for release Stash@v2020.11.06 (#308) +- [9f4bade4](https://github.com/kubedb/mongodb/commit/9f4bade4) Update Kubernetes v1.18.9 dependencies (#307) +- [4e088452](https://github.com/kubedb/mongodb/commit/4e088452) Update KubeDB api (#306) +- [f06df599](https://github.com/kubedb/mongodb/commit/f06df599) Update KubeDB api (#303) +- [3b402637](https://github.com/kubedb/mongodb/commit/3b402637) Update for release Stash@v2020.10.30 (#302) +- [29fb006f](https://github.com/kubedb/mongodb/commit/29fb006f) Update KubeDB api (#301) +- [dcbd397b](https://github.com/kubedb/mongodb/commit/dcbd397b) Update for release Stash@v2020.10.29 (#300) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.7.1](https://github.com/kubedb/mysql/releases/tag/v0.7.1) + +- [e7cd0f88](https://github.com/kubedb/mysql/commit/e7cd0f88) Prepare for release v0.7.1 (#302) +- [9eac462f](https://github.com/kubedb/mysql/commit/9eac462f) Update Kubernetes v1.18.9 dependencies (#301) +- [b2e6ecf0](https://github.com/kubedb/mysql/commit/b2e6ecf0) Update KubeDB api (#300) +- [79fd9699](https://github.com/kubedb/mysql/commit/79fd9699) Update Kubernetes v1.18.9 dependencies (#299) +- [b5973719](https://github.com/kubedb/mysql/commit/b5973719) Update KubeDB api (#298) +- [499082d5](https://github.com/kubedb/mysql/commit/499082d5) Update for release Stash@v2020.11.06 (#297) +- [7528f7a5](https://github.com/kubedb/mysql/commit/7528f7a5) Update Kubernetes v1.18.9 dependencies (#296) +- [99beba22](https://github.com/kubedb/mysql/commit/99beba22) Update KubeDB api (#294) +- 
[ec714efd](https://github.com/kubedb/mysql/commit/ec714efd) Update KubeDB api (#293) +- [4c0b30a3](https://github.com/kubedb/mysql/commit/4c0b30a3) Update for release Stash@v2020.10.30 (#292) +- [e307ecb3](https://github.com/kubedb/mysql/commit/e307ecb3) Update KubeDB api (#291) +- [05c64bd6](https://github.com/kubedb/mysql/commit/05c64bd6) Update for release Stash@v2020.10.29 (#290) + + + +## [kubedb/mysql-replication-mode-detector](https://github.com/kubedb/mysql-replication-mode-detector) + +### [v0.1.1](https://github.com/kubedb/mysql-replication-mode-detector/releases/tag/v0.1.1) + +- [deb9723](https://github.com/kubedb/mysql-replication-mode-detector/commit/deb9723) Prepare for release v0.1.1 (#87) +- [b3d2d53](https://github.com/kubedb/mysql-replication-mode-detector/commit/b3d2d53) Replace appscode/go with gomodules.xyz/x (#82) +- [4ae0f5e](https://github.com/kubedb/mysql-replication-mode-detector/commit/4ae0f5e) Update KubeDB api (#86) +- [e65fdbf](https://github.com/kubedb/mysql-replication-mode-detector/commit/e65fdbf) Update Kubernetes v1.18.9 dependencies (#85) +- [21db0b0](https://github.com/kubedb/mysql-replication-mode-detector/commit/21db0b0) Update KubeDB api (#84) +- [ca326c2](https://github.com/kubedb/mysql-replication-mode-detector/commit/ca326c2) Update Kubernetes v1.18.9 dependencies (#83) +- [1387bd2](https://github.com/kubedb/mysql-replication-mode-detector/commit/1387bd2) Update KubeDB api (#81) +- [4fd3dc6](https://github.com/kubedb/mysql-replication-mode-detector/commit/4fd3dc6) Remove primary role from previous master pod and update query (#80) +- [552b9b9](https://github.com/kubedb/mysql-replication-mode-detector/commit/552b9b9) Update KubeDB api (#79) +- [72c4b51](https://github.com/kubedb/mysql-replication-mode-detector/commit/72c4b51) Update KubeDB api (#78) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.14.1](https://github.com/kubedb/operator/releases/tag/v0.14.1) + +- [fb31bcf3](https://github.com/kubedb/operator/commit/fb31bcf3) Prepare for release v0.14.1 (#347) +- [f9e6e5eb](https://github.com/kubedb/operator/commit/f9e6e5eb) Update Kubernetes v1.18.9 dependencies (#346) +- [0cd46325](https://github.com/kubedb/operator/commit/0cd46325) Update KubeDB api (#345) +- [782c00c6](https://github.com/kubedb/operator/commit/782c00c6) Update Kubernetes v1.18.9 dependencies (#344) +- [33765457](https://github.com/kubedb/operator/commit/33765457) Update KubeDB api (#343) +- [ff815850](https://github.com/kubedb/operator/commit/ff815850) Update Kubernetes v1.18.9 dependencies (#342) +- [9e12a91f](https://github.com/kubedb/operator/commit/9e12a91f) Update KubeDB api (#340) +- [729298df](https://github.com/kubedb/operator/commit/729298df) Update KubeDB api (#339) +- [ccc55504](https://github.com/kubedb/operator/commit/ccc55504) Update KubeDB api (#338) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.1.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.1.1) + +- [aa216cf5](https://github.com/kubedb/percona-xtradb/commit/aa216cf5) Prepare for release v0.1.1 (#134) +- [d43b87a3](https://github.com/kubedb/percona-xtradb/commit/d43b87a3) Update Kubernetes v1.18.9 dependencies (#133) +- [1a354dba](https://github.com/kubedb/percona-xtradb/commit/1a354dba) Update KubeDB api (#132) +- [808366cc](https://github.com/kubedb/percona-xtradb/commit/808366cc) Update Kubernetes v1.18.9 dependencies (#131) +- [adb44379](https://github.com/kubedb/percona-xtradb/commit/adb44379) Update KubeDB api (#130) +- 
[6d6188de](https://github.com/kubedb/percona-xtradb/commit/6d6188de) Update for release Stash@v2020.11.06 (#129) +- [8d3eaa37](https://github.com/kubedb/percona-xtradb/commit/8d3eaa37) Update Kubernetes v1.18.9 dependencies (#128) +- [5f7253b6](https://github.com/kubedb/percona-xtradb/commit/5f7253b6) Update KubeDB api (#126) +- [43f10d83](https://github.com/kubedb/percona-xtradb/commit/43f10d83) Update KubeDB api (#125) +- [91940395](https://github.com/kubedb/percona-xtradb/commit/91940395) Update for release Stash@v2020.10.30 (#124) +- [eba69286](https://github.com/kubedb/percona-xtradb/commit/eba69286) Update KubeDB api (#123) +- [a4dd87ba](https://github.com/kubedb/percona-xtradb/commit/a4dd87ba) Update for release Stash@v2020.10.29 (#122) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.2.1](https://github.com/kubedb/pg-leader-election/releases/tag/v0.2.1) + +- [bb6b5d6](https://github.com/kubedb/pg-leader-election/commit/bb6b5d6) Update Kubernetes v1.18.9 dependencies (#40) +- [1291b2f](https://github.com/kubedb/pg-leader-election/commit/1291b2f) Update Kubernetes v1.18.9 dependencies (#39) +- [41dc2c9](https://github.com/kubedb/pg-leader-election/commit/41dc2c9) Update KubeDB api (#38) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.1.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.1.1) + +- [1b646d91](https://github.com/kubedb/pgbouncer/commit/1b646d91) Prepare for release v0.1.1 (#104) +- [3c48a58e](https://github.com/kubedb/pgbouncer/commit/3c48a58e) Update Kubernetes v1.18.9 dependencies (#103) +- [906649e4](https://github.com/kubedb/pgbouncer/commit/906649e4) Update KubeDB api (#102) +- [5b245f73](https://github.com/kubedb/pgbouncer/commit/5b245f73) Update Kubernetes v1.18.9 dependencies (#101) +- [41f6c693](https://github.com/kubedb/pgbouncer/commit/41f6c693) Update KubeDB api (#100) +- [63cbbc07](https://github.com/kubedb/pgbouncer/commit/63cbbc07) Update Kubernetes v1.18.9 dependencies (#99) +- [36a100b6](https://github.com/kubedb/pgbouncer/commit/36a100b6) Update KubeDB api (#97) +- [dd9beb65](https://github.com/kubedb/pgbouncer/commit/dd9beb65) Update KubeDB api (#96) +- [2e24a612](https://github.com/kubedb/pgbouncer/commit/2e24a612) Update KubeDB api (#95) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.14.1](https://github.com/kubedb/postgres/releases/tag/v0.14.1) + +- [0482db11](https://github.com/kubedb/postgres/commit/0482db11) Prepare for release v0.14.1 (#420) +- [ac8eeb4b](https://github.com/kubedb/postgres/commit/ac8eeb4b) Update Kubernetes v1.18.9 dependencies (#419) +- [9fdb427e](https://github.com/kubedb/postgres/commit/9fdb427e) Update KubeDB api (#418) +- [fbed9716](https://github.com/kubedb/postgres/commit/fbed9716) Update Kubernetes v1.18.9 dependencies (#417) +- [a0a639c0](https://github.com/kubedb/postgres/commit/a0a639c0) Update KubeDB api (#416) +- [c6a6416a](https://github.com/kubedb/postgres/commit/c6a6416a) Update for release Stash@v2020.11.06 (#415) +- [0f5ad475](https://github.com/kubedb/postgres/commit/0f5ad475) Update Kubernetes v1.18.9 dependencies (#414) +- [7db04cdd](https://github.com/kubedb/postgres/commit/7db04cdd) Update KubeDB api (#412) +- [3ce64913](https://github.com/kubedb/postgres/commit/3ce64913) Update KubeDB api (#411) +- [12cf4b77](https://github.com/kubedb/postgres/commit/12cf4b77) Update for release Stash@v2020.10.30 (#410) +- [f319590f](https://github.com/kubedb/postgres/commit/f319590f) Update KubeDB api (#409) +- 
[6bd61adf](https://github.com/kubedb/postgres/commit/6bd61adf) Update for release Stash@v2020.10.29 (#408) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.1.1](https://github.com/kubedb/proxysql/releases/tag/v0.1.1) + +- [a196a5bf](https://github.com/kubedb/proxysql/commit/a196a5bf) Prepare for release v0.1.1 (#116) +- [eaba8cd8](https://github.com/kubedb/proxysql/commit/eaba8cd8) Update Kubernetes v1.18.9 dependencies (#115) +- [c6042baf](https://github.com/kubedb/proxysql/commit/c6042baf) Update KubeDB api (#114) +- [4ad769e3](https://github.com/kubedb/proxysql/commit/4ad769e3) Update Kubernetes v1.18.9 dependencies (#113) +- [f57b4971](https://github.com/kubedb/proxysql/commit/f57b4971) Update KubeDB api (#112) +- [b44e67be](https://github.com/kubedb/proxysql/commit/b44e67be) Update for release Stash@v2020.11.06 (#111) +- [f3f33efa](https://github.com/kubedb/proxysql/commit/f3f33efa) Update Kubernetes v1.18.9 dependencies (#110) +- [dff77ecf](https://github.com/kubedb/proxysql/commit/dff77ecf) Update KubeDB api (#108) +- [7cbcfeee](https://github.com/kubedb/proxysql/commit/7cbcfeee) Update KubeDB api (#107) +- [49099b45](https://github.com/kubedb/proxysql/commit/49099b45) Update for release Stash@v2020.10.30 (#106) +- [4b337417](https://github.com/kubedb/proxysql/commit/4b337417) Update KubeDB api (#105) +- [cebf6eee](https://github.com/kubedb/proxysql/commit/cebf6eee) Update for release Stash@v2020.10.29 (#104) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.7.1](https://github.com/kubedb/redis/releases/tag/v0.7.1) + +- [5827be9a](https://github.com/kubedb/redis/commit/5827be9a) Prepare for release v0.7.1 (#258) +- [893f4573](https://github.com/kubedb/redis/commit/893f4573) Update Kubernetes v1.18.9 dependencies (#257) +- [1f3ba044](https://github.com/kubedb/redis/commit/1f3ba044) Update KubeDB api (#256) +- [b824d122](https://github.com/kubedb/redis/commit/b824d122) Update Kubernetes v1.18.9 dependencies (#255) +- [5783c135](https://github.com/kubedb/redis/commit/5783c135) Update KubeDB api (#254) +- [dc7e3986](https://github.com/kubedb/redis/commit/dc7e3986) Update Kubernetes v1.18.9 dependencies (#253) +- [83acb92a](https://github.com/kubedb/redis/commit/83acb92a) Update KubeDB api (#252) +- [ee967c20](https://github.com/kubedb/redis/commit/ee967c20) Update KubeDB api (#251) +- [9706ea8e](https://github.com/kubedb/redis/commit/9706ea8e) Update KubeDB api (#249) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.11.11.md b/content/docs/v2024.1.31/CHANGELOG-v2020.11.11.md new file mode 100644 index 0000000000..34dcb520db --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.11.11.md @@ -0,0 +1,209 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.11.11 + name: Changelog-v2020.11.11 + parent: welcome + weight: 20201111 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.11.11/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.11.11/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.11.11 (2020-11-11) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.2.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.2.0) + +- 
[b96f1b56](https://github.com/appscode/kubedb-enterprise/commit/b96f1b56) Prepare for release v0.2.0 (#95) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.15.0](https://github.com/kubedb/apimachinery/releases/tag/v0.15.0) + +- [592d5b47](https://github.com/kubedb/apimachinery/commit/592d5b47) Add default resource limit (#650) +- [b5fa4a10](https://github.com/kubedb/apimachinery/commit/b5fa4a10) Rename MasterServiceName to MasterDiscoveryServiceName +- [849e6c06](https://github.com/kubedb/apimachinery/commit/849e6c06) Remove ElasticsearchMetricsPortName +- [421d760b](https://github.com/kubedb/apimachinery/commit/421d760b) Add HasServiceTemplate & GetServiceTemplate helpers (#649) +- [25e0e4af](https://github.com/kubedb/apimachinery/commit/25e0e4af) Enable separate serviceTemplate for each service (#648) +- [f325af77](https://github.com/kubedb/apimachinery/commit/f325af77) Remove replicaServiceTemplate from Postgres CRD (#646) +- [31286270](https://github.com/kubedb/apimachinery/commit/31286270) Add `ReplicationModeDetector` Image for MongoDB (#645) +- [df5ada37](https://github.com/kubedb/apimachinery/commit/df5ada37) Update Elasticsearch constants (#639) +- [1387ac1f](https://github.com/kubedb/apimachinery/commit/1387ac1f) Remove version label from database labels (#644) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.15.0](https://github.com/kubedb/cli/releases/tag/v0.15.0) + +- [df75044a](https://github.com/kubedb/cli/commit/df75044a) Prepare for release v0.15.0 (#549) +- [08bce120](https://github.com/kubedb/cli/commit/08bce120) Update KubeDB api (#548) +- [3f4e0fd5](https://github.com/kubedb/cli/commit/3f4e0fd5) Update KubeDB api (#547) +- [56429d25](https://github.com/kubedb/cli/commit/56429d25) Update KubeDB api (#546) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.15.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.15.0) + +- [bffb46a0](https://github.com/kubedb/elasticsearch/commit/bffb46a0) Prepare for release v0.15.0 (#414) +- [8915b7b5](https://github.com/kubedb/elasticsearch/commit/8915b7b5) Update KubeDB api (#413) +- [6a95dbf1](https://github.com/kubedb/elasticsearch/commit/6a95dbf1) Allow stats service patching +- [93b8501c](https://github.com/kubedb/elasticsearch/commit/93b8501c) Use separate ServiceTemplate for each service (#412) +- [87ba4941](https://github.com/kubedb/elasticsearch/commit/87ba4941) Use container name as constant (#402) +- [a1b6343a](https://github.com/kubedb/elasticsearch/commit/a1b6343a) Update KubeDB api (#411) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.15.0](https://github.com/kubedb/installer/releases/tag/v0.15.0) + +- [a8c5b9c](https://github.com/kubedb/installer/commit/a8c5b9c) Prepare for release v0.15.0 (#204) +- [f17451e](https://github.com/kubedb/installer/commit/f17451e) Add `ReplicationModeDetector` Image for MongoDB (#202) +- [8dc9941](https://github.com/kubedb/installer/commit/8dc9941) Add permissions to evict pods (#201) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.8.0](https://github.com/kubedb/memcached/releases/tag/v0.8.0) + +- [aff58c9e](https://github.com/kubedb/memcached/commit/aff58c9e) Prepare for release v0.8.0 (#243) +- [3dbf5486](https://github.com/kubedb/memcached/commit/3dbf5486) Update KubeDB api (#242) +- [d1821f03](https://github.com/kubedb/memcached/commit/d1821f03) Use separate ServiceTemplate for each service (#241) +- 
[44ea6d2b](https://github.com/kubedb/memcached/commit/44ea6d2b) Update KubeDB api (#240) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.8.0](https://github.com/kubedb/mongodb/releases/tag/v0.8.0) + +- [3b0f1d08](https://github.com/kubedb/mongodb/commit/3b0f1d08) Prepare for release v0.8.0 (#318) +- [b3685ab8](https://github.com/kubedb/mongodb/commit/b3685ab8) Update KubeDB api (#317) +- [bf9d872c](https://github.com/kubedb/mongodb/commit/bf9d872c) Allow stats service patching +- [183f1ac3](https://github.com/kubedb/mongodb/commit/183f1ac3) Use separate ServiceTemplate for each service (#315) +- [3d105d2a](https://github.com/kubedb/mongodb/commit/3d105d2a) Fix Health Check (#305) +- [98fe156b](https://github.com/kubedb/mongodb/commit/98fe156b) Add `ReplicationModeDetector` (#316) +- [539337b0](https://github.com/kubedb/mongodb/commit/539337b0) Update KubeDB api (#314) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.8.0](https://github.com/kubedb/mysql/releases/tag/v0.8.0) + +- [b83e2323](https://github.com/kubedb/mysql/commit/b83e2323) Prepare for release v0.8.0 (#305) +- [1916fb3d](https://github.com/kubedb/mysql/commit/1916fb3d) Update KubeDB api (#304) +- [2e2dd9b0](https://github.com/kubedb/mysql/commit/2e2dd9b0) Allow stats service patching +- [18cbe558](https://github.com/kubedb/mysql/commit/18cbe558) Use separate ServiceTemplate for each service (#303) +- [741c9718](https://github.com/kubedb/mysql/commit/741c9718) Fix MySQL args (#295) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.15.0](https://github.com/kubedb/operator/releases/tag/v0.15.0) + +- [c9c540f0](https://github.com/kubedb/operator/commit/c9c540f0) Prepare for release v0.15.0 (#349) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.2.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.2.0) + +- [13b6c2d7](https://github.com/kubedb/percona-xtradb/commit/13b6c2d7) Prepare for release v0.2.0 (#138) +- [6e4a4449](https://github.com/kubedb/percona-xtradb/commit/6e4a4449) Update KubeDB api (#137) +- [717fca92](https://github.com/kubedb/percona-xtradb/commit/717fca92) Use separate ServiceTemplate for each service (#136) +- [3386c10e](https://github.com/kubedb/percona-xtradb/commit/3386c10e) Update KubeDB api (#135) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.3.0](https://github.com/kubedb/pg-leader-election/releases/tag/v0.3.0) + + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.2.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.2.0) + +- [a9c88518](https://github.com/kubedb/pgbouncer/commit/a9c88518) Prepare for release v0.2.0 (#108) +- [56132158](https://github.com/kubedb/pgbouncer/commit/56132158) Update KubeDB api (#107) +- [2d9e4490](https://github.com/kubedb/pgbouncer/commit/2d9e4490) Use separate ServiceTemplate for each service (#106) +- [9cfb2ae2](https://github.com/kubedb/pgbouncer/commit/9cfb2ae2) Update KubeDB api (#105) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.15.0](https://github.com/kubedb/postgres/releases/tag/v0.15.0) + +- [929217b2](https://github.com/kubedb/postgres/commit/929217b2) Prepare for release v0.15.0 (#424) +- [7782f03d](https://github.com/kubedb/postgres/commit/7782f03d) Update KubeDB api (#423) +- [0216423d](https://github.com/kubedb/postgres/commit/0216423d) Allow stats service patching +- [6f5e3b57](https://github.com/kubedb/postgres/commit/6f5e3b57) Use separate 
ServiceTemplate for each service (#422) +- [111fddfa](https://github.com/kubedb/postgres/commit/111fddfa) Update KubeDB api (#421) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.2.0](https://github.com/kubedb/proxysql/releases/tag/v0.2.0) + +- [71444683](https://github.com/kubedb/proxysql/commit/71444683) Prepare for release v0.2.0 (#120) +- [ce811abf](https://github.com/kubedb/proxysql/commit/ce811abf) Update KubeDB api (#119) +- [4ed10ea2](https://github.com/kubedb/proxysql/commit/4ed10ea2) Use separate ServiceTemplate for each service (#118) +- [d43e7359](https://github.com/kubedb/proxysql/commit/d43e7359) Update KubeDB api (#117) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.8.0](https://github.com/kubedb/redis/releases/tag/v0.8.0) + +- [a2fe5b3b](https://github.com/kubedb/redis/commit/a2fe5b3b) Prepare for release v0.8.0 (#262) +- [9de30e41](https://github.com/kubedb/redis/commit/9de30e41) Update KubeDB api (#261) +- [5c8281d2](https://github.com/kubedb/redis/commit/5c8281d2) Use separate ServiceTemplate for each service (#260) +- [3a269916](https://github.com/kubedb/redis/commit/3a269916) Update KubeDB api (#259) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.2.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.2.0) + +- [70416f7](https://github.com/kubedb/replication-mode-detector/commit/70416f7) Prepare for release v0.2.0 (#92) +- [d75f103](https://github.com/kubedb/replication-mode-detector/commit/d75f103) Update KubeDB api (#91) +- [5f03577](https://github.com/kubedb/replication-mode-detector/commit/5f03577) Add MongoDB `ReplicationModeDetector` (#90) +- [8456fdc](https://github.com/kubedb/replication-mode-detector/commit/8456fdc) Drop mysql from repo name (#89) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.11.12.md b/content/docs/v2024.1.31/CHANGELOG-v2020.11.12.md new file mode 100644 index 0000000000..4f05a11f6b --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.11.12.md @@ -0,0 +1,176 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.11.12 + name: Changelog-v2020.11.12 + parent: welcome + weight: 20201112 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.11.12/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.11.12/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.11.12 (2020-11-12) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.2.1](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.2.1) + +- [9aa1860e](https://github.com/appscode/kubedb-enterprise/commit/9aa1860e) Prepare for release v0.2.1 (#97) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.15.1](https://github.com/kubedb/apimachinery/releases/tag/v0.15.1) + +- [ea20944b](https://github.com/kubedb/apimachinery/commit/ea20944b) Set default resource requests = 1/2 * limits (#653) +- [fb8a454c](https://github.com/kubedb/apimachinery/commit/fb8a454c) Fix serviceTemplate inline json (#652) +- [44d1f43b](https://github.com/kubedb/apimachinery/commit/44d1f43b) Add MariaDB patch util +- 
[cb21bb09](https://github.com/kubedb/apimachinery/commit/cb21bb09) Add MariaDB constants (#651) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.15.1](https://github.com/kubedb/cli/releases/tag/v0.15.1) + +- [81e8fb7a](https://github.com/kubedb/cli/commit/81e8fb7a) Prepare for release v0.15.1 (#551) +- [7152cf5d](https://github.com/kubedb/cli/commit/7152cf5d) Update KubeDB api (#550) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.15.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.15.1) + +- [27111d23](https://github.com/kubedb/elasticsearch/commit/27111d23) Prepare for release v0.15.1 (#416) +- [f23b33ee](https://github.com/kubedb/elasticsearch/commit/f23b33ee) Update KubeDB api (#415) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.15.1](https://github.com/kubedb/installer/releases/tag/v0.15.1) + +- [4129755](https://github.com/kubedb/installer/commit/4129755) Prepare for release v0.15.1 (#205) +- [26dbe8e](https://github.com/kubedb/installer/commit/26dbe8e) Update NOTE.txt with helm 3 command (#203) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.8.1](https://github.com/kubedb/memcached/releases/tag/v0.8.1) + +- [00eda07f](https://github.com/kubedb/memcached/commit/00eda07f) Prepare for release v0.8.1 (#245) +- [a32f158d](https://github.com/kubedb/memcached/commit/a32f158d) Update KubeDB api (#244) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.8.1](https://github.com/kubedb/mongodb/releases/tag/v0.8.1) + +- [19f17fb9](https://github.com/kubedb/mongodb/commit/19f17fb9) Prepare for release v0.8.1 (#320) +- [bee38ded](https://github.com/kubedb/mongodb/commit/bee38ded) Update KubeDB api (#319) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.8.1](https://github.com/kubedb/mysql/releases/tag/v0.8.1) + +- [8a41db5d](https://github.com/kubedb/mysql/commit/8a41db5d) Prepare for release v0.8.1 (#307) +- [e224e887](https://github.com/kubedb/mysql/commit/e224e887) Update KubeDB api (#306) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.15.1](https://github.com/kubedb/operator/releases/tag/v0.15.1) + +- [bed5f88c](https://github.com/kubedb/operator/commit/bed5f88c) Prepare for release v0.15.1 (#351) +- [4ee7d2fb](https://github.com/kubedb/operator/commit/4ee7d2fb) Update KubeDB api (#350) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.2.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.2.1) + +- [6e13a0a0](https://github.com/kubedb/percona-xtradb/commit/6e13a0a0) Prepare for release v0.2.1 (#140) +- [7afb8aee](https://github.com/kubedb/percona-xtradb/commit/7afb8aee) Update KubeDB api (#139) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.3.1](https://github.com/kubedb/pg-leader-election/releases/tag/v0.3.1) + +- [2bc569c](https://github.com/kubedb/pg-leader-election/commit/2bc569c) Update README.md + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.2.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.2.1) + +- [c04b66ab](https://github.com/kubedb/pgbouncer/commit/c04b66ab) Prepare for release v0.2.1 (#110) +- [082be68e](https://github.com/kubedb/pgbouncer/commit/082be68e) Update KubeDB api (#109) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.15.1](https://github.com/kubedb/postgres/releases/tag/v0.15.1) + +- 
[c83549ae](https://github.com/kubedb/postgres/commit/c83549ae) Prepare for release v0.15.1 (#426) +- [cbb19a1c](https://github.com/kubedb/postgres/commit/cbb19a1c) Update KubeDB api (#425) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.2.1](https://github.com/kubedb/proxysql/releases/tag/v0.2.1) + +- [2c20cd82](https://github.com/kubedb/proxysql/commit/2c20cd82) Prepare for release v0.2.1 (#122) +- [6b268324](https://github.com/kubedb/proxysql/commit/6b268324) Update KubeDB api (#121) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.8.1](https://github.com/kubedb/redis/releases/tag/v0.8.1) + +- [5d378ec3](https://github.com/kubedb/redis/commit/5d378ec3) Prepare for release v0.8.1 (#264) +- [b7cf4380](https://github.com/kubedb/redis/commit/b7cf4380) Update KubeDB api (#263) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.2.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.2.1) + +- [8c0e92f](https://github.com/kubedb/replication-mode-detector/commit/8c0e92f) Prepare for release v0.2.1 (#94) +- [b0bf4e9](https://github.com/kubedb/replication-mode-detector/commit/b0bf4e9) Update KubeDB api (#93) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2020.12.10.md b/content/docs/v2024.1.31/CHANGELOG-v2020.12.10.md new file mode 100644 index 0000000000..ba62ecae56 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2020.12.10.md @@ -0,0 +1,329 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2020.12.10 + name: Changelog-v2020.12.10 + parent: welcome + weight: 20201210 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2020.12.10/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2020.12.10/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2020.12.10 (2020-12-10) + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.2.2](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.2.2) + +- [ade073c7](https://github.com/appscode/kubedb-enterprise/commit/ade073c7) Prepare for release v0.2.2 (#103) +- [ed4b88de](https://github.com/appscode/kubedb-enterprise/commit/ed4b88de) Format shell scripts (#99) +- [c792b1aa](https://github.com/appscode/kubedb-enterprise/commit/c792b1aa) Update CI + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.15.2](https://github.com/kubedb/apimachinery/releases/tag/v0.15.2) + +- [e0b8f4fa](https://github.com/kubedb/apimachinery/commit/e0b8f4fa) Use same request and limits for databases (#668) +- [429b2e10](https://github.com/kubedb/apimachinery/commit/429b2e10) Update Kubernetes v1.18.9 dependencies (#667) +- [0e4574eb](https://github.com/kubedb/apimachinery/commit/0e4574eb) Add Elasticsearch helper methods for StatefulSet names (#665) +- [5917f095](https://github.com/kubedb/apimachinery/commit/5917f095) Update default resource limits and requests for MySQL (#664) +- [89b09825](https://github.com/kubedb/apimachinery/commit/89b09825) Update Kubernetes v1.18.9 dependencies (#661) +- [a022a502](https://github.com/kubedb/apimachinery/commit/a022a502) Format shell scripts (#660) +- [03a5a9a7](https://github.com/kubedb/apimachinery/commit/03a5a9a7) 
Update for release Stash@v2020.11.17 (#656) +- [94392a27](https://github.com/kubedb/apimachinery/commit/94392a27) Set default resource limits for Elasticsearch (#655) +- [b2fb44c8](https://github.com/kubedb/apimachinery/commit/b2fb44c8) Add requireSSL field to MySQLOpsRequest (#654) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.15.2](https://github.com/kubedb/cli/releases/tag/v0.15.2) + +- [d436f0de](https://github.com/kubedb/cli/commit/d436f0de) Prepare for release v0.15.2 (#564) +- [8eaff25c](https://github.com/kubedb/cli/commit/8eaff25c) Fix nil pointer panic (#563) +- [1ec791eb](https://github.com/kubedb/cli/commit/1ec791eb) Update KubeDB api (#562) +- [aa54326a](https://github.com/kubedb/cli/commit/aa54326a) Update Kubernetes v1.18.9 dependencies (#561) +- [0dc1847a](https://github.com/kubedb/cli/commit/0dc1847a) Update KubeDB api (#560) +- [dbe633dc](https://github.com/kubedb/cli/commit/dbe633dc) Update KubeDB api (#559) +- [82f6be65](https://github.com/kubedb/cli/commit/82f6be65) Update KubeDB api (#558) +- [251eb867](https://github.com/kubedb/cli/commit/251eb867) Update Kubernetes v1.18.9 dependencies (#557) +- [870efaec](https://github.com/kubedb/cli/commit/870efaec) Update KubeDB api (#556) +- [20503198](https://github.com/kubedb/cli/commit/20503198) Format shell scripts (#555) +- [446734ba](https://github.com/kubedb/cli/commit/446734ba) Update KubeDB api (#554) +- [eb1648f0](https://github.com/kubedb/cli/commit/eb1648f0) Update for release Stash@v2020.11.17 (#553) +- [2e8667da](https://github.com/kubedb/cli/commit/2e8667da) Update KubeDB api (#552) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.15.2](https://github.com/kubedb/elasticsearch/releases/tag/v0.15.2) + +- [12af2fe4](https://github.com/kubedb/elasticsearch/commit/12af2fe4) Prepare for release v0.15.2 (#434) +- [ade7c7e3](https://github.com/kubedb/elasticsearch/commit/ade7c7e3) Update KubeDB api (#433) +- [f9290a40](https://github.com/kubedb/elasticsearch/commit/f9290a40) Update Kubernetes v1.18.9 dependencies (#432) +- [3fe4723d](https://github.com/kubedb/elasticsearch/commit/3fe4723d) Update KubeDB api (#431) +- [07a09623](https://github.com/kubedb/elasticsearch/commit/07a09623) Add validation for minimum memory request (heap size) (#429) +- [0132b8cd](https://github.com/kubedb/elasticsearch/commit/0132b8cd) Remove resource allocation from init container (#427) +- [9fdef46d](https://github.com/kubedb/elasticsearch/commit/9fdef46d) Update KubeDB api (#428) +- [166cbc51](https://github.com/kubedb/elasticsearch/commit/166cbc51) Update KubeDB api (#426) +- [3a4f29b8](https://github.com/kubedb/elasticsearch/commit/3a4f29b8) Update Kubernetes v1.18.9 dependencies (#425) +- [9ed6e723](https://github.com/kubedb/elasticsearch/commit/9ed6e723) Fix health checker (#417) +- [28314536](https://github.com/kubedb/elasticsearch/commit/28314536) Update e2e workflow (#424) +- [bd961e58](https://github.com/kubedb/elasticsearch/commit/bd961e58) Update KubeDB api (#423) +- [450df760](https://github.com/kubedb/elasticsearch/commit/450df760) Format shell scripts (#422) +- [ad67a75e](https://github.com/kubedb/elasticsearch/commit/ad67a75e) Update KubeDB api (#421) +- [353f2031](https://github.com/kubedb/elasticsearch/commit/353f2031) Update for release Stash@v2020.11.17 (#420) +- [999b1e0d](https://github.com/kubedb/elasticsearch/commit/999b1e0d) Update KubeDB api (#419) +- [73d6618c](https://github.com/kubedb/elasticsearch/commit/73d6618c) Update repository config (#418) + + + +## 
[kubedb/installer](https://github.com/kubedb/installer) + +### [v0.15.2](https://github.com/kubedb/installer/releases/tag/v0.15.2) + +- [2da95dc](https://github.com/kubedb/installer/commit/2da95dc) Prepare for release v0.15.2 (#211) +- [b6b0d98](https://github.com/kubedb/installer/commit/b6b0d98) Update repository config (#210) +- [d560f29](https://github.com/kubedb/installer/commit/d560f29) Update Kubernetes v1.18.9 dependencies (#209) +- [8e061ed](https://github.com/kubedb/installer/commit/8e061ed) Use apiregistration.k8s.io/v1 (#207) +- [842e577](https://github.com/kubedb/installer/commit/842e577) Update repository config (#206) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.8.2](https://github.com/kubedb/memcached/releases/tag/v0.8.2) + +- [2f458585](https://github.com/kubedb/memcached/commit/2f458585) Prepare for release v0.8.2 (#258) +- [0c16e3fa](https://github.com/kubedb/memcached/commit/0c16e3fa) Update KubeDB api (#257) +- [2601e91b](https://github.com/kubedb/memcached/commit/2601e91b) Update Kubernetes v1.18.9 dependencies (#256) +- [fa1b08b7](https://github.com/kubedb/memcached/commit/fa1b08b7) Update KubeDB api (#255) +- [61395e02](https://github.com/kubedb/memcached/commit/61395e02) Update KubeDB api (#254) +- [7ea6ec8e](https://github.com/kubedb/memcached/commit/7ea6ec8e) Update KubeDB api (#253) +- [005f33c1](https://github.com/kubedb/memcached/commit/005f33c1) Update Kubernetes v1.18.9 dependencies (#252) +- [a4bc28bd](https://github.com/kubedb/memcached/commit/a4bc28bd) Update e2e workflow (#251) +- [11286227](https://github.com/kubedb/memcached/commit/11286227) Update KubeDB api (#250) +- [c6a704fe](https://github.com/kubedb/memcached/commit/c6a704fe) Format shell scripts (#249) +- [c88fd651](https://github.com/kubedb/memcached/commit/c88fd651) Update KubeDB api (#248) +- [df511335](https://github.com/kubedb/memcached/commit/df511335) Update KubeDB api (#247) +- [991933af](https://github.com/kubedb/memcached/commit/991933af) Update repository config (#246) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.8.2](https://github.com/kubedb/mongodb/releases/tag/v0.8.2) + +- [63ec5d5f](https://github.com/kubedb/mongodb/commit/63ec5d5f) Prepare for release v0.8.2 (#334) +- [b26e3b38](https://github.com/kubedb/mongodb/commit/b26e3b38) Update KubeDB api (#333) +- [e7b60f5d](https://github.com/kubedb/mongodb/commit/e7b60f5d) Update Kubernetes v1.18.9 dependencies (#332) +- [c492a1e6](https://github.com/kubedb/mongodb/commit/c492a1e6) Update KubeDB api (#331) +- [4ca053a1](https://github.com/kubedb/mongodb/commit/4ca053a1) Update KubeDB api (#330) +- [40224cad](https://github.com/kubedb/mongodb/commit/40224cad) Update KubeDB api (#329) +- [a54933b8](https://github.com/kubedb/mongodb/commit/a54933b8) Update Kubernetes v1.18.9 dependencies (#328) +- [ab77019b](https://github.com/kubedb/mongodb/commit/ab77019b) Wait until crd names are accepted +- [9cc7c869](https://github.com/kubedb/mongodb/commit/9cc7c869) Prevent a specific failing matrix job from failing a workflow run +- [fb8491d7](https://github.com/kubedb/mongodb/commit/fb8491d7) Update e2e workflow (#325) +- [abf1ae72](https://github.com/kubedb/mongodb/commit/abf1ae72) Update KubeDB api (#327) +- [b138bc3d](https://github.com/kubedb/mongodb/commit/b138bc3d) Format shell scripts (#326) +- [f1c913c6](https://github.com/kubedb/mongodb/commit/f1c913c6) Update e2e CI +- [3b5d2e5c](https://github.com/kubedb/mongodb/commit/3b5d2e5c) Update github actions for e2e tests (#304) +- 
+- [c49e535a](https://github.com/kubedb/mongodb/commit/c49e535a) Update KubeDB api (#324)
+- [f3c138e1](https://github.com/kubedb/mongodb/commit/f3c138e1) Update for release Stash@v2020.11.17 (#323)
+- [d4e18d81](https://github.com/kubedb/mongodb/commit/d4e18d81) Update KubeDB api (#322)
+- [9e8fc523](https://github.com/kubedb/mongodb/commit/9e8fc523) Update repository config (#321)
+
+
+
+## [kubedb/mysql](https://github.com/kubedb/mysql)
+
+### [v0.8.2](https://github.com/kubedb/mysql/releases/tag/v0.8.2)
+
+- [d3b3f9c3](https://github.com/kubedb/mysql/commit/d3b3f9c3) Prepare for release v0.8.2 (#321)
+- [a6564cc1](https://github.com/kubedb/mysql/commit/a6564cc1) Update KubeDB api (#320)
+- [2e0fe5f9](https://github.com/kubedb/mysql/commit/2e0fe5f9) Update Kubernetes v1.18.9 dependencies (#319)
+- [a6731a86](https://github.com/kubedb/mysql/commit/a6731a86) Update KubeDB api (#318)
+- [2fb04f16](https://github.com/kubedb/mysql/commit/2fb04f16) Update KubeDB api (#317)
+- [3432a00a](https://github.com/kubedb/mysql/commit/3432a00a) Update KubeDB api (#316)
+- [5f51a466](https://github.com/kubedb/mysql/commit/5f51a466) Update Kubernetes v1.18.9 dependencies (#315)
+- [b0dbbac4](https://github.com/kubedb/mysql/commit/b0dbbac4) Update e2e workflow (#314)
+- [a0864e5d](https://github.com/kubedb/mysql/commit/a0864e5d) Update KubeDB api (#313)
+- [cc80a56f](https://github.com/kubedb/mysql/commit/cc80a56f) Format shell scripts (#312)
+- [3ac83778](https://github.com/kubedb/mysql/commit/3ac83778) Update KubeDB api (#311)
+- [d87bc74a](https://github.com/kubedb/mysql/commit/d87bc74a) Update for release Stash@v2020.11.17 (#310)
+- [353d6795](https://github.com/kubedb/mysql/commit/353d6795) Update KubeDB api (#309)
+- [8b9b7009](https://github.com/kubedb/mysql/commit/8b9b7009) Update repository config (#308)
+
+
+
+## [kubedb/operator](https://github.com/kubedb/operator)
+
+### [v0.15.2](https://github.com/kubedb/operator/releases/tag/v0.15.2)
+
+- [06945cc9](https://github.com/kubedb/operator/commit/06945cc9) Prepare for release v0.15.2 (#365)
+- [6a3dacd4](https://github.com/kubedb/operator/commit/6a3dacd4) Update KubeDB api (#364)
+- [6a0626b1](https://github.com/kubedb/operator/commit/6a0626b1) Update Kubernetes v1.18.9 dependencies (#363)
+- [e72fbf89](https://github.com/kubedb/operator/commit/e72fbf89) Update KubeDB api (#362)
+- [a27078cd](https://github.com/kubedb/operator/commit/a27078cd) Update KubeDB api (#361)
+- [5547a0bd](https://github.com/kubedb/operator/commit/5547a0bd) Update KubeDB api (#360)
+- [53225795](https://github.com/kubedb/operator/commit/53225795) Update Kubernetes v1.18.9 dependencies (#359)
+- [d9ba1ba9](https://github.com/kubedb/operator/commit/d9ba1ba9) Update e2e workflow (#358)
+- [68171e01](https://github.com/kubedb/operator/commit/68171e01) Update KubeDB api (#357)
+- [6411ccdb](https://github.com/kubedb/operator/commit/6411ccdb) Format shell scripts (#356)
+- [b666a8d2](https://github.com/kubedb/operator/commit/b666a8d2) Update KubeDB api (#355)
+- [5b280e2b](https://github.com/kubedb/operator/commit/5b280e2b) Update KubeDB api (#354)
+- [3732fe26](https://github.com/kubedb/operator/commit/3732fe26) Update repository config (#353)
+
+
+
+## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb)
+
+### [v0.2.2](https://github.com/kubedb/percona-xtradb/releases/tag/v0.2.2)
+
+- [354ec7a7](https://github.com/kubedb/percona-xtradb/commit/354ec7a7) Prepare for release v0.2.2 (#154)
+- [927cb07d](https://github.com/kubedb/percona-xtradb/commit/927cb07d) Update KubeDB api (#153)
+- [28463a62](https://github.com/kubedb/percona-xtradb/commit/28463a62) Update Kubernetes v1.18.9 dependencies (#152)
+- [a7e8e78a](https://github.com/kubedb/percona-xtradb/commit/a7e8e78a) Update KubeDB api (#151)
+- [fdcb3a74](https://github.com/kubedb/percona-xtradb/commit/fdcb3a74) Update KubeDB api (#150)
+- [18b128e1](https://github.com/kubedb/percona-xtradb/commit/18b128e1) Update KubeDB api (#149)
+- [15d0c5e3](https://github.com/kubedb/percona-xtradb/commit/15d0c5e3) Update Kubernetes v1.18.9 dependencies (#148)
+- [e953b6c1](https://github.com/kubedb/percona-xtradb/commit/e953b6c1) Update e2e workflow (#147)
+- [85d96fe5](https://github.com/kubedb/percona-xtradb/commit/85d96fe5) Update KubeDB api (#146)
+- [1f02228b](https://github.com/kubedb/percona-xtradb/commit/1f02228b) Format shell scripts (#145)
+- [a832a4c1](https://github.com/kubedb/percona-xtradb/commit/a832a4c1) Update KubeDB api (#144)
+- [ec085e9a](https://github.com/kubedb/percona-xtradb/commit/ec085e9a) Update for release Stash@v2020.11.17 (#143)
+- [4b22cc6c](https://github.com/kubedb/percona-xtradb/commit/4b22cc6c) Update KubeDB api (#142)
+- [70a0c092](https://github.com/kubedb/percona-xtradb/commit/70a0c092) Update repository config (#141)
+
+
+
+## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election)
+
+### [v0.3.2](https://github.com/kubedb/pg-leader-election/releases/tag/v0.3.2)
+
+- [e5413e4](https://github.com/kubedb/pg-leader-election/commit/e5413e4) Update Kubernetes v1.18.9 dependencies (#42)
+- [aff7ccb](https://github.com/kubedb/pg-leader-election/commit/aff7ccb) Update Kubernetes v1.18.9 dependencies (#41)
+
+
+
+## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer)
+
+### [v0.2.2](https://github.com/kubedb/pgbouncer/releases/tag/v0.2.2)
+
+- [390e35d5](https://github.com/kubedb/pgbouncer/commit/390e35d5) Prepare for release v0.2.2 (#123)
+- [0629a632](https://github.com/kubedb/pgbouncer/commit/0629a632) Update KubeDB api (#122)
+- [452bbff4](https://github.com/kubedb/pgbouncer/commit/452bbff4) Update Kubernetes v1.18.9 dependencies (#121)
+- [8f9f8691](https://github.com/kubedb/pgbouncer/commit/8f9f8691) Update KubeDB api (#120)
+- [10016a15](https://github.com/kubedb/pgbouncer/commit/10016a15) Update KubeDB api (#119)
+- [301da718](https://github.com/kubedb/pgbouncer/commit/301da718) Update KubeDB api (#118)
+- [cfdcb3ae](https://github.com/kubedb/pgbouncer/commit/cfdcb3ae) Update Kubernetes v1.18.9 dependencies (#117)
+- [ba4f9abf](https://github.com/kubedb/pgbouncer/commit/ba4f9abf) Update e2e workflow (#116)
+- [6b6fa1ce](https://github.com/kubedb/pgbouncer/commit/6b6fa1ce) Update KubeDB api (#115)
+- [644264ce](https://github.com/kubedb/pgbouncer/commit/644264ce) Format shell scripts (#114)
+- [300b8885](https://github.com/kubedb/pgbouncer/commit/300b8885) Update KubeDB api (#113)
+- [74f43dd6](https://github.com/kubedb/pgbouncer/commit/74f43dd6) Update KubeDB api (#112)
+- [762c0c39](https://github.com/kubedb/pgbouncer/commit/762c0c39) Update repository config (#111)
+
+
+
+## [kubedb/postgres](https://github.com/kubedb/postgres)
+
+### [v0.15.2](https://github.com/kubedb/postgres/releases/tag/v0.15.2)
+
+- [4dbc020f](https://github.com/kubedb/postgres/commit/4dbc020f) Prepare for release v0.15.2 (#440)
+- [2e498528](https://github.com/kubedb/postgres/commit/2e498528) Update KubeDB api (#439)
+- [d1a36ba5](https://github.com/kubedb/postgres/commit/d1a36ba5) Update Kubernetes v1.18.9 dependencies (#438)
+- [5e992258](https://github.com/kubedb/postgres/commit/5e992258) Update KubeDB api (#437)
+- [3e690087](https://github.com/kubedb/postgres/commit/3e690087) Update KubeDB api (#436)
+- [c6b5cdea](https://github.com/kubedb/postgres/commit/c6b5cdea) Update KubeDB api (#435)
+- [a580560c](https://github.com/kubedb/postgres/commit/a580560c) Update Kubernetes v1.18.9 dependencies (#434)
+- [baba93f2](https://github.com/kubedb/postgres/commit/baba93f2) Update e2e workflow (#433)
+- [87ccb1df](https://github.com/kubedb/postgres/commit/87ccb1df) Update KubeDB api (#432)
+- [56cd9e3a](https://github.com/kubedb/postgres/commit/56cd9e3a) Format shell scripts (#431)
+- [9f042958](https://github.com/kubedb/postgres/commit/9f042958) Update KubeDB api (#430)
+- [a922ba6f](https://github.com/kubedb/postgres/commit/a922ba6f) Update for release Stash@v2020.11.17 (#429)
+- [8de84f76](https://github.com/kubedb/postgres/commit/8de84f76) Update KubeDB api (#428)
+- [5971da91](https://github.com/kubedb/postgres/commit/5971da91) Update repository config (#427)
+
+
+
+## [kubedb/proxysql](https://github.com/kubedb/proxysql)
+
+### [v0.2.2](https://github.com/kubedb/proxysql/releases/tag/v0.2.2)
+
+- [f0491c46](https://github.com/kubedb/proxysql/commit/f0491c46) Prepare for release v0.2.2 (#136)
+- [cf45d773](https://github.com/kubedb/proxysql/commit/cf45d773) Update KubeDB api (#135)
+- [fdf01a09](https://github.com/kubedb/proxysql/commit/fdf01a09) Update Kubernetes v1.18.9 dependencies (#134)
+- [e47042b6](https://github.com/kubedb/proxysql/commit/e47042b6) Update KubeDB api (#133)
+- [29fde3f7](https://github.com/kubedb/proxysql/commit/29fde3f7) Update KubeDB api (#132)
+- [af7aba87](https://github.com/kubedb/proxysql/commit/af7aba87) Update KubeDB api (#131)
+- [f9f8dcd3](https://github.com/kubedb/proxysql/commit/f9f8dcd3) Update Kubernetes v1.18.9 dependencies (#130)
+- [6e7b1226](https://github.com/kubedb/proxysql/commit/6e7b1226) Update e2e workflow (#129)
+- [00839667](https://github.com/kubedb/proxysql/commit/00839667) Update KubeDB api (#128)
+- [f05cfd4b](https://github.com/kubedb/proxysql/commit/f05cfd4b) Format shell scripts (#127)
+- [5b349d7e](https://github.com/kubedb/proxysql/commit/5b349d7e) Update KubeDB api (#126)
+- [6ae465b8](https://github.com/kubedb/proxysql/commit/6ae465b8) Update for release Stash@v2020.11.17 (#125)
+- [d0b8b205](https://github.com/kubedb/proxysql/commit/d0b8b205) Update KubeDB api (#124)
+- [9230e1f6](https://github.com/kubedb/proxysql/commit/9230e1f6) Update repository config (#123)
+
+
+
+## [kubedb/redis](https://github.com/kubedb/redis)
+
+### [v0.8.2](https://github.com/kubedb/redis/releases/tag/v0.8.2)
+
+- [fdf40740](https://github.com/kubedb/redis/commit/fdf40740) Prepare for release v0.8.2 (#277)
+- [0aa17291](https://github.com/kubedb/redis/commit/0aa17291) Update KubeDB api (#276)
+- [35a9ba9b](https://github.com/kubedb/redis/commit/35a9ba9b) Update Kubernetes v1.18.9 dependencies (#275)
+- [effea60f](https://github.com/kubedb/redis/commit/effea60f) Update KubeDB api (#274)
+- [fd043549](https://github.com/kubedb/redis/commit/fd043549) Update KubeDB api (#273)
+- [2406649f](https://github.com/kubedb/redis/commit/2406649f) Update KubeDB api (#272)
+- [33185a6a](https://github.com/kubedb/redis/commit/33185a6a) Update Kubernetes v1.18.9 dependencies (#271)
+- [0472eb34](https://github.com/kubedb/redis/commit/0472eb34) Update e2e workflow (#270)
+- [42799a81](https://github.com/kubedb/redis/commit/42799a81) Update KubeDB api (#269)
+- [46fdd08f](https://github.com/kubedb/redis/commit/46fdd08f) Format shell scripts (#268)
+- [da81ecdf](https://github.com/kubedb/redis/commit/da81ecdf) Update KubeDB api (#267)
+- [dd157e35](https://github.com/kubedb/redis/commit/dd157e35) Update KubeDB api (#266)
+- [effd3fc2](https://github.com/kubedb/redis/commit/effd3fc2) Update repository config (#265)
+
+
+
+## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector)
+
+### [v0.2.2](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.2.2)
+
+- [e2cb2bf](https://github.com/kubedb/replication-mode-detector/commit/e2cb2bf) Prepare for release v0.2.2 (#104)
+- [973442a](https://github.com/kubedb/replication-mode-detector/commit/973442a) Update KubeDB api (#103)
+- [8758ccf](https://github.com/kubedb/replication-mode-detector/commit/8758ccf) Update Kubernetes v1.18.9 dependencies (#102)
+- [21be65b](https://github.com/kubedb/replication-mode-detector/commit/21be65b) Update KubeDB api (#101)
+- [3c87bcb](https://github.com/kubedb/replication-mode-detector/commit/3c87bcb) Update KubeDB api (#100)
+- [a6bbd6b](https://github.com/kubedb/replication-mode-detector/commit/a6bbd6b) Update KubeDB api (#99)
+- [dedab95](https://github.com/kubedb/replication-mode-detector/commit/dedab95) Update Kubernetes v1.18.9 dependencies (#98)
+- [3c884d7](https://github.com/kubedb/replication-mode-detector/commit/3c884d7) Update KubeDB api (#97)
+- [5c96baa](https://github.com/kubedb/replication-mode-detector/commit/5c96baa) Update KubeDB api (#96)
+- [aef3623](https://github.com/kubedb/replication-mode-detector/commit/aef3623) Update KubeDB api (#95)
+
+
+
+
diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.01.02-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2021.01.02-rc.0.md
new file mode 100644
index 0000000000..d7ff92b916
--- /dev/null
+++ b/content/docs/v2024.1.31/CHANGELOG-v2021.01.02-rc.0.md
@@ -0,0 +1,339 @@
+---
+title: Changelog | KubeDB
+description: Changelog
+menu:
+  docs_v2024.1.31:
+    identifier: changelog-kubedb-v2021.01.02-rc.0
+    name: Changelog-v2021.01.02-rc.0
+    parent: welcome
+    weight: 20210102
+product_name: kubedb
+menu_name: docs_v2024.1.31
+section_menu_id: welcome
+url: /docs/v2024.1.31/welcome/changelog-v2021.01.02-rc.0/
+aliases:
+- /docs/v2024.1.31/CHANGELOG-v2021.01.02-rc.0/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# KubeDB v2021.01.02-rc.0 (2021-01-03)
+
+
+## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler)
+
+### [v0.1.0-rc.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.1.0-rc.0)
+
+- [f346d5e](https://github.com/appscode/kubedb-autoscaler/commit/f346d5e) Prepare for release v0.0.1-rc.0 (#5)
+- [bd5dbd9](https://github.com/appscode/kubedb-autoscaler/commit/bd5dbd9) Remove extra informers (#4)
+- [9b461a5](https://github.com/appscode/kubedb-autoscaler/commit/9b461a5) Enable GitHub Actions (#6)
+- [de39ed0](https://github.com/appscode/kubedb-autoscaler/commit/de39ed0) Update license header (#7)
+- [5518680](https://github.com/appscode/kubedb-autoscaler/commit/5518680) Remove validators and enable ES autoscaler (#3)
+- [c0d65f4](https://github.com/appscode/kubedb-autoscaler/commit/c0d65f4) Add `inMemory` configuration in vertical scaling (#2)
+- [088777c](https://github.com/appscode/kubedb-autoscaler/commit/088777c) Add Elasticsearch Autoscaler Controller (#1)
+- [779a2d2](https://github.com/appscode/kubedb-autoscaler/commit/779a2d2) Add Conditions
+- [cce0828](https://github.com/appscode/kubedb-autoscaler/commit/cce0828) Update Makefile for install and uninstall
+- [04c9f28](https://github.com/appscode/kubedb-autoscaler/commit/04c9f28) Remove some prometheus flags
+- [118284a](https://github.com/appscode/kubedb-autoscaler/commit/118284a) Refactor some common code
+- [bdf8d89](https://github.com/appscode/kubedb-autoscaler/commit/bdf8d89) Fix Webhook
+- [2934025](https://github.com/appscode/kubedb-autoscaler/commit/2934025) Handle empty prometheus vector
+- [c718118](https://github.com/appscode/kubedb-autoscaler/commit/c718118) Fix Trigger
+- [b795a24](https://github.com/appscode/kubedb-autoscaler/commit/b795a24) Update Prometheus Client
+- [20c69c1](https://github.com/appscode/kubedb-autoscaler/commit/20c69c1) Add MongoDBAutoscaler CRD
+- [6c2c2be](https://github.com/appscode/kubedb-autoscaler/commit/6c2c2be) Add Storage Auto Scaler
+
+
+
+## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise)
+
+### [v0.3.0-rc.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.3.0-rc.0)
+
+- [a9ed2e6a](https://github.com/appscode/kubedb-enterprise/commit/a9ed2e6a) Prepare for release v0.3.0-rc.0 (#109)
+- [d62bdf40](https://github.com/appscode/kubedb-enterprise/commit/d62bdf40) Change offshoot selector labels to standard k8s app labels (#96)
+- [137b1d11](https://github.com/appscode/kubedb-enterprise/commit/137b1d11) Add evict pods in MongoDB (#106)
+
+
+
+## [kubedb/apimachinery](https://github.com/kubedb/apimachinery)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.16.0-rc.0)
+
+- [e4cb7ef9](https://github.com/kubedb/apimachinery/commit/e4cb7ef9) MySQL primary service dns helper (#677)
+- [2469f17e](https://github.com/kubedb/apimachinery/commit/2469f17e) Add constants for Elasticsearch TLS reconfiguration (#672)
+- [2c61fb41](https://github.com/kubedb/apimachinery/commit/2c61fb41) Add MongoDB constants (#676)
+- [31584e58](https://github.com/kubedb/apimachinery/commit/31584e58) Add DB constants and tls-reconfigure checker func (#657)
+- [4d67bea1](https://github.com/kubedb/apimachinery/commit/4d67bea1) Add MongoDB & Elasticsearch Autoscaler CRDs (#659)
+- [fb88afcf](https://github.com/kubedb/apimachinery/commit/fb88afcf) Update Kubernetes v1.18.9 dependencies (#675)
+- [56a61c7f](https://github.com/kubedb/apimachinery/commit/56a61c7f) Change default resource limits to 1Gi ram and 500m cpu (#674)
+- [a36050ca](https://github.com/kubedb/apimachinery/commit/a36050ca) Invoke update handler on labels or annotations change
+- [37c68bd0](https://github.com/kubedb/apimachinery/commit/37c68bd0) Change offshoot selector labels to standard k8s app labels (#673)
+- [83fb66c2](https://github.com/kubedb/apimachinery/commit/83fb66c2) Add redis constants and an address function (#663)
+- [2c0e6319](https://github.com/kubedb/apimachinery/commit/2c0e6319) Add support for Elasticsearch volume expansion (#666)
+- [d16f40aa](https://github.com/kubedb/apimachinery/commit/d16f40aa) Add changes to Elasticsearch vertical scaling spec (#662)
+- [938147c4](https://github.com/kubedb/apimachinery/commit/938147c4) Add Elasticsearch scaling constants (#658)
+- [b1641bdf](https://github.com/kubedb/apimachinery/commit/b1641bdf) Update for release Stash@v2020.12.17 (#671)
+- [d37718a2](https://github.com/kubedb/apimachinery/commit/d37718a2) Remove doNotPause logic from namespace validator (#669)
+
+
+
+## [kubedb/cli](https://github.com/kubedb/cli)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.16.0-rc.0)
+
+- [2a3bc5a8](https://github.com/kubedb/cli/commit/2a3bc5a8) Prepare for release v0.16.0-rc.0 (#575)
+- [500b142a](https://github.com/kubedb/cli/commit/500b142a) Update KubeDB api (#574)
+- [8208fcf1](https://github.com/kubedb/cli/commit/8208fcf1) Update KubeDB api (#573)
+- [59ac94e7](https://github.com/kubedb/cli/commit/59ac94e7) Update Kubernetes v1.18.9 dependencies (#572)
+- [1ebd0633](https://github.com/kubedb/cli/commit/1ebd0633) Update KubeDB api (#571)
+- [0ccba4d1](https://github.com/kubedb/cli/commit/0ccba4d1) Update KubeDB api (#570)
+- [770f94be](https://github.com/kubedb/cli/commit/770f94be) Update KubeDB api (#569)
+- [fbdcce08](https://github.com/kubedb/cli/commit/fbdcce08) Update KubeDB api (#568)
+- [93b038e9](https://github.com/kubedb/cli/commit/93b038e9) Update KubeDB api (#567)
+- [ef758783](https://github.com/kubedb/cli/commit/ef758783) Update for release Stash@v2020.12.17 (#566)
+- [07fa4a7e](https://github.com/kubedb/cli/commit/07fa4a7e) Update KubeDB api (#565)
+
+
+
+## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.16.0-rc.0)
+
+- [9961f623](https://github.com/kubedb/elasticsearch/commit/9961f623) Prepare for release v0.16.0-rc.0 (#450)
+- [e7d84a5f](https://github.com/kubedb/elasticsearch/commit/e7d84a5f) Update KubeDB api (#449)
+- [7a40f5a5](https://github.com/kubedb/elasticsearch/commit/7a40f5a5) Update KubeDB api (#448)
+- [c680498d](https://github.com/kubedb/elasticsearch/commit/c680498d) Update Kubernetes v1.18.9 dependencies (#447)
+- [e28277d8](https://github.com/kubedb/elasticsearch/commit/e28277d8) Update KubeDB api (#446)
+- [21f98151](https://github.com/kubedb/elasticsearch/commit/21f98151) Fix annotations passing to AppBinding (#445)
+- [6c7ff056](https://github.com/kubedb/elasticsearch/commit/6c7ff056) Use StatefulSet naming methods (#430)
+- [23a53309](https://github.com/kubedb/elasticsearch/commit/23a53309) Update KubeDB api (#444)
+- [a4217edf](https://github.com/kubedb/elasticsearch/commit/a4217edf) Change offshoot selector labels to standard k8s app labels (#442)
+- [6535adff](https://github.com/kubedb/elasticsearch/commit/6535adff) Delete tests moved to tests repo (#443)
+- [ca2b5be5](https://github.com/kubedb/elasticsearch/commit/ca2b5be5) Update KubeDB api (#441)
+- [ce19a83e](https://github.com/kubedb/elasticsearch/commit/ce19a83e) Update KubeDB api (#440)
+- [662902a9](https://github.com/kubedb/elasticsearch/commit/662902a9) Update immutable field list (#435)
+- [efe804c9](https://github.com/kubedb/elasticsearch/commit/efe804c9) Update KubeDB api (#438)
+- [6ac3eb02](https://github.com/kubedb/elasticsearch/commit/6ac3eb02) Update for release Stash@v2020.12.17 (#437)
+- [1da53ab9](https://github.com/kubedb/elasticsearch/commit/1da53ab9) Update KubeDB api (#436)
+
+
+
+## [kubedb/installer](https://github.com/kubedb/installer)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/installer/releases/tag/v0.16.0-rc.0)
+
+- [feb4a3f](https://github.com/kubedb/installer/commit/feb4a3f) Prepare for release v0.16.0-rc.0 (#218)
+- [7e17d4d](https://github.com/kubedb/installer/commit/7e17d4d) Add kubedb-autoscaler chart (#137)
+- [fe87336](https://github.com/kubedb/installer/commit/fe87336) Rename gerbage-collector-rbac.yaml to garbage-collector-rbac.yaml
+- [5630a5e](https://github.com/kubedb/installer/commit/5630a5e) Use kmodules.xyz/schema-checker to validate values schema (#217)
+- [e22e67e](https://github.com/kubedb/installer/commit/e22e67e) Update repository config (#215)
+- [3ded17a](https://github.com/kubedb/installer/commit/3ded17a) Update Kubernetes v1.18.9 dependencies (#214)
+- [cb9a295](https://github.com/kubedb/installer/commit/cb9a295) Add enforceTerminationPolicy (#212)
+
+
+
+## [kubedb/memcached](https://github.com/kubedb/memcached)
+
+### [v0.9.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.9.0-rc.0)
+
+- [33752041](https://github.com/kubedb/memcached/commit/33752041) Prepare for release v0.9.0-rc.0 (#269)
+- [9cf96e13](https://github.com/kubedb/memcached/commit/9cf96e13) Update KubeDB api (#268)
+- [0bfe24df](https://github.com/kubedb/memcached/commit/0bfe24df) Update KubeDB api (#267)
+- [29fc8f33](https://github.com/kubedb/memcached/commit/29fc8f33) Update Kubernetes v1.18.9 dependencies (#266)
+- [c9dfe14c](https://github.com/kubedb/memcached/commit/c9dfe14c) Update KubeDB api (#265)
+- [f75073c9](https://github.com/kubedb/memcached/commit/f75073c9) Fix annotations passing to AppBinding (#264)
+- [28cdfdfd](https://github.com/kubedb/memcached/commit/28cdfdfd) Initialize mapper
+- [6a9243ab](https://github.com/kubedb/memcached/commit/6a9243ab) Change offshoot selector labels to standard k8s app labels (#263)
+- [e838aec4](https://github.com/kubedb/memcached/commit/e838aec4) Update KubeDB api (#262)
+- [88654cdd](https://github.com/kubedb/memcached/commit/88654cdd) Update KubeDB api (#261)
+- [c2fb7c2f](https://github.com/kubedb/memcached/commit/c2fb7c2f) Update KubeDB api (#260)
+- [5cc2cf17](https://github.com/kubedb/memcached/commit/5cc2cf17) Update KubeDB api (#259)
+
+
+
+## [kubedb/mongodb](https://github.com/kubedb/mongodb)
+
+### [v0.9.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.9.0-rc.0)
+
+- [ee410983](https://github.com/kubedb/mongodb/commit/ee410983) Prepare for release v0.9.0-rc.0 (#348)
+- [b39b664b](https://github.com/kubedb/mongodb/commit/b39b664b) Update KubeDB api (#347)
+- [84e007fe](https://github.com/kubedb/mongodb/commit/84e007fe) Update KubeDB api (#346)
+- [e8aa1f8a](https://github.com/kubedb/mongodb/commit/e8aa1f8a) Close connections when operation completes (#338)
+- [1ec2a2c7](https://github.com/kubedb/mongodb/commit/1ec2a2c7) Update Kubernetes v1.18.9 dependencies (#345)
+- [7306fb26](https://github.com/kubedb/mongodb/commit/7306fb26) Update KubeDB api (#344)
+- [efa62a85](https://github.com/kubedb/mongodb/commit/efa62a85) Fix annotations passing to AppBinding (#342)
+- [9d88e69e](https://github.com/kubedb/mongodb/commit/9d88e69e) Remove `inMemory` setting from Config Server (#343)
+- [32b96d12](https://github.com/kubedb/mongodb/commit/32b96d12) Change offshoot selector labels to standard k8s app labels (#341)
+- [67fcdbf4](https://github.com/kubedb/mongodb/commit/67fcdbf4) Update KubeDB api (#340)
+- [cf2c0778](https://github.com/kubedb/mongodb/commit/cf2c0778) Update KubeDB api (#339)
+- [232a4a00](https://github.com/kubedb/mongodb/commit/232a4a00) Update KubeDB api (#337)
+- [0a1307e7](https://github.com/kubedb/mongodb/commit/0a1307e7) Update for release Stash@v2020.12.17 (#336)
+- [89b4e4fc](https://github.com/kubedb/mongodb/commit/89b4e4fc) Update KubeDB api (#335)
+
+
+
+## [kubedb/mysql](https://github.com/kubedb/mysql)
+
+### [v0.9.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.9.0-rc.0)
+
+- [ad9d9879](https://github.com/kubedb/mysql/commit/ad9d9879) Prepare for release v0.9.0-rc.0 (#337)
+- [a9e9d1f7](https://github.com/kubedb/mysql/commit/a9e9d1f7) Fix args for TLS (#336)
+- [9dd89572](https://github.com/kubedb/mysql/commit/9dd89572) Update KubeDB api (#335)
+- [29ff2c57](https://github.com/kubedb/mysql/commit/29ff2c57) Fixes DB Health Checker and StatefulSet Patch (#322)
+- [47470895](https://github.com/kubedb/mysql/commit/47470895) Remove unnecessary StatefulSet waitloop (#331)
+- [3aec8f59](https://github.com/kubedb/mysql/commit/3aec8f59) Update Kubernetes v1.18.9 dependencies (#334)
+- [c1ca980d](https://github.com/kubedb/mysql/commit/c1ca980d) Update KubeDB api (#333)
+- [96f4b59c](https://github.com/kubedb/mysql/commit/96f4b59c) Fix annotations passing to AppBinding (#332)
+- [76f371a2](https://github.com/kubedb/mysql/commit/76f371a2) Change offshoot selector labels to standard k8s app labels (#329)
+- [aa3d6b6f](https://github.com/kubedb/mysql/commit/aa3d6b6f) Delete tests moved to tests repo (#330)
+- [6c544d2c](https://github.com/kubedb/mysql/commit/6c544d2c) Update KubeDB api (#328)
+- [fe03a36c](https://github.com/kubedb/mysql/commit/fe03a36c) Update KubeDB api (#327)
+- [29fd7474](https://github.com/kubedb/mysql/commit/29fd7474) Use basic-auth secret type for auth secret (#326)
+- [90457549](https://github.com/kubedb/mysql/commit/90457549) Update KubeDB api (#325)
+- [1487f15e](https://github.com/kubedb/mysql/commit/1487f15e) Update for release Stash@v2020.12.17 (#324)
+- [2d7fa549](https://github.com/kubedb/mysql/commit/2d7fa549) Update KubeDB api (#323)
+
+
+
+## [kubedb/operator](https://github.com/kubedb/operator)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/operator/releases/tag/v0.16.0-rc.0)
+
+- [3ee052dc](https://github.com/kubedb/operator/commit/3ee052dc) Prepare for release v0.16.0-rc.0 (#376)
+- [dbb5195b](https://github.com/kubedb/operator/commit/dbb5195b) Update KubeDB api (#375)
+- [4b162e08](https://github.com/kubedb/operator/commit/4b162e08) Update KubeDB api (#374)
+- [39762b0f](https://github.com/kubedb/operator/commit/39762b0f) Update KubeDB api (#373)
+- [d6a2cf27](https://github.com/kubedb/operator/commit/d6a2cf27) Change offshoot selector labels to standard k8s app labels (#372)
+- [36a8ab6f](https://github.com/kubedb/operator/commit/36a8ab6f) Update Kubernetes v1.18.9 dependencies (#371)
+- [554638e0](https://github.com/kubedb/operator/commit/554638e0) Update KubeDB api (#369)
+- [8c7ef91d](https://github.com/kubedb/operator/commit/8c7ef91d) Update KubeDB api (#368)
+- [dd96574e](https://github.com/kubedb/operator/commit/dd96574e) Update KubeDB api (#367)
+- [eef04de1](https://github.com/kubedb/operator/commit/eef04de1) Update KubeDB api (#366)
+
+
+
+## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb)
+
+### [v0.3.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.3.0-rc.0)
+
+- [f545beb4](https://github.com/kubedb/percona-xtradb/commit/f545beb4) Prepare for release v0.3.0-rc.0 (#166)
+- [c5d0c826](https://github.com/kubedb/percona-xtradb/commit/c5d0c826) Update KubeDB api (#164)
+- [b3da5757](https://github.com/kubedb/percona-xtradb/commit/b3da5757) Fix annotations passing to AppBinding (#163)
+- [7aeaee74](https://github.com/kubedb/percona-xtradb/commit/7aeaee74) Change offshoot selector labels to standard k8s app labels (#161)
+- [a36ffa87](https://github.com/kubedb/percona-xtradb/commit/a36ffa87) Update Kubernetes v1.18.9 dependencies (#162)
+- [fa3a2a9d](https://github.com/kubedb/percona-xtradb/commit/fa3a2a9d) Update KubeDB api (#160)
+- [a1db6821](https://github.com/kubedb/percona-xtradb/commit/a1db6821) Update KubeDB api (#159)
+- [4357b18a](https://github.com/kubedb/percona-xtradb/commit/4357b18a) Use basic-auth secret type for auth secret (#158)
+- [f9ccfc4e](https://github.com/kubedb/percona-xtradb/commit/f9ccfc4e) Update KubeDB api (#157)
+- [11739165](https://github.com/kubedb/percona-xtradb/commit/11739165) Update for release Stash@v2020.12.17 (#156)
+- [80bf041c](https://github.com/kubedb/percona-xtradb/commit/80bf041c) Update KubeDB api (#155)
+
+
+
+## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election)
+
+### [v0.4.0-rc.0](https://github.com/kubedb/pg-leader-election/releases/tag/v0.4.0-rc.0)
+
+- [31050c1](https://github.com/kubedb/pg-leader-election/commit/31050c1) Update KubeDB api (#44)
+- [dc786b7](https://github.com/kubedb/pg-leader-election/commit/dc786b7) Update KubeDB api (#43)
+
+
+
+## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer)
+
+### [v0.3.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.3.0-rc.0)
+
+- [51c8fee2](https://github.com/kubedb/pgbouncer/commit/51c8fee2) Prepare for release v0.3.0-rc.0 (#132)
+- [fded227a](https://github.com/kubedb/pgbouncer/commit/fded227a) Update KubeDB api (#130)
+- [7702e10a](https://github.com/kubedb/pgbouncer/commit/7702e10a) Change offshoot selector labels to standard k8s app labels (#128)
+- [2ba5284c](https://github.com/kubedb/pgbouncer/commit/2ba5284c) Update Kubernetes v1.18.9 dependencies (#129)
+- [3507a96c](https://github.com/kubedb/pgbouncer/commit/3507a96c) Update KubeDB api (#127)
+- [fc8330e4](https://github.com/kubedb/pgbouncer/commit/fc8330e4) Update KubeDB api (#126)
+- [3e9b4e77](https://github.com/kubedb/pgbouncer/commit/3e9b4e77) Update KubeDB api (#125)
+- [6c85ca6a](https://github.com/kubedb/pgbouncer/commit/6c85ca6a) Update KubeDB api (#124)
+
+
+
+## [kubedb/postgres](https://github.com/kubedb/postgres)
+
+### [v0.16.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.16.0-rc.0)
+
+- [c7b618f5](https://github.com/kubedb/postgres/commit/c7b618f5) Prepare for release v0.16.0-rc.0 (#452)
+- [be060733](https://github.com/kubedb/postgres/commit/be060733) Update KubeDB api (#451)
+- [d2d2f32c](https://github.com/kubedb/postgres/commit/d2d2f32c) Update KubeDB api (#450)
+- [ed375b2b](https://github.com/kubedb/postgres/commit/ed375b2b) Update KubeDB api (#449)
+- [a3940790](https://github.com/kubedb/postgres/commit/a3940790) Fix annotations passing to AppBinding (#448)
+- [f0b5a9dd](https://github.com/kubedb/postgres/commit/f0b5a9dd) Change offshoot selector labels to standard k8s app labels (#447)
+- [eb4f80ab](https://github.com/kubedb/postgres/commit/eb4f80ab) Update KubeDB api (#446)
+- [c9075b5a](https://github.com/kubedb/postgres/commit/c9075b5a) Update KubeDB api (#445)
+- [a04891e1](https://github.com/kubedb/postgres/commit/a04891e1) Use basic-auth secret type for auth secret (#444)
+- [e7503eec](https://github.com/kubedb/postgres/commit/e7503eec) Update KubeDB api (#443)
+- [0eb3a1b9](https://github.com/kubedb/postgres/commit/0eb3a1b9) Update for release Stash@v2020.12.17 (#442)
+- [c3ea786d](https://github.com/kubedb/postgres/commit/c3ea786d) Update KubeDB api (#441)
+
+
+
+## [kubedb/proxysql](https://github.com/kubedb/proxysql)
+
+### [v0.3.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.3.0-rc.0)
+
+- [1ae8aed1](https://github.com/kubedb/proxysql/commit/1ae8aed1) Prepare for release v0.3.0-rc.0 (#147)
+- [0e60bddf](https://github.com/kubedb/proxysql/commit/0e60bddf) Update KubeDB api (#145)
+- [df11880c](https://github.com/kubedb/proxysql/commit/df11880c) Change offshoot selector labels to standard k8s app labels (#143)
+- [540bdea2](https://github.com/kubedb/proxysql/commit/540bdea2) Update Kubernetes v1.18.9 dependencies (#144)
+- [52907cb4](https://github.com/kubedb/proxysql/commit/52907cb4) Update KubeDB api (#142)
+- [d1686708](https://github.com/kubedb/proxysql/commit/d1686708) Update KubeDB api (#141)
+- [e5e2a798](https://github.com/kubedb/proxysql/commit/e5e2a798) Use basic-auth secret type for auth secret (#140)
+- [8cf2a9e4](https://github.com/kubedb/proxysql/commit/8cf2a9e4) Update KubeDB api (#139)
+- [7b0cdb0f](https://github.com/kubedb/proxysql/commit/7b0cdb0f) Update for release Stash@v2020.12.17 (#138)
+- [ce7136a1](https://github.com/kubedb/proxysql/commit/ce7136a1) Update KubeDB api (#137)
+
+
+
+## [kubedb/redis](https://github.com/kubedb/redis)
+
+### [v0.9.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.9.0-rc.0)
+
+- [b416a016](https://github.com/kubedb/redis/commit/b416a016) Prepare for release v0.9.0-rc.0 (#290)
+- [751b8f6b](https://github.com/kubedb/redis/commit/751b8f6b) Update KubeDB api (#289)
+- [0affafe9](https://github.com/kubedb/redis/commit/0affafe9) Update KubeDB api (#287)
+- [665d6b4f](https://github.com/kubedb/redis/commit/665d6b4f) Remove tests moved to kubedb/tests (#288)
+- [6c254e3b](https://github.com/kubedb/redis/commit/6c254e3b) Update KubeDB api (#286)
+- [1b73def3](https://github.com/kubedb/redis/commit/1b73def3) Fix annotations passing to AppBinding (#285)
+- [dc349058](https://github.com/kubedb/redis/commit/dc349058) Update KubeDB api (#283)
+- [7d47e506](https://github.com/kubedb/redis/commit/7d47e506) Change offshoot selector labels to standard k8s app labels (#282)
+- [f8f7570f](https://github.com/kubedb/redis/commit/f8f7570f) Update Kubernetes v1.18.9 dependencies (#284)
+- [63cb769d](https://github.com/kubedb/redis/commit/63cb769d) Update KubeDB api (#281)
+- [19ec4460](https://github.com/kubedb/redis/commit/19ec4460) Update KubeDB api (#280)
+- [af67e190](https://github.com/kubedb/redis/commit/af67e190) Update KubeDB api (#279)
+- [4b89034c](https://github.com/kubedb/redis/commit/4b89034c) Update KubeDB api (#278)
+
+
+
+## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector)
+
+### [v0.3.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.3.0-rc.0)
+
+- [179e153](https://github.com/kubedb/replication-mode-detector/commit/179e153) Prepare for release v0.3.0-rc.0 (#115)
+- [d47023b](https://github.com/kubedb/replication-mode-detector/commit/d47023b) Update KubeDB api (#114)
+- [3e5db31](https://github.com/kubedb/replication-mode-detector/commit/3e5db31) Update KubeDB api (#113)
+- [987f068](https://github.com/kubedb/replication-mode-detector/commit/987f068) Change offshoot selector labels to standard k8s app labels (#110)
+- [21fc76f](https://github.com/kubedb/replication-mode-detector/commit/21fc76f) Update Kubernetes v1.18.9 dependencies (#112)
+- [db85cbd](https://github.com/kubedb/replication-mode-detector/commit/db85cbd) Close database connection when operation completes (#107)
+- [740d1d8](https://github.com/kubedb/replication-mode-detector/commit/740d1d8) Update Kubernetes v1.18.9 dependencies (#111)
+- [6f228a5](https://github.com/kubedb/replication-mode-detector/commit/6f228a5) Update KubeDB api (#109)
+- [256ea7a](https://github.com/kubedb/replication-mode-detector/commit/256ea7a) Update KubeDB api (#108)
+- [7a9acc0](https://github.com/kubedb/replication-mode-detector/commit/7a9acc0) Update KubeDB api (#106)
+- [21a18c2](https://github.com/kubedb/replication-mode-detector/commit/21a18c2) Update KubeDB api (#105)
+
+
+
+
diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.01.14.md b/content/docs/v2024.1.31/CHANGELOG-v2021.01.14.md
new file mode 100644
index 0000000000..397f069e3e
--- /dev/null
+++ b/content/docs/v2024.1.31/CHANGELOG-v2021.01.14.md
@@ -0,0 +1,496 @@
+---
+title: Changelog | KubeDB
+description: Changelog
+menu:
+  docs_v2024.1.31:
+    identifier: changelog-kubedb-v2021.01.14
+    name: Changelog-v2021.01.14
+    parent: welcome
+    weight: 20210114
+product_name: kubedb
+menu_name: docs_v2024.1.31
+section_menu_id: welcome
+url: /docs/v2024.1.31/welcome/changelog-v2021.01.14/
+aliases:
+- /docs/v2024.1.31/CHANGELOG-v2021.01.14/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# KubeDB v2021.01.14 (2021-01-14)
+
+
+## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler)
+
+### [v0.1.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.1.0)
+
+- [1eb7c3b](https://github.com/appscode/kubedb-autoscaler/commit/1eb7c3b) Prepare for release v0.1.0 (#8)
+- [f346d5e](https://github.com/appscode/kubedb-autoscaler/commit/f346d5e) Prepare for release v0.0.1-rc.0 (#5)
+- [bd5dbd9](https://github.com/appscode/kubedb-autoscaler/commit/bd5dbd9) Remove extra informers (#4)
+- [9b461a5](https://github.com/appscode/kubedb-autoscaler/commit/9b461a5) Enable GitHub Actions (#6)
+- [de39ed0](https://github.com/appscode/kubedb-autoscaler/commit/de39ed0) Update license header (#7)
+- [5518680](https://github.com/appscode/kubedb-autoscaler/commit/5518680) Remove validators and enable ES autoscaler (#3)
+- [c0d65f4](https://github.com/appscode/kubedb-autoscaler/commit/c0d65f4) Add `inMemory` configuration in vertical scaling (#2)
+- [088777c](https://github.com/appscode/kubedb-autoscaler/commit/088777c) Add Elasticsearch Autoscaler Controller (#1)
+- [779a2d2](https://github.com/appscode/kubedb-autoscaler/commit/779a2d2) Add Conditions
+- [cce0828](https://github.com/appscode/kubedb-autoscaler/commit/cce0828) Update Makefile for install and uninstall
+- [04c9f28](https://github.com/appscode/kubedb-autoscaler/commit/04c9f28) Remove some prometheus flags
+- [118284a](https://github.com/appscode/kubedb-autoscaler/commit/118284a) Refactor some common code
+- [bdf8d89](https://github.com/appscode/kubedb-autoscaler/commit/bdf8d89) Fix Webhook
+- [2934025](https://github.com/appscode/kubedb-autoscaler/commit/2934025) Handle empty prometheus vector
+- [c718118](https://github.com/appscode/kubedb-autoscaler/commit/c718118) Fix Trigger
+- [b795a24](https://github.com/appscode/kubedb-autoscaler/commit/b795a24) Update Prometheus Client
+- [20c69c1](https://github.com/appscode/kubedb-autoscaler/commit/20c69c1) Add MongoDBAutoscaler CRD
+- [6c2c2be](https://github.com/appscode/kubedb-autoscaler/commit/6c2c2be) Add Storage Auto Scaler
+
+
+
+## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise)
+
+### [v0.3.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.3.0)
+
+- [50a3e6b9](https://github.com/appscode/kubedb-enterprise/commit/50a3e6b9) Prepare for release v0.3.0 (#119)
+- [b8195907](https://github.com/appscode/kubedb-enterprise/commit/b8195907) Fix reconfigure TLS condition (#118)
+- [62fc25ce](https://github.com/appscode/kubedb-enterprise/commit/62fc25ce) Add ServiceName into certificate DNS list (#113)
+- [7684481a](https://github.com/appscode/kubedb-enterprise/commit/7684481a) Delete PVC in MongoDB while scaling down horizontally (#116)
+- [729d44c1](https://github.com/appscode/kubedb-enterprise/commit/729d44c1) Add Elasticsearch ops requests support (#115)
+- [bfb5f0f5](https://github.com/appscode/kubedb-enterprise/commit/bfb5f0f5) use evict pod instead of delete (#111)
+- [93edad9b](https://github.com/appscode/kubedb-enterprise/commit/93edad9b) Update Reconfigure (#105)
+- [fc5fac23](https://github.com/appscode/kubedb-enterprise/commit/fc5fac23) Fix vertical scaling resources (#107)
+- [332c22c8](https://github.com/appscode/kubedb-enterprise/commit/332c22c8) Update Volume Expansion (#112)
+- [c003adef](https://github.com/appscode/kubedb-enterprise/commit/c003adef) Add MySQL VolumeExpansion and Reconfiguration (#76)
+- [a9ed2e6a](https://github.com/appscode/kubedb-enterprise/commit/a9ed2e6a) Prepare for release v0.3.0-rc.0 (#109)
+- [d62bdf40](https://github.com/appscode/kubedb-enterprise/commit/d62bdf40) Change offshoot selector labels to standard k8s app labels (#96)
+- [137b1d11](https://github.com/appscode/kubedb-enterprise/commit/137b1d11) Add evict pods in MongoDB (#106)
+
+
+
+## [kubedb/apimachinery](https://github.com/kubedb/apimachinery)
+
+### [v0.16.0](https://github.com/kubedb/apimachinery/releases/tag/v0.16.0)
+
+- [d516b399](https://github.com/kubedb/apimachinery/commit/d516b399) Keep resource request & limit is in sync (#685)
+- [8766c8b9](https://github.com/kubedb/apimachinery/commit/8766c8b9) Remove readiness and liveness prove from MySQL helper (#686)
+- [06de1945](https://github.com/kubedb/apimachinery/commit/06de1945) Use suffix instead of prefix for ES pods (#684)
+- [a76dc4cc](https://github.com/kubedb/apimachinery/commit/a76dc4cc) Move all MongoDB constants (#683)
+- [7a3dd5ee](https://github.com/kubedb/apimachinery/commit/7a3dd5ee) Set default affinity rules for MySQL and Postgres (#680)
+- [45768d13](https://github.com/kubedb/apimachinery/commit/45768d13) Make sysctl initContainer optional (#682)
+- [91826678](https://github.com/kubedb/apimachinery/commit/91826678) Use kubedb.com prefix for ES node roles (#678)
+- [31ec37c3](https://github.com/kubedb/apimachinery/commit/31ec37c3) Add MySQL OpsRequest constants (#681)
+- [2bfc35e9](https://github.com/kubedb/apimachinery/commit/2bfc35e9) Add Hosts helper for MySQL (#679)
+- [e4cb7ef9](https://github.com/kubedb/apimachinery/commit/e4cb7ef9) MySQL primary service dns helper (#677)
+- [2469f17e](https://github.com/kubedb/apimachinery/commit/2469f17e) Add constants for Elasticsearch TLS reconfiguration (#672)
+- [2c61fb41](https://github.com/kubedb/apimachinery/commit/2c61fb41) Add MongoDB constants (#676)
+- [31584e58](https://github.com/kubedb/apimachinery/commit/31584e58) Add DB constants and tls-reconfigure checker func (#657)
+- [4d67bea1](https://github.com/kubedb/apimachinery/commit/4d67bea1) Add MongoDB & Elasticsearch Autoscaler CRDs (#659)
+- [fb88afcf](https://github.com/kubedb/apimachinery/commit/fb88afcf) Update Kubernetes v1.18.9 dependencies (#675)
+- [56a61c7f](https://github.com/kubedb/apimachinery/commit/56a61c7f) Change default resource limits to 1Gi ram and 500m cpu (#674)
+- [a36050ca](https://github.com/kubedb/apimachinery/commit/a36050ca) Invoke update handler on labels or annotations change
+- [37c68bd0](https://github.com/kubedb/apimachinery/commit/37c68bd0) Change offshoot selector labels to standard k8s app labels (#673)
+- [83fb66c2](https://github.com/kubedb/apimachinery/commit/83fb66c2) Add redis constants and an address function (#663)
+- [2c0e6319](https://github.com/kubedb/apimachinery/commit/2c0e6319) Add support for Elasticsearch volume expansion (#666)
+- [d16f40aa](https://github.com/kubedb/apimachinery/commit/d16f40aa) Add changes to Elasticsearch vertical scaling spec (#662)
+- [938147c4](https://github.com/kubedb/apimachinery/commit/938147c4) Add Elasticsearch scaling constants (#658)
+- [b1641bdf](https://github.com/kubedb/apimachinery/commit/b1641bdf) Update for release Stash@v2020.12.17 (#671)
+- [d37718a2](https://github.com/kubedb/apimachinery/commit/d37718a2) Remove doNotPause logic from namespace validator (#669)
+
+
+
+## [kubedb/cli](https://github.com/kubedb/cli)
+
+### [v0.16.0](https://github.com/kubedb/cli/releases/tag/v0.16.0)
+
+- [82be6c3c](https://github.com/kubedb/cli/commit/82be6c3c) Prepare for release v0.16.0 (#578)
+- [4e216d5b](https://github.com/kubedb/cli/commit/4e216d5b) Update KubeDB api (#577)
+- [d49954d2](https://github.com/kubedb/cli/commit/d49954d2) Update KubeDB api (#576)
+- [2a3bc5a8](https://github.com/kubedb/cli/commit/2a3bc5a8) Prepare for release v0.16.0-rc.0 (#575)
+- [500b142a](https://github.com/kubedb/cli/commit/500b142a) Update KubeDB api (#574)
+- [8208fcf1](https://github.com/kubedb/cli/commit/8208fcf1) Update KubeDB api (#573)
+- [59ac94e7](https://github.com/kubedb/cli/commit/59ac94e7) Update Kubernetes v1.18.9 dependencies (#572)
+- [1ebd0633](https://github.com/kubedb/cli/commit/1ebd0633) Update KubeDB api (#571)
+- [0ccba4d1](https://github.com/kubedb/cli/commit/0ccba4d1) Update KubeDB api (#570)
+- [770f94be](https://github.com/kubedb/cli/commit/770f94be) Update KubeDB api (#569)
+- [fbdcce08](https://github.com/kubedb/cli/commit/fbdcce08) Update KubeDB api (#568)
+- [93b038e9](https://github.com/kubedb/cli/commit/93b038e9) Update KubeDB api (#567)
+- [ef758783](https://github.com/kubedb/cli/commit/ef758783) Update for release Stash@v2020.12.17 (#566)
+- [07fa4a7e](https://github.com/kubedb/cli/commit/07fa4a7e) Update KubeDB api (#565)
+
+
+
+## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch)
+
+### [v0.16.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.16.0)
+
+- [e7304c07](https://github.com/kubedb/elasticsearch/commit/e7304c07) Prepare for release v0.16.0 (#456)
+- [e0cf49e0](https://github.com/kubedb/elasticsearch/commit/e0cf49e0) Use suffix instead of prefix for ES pods (#455)
+- [8c26a131](https://github.com/kubedb/elasticsearch/commit/8c26a131) Use version from version object and delete olivere go-client (#454)
+- [c3964ec7](https://github.com/kubedb/elasticsearch/commit/c3964ec7) Use original Elasticsearch version for opendistro version crd (#453)
+- [f60129fc](https://github.com/kubedb/elasticsearch/commit/f60129fc) Add various fixes (#439)
+- [07b2810e](https://github.com/kubedb/elasticsearch/commit/07b2810e) Make sysctl initContainer optional (#452)
+- [694e922c](https://github.com/kubedb/elasticsearch/commit/694e922c) Update KubeDB api (#451)
+- [9961f623](https://github.com/kubedb/elasticsearch/commit/9961f623) Prepare for release v0.16.0-rc.0 (#450)
+- [e7d84a5f](https://github.com/kubedb/elasticsearch/commit/e7d84a5f) Update KubeDB api (#449)
+- [7a40f5a5](https://github.com/kubedb/elasticsearch/commit/7a40f5a5) Update KubeDB api (#448)
+- [c680498d](https://github.com/kubedb/elasticsearch/commit/c680498d) Update Kubernetes v1.18.9 dependencies (#447)
+- [e28277d8](https://github.com/kubedb/elasticsearch/commit/e28277d8) Update KubeDB api (#446)
+- [21f98151](https://github.com/kubedb/elasticsearch/commit/21f98151) Fix annotations passing to AppBinding (#445)
+- [6c7ff056](https://github.com/kubedb/elasticsearch/commit/6c7ff056) Use StatefulSet naming methods (#430)
+- [23a53309](https://github.com/kubedb/elasticsearch/commit/23a53309) Update KubeDB api (#444)
+- [a4217edf](https://github.com/kubedb/elasticsearch/commit/a4217edf) Change offshoot selector labels to standard k8s app labels (#442)
+- [6535adff](https://github.com/kubedb/elasticsearch/commit/6535adff) Delete tests moved to tests repo (#443)
+- [ca2b5be5](https://github.com/kubedb/elasticsearch/commit/ca2b5be5) Update KubeDB api (#441)
+- [ce19a83e](https://github.com/kubedb/elasticsearch/commit/ce19a83e) Update KubeDB api (#440)
+- [662902a9](https://github.com/kubedb/elasticsearch/commit/662902a9) Update immutable field list (#435)
+- [efe804c9](https://github.com/kubedb/elasticsearch/commit/efe804c9) Update KubeDB api (#438)
+- [6ac3eb02](https://github.com/kubedb/elasticsearch/commit/6ac3eb02) Update for release Stash@v2020.12.17 (#437)
+- [1da53ab9](https://github.com/kubedb/elasticsearch/commit/1da53ab9) Update KubeDB api (#436)
+
+
+
+## [kubedb/installer](https://github.com/kubedb/installer)
+
+### [v0.16.0](https://github.com/kubedb/installer/releases/tag/v0.16.0)
+
+- [27d1591](https://github.com/kubedb/installer/commit/27d1591) Prepare for release v0.16.0 (#224)
+- [c4b063d](https://github.com/kubedb/installer/commit/c4b063d) Add permissions for updating pod status (#223)
+- [724b8a6](https://github.com/kubedb/installer/commit/724b8a6) Add permission to update pod status (#222)
+- [b7e69f3](https://github.com/kubedb/installer/commit/b7e69f3) Add permission to delete PVC for enterprise operator (#221)
+- [3064204](https://github.com/kubedb/installer/commit/3064204) Use original underlying Elasticsearch version in openDistro version crds (#220)
+- [5d8d3db](https://github.com/kubedb/installer/commit/5d8d3db) Update Percona MongoDB Server Images (#219)
+- [feb4a3f](https://github.com/kubedb/installer/commit/feb4a3f) Prepare for release v0.16.0-rc.0 (#218)
+- [7e17d4d](https://github.com/kubedb/installer/commit/7e17d4d) Add kubedb-autoscaler chart (#137)
+- [fe87336](https://github.com/kubedb/installer/commit/fe87336) Rename gerbage-collector-rbac.yaml to garbage-collector-rbac.yaml
+- [5630a5e](https://github.com/kubedb/installer/commit/5630a5e) Use kmodules.xyz/schema-checker to validate values schema (#217)
+- [e22e67e](https://github.com/kubedb/installer/commit/e22e67e) Update repository config (#215)
+- [3ded17a](https://github.com/kubedb/installer/commit/3ded17a) Update Kubernetes v1.18.9 dependencies (#214)
+- [cb9a295](https://github.com/kubedb/installer/commit/cb9a295) Add enforceTerminationPolicy (#212)
+
+
+
+## [kubedb/memcached](https://github.com/kubedb/memcached)
+
+### [v0.9.0](https://github.com/kubedb/memcached/releases/tag/v0.9.0)
+
+- [bdbf3281](https://github.com/kubedb/memcached/commit/bdbf3281) Prepare for release v0.9.0 (#272)
+- [b67eb377](https://github.com/kubedb/memcached/commit/b67eb377) Update KubeDB api (#271)
+- [c1104043](https://github.com/kubedb/memcached/commit/c1104043) Update KubeDB api (#270)
+- [33752041](https://github.com/kubedb/memcached/commit/33752041) Prepare for release v0.9.0-rc.0 (#269)
+- [9cf96e13](https://github.com/kubedb/memcached/commit/9cf96e13) Update KubeDB api (#268)
+- [0bfe24df](https://github.com/kubedb/memcached/commit/0bfe24df) Update KubeDB api (#267)
+- [29fc8f33](https://github.com/kubedb/memcached/commit/29fc8f33) Update Kubernetes v1.18.9 dependencies (#266)
+- [c9dfe14c](https://github.com/kubedb/memcached/commit/c9dfe14c) Update KubeDB api (#265)
+- [f75073c9](https://github.com/kubedb/memcached/commit/f75073c9) Fix annotations passing to AppBinding (#264)
+- [28cdfdfd](https://github.com/kubedb/memcached/commit/28cdfdfd) Initialize mapper
+- [6a9243ab](https://github.com/kubedb/memcached/commit/6a9243ab) Change offshoot selector labels to standard k8s app labels (#263)
+- [e838aec4](https://github.com/kubedb/memcached/commit/e838aec4) Update KubeDB api (#262)
+- [88654cdd](https://github.com/kubedb/memcached/commit/88654cdd) Update KubeDB api (#261)
+- [c2fb7c2f](https://github.com/kubedb/memcached/commit/c2fb7c2f) Update KubeDB api (#260)
+- [5cc2cf17](https://github.com/kubedb/memcached/commit/5cc2cf17) Update KubeDB api (#259)
+
+
+
+## [kubedb/mongodb](https://github.com/kubedb/mongodb)
+
+### [v0.9.0](https://github.com/kubedb/mongodb/releases/tag/v0.9.0)
+
+- [59e808c4](https://github.com/kubedb/mongodb/commit/59e808c4) Prepare for release v0.9.0 (#354)
+- [2d5c1629](https://github.com/kubedb/mongodb/commit/2d5c1629) Use constants from apimachinery (#352)
+- [55ef5143](https://github.com/kubedb/mongodb/commit/55ef5143) Add inMemory Validator (#353)
+- [3fb3258a](https://github.com/kubedb/mongodb/commit/3fb3258a) Update condition to not panic on invalid TLS configuration (#351)
+- [1e9bb613](https://github.com/kubedb/mongodb/commit/1e9bb613) Update KubeDB api (#350)
+- [f23949c6](https://github.com/kubedb/mongodb/commit/f23949c6) Update KubeDB api (#349)
+- [ee410983](https://github.com/kubedb/mongodb/commit/ee410983) Prepare for release v0.9.0-rc.0 (#348)
+- [b39b664b](https://github.com/kubedb/mongodb/commit/b39b664b) Update KubeDB api (#347)
+- [84e007fe](https://github.com/kubedb/mongodb/commit/84e007fe) Update KubeDB api (#346)
+- [e8aa1f8a](https://github.com/kubedb/mongodb/commit/e8aa1f8a) Close connections when operation completes (#338)
+- [1ec2a2c7](https://github.com/kubedb/mongodb/commit/1ec2a2c7) Update Kubernetes v1.18.9 dependencies (#345)
+- [7306fb26](https://github.com/kubedb/mongodb/commit/7306fb26) Update KubeDB api (#344)
+- [efa62a85](https://github.com/kubedb/mongodb/commit/efa62a85) Fix annotations passing to AppBinding (#342)
+- [9d88e69e](https://github.com/kubedb/mongodb/commit/9d88e69e) Remove `inMemory` setting from Config Server (#343)
+- [32b96d12](https://github.com/kubedb/mongodb/commit/32b96d12) Change offshoot selector labels to standard k8s app labels (#341)
+- [67fcdbf4](https://github.com/kubedb/mongodb/commit/67fcdbf4) Update KubeDB api (#340)
+- [cf2c0778](https://github.com/kubedb/mongodb/commit/cf2c0778) Update KubeDB api (#339)
+- [232a4a00](https://github.com/kubedb/mongodb/commit/232a4a00) Update KubeDB api (#337)
+- [0a1307e7](https://github.com/kubedb/mongodb/commit/0a1307e7) Update for release Stash@v2020.12.17 (#336)
+- [89b4e4fc](https://github.com/kubedb/mongodb/commit/89b4e4fc) Update KubeDB api (#335)
+
+
+
+## [kubedb/mysql](https://github.com/kubedb/mysql)
+
+### [v0.9.0](https://github.com/kubedb/mysql/releases/tag/v0.9.0)
+
+- [e5e3a121](https://github.com/kubedb/mysql/commit/e5e3a121) Prepare for release v0.9.0 (#343)
+- [192c6b83](https://github.com/kubedb/mysql/commit/192c6b83) Update health checker for cluster readiness check (#342)
+- [2948601f](https://github.com/kubedb/mysql/commit/2948601f) Fix unit test failed for adding affinity rules to DB (#341)
+- [de8198ce](https://github.com/kubedb/mysql/commit/de8198ce) Add Affinity rules to DB (#340)
+- [1877e10f](https://github.com/kubedb/mysql/commit/1877e10f) Update KubeDB api (#339)
+- [c7a40574](https://github.com/kubedb/mysql/commit/c7a40574) Pass --db-kind to replication mode detector (#338)
+- [ad9d9879](https://github.com/kubedb/mysql/commit/ad9d9879) Prepare for release v0.9.0-rc.0 (#337)
+- [a9e9d1f7](https://github.com/kubedb/mysql/commit/a9e9d1f7) Fix args for TLS (#336)
+- [9dd89572](https://github.com/kubedb/mysql/commit/9dd89572) Update KubeDB api (#335)
+- [29ff2c57](https://github.com/kubedb/mysql/commit/29ff2c57) Fixes DB Health Checker and StatefulSet Patch (#322)
+- [47470895](https://github.com/kubedb/mysql/commit/47470895) Remove unnecessary StatefulSet waitloop (#331)
+- [3aec8f59](https://github.com/kubedb/mysql/commit/3aec8f59) Update Kubernetes v1.18.9 dependencies (#334)
+- [c1ca980d](https://github.com/kubedb/mysql/commit/c1ca980d) Update KubeDB api (#333)
+- [96f4b59c](https://github.com/kubedb/mysql/commit/96f4b59c) Fix annotations passing to AppBinding (#332)
+- [76f371a2](https://github.com/kubedb/mysql/commit/76f371a2) Change offshoot selector labels to standard k8s app labels (#329)
+- [aa3d6b6f](https://github.com/kubedb/mysql/commit/aa3d6b6f) Delete tests moved to tests repo (#330)
+- [6c544d2c](https://github.com/kubedb/mysql/commit/6c544d2c) Update KubeDB api (#328)
+- [fe03a36c](https://github.com/kubedb/mysql/commit/fe03a36c) Update KubeDB api (#327)
+- [29fd7474](https://github.com/kubedb/mysql/commit/29fd7474) Use basic-auth secret type for auth secret (#326)
+- [90457549](https://github.com/kubedb/mysql/commit/90457549) Update KubeDB api (#325)
+- [1487f15e](https://github.com/kubedb/mysql/commit/1487f15e) Update for release Stash@v2020.12.17 (#324)
+- [2d7fa549](https://github.com/kubedb/mysql/commit/2d7fa549) Update KubeDB api (#323)
+
+
+
+## [kubedb/operator](https://github.com/kubedb/operator)
+
+### [v0.16.0](https://github.com/kubedb/operator/releases/tag/v0.16.0)
+
+- [58a5bfd9](https://github.com/kubedb/operator/commit/58a5bfd9) Prepare for release v0.16.0 (#380)
+- [4ce94dce](https://github.com/kubedb/operator/commit/4ce94dce) Update KubeDB api (#378)
+- [24006027](https://github.com/kubedb/operator/commit/24006027) Add affinity rules to MySQL and Postgres (#379)
+- [a5eb51e0](https://github.com/kubedb/operator/commit/a5eb51e0) Update KubeDB api (#377)
+- [f8c92379](https://github.com/kubedb/operator/commit/f8c92379) MySQL: Pass --db-kind to replication mode detector
+- [3ee052dc](https://github.com/kubedb/operator/commit/3ee052dc) Prepare for release v0.16.0-rc.0 (#376)
+- [dbb5195b](https://github.com/kubedb/operator/commit/dbb5195b) Update KubeDB api (#375)
+- [4b162e08](https://github.com/kubedb/operator/commit/4b162e08) Update KubeDB api (#374)
+- [39762b0f](https://github.com/kubedb/operator/commit/39762b0f) Update KubeDB api (#373)
+- [d6a2cf27](https://github.com/kubedb/operator/commit/d6a2cf27) Change offshoot selector labels to standard k8s app labels (#372)
+- [36a8ab6f](https://github.com/kubedb/operator/commit/36a8ab6f) Update Kubernetes v1.18.9 dependencies (#371)
+- [554638e0](https://github.com/kubedb/operator/commit/554638e0) Update KubeDB api (#369)
+- [8c7ef91d](https://github.com/kubedb/operator/commit/8c7ef91d) Update KubeDB api (#368)
+- [dd96574e](https://github.com/kubedb/operator/commit/dd96574e) Update KubeDB api (#367)
+- [eef04de1](https://github.com/kubedb/operator/commit/eef04de1) Update KubeDB api (#366)
+
+
+
+## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb)
+
+### [v0.3.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.3.0)
+
+- [bb9f2320](https://github.com/kubedb/percona-xtradb/commit/bb9f2320) Prepare for release v0.3.0 (#167)
+- [f545beb4](https://github.com/kubedb/percona-xtradb/commit/f545beb4) Prepare for release v0.3.0-rc.0 (#166)
+- [c5d0c826](https://github.com/kubedb/percona-xtradb/commit/c5d0c826) Update KubeDB api (#164)
+- [b3da5757](https://github.com/kubedb/percona-xtradb/commit/b3da5757) Fix annotations passing to AppBinding (#163)
+- [7aeaee74](https://github.com/kubedb/percona-xtradb/commit/7aeaee74) Change offshoot selector labels to standard k8s app labels (#161)
+- [a36ffa87](https://github.com/kubedb/percona-xtradb/commit/a36ffa87) Update Kubernetes v1.18.9 dependencies (#162)
+- [fa3a2a9d](https://github.com/kubedb/percona-xtradb/commit/fa3a2a9d) Update KubeDB api (#160)
+- [a1db6821](https://github.com/kubedb/percona-xtradb/commit/a1db6821) Update KubeDB api (#159)
+- [4357b18a](https://github.com/kubedb/percona-xtradb/commit/4357b18a) Use basic-auth secret type for auth secret (#158)
+- [f9ccfc4e](https://github.com/kubedb/percona-xtradb/commit/f9ccfc4e) Update KubeDB api (#157)
+- [11739165](https://github.com/kubedb/percona-xtradb/commit/11739165) Update for release Stash@v2020.12.17 (#156)
+- [80bf041c](https://github.com/kubedb/percona-xtradb/commit/80bf041c) Update KubeDB api (#155)
+
+
+
+## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election)
+
+### [v0.4.0](https://github.com/kubedb/pg-leader-election/releases/tag/v0.4.0)
+
+- [31050c1](https://github.com/kubedb/pg-leader-election/commit/31050c1) Update KubeDB api (#44)
+- [dc786b7](https://github.com/kubedb/pg-leader-election/commit/dc786b7) Update KubeDB api (#43)
+
+
+
+## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer)
+
+### [v0.3.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.3.0)
+
+- [693e3cee](https://github.com/kubedb/pgbouncer/commit/693e3cee) Prepare for release v0.3.0 (#133)
+- [51c8fee2](https://github.com/kubedb/pgbouncer/commit/51c8fee2) Prepare for release v0.3.0-rc.0 (#132)
+- [fded227a](https://github.com/kubedb/pgbouncer/commit/fded227a) Update KubeDB api (#130)
+- [7702e10a](https://github.com/kubedb/pgbouncer/commit/7702e10a) Change offshoot selector labels to standard k8s app labels (#128)
+- [2ba5284c](https://github.com/kubedb/pgbouncer/commit/2ba5284c) Update Kubernetes v1.18.9 dependencies (#129)
+- [3507a96c](https://github.com/kubedb/pgbouncer/commit/3507a96c) Update KubeDB api (#127)
+- [fc8330e4](https://github.com/kubedb/pgbouncer/commit/fc8330e4) Update KubeDB api (#126)
+- [3e9b4e77](https://github.com/kubedb/pgbouncer/commit/3e9b4e77) Update KubeDB api (#125)
+- [6c85ca6a](https://github.com/kubedb/pgbouncer/commit/6c85ca6a) Update KubeDB api (#124)
+
+
+
+## [kubedb/postgres](https://github.com/kubedb/postgres)
+
+### [v0.16.0](https://github.com/kubedb/postgres/releases/tag/v0.16.0)
+
+- [a53c9c67](https://github.com/kubedb/postgres/commit/a53c9c67) Prepare for release v0.16.0 (#456)
+- [7787991e](https://github.com/kubedb/postgres/commit/7787991e) Update KubeDB api (#454)
+- [0e3d4c53](https://github.com/kubedb/postgres/commit/0e3d4c53) Add pod affinity rules to DB (#455)
+- [c5b1d2ac](https://github.com/kubedb/postgres/commit/c5b1d2ac) Update KubeDB api (#453)
+- [c7b618f5](https://github.com/kubedb/postgres/commit/c7b618f5) Prepare for release v0.16.0-rc.0 (#452)
+- [be060733](https://github.com/kubedb/postgres/commit/be060733) Update KubeDB api (#451)
+- [d2d2f32c](https://github.com/kubedb/postgres/commit/d2d2f32c) Update KubeDB api (#450)
+- [ed375b2b](https://github.com/kubedb/postgres/commit/ed375b2b) Update KubeDB api (#449)
+- [a3940790](https://github.com/kubedb/postgres/commit/a3940790) Fix annotations passing to AppBinding (#448)
+- [f0b5a9dd](https://github.com/kubedb/postgres/commit/f0b5a9dd) Change offshoot selector labels to standard k8s app labels (#447)
+- [eb4f80ab](https://github.com/kubedb/postgres/commit/eb4f80ab) Update KubeDB api (#446)
+- [c9075b5a](https://github.com/kubedb/postgres/commit/c9075b5a) Update KubeDB api (#445)
+- [a04891e1](https://github.com/kubedb/postgres/commit/a04891e1) Use basic-auth secret type for auth secret (#444)
+- [e7503eec](https://github.com/kubedb/postgres/commit/e7503eec) Update KubeDB api (#443)
+- [0eb3a1b9](https://github.com/kubedb/postgres/commit/0eb3a1b9) Update for release Stash@v2020.12.17 (#442)
+- [c3ea786d](https://github.com/kubedb/postgres/commit/c3ea786d) Update KubeDB api (#441)
+
+
+
+## [kubedb/proxysql](https://github.com/kubedb/proxysql)
+
+### [v0.3.0](https://github.com/kubedb/proxysql/releases/tag/v0.3.0)
+
+- [fdd650cb](https://github.com/kubedb/proxysql/commit/fdd650cb) Prepare for release v0.3.0 (#148)
+- [1ae8aed1](https://github.com/kubedb/proxysql/commit/1ae8aed1) Prepare for release v0.3.0-rc.0 (#147)
+- [0e60bddf](https://github.com/kubedb/proxysql/commit/0e60bddf) Update KubeDB api (#145)
+- [df11880c](https://github.com/kubedb/proxysql/commit/df11880c) Change offshoot selector labels to standard k8s app labels (#143)
+- [540bdea2](https://github.com/kubedb/proxysql/commit/540bdea2) Update Kubernetes v1.18.9 dependencies (#144)
+- [52907cb4](https://github.com/kubedb/proxysql/commit/52907cb4) Update KubeDB api (#142)
+- [d1686708](https://github.com/kubedb/proxysql/commit/d1686708) Update KubeDB api (#141)
+- [e5e2a798](https://github.com/kubedb/proxysql/commit/e5e2a798) Use basic-auth secret type for auth secret (#140)
+- [8cf2a9e4](https://github.com/kubedb/proxysql/commit/8cf2a9e4) Update KubeDB api (#139)
+- [7b0cdb0f](https://github.com/kubedb/proxysql/commit/7b0cdb0f) Update for release Stash@v2020.12.17 (#138)
+- [ce7136a1](https://github.com/kubedb/proxysql/commit/ce7136a1) Update KubeDB api (#137)
+
+
+
+## [kubedb/redis](https://github.com/kubedb/redis)
+
+### [v0.9.0](https://github.com/kubedb/redis/releases/tag/v0.9.0)
+
+- [b7d20a3e](https://github.com/kubedb/redis/commit/b7d20a3e) Prepare for release v0.9.0 (#294)
+- [d6b6c733](https://github.com/kubedb/redis/commit/d6b6c733) Update KubeDB api (#293)
+- [bba72c0a](https://github.com/kubedb/redis/commit/bba72c0a) Update Kubernetes v1.18.9 dependencies (#292)
+- [d34eff66](https://github.com/kubedb/redis/commit/d34eff66) Update KubeDB api (#291)
+- [b416a016](https://github.com/kubedb/redis/commit/b416a016) Prepare for release v0.9.0-rc.0 (#290)
+- [751b8f6b](https://github.com/kubedb/redis/commit/751b8f6b) Update KubeDB api (#289)
+- [0affafe9](https://github.com/kubedb/redis/commit/0affafe9) Update KubeDB api (#287)
+- [665d6b4f](https://github.com/kubedb/redis/commit/665d6b4f) Remove tests moved to kubedb/tests (#288)
+- [6c254e3b](https://github.com/kubedb/redis/commit/6c254e3b) Update KubeDB api (#286)
+- [1b73def3](https://github.com/kubedb/redis/commit/1b73def3) Fix annotations passing to AppBinding (#285)
+- [dc349058](https://github.com/kubedb/redis/commit/dc349058) Update KubeDB api (#283)
+- [7d47e506](https://github.com/kubedb/redis/commit/7d47e506) Change offshoot selector labels to standard k8s app labels (#282)
+- [f8f7570f](https://github.com/kubedb/redis/commit/f8f7570f) Update Kubernetes v1.18.9 dependencies (#284)
+- [63cb769d](https://github.com/kubedb/redis/commit/63cb769d) Update KubeDB api (#281)
+- [19ec4460](https://github.com/kubedb/redis/commit/19ec4460) Update KubeDB api (#280)
+- [af67e190](https://github.com/kubedb/redis/commit/af67e190) Update KubeDB api (#279)
+- [4b89034c](https://github.com/kubedb/redis/commit/4b89034c) Update KubeDB api (#278)
+
+
+
+## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector)
+
+### [v0.3.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.3.0)
+
+- [f7b0e81](https://github.com/kubedb/replication-mode-detector/commit/f7b0e81) Prepare for release v0.3.0 (#118)
+- [26111c6](https://github.com/kubedb/replication-mode-detector/commit/26111c6) Update KubeDB api (#117)
+- [f5825e2](https://github.com/kubedb/replication-mode-detector/commit/f5825e2) Update KubeDB api (#116)
+- [179e153](https://github.com/kubedb/replication-mode-detector/commit/179e153) Prepare for release v0.3.0-rc.0 (#115)
+- [d47023b](https://github.com/kubedb/replication-mode-detector/commit/d47023b) Update KubeDB api (#114)
+- [3e5db31](https://github.com/kubedb/replication-mode-detector/commit/3e5db31) Update KubeDB api (#113)
+- [987f068](https://github.com/kubedb/replication-mode-detector/commit/987f068) Change offshoot selector labels to standard k8s app labels (#110)
+- [21fc76f](https://github.com/kubedb/replication-mode-detector/commit/21fc76f) Update Kubernetes v1.18.9 dependencies (#112)
+- [db85cbd](https://github.com/kubedb/replication-mode-detector/commit/db85cbd) Close database connection when operation completes (#107)
+- [740d1d8](https://github.com/kubedb/replication-mode-detector/commit/740d1d8) Update Kubernetes v1.18.9 dependencies (#111)
+- [6f228a5](https://github.com/kubedb/replication-mode-detector/commit/6f228a5) Update KubeDB api (#109)
+- [256ea7a](https://github.com/kubedb/replication-mode-detector/commit/256ea7a) Update KubeDB api (#108)
+- [7a9acc0](https://github.com/kubedb/replication-mode-detector/commit/7a9acc0) Update KubeDB api (#106)
+- [21a18c2](https://github.com/kubedb/replication-mode-detector/commit/21a18c2) Update KubeDB api (#105)
+
+
+
+## [kubedb/tests](https://github.com/kubedb/tests)
+
+### [v0.1.0](https://github.com/kubedb/tests/releases/tag/v0.1.0)
+
+- [53972ee](https://github.com/kubedb/tests/commit/53972ee) Add release tracker script and workflow
+- [0bba0a5](https://github.com/kubedb/tests/commit/0bba0a5) Prepare for release v0.1.0 (#88)
+- [8f14ee4](https://github.com/kubedb/tests/commit/8f14ee4) Add e2e-test for Elasticsearch (#68)
+- [67e0e55](https://github.com/kubedb/tests/commit/67e0e55) Fix Stash backup tests for MongoDB Percona variant (#85)
+- [093955e](https://github.com/kubedb/tests/commit/093955e) Update MongoDB test with DBType check (#84)
+- [fc8017a](https://github.com/kubedb/tests/commit/fc8017a) Add MongoDB Autoscaling test (#80)
+- [edb2ecc](https://github.com/kubedb/tests/commit/edb2ecc) Update MongoDB backup tests (#71)
+- [efa6b30](https://github.com/kubedb/tests/commit/efa6b30) Add inmemory test for MongoDB enterprise (#83)
+- [201bcda](https://github.com/kubedb/tests/commit/201bcda) Update KubeDB api (#82)
+- [f5698eb](https://github.com/kubedb/tests/commit/f5698eb) Update KubeDB api (#81)
+- [cf37be5](https://github.com/kubedb/tests/commit/cf37be5) Update KubeDB api (#79)
+- [a3ea727](https://github.com/kubedb/tests/commit/a3ea727) Update KubeDB api (#78)
+- [411b4fd](https://github.com/kubedb/tests/commit/411b4fd) Update KubeDB api (#77)
+- [5d1747a](https://github.com/kubedb/tests/commit/5d1747a) Change offshoot selector labels to standard k8s app labels (#74)
+- [dee523d](https://github.com/kubedb/tests/commit/dee523d) Use Service for connecting with DB (where possible) (#76)
+- [69a9cb3](https://github.com/kubedb/tests/commit/69a9cb3) Update Kubernetes v1.18.9 dependencies (#75)
+- [f0ac7ed](https://github.com/kubedb/tests/commit/f0ac7ed) Update KubeDB api (#73)
+- [42d8169](https://github.com/kubedb/tests/commit/42d8169) Update KubeDB api (#72)
+- [75003e7](https://github.com/kubedb/tests/commit/75003e7) Update KubeDB api (#70)
+- [af976e3](https://github.com/kubedb/tests/commit/af976e3) Update KubeDB api (#69)
+- [c1dd8f4](https://github.com/kubedb/tests/commit/c1dd8f4) Update KubeDB api (#67)
+- [44b4191](https://github.com/kubedb/tests/commit/44b4191) Update KubeDB api (#66)
+- [1e77bed](https://github.com/kubedb/tests/commit/1e77bed) Update Kubernetes v1.18.9 dependencies (#65)
+- [1309e15](https://github.com/kubedb/tests/commit/1309e15) Update KubeDB api (#64)
+- [c6b9039](https://github.com/kubedb/tests/commit/c6b9039) Update KubeDB api (#61)
+- [e770d66](https://github.com/kubedb/tests/commit/e770d66) Update KubeDB api (#60)
+- [afa5dcc](https://github.com/kubedb/tests/commit/afa5dcc) Update Kubernetes v1.18.9 dependencies (#59)
+- [0dd91f9](https://github.com/kubedb/tests/commit/0dd91f9) Update KubeDB api (#57)
+- [3cf15c0](https://github.com/kubedb/tests/commit/3cf15c0) Update KubeDB api (#56)
+- [3736166](https://github.com/kubedb/tests/commit/3736166) Update KubeDB api (#55)
+- [b905769](https://github.com/kubedb/tests/commit/b905769) Update KubeDB api (#54)
+- [5d710ab](https://github.com/kubedb/tests/commit/5d710ab) Update KubeDB api (#53)
+- [d49f0bb](https://github.com/kubedb/tests/commit/d49f0bb) Update KubeDB api (#52)
+- [fbac2a9](https://github.com/kubedb/tests/commit/fbac2a9) Update KubeDB api (#51)
+- [049851b](https://github.com/kubedb/tests/commit/049851b) Update KubeDB api (#50)
+- [9bdedb4](https://github.com/kubedb/tests/commit/9bdedb4) Update KubeDB api (#48)
+- [175e009](https://github.com/kubedb/tests/commit/175e009) Update KubeDB api (#47)
+- [f7dda0e](https://github.com/kubedb/tests/commit/f7dda0e) Update KubeDB api (#46)
+- [6175a77](https://github.com/kubedb/tests/commit/6175a77) Update Kubernetes v1.18.9 dependencies (#45)
+- [26f2b54](https://github.com/kubedb/tests/commit/26f2b54) Add Elasticsearch tests (#28)
+- [4531ec0](https://github.com/kubedb/tests/commit/4531ec0) Update KubeDB api (#44)
+- [dfe1655](https://github.com/kubedb/tests/commit/dfe1655) Update KubeDB api (#42)
+- [cfbeb36](https://github.com/kubedb/tests/commit/cfbeb36) Update KubeDB api (#41)
+- [98ca152](https://github.com/kubedb/tests/commit/98ca152) Update KubeDB api (#40)
+- [dcfb4d0](https://github.com/kubedb/tests/commit/dcfb4d0) Update KubeDB api (#39)
+- [8fbc3d5](https://github.com/kubedb/tests/commit/8fbc3d5) Update KubeDB api (#38)
+- [eda5b69](https://github.com/kubedb/tests/commit/eda5b69) Update KubeDB api (#37)
+- [ffa46c7](https://github.com/kubedb/tests/commit/ffa46c7) Update KubeDB
api (#36) +- [9b2ceea](https://github.com/kubedb/tests/commit/9b2ceea) Update KubeDB api (#35) +- [7849334](https://github.com/kubedb/tests/commit/7849334) Update KubeDB api (#34) +- [b08c1b8](https://github.com/kubedb/tests/commit/b08c1b8) Update Kubernetes v1.18.9 dependencies (#33) +- [4faa8f6](https://github.com/kubedb/tests/commit/4faa8f6) Update Kubernetes v1.18.9 dependencies (#31) +- [0ebd642](https://github.com/kubedb/tests/commit/0ebd642) Update KubeDB api (#30) +- [5e945c0](https://github.com/kubedb/tests/commit/5e945c0) Update KubeDB api (#29) +- [a921cf2](https://github.com/kubedb/tests/commit/a921cf2) Update KubeDB api (#27) +- [9614f68](https://github.com/kubedb/tests/commit/9614f68) Update Kubernetes v1.18.9 dependencies (#26) +- [c706d27](https://github.com/kubedb/tests/commit/c706d27) Update KubeDB api (#25) +- [ad2b73d](https://github.com/kubedb/tests/commit/ad2b73d) Add test for redis (#9) +- [ac55856](https://github.com/kubedb/tests/commit/ac55856) MySQL Tests (#8) +- [bc99f28](https://github.com/kubedb/tests/commit/bc99f28) Update KubeDB api (#24) +- [9070708](https://github.com/kubedb/tests/commit/9070708) Update KubeDB api (#23) +- [c9e4212](https://github.com/kubedb/tests/commit/c9e4212) Update KubeDB api (#22) +- [00a72b0](https://github.com/kubedb/tests/commit/00a72b0) Update Kubernetes v1.18.9 dependencies (#21) +- [9f40719](https://github.com/kubedb/tests/commit/9f40719) Update KubeDB api (#20) +- [7c94608](https://github.com/kubedb/tests/commit/7c94608) Update KubeDB api (#19) +- [6eb0f46](https://github.com/kubedb/tests/commit/6eb0f46) Update KubeDB api (#18) +- [f0c04cf](https://github.com/kubedb/tests/commit/f0c04cf) Update KubeDB api (#17) +- [0477ed8](https://github.com/kubedb/tests/commit/0477ed8) Update Kubernetes v1.18.9 dependencies (#16) +- [405b00a](https://github.com/kubedb/tests/commit/405b00a) Update KubeDB api (#15) +- [3464ffb](https://github.com/kubedb/tests/commit/3464ffb) Update KubeDB api (#14) +- [08a4059](https://github.com/kubedb/tests/commit/08a4059) Update KubeDB api (#13) +- [0adf9dd](https://github.com/kubedb/tests/commit/0adf9dd) Update KubeDB api (#12) +- [af6712c](https://github.com/kubedb/tests/commit/af6712c) Update Kubernetes v1.18.9 dependencies (#11) +- [6e54f80](https://github.com/kubedb/tests/commit/6e54f80) Update Kubernetes v1.18.9 dependencies (#6) +- [be9860e](https://github.com/kubedb/tests/commit/be9860e) Update repository config (#4) +- [a1cd2f0](https://github.com/kubedb/tests/commit/a1cd2f0) Add Test for Vertical Scaling MongoDB Database with Reconfiguration (#3) +- [3d6903c](https://github.com/kubedb/tests/commit/3d6903c) Parameterize Tests (#2) +- [224fb77](https://github.com/kubedb/tests/commit/224fb77) Add Makefile and github action (#1) +- [07912c2](https://github.com/kubedb/tests/commit/07912c2) Change module name to "kubedb.dev/tests" +- [b15fe6d](https://github.com/kubedb/tests/commit/b15fe6d) Merge e2e test of MongoDB Community and Enterprise in a single Repo + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.01.15.md b/content/docs/v2024.1.31/CHANGELOG-v2021.01.15.md new file mode 100644 index 0000000000..85daa61420 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.01.15.md @@ -0,0 +1,179 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.01.15 + name: Changelog-v2021.01.15 + parent: welcome + weight: 20210115 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: 
/docs/v2024.1.31/welcome/changelog-v2021.01.15/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.01.15/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.01.15 (2021-01-15) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.1.1](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.1.1) + +- [844b159](https://github.com/appscode/kubedb-autoscaler/commit/844b159) Prepare for release v0.1.1 (#10) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.3.1](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.3.1) + +- [48e08e31](https://github.com/appscode/kubedb-enterprise/commit/48e08e31) Prepare for release v0.3.1 (#122) +- [be921811](https://github.com/appscode/kubedb-enterprise/commit/be921811) Update Elasticsearch Vertical Scaling (#120) +- [85ad0e77](https://github.com/appscode/kubedb-enterprise/commit/85ad0e77) Fix mongodb config directory name constants (#121) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.16.1](https://github.com/kubedb/apimachinery/releases/tag/v0.16.1) + +- [ef0b4ef2](https://github.com/kubedb/apimachinery/commit/ef0b4ef2) Fix mongodb config directory name constants (#687) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.16.1](https://github.com/kubedb/cli/releases/tag/v0.16.1) + +- [8576b8cf](https://github.com/kubedb/cli/commit/8576b8cf) Prepare for release v0.16.1 (#579) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.16.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.16.1) + +- [90ef17eb](https://github.com/kubedb/elasticsearch/commit/90ef17eb) Prepare for release v0.16.1 (#457) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.16.1](https://github.com/kubedb/installer/releases/tag/v0.16.1) + +- [f870039](https://github.com/kubedb/installer/commit/f870039) Prepare for release v0.16.1 (#225) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.9.1](https://github.com/kubedb/memcached/releases/tag/v0.9.1) + +- [f066d0f3](https://github.com/kubedb/memcached/commit/f066d0f3) Prepare for release v0.9.1 (#273) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.9.1](https://github.com/kubedb/mongodb/releases/tag/v0.9.1) + +- [fd7b45bd](https://github.com/kubedb/mongodb/commit/fd7b45bd) Prepare for release v0.9.1 (#356) +- [c805f612](https://github.com/kubedb/mongodb/commit/c805f612) Fix mongodb config directory name constants (#355) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.9.1](https://github.com/kubedb/mysql/releases/tag/v0.9.1) + +- [3ed0d709](https://github.com/kubedb/mysql/commit/3ed0d709) Prepare for release v0.9.1 (#344) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.16.1](https://github.com/kubedb/operator/releases/tag/v0.16.1) + +- [0d140975](https://github.com/kubedb/operator/commit/0d140975) Prepare for release v0.16.1 (#381) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.3.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.3.1) + +- [bbe4cd92](https://github.com/kubedb/percona-xtradb/commit/bbe4cd92) Prepare for release v0.3.1 (#168) + + + +## 
[kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.4.1](https://github.com/kubedb/pg-leader-election/releases/tag/v0.4.1) + +- [42d7aef](https://github.com/kubedb/pg-leader-election/commit/42d7aef) Update readme + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.3.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.3.1) + +- [98fd0585](https://github.com/kubedb/pgbouncer/commit/98fd0585) Prepare for release v0.3.1 (#134) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.16.1](https://github.com/kubedb/postgres/releases/tag/v0.16.1) + +- [6802d07e](https://github.com/kubedb/postgres/commit/6802d07e) Prepare for release v0.16.1 (#457) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.3.1](https://github.com/kubedb/proxysql/releases/tag/v0.3.1) + +- [9ad8e766](https://github.com/kubedb/proxysql/commit/9ad8e766) Prepare for release v0.3.1 (#149) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.9.1](https://github.com/kubedb/redis/releases/tag/v0.9.1) + +- [3c1bf4b6](https://github.com/kubedb/redis/commit/3c1bf4b6) Prepare for release v0.9.1 (#295) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.3.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.3.1) + +- [a5e82a9](https://github.com/kubedb/replication-mode-detector/commit/a5e82a9) Prepare for release v0.3.1 (#119) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.1.1](https://github.com/kubedb/tests/releases/tag/v0.1.1) + +- [4b8b17a](https://github.com/kubedb/tests/commit/4b8b17a) Prepare for release v0.1.1 (#89) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.01.26.md b/content/docs/v2024.1.31/CHANGELOG-v2021.01.26.md new file mode 100644 index 0000000000..2110480b82 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.01.26.md @@ -0,0 +1,248 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.01.26 + name: Changelog-v2021.01.26 + parent: welcome + weight: 20210126 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.01.26/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.01.26/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.01.26 (2021-01-26) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.1.2](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.1.2) + +- [8a42374](https://github.com/appscode/kubedb-autoscaler/commit/8a42374) Prepare for release v0.1.2 (#15) +- [75e0b0e](https://github.com/appscode/kubedb-autoscaler/commit/75e0b0e) Update repository config (#13) +- [bf1487e](https://github.com/appscode/kubedb-autoscaler/commit/bf1487e) Fix Elasticsearch storage autoscaler (#12) +- [b23280c](https://github.com/appscode/kubedb-autoscaler/commit/b23280c) Update readme +- [d320045](https://github.com/appscode/kubedb-autoscaler/commit/d320045) Fix Elasticsearch Autoscaler (#11) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.3.2](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.3.2) + +- 
[d235a3ec](https://github.com/appscode/kubedb-enterprise/commit/d235a3ec) Prepare for release v0.3.2 (#132) +- [98ac77be](https://github.com/appscode/kubedb-enterprise/commit/98ac77be) Delete operator generated owned certificate secrets before creating new ones (#131) +- [a8e699f9](https://github.com/appscode/kubedb-enterprise/commit/a8e699f9) Ignore paused DB events from enterprise operator too (#130) +- [fcaf1b8b](https://github.com/appscode/kubedb-enterprise/commit/fcaf1b8b) Fix scale up and scale down (#124) +- [7d37df14](https://github.com/appscode/kubedb-enterprise/commit/7d37df14) Update Kubernetes v1.18.9 dependencies (#114) +- [ff12ad3c](https://github.com/appscode/kubedb-enterprise/commit/ff12ad3c) Update reconfigureTLS for Elasticsearch (#125) +- [0e9e15c6](https://github.com/appscode/kubedb-enterprise/commit/0e9e15c6) Use `NewSpecStatusChangeHandler` for Ops Requests (#129) +- [00c41590](https://github.com/appscode/kubedb-enterprise/commit/00c41590) Change `DBSizeDiffPercentage` to `ObjectsCountDiffPercentage` (#128) +- [4bfcacad](https://github.com/appscode/kubedb-enterprise/commit/4bfcacad) Update repository config (#127) +- [f0570d8b](https://github.com/appscode/kubedb-enterprise/commit/f0570d8b) Update repository config (#126) +- [ddf7ca41](https://github.com/appscode/kubedb-enterprise/commit/ddf7ca41) Check readiness gates for IsPodReady (#123) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.16.2](https://github.com/kubedb/apimachinery/releases/tag/v0.16.2) + +- [7eb1fdda](https://github.com/kubedb/apimachinery/commit/7eb1fdda) Update Kubernetes v1.18.9 dependencies (#692) +- [ed484da9](https://github.com/kubedb/apimachinery/commit/ed484da9) Don't add default subject to certificate if already exists (#689) +- [d3b5b50e](https://github.com/kubedb/apimachinery/commit/d3b5b50e) Change `DBSizeDiffPercentage` to `ObjectsCountDiffPercentage` (#690) +- [63e27a25](https://github.com/kubedb/apimachinery/commit/63e27a25) Update for release Stash@v2021.01.21 (#691) +- [459684a5](https://github.com/kubedb/apimachinery/commit/459684a5) Update repository config (#688) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.16.2](https://github.com/kubedb/cli/releases/tag/v0.16.2) + +- [ada47bf8](https://github.com/kubedb/cli/commit/ada47bf8) Prepare for release v0.16.2 (#584) +- [ff1a7aac](https://github.com/kubedb/cli/commit/ff1a7aac) Update Kubernetes v1.18.9 dependencies (#583) +- [664f1b1c](https://github.com/kubedb/cli/commit/664f1b1c) Update for release Stash@v2021.01.21 (#582) +- [7a07edfd](https://github.com/kubedb/cli/commit/7a07edfd) Update repository config (#581) +- [2ddea9f5](https://github.com/kubedb/cli/commit/2ddea9f5) Update repository config (#580) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.16.2](https://github.com/kubedb/elasticsearch/releases/tag/v0.16.2) + +- [7787d2a6](https://github.com/kubedb/elasticsearch/commit/7787d2a6) Prepare for release v0.16.2 (#463) +- [29e4198a](https://github.com/kubedb/elasticsearch/commit/29e4198a) Add nodeDNs to configuration even when enableSSL is false (#458) +- [4a76db12](https://github.com/kubedb/elasticsearch/commit/4a76db12) Update Kubernetes v1.18.9 dependencies (#462) +- [42680118](https://github.com/kubedb/elasticsearch/commit/42680118) Update for release Stash@v2021.01.21 (#461) +- [27525afb](https://github.com/kubedb/elasticsearch/commit/27525afb) Update repository config (#460) +- 
[02d0fb3f](https://github.com/kubedb/elasticsearch/commit/02d0fb3f) Update repository config (#459) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.16.2](https://github.com/kubedb/installer/releases/tag/v0.16.2) + +- [61bbb19](https://github.com/kubedb/installer/commit/61bbb19) Prepare for release v0.16.2 (#227) +- [091665f](https://github.com/kubedb/installer/commit/091665f) Revert "Update Percona MongoDB Server Images (#219)" +- [9736ad8](https://github.com/kubedb/installer/commit/9736ad8) Add permission to add finalizers on custom resources (#226) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.9.2](https://github.com/kubedb/memcached/releases/tag/v0.9.2) + +- [2a1e2e7c](https://github.com/kubedb/memcached/commit/2a1e2e7c) Prepare for release v0.9.2 (#277) +- [dd5f19d6](https://github.com/kubedb/memcached/commit/dd5f19d6) Update Kubernetes v1.18.9 dependencies (#276) +- [2dfc00ee](https://github.com/kubedb/memcached/commit/2dfc00ee) Update repository config (#275) +- [a4278122](https://github.com/kubedb/memcached/commit/a4278122) Update repository config (#274) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.9.2](https://github.com/kubedb/mongodb/releases/tag/v0.9.2) + +- [9ecb8b3f](https://github.com/kubedb/mongodb/commit/9ecb8b3f) Prepare for release v0.9.2 (#362) +- [6ff0b2ab](https://github.com/kubedb/mongodb/commit/6ff0b2ab) Return error when catalog doesn't exist (#361) +- [70559218](https://github.com/kubedb/mongodb/commit/70559218) Update Kubernetes v1.18.9 dependencies (#360) +- [e46daaf7](https://github.com/kubedb/mongodb/commit/e46daaf7) Update for release Stash@v2021.01.21 (#359) +- [dd4c2fcf](https://github.com/kubedb/mongodb/commit/dd4c2fcf) Update repository config (#358) +- [f8ab57cb](https://github.com/kubedb/mongodb/commit/f8ab57cb) Update repository config (#357) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.9.2](https://github.com/kubedb/mysql/releases/tag/v0.9.2) + +- [5f7dfd8c](https://github.com/kubedb/mysql/commit/5f7dfd8c) Prepare for release v0.9.2 (#351) +- [26ef56cb](https://github.com/kubedb/mysql/commit/26ef56cb) Configure innodb buffer pool and group repl cache size (#350) +- [6562cf8e](https://github.com/kubedb/mysql/commit/6562cf8e) Fix Health-checker for standalone (#345) +- [f20f5763](https://github.com/kubedb/mysql/commit/f20f5763) Update Kubernetes v1.18.9 dependencies (#349) +- [e11bea0b](https://github.com/kubedb/mysql/commit/e11bea0b) Update for release Stash@v2021.01.21 (#348) +- [5cdc3424](https://github.com/kubedb/mysql/commit/5cdc3424) Update repository config (#347) +- [0438f075](https://github.com/kubedb/mysql/commit/0438f075) Update repository config (#346) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.16.2](https://github.com/kubedb/operator/releases/tag/v0.16.2) + +- [92baf160](https://github.com/kubedb/operator/commit/92baf160) Prepare for release v0.16.2 (#385) +- [aa818921](https://github.com/kubedb/operator/commit/aa818921) Update Kubernetes v1.18.9 dependencies (#384) +- [8344e056](https://github.com/kubedb/operator/commit/8344e056) Update repository config (#383) +- [242bae58](https://github.com/kubedb/operator/commit/242bae58) Update repository config (#382) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.3.2](https://github.com/kubedb/percona-xtradb/releases/tag/v0.3.2) + +- [875dbcfb](https://github.com/kubedb/percona-xtradb/commit/875dbcfb) Prepare for 
release v0.3.2 (#173) +- [afcb2e37](https://github.com/kubedb/percona-xtradb/commit/afcb2e37) Update Kubernetes v1.18.9 dependencies (#172) +- [48aa03cc](https://github.com/kubedb/percona-xtradb/commit/48aa03cc) Update for release Stash@v2021.01.21 (#171) +- [2bd07624](https://github.com/kubedb/percona-xtradb/commit/2bd07624) Update repository config (#170) +- [d10fccc5](https://github.com/kubedb/percona-xtradb/commit/d10fccc5) Update repository config (#169) + + + +## [kubedb/pg-leader-election](https://github.com/kubedb/pg-leader-election) + +### [v0.4.2](https://github.com/kubedb/pg-leader-election/releases/tag/v0.4.2) + +- [4162fc7](https://github.com/kubedb/pg-leader-election/commit/4162fc7) Update Kubernetes v1.18.9 dependencies (#47) +- [5f1ec75](https://github.com/kubedb/pg-leader-election/commit/5f1ec75) Update repository config (#46) +- [6f29932](https://github.com/kubedb/pg-leader-election/commit/6f29932) Update repository config (#45) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.3.2](https://github.com/kubedb/pgbouncer/releases/tag/v0.3.2) + +- [1c1f20bf](https://github.com/kubedb/pgbouncer/commit/1c1f20bf) Prepare for release v0.3.2 (#138) +- [7dc88cc4](https://github.com/kubedb/pgbouncer/commit/7dc88cc4) Update Kubernetes v1.18.9 dependencies (#137) +- [c2574a34](https://github.com/kubedb/pgbouncer/commit/c2574a34) Update repository config (#136) +- [d5baad1f](https://github.com/kubedb/pgbouncer/commit/d5baad1f) Update repository config (#135) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.16.2](https://github.com/kubedb/postgres/releases/tag/v0.16.2) + +- [a5e81da7](https://github.com/kubedb/postgres/commit/a5e81da7) Prepare for release v0.16.2 (#462) +- [f5bcfb66](https://github.com/kubedb/postgres/commit/f5bcfb66) Update Kubernetes v1.18.9 dependencies (#461) +- [c8e4da8b](https://github.com/kubedb/postgres/commit/c8e4da8b) Update for release Stash@v2021.01.21 (#460) +- [d0d9c090](https://github.com/kubedb/postgres/commit/d0d9c090) Update repository config (#459) +- [9323c043](https://github.com/kubedb/postgres/commit/9323c043) Update repository config (#458) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.3.2](https://github.com/kubedb/proxysql/releases/tag/v0.3.2) + +- [928bac65](https://github.com/kubedb/proxysql/commit/928bac65) Prepare for release v0.3.2 (#154) +- [49a9a9f6](https://github.com/kubedb/proxysql/commit/49a9a9f6) Update Kubernetes v1.18.9 dependencies (#153) +- [830eb7c6](https://github.com/kubedb/proxysql/commit/830eb7c6) Update for release Stash@v2021.01.21 (#152) +- [aa856424](https://github.com/kubedb/proxysql/commit/aa856424) Update repository config (#151) +- [6b16f30c](https://github.com/kubedb/proxysql/commit/6b16f30c) Update repository config (#150) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.9.2](https://github.com/kubedb/redis/releases/tag/v0.9.2) + +- [a94faf53](https://github.com/kubedb/redis/commit/a94faf53) Prepare for release v0.9.2 (#299) +- [cfcbb855](https://github.com/kubedb/redis/commit/cfcbb855) Update Kubernetes v1.18.9 dependencies (#298) +- [76b9b70c](https://github.com/kubedb/redis/commit/76b9b70c) Update repository config (#297) +- [0cb62a27](https://github.com/kubedb/redis/commit/0cb62a27) Update repository config (#296) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.3.2](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.3.2) + +- 
[a2e3ff5](https://github.com/kubedb/replication-mode-detector/commit/a2e3ff5) Prepare for release v0.3.2 (#123) +- [1b43ee1](https://github.com/kubedb/replication-mode-detector/commit/1b43ee1) Update Kubernetes v1.18.9 dependencies (#122) +- [a0e0fc0](https://github.com/kubedb/replication-mode-detector/commit/a0e0fc0) Update repository config (#121) +- [84155f6](https://github.com/kubedb/replication-mode-detector/commit/84155f6) Update repository config (#120) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.1.2](https://github.com/kubedb/tests/releases/tag/v0.1.2) + +- [6b6c030](https://github.com/kubedb/tests/commit/6b6c030) Prepare for release v0.1.2 (#95) +- [3456495](https://github.com/kubedb/tests/commit/3456495) Update Kubernetes v1.18.9 dependencies (#92) +- [e335294](https://github.com/kubedb/tests/commit/e335294) Update repository config (#91) +- [9d82b07](https://github.com/kubedb/tests/commit/9d82b07) Update repository config (#90) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.03.11.md b/content/docs/v2024.1.31/CHANGELOG-v2021.03.11.md new file mode 100644 index 0000000000..3d6e045707 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.03.11.md @@ -0,0 +1,852 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.03.11 + name: Changelog-v2021.03.11 + parent: welcome + weight: 20210311 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.03.11/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.03.11/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.03.11 (2021-03-11) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.2.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.2.0) + +- [f93c060](https://github.com/appscode/kubedb-autoscaler/commit/f93c060) Prepare for release v0.2.0 (#18) +- [b86b36a](https://github.com/appscode/kubedb-autoscaler/commit/b86b36a) Update dependencies +- [3e50e33](https://github.com/appscode/kubedb-autoscaler/commit/3e50e33) Update repository config (#17) +- [efd6d82](https://github.com/appscode/kubedb-autoscaler/commit/efd6d82) Update repository config (#16) +- [eddccf2](https://github.com/appscode/kubedb-autoscaler/commit/eddccf2) Update repository config (#14) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.4.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.4.0) + +- [6bcddad8](https://github.com/appscode/kubedb-enterprise/commit/6bcddad8) Prepare for release v0.4.0 (#154) +- [0785c36e](https://github.com/appscode/kubedb-enterprise/commit/0785c36e) Fix ConfigServer Horizontal Scaling Up (#153) +- [0784a195](https://github.com/appscode/kubedb-enterprise/commit/0784a195) Fix MySQL DB version patch (#152) +- [958b2390](https://github.com/appscode/kubedb-enterprise/commit/958b2390) Register CRD for MariaDB (#150) +- [32caa479](https://github.com/appscode/kubedb-enterprise/commit/32caa479) TLS support for PostgreSQL and pg-coordinator (#148) +- [03201b02](https://github.com/appscode/kubedb-enterprise/commit/03201b02) Add MariaDB TLS support (#110) +- [1ad3e7df](https://github.com/appscode/kubedb-enterprise/commit/1ad3e7df) Add redis & tls reconfigure, restart 
support (#98) +- [dad9c4cc](https://github.com/appscode/kubedb-enterprise/commit/dad9c4cc) Fix MongoDB Reconfigure TLS (#143) +- [f62fc9f4](https://github.com/appscode/kubedb-enterprise/commit/f62fc9f4) Update KubeDB api (#149) +- [c86c8d0c](https://github.com/appscode/kubedb-enterprise/commit/c86c8d0c) Update old env with the new one while upgrading ES version 6 to 7 (#147) +- [48af0d99](https://github.com/appscode/kubedb-enterprise/commit/48af0d99) Use Elasticsearch version from version CRD while creating client (#135) +- [70048682](https://github.com/appscode/kubedb-enterprise/commit/70048682) Fix MySQL Reconfigure TLS (#144) +- [7a45302b](https://github.com/appscode/kubedb-enterprise/commit/7a45302b) Fix MySQL major version upgrading (#134) +- [a5f76ab0](https://github.com/appscode/kubedb-enterprise/commit/a5f76ab0) Fix install command in Makefile (#145) +- [34ae3519](https://github.com/appscode/kubedb-enterprise/commit/34ae3519) Update repository config (#141) +- [f02f0007](https://github.com/appscode/kubedb-enterprise/commit/f02f0007) Update repository config (#139) +- [b1ea4c2e](https://github.com/appscode/kubedb-enterprise/commit/b1ea4c2e) Update Kubernetes v1.18.9 dependencies (#138) +- [341d79ae](https://github.com/appscode/kubedb-enterprise/commit/341d79ae) Update Kubernetes v1.18.9 dependencies (#137) +- [7b26337b](https://github.com/appscode/kubedb-enterprise/commit/7b26337b) Update Kubernetes v1.18.9 dependencies (#136) +- [e4455e82](https://github.com/appscode/kubedb-enterprise/commit/e4455e82) Update repository config (#133) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.17.0](https://github.com/kubedb/apimachinery/releases/tag/v0.17.0) + +- [a550d467](https://github.com/kubedb/apimachinery/commit/a550d467) Removed constant MariaDBClusterRecommendedVersion (#722) +- [1029fc48](https://github.com/kubedb/apimachinery/commit/1029fc48) Add default monitoring configuration (#721) +- [6d0a8316](https://github.com/kubedb/apimachinery/commit/6d0a8316) Always run PostgreSQL container as 70 (#720) +- [0922b1ab](https://github.com/kubedb/apimachinery/commit/0922b1ab) Update for release Stash@v2021.03.08 (#719) +- [3cdca509](https://github.com/kubedb/apimachinery/commit/3cdca509) Merge ContainerTemplate into PodTemplate spec (#718) +- [8dedb762](https://github.com/kubedb/apimachinery/commit/8dedb762) Set default affinity rules for MariaDB (#717) +- [2bf1490e](https://github.com/kubedb/apimachinery/commit/2bf1490e) Default db container security context (#716) +- [8df88aaa](https://github.com/kubedb/apimachinery/commit/8df88aaa) Update container template (#715) +- [74561e0d](https://github.com/kubedb/apimachinery/commit/74561e0d) Use etcd ports for pg coordinator (#714) +- [c2cd1993](https://github.com/kubedb/apimachinery/commit/c2cd1993) Update constant fields for pg-coordinator (#713) +- [77c4bd69](https://github.com/kubedb/apimachinery/commit/77c4bd69) Add Elasticsearch helper method for initial master nodes (#712) +- [5cc3309e](https://github.com/kubedb/apimachinery/commit/5cc3309e) Add distribution support to Postgres (#711) +- [5de49ea6](https://github.com/kubedb/apimachinery/commit/5de49ea6) Use import-crds.sh script (#710) +- [5e28f585](https://github.com/kubedb/apimachinery/commit/5e28f585) Use Es distro as ElasticStack +- [169675bb](https://github.com/kubedb/apimachinery/commit/169675bb) Remove spec.tools from EtcdVersion (#709) +- [f4aa5bcc](https://github.com/kubedb/apimachinery/commit/f4aa5bcc) Remove memberWeight from MySQLOpsRequest 
(#708) +- [201456e8](https://github.com/kubedb/apimachinery/commit/201456e8) Postgres: updated leader elector [ElectionTick,HeartbeatTick] (#700) +- [4cb3f571](https://github.com/kubedb/apimachinery/commit/4cb3f571) Add distribution support in catalog (#707) +- [eb98592d](https://github.com/kubedb/apimachinery/commit/eb98592d) MongoDB: Remove `OrganizationalUnit` and default `Organization` (#704) +- [14ba7e04](https://github.com/kubedb/apimachinery/commit/14ba7e04) Update dependencies +- [3075facf](https://github.com/kubedb/apimachinery/commit/3075facf) Update config types with stash addon config +- [1b8ec75a](https://github.com/kubedb/apimachinery/commit/1b8ec75a) Update catalog stash addon (#703) +- [4b451feb](https://github.com/kubedb/apimachinery/commit/4b451feb) Update repository config (#701) +- [6503a31c](https://github.com/kubedb/apimachinery/commit/6503a31c) Update repository config (#698) +- [612f7384](https://github.com/kubedb/apimachinery/commit/612f7384) Add Stash task refs to Catalog crds (#696) +- [bcb978c0](https://github.com/kubedb/apimachinery/commit/bcb978c0) Update Kubernetes v1.18.9 dependencies (#697) +- [5b44aa8c](https://github.com/kubedb/apimachinery/commit/5b44aa8c) Remove server-id from MySQL CR (#693) +- [4ab0a496](https://github.com/kubedb/apimachinery/commit/4ab0a496) Update Kubernetes v1.18.9 dependencies (#694) +- [8591d95d](https://github.com/kubedb/apimachinery/commit/8591d95d) Update crds via GitHub actions (#695) +- [5fc7c521](https://github.com/kubedb/apimachinery/commit/5fc7c521) Remove deprecated crd yamls +- [7a221977](https://github.com/kubedb/apimachinery/commit/7a221977) Add Mariadb support (#670) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.17.0](https://github.com/kubedb/cli/releases/tag/v0.17.0) + +- [818df7f7](https://github.com/kubedb/cli/commit/818df7f7) Prepare for release v0.17.0 (#594) +- [235e88a0](https://github.com/kubedb/cli/commit/235e88a0) Update for release Stash@v2021.03.08 (#593) +- [755754a2](https://github.com/kubedb/cli/commit/755754a2) Update KubeDB api (#592) +- [2c13bea2](https://github.com/kubedb/cli/commit/2c13bea2) Update postgres cli (#591) +- [34b62534](https://github.com/kubedb/cli/commit/34b62534) Update repository config (#588) +- [1cda66c1](https://github.com/kubedb/cli/commit/1cda66c1) Update repository config (#587) +- [65b5d097](https://github.com/kubedb/cli/commit/65b5d097) Update Kubernetes v1.18.9 dependencies (#586) +- [10e2d9b2](https://github.com/kubedb/cli/commit/10e2d9b2) Update Kubernetes v1.18.9 dependencies (#585) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.17.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.17.0) + +- [6df700d8](https://github.com/kubedb/elasticsearch/commit/6df700d8) Prepare for release v0.17.0 (#483) +- [58eb52eb](https://github.com/kubedb/elasticsearch/commit/58eb52eb) Update for release Stash@v2021.03.08 (#482) +- [11504552](https://github.com/kubedb/elasticsearch/commit/11504552) Update KubeDB api (#481) +- [d31d0364](https://github.com/kubedb/elasticsearch/commit/d31d0364) Update db container security context (#480) +- [e097ef82](https://github.com/kubedb/elasticsearch/commit/e097ef82) Update KubeDB api (#479) +- [03b16ef0](https://github.com/kubedb/elasticsearch/commit/03b16ef0) Use helper method for initial master nodes (#478) +- [b9785e29](https://github.com/kubedb/elasticsearch/commit/b9785e29) Fix appbinding type meta (#477) +- [fb6a25a8](https://github.com/kubedb/elasticsearch/commit/fb6a25a8) Fix 
install command in Makefile (#476) +- [8de7f729](https://github.com/kubedb/elasticsearch/commit/8de7f729) Update repository config (#475) +- [99a594c7](https://github.com/kubedb/elasticsearch/commit/99a594c7) Pass stash addon info to AppBinding (#474) +- [fe7603bb](https://github.com/kubedb/elasticsearch/commit/fe7603bb) Mount custom config files to Elasticsearch config directory (#466) +- [8e39688e](https://github.com/kubedb/elasticsearch/commit/8e39688e) Update repository config (#472) +- [1915aa8f](https://github.com/kubedb/elasticsearch/commit/1915aa8f) Update repository config (#471) +- [a0c0a92a](https://github.com/kubedb/elasticsearch/commit/a0c0a92a) Update Kubernetes v1.18.9 dependencies (#470) +- [5579736d](https://github.com/kubedb/elasticsearch/commit/5579736d) Update repository config (#469) +- [ff140030](https://github.com/kubedb/elasticsearch/commit/ff140030) Update Kubernetes v1.18.9 dependencies (#468) +- [95d848b5](https://github.com/kubedb/elasticsearch/commit/95d848b5) Update Kubernetes v1.18.9 dependencies (#467) +- [15ec7161](https://github.com/kubedb/elasticsearch/commit/15ec7161) Update repository config (#465) +- [005a8cc5](https://github.com/kubedb/elasticsearch/commit/005a8cc5) Update repository config (#464) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.17.0](https://github.com/kubedb/installer/releases/tag/v0.17.0) + +- [c1770ad](https://github.com/kubedb/installer/commit/c1770ad) Prepare for release v0.17.0 (#279) +- [3ed3ee8](https://github.com/kubedb/installer/commit/3ed3ee8) Add global skipCleaner values field (#277) +- [0a14985](https://github.com/kubedb/installer/commit/0a14985) Update combined chart dependency (#276) +- [d4d9f3a](https://github.com/kubedb/installer/commit/d4d9f3a) Add open source images for TimescaleDB (#275) +- [d325aff](https://github.com/kubedb/installer/commit/d325aff) Use distro aware version sorting +- [c99847a](https://github.com/kubedb/installer/commit/c99847a) Add TimescaleDB in Postgres catalog (#274) +- [0cf8ceb](https://github.com/kubedb/installer/commit/0cf8ceb) Fail fmt command if formatting fails +- [605fa5a](https://github.com/kubedb/installer/commit/605fa5a) Update percona MongoDBVersion name (#273) +- [d9adea3](https://github.com/kubedb/installer/commit/d9adea3) Change Elasticsearch catalog naming format (#272) +- [98ce374](https://github.com/kubedb/installer/commit/98ce374) Handle non-semver db version names (#271) +- [291fca7](https://github.com/kubedb/installer/commit/291fca7) Auto download api repo to update crds (#270) +- [ca1b813](https://github.com/kubedb/installer/commit/ca1b813) Update MongoDB init container image (#268) +- [dad5f24](https://github.com/kubedb/installer/commit/dad5f24) Added official image for postgres (#269) +- [0724348](https://github.com/kubedb/installer/commit/0724348) Update for release Stash@v2021.03.08 (#267) +- [6ee56d9](https://github.com/kubedb/installer/commit/6ee56d9) Don't fail deleting namespace when license expires (#266) +- [833135b](https://github.com/kubedb/installer/commit/833135b) Add temporary volume for storing temporary certificates (#265) +- [1a52a95](https://github.com/kubedb/installer/commit/1a52a95) Added new postgres versions in kubedb-catalog (#259) +- [b8a0d0c](https://github.com/kubedb/installer/commit/b8a0d0c) Update MariaDB Image (#264) +- [2270ede](https://github.com/kubedb/installer/commit/2270ede) Fix Stash Addon params for ES SearchGuard & OpenDistro variant (#262) +- [7d361cb](https://github.com/kubedb/installer/commit/7d361cb) 
Fix build (#261) +- [6863b5a](https://github.com/kubedb/installer/commit/6863b5a) Create combined kubedb chart (#257) +- [a566b56](https://github.com/kubedb/installer/commit/a566b56) Format catalog chart with make fmt +- [fd67c67](https://github.com/kubedb/installer/commit/fd67c67) Add raw catalog yamls (#254) +- [6b283b4](https://github.com/kubedb/installer/commit/6b283b4) Add import-crds.sh script (#255) +- [9897427](https://github.com/kubedb/installer/commit/9897427) Update crds for kubedb/apimachinery@5e28f585 (#253) +- [f3ccbd9](https://github.com/kubedb/installer/commit/f3ccbd9) .Values.catalog.mongo -> .Values.catalog.mongodb (#252) +- [20538d3](https://github.com/kubedb/installer/commit/20538d3) Remove spec.tools from catalog (#250) +- [8bae1ac](https://github.com/kubedb/installer/commit/8bae1ac) Update crds for kubedb/apimachinery@169675bb (#251) +- [f5661b5](https://github.com/kubedb/installer/commit/f5661b5) Update crds for kubedb/apimachinery@f4aa5bcc (#249) +- [30e1a11](https://github.com/kubedb/installer/commit/30e1a11) Disable verify modules +- [6280dff](https://github.com/kubedb/installer/commit/6280dff) Add Stash addon info in MongoDB catalogs (#247) +- [af0b011](https://github.com/kubedb/installer/commit/af0b011) Update crds for kubedb/apimachinery@201456e8 (#248) +- [23f31da](https://github.com/kubedb/installer/commit/23f31da) Update crds for kubedb/apimachinery@1b8ec75a (#245) +- [a950d29](https://github.com/kubedb/installer/commit/a950d29) make ct (#242) +- [95176b0](https://github.com/kubedb/installer/commit/95176b0) Update repository config (#243) +- [9a63b89](https://github.com/kubedb/installer/commit/9a63b89) Remove unused template from chart +- [0da3eb1](https://github.com/kubedb/installer/commit/0da3eb1) Update repository config (#241) +- [cb559d8](https://github.com/kubedb/installer/commit/cb559d8) Update crds for kubedb/apimachinery@612f7384 (#240) +- [bbbd753](https://github.com/kubedb/installer/commit/bbbd753) Update crds for kubedb/apimachinery@5b44aa8c (#239) +- [25988b0](https://github.com/kubedb/installer/commit/25988b0) Add combined kubedb chart (#238) +- [d3bdf52](https://github.com/kubedb/installer/commit/d3bdf52) Rename kubedb chart to kubedb-community (#237) +- [154d542](https://github.com/kubedb/installer/commit/154d542) Add MariaDB Catalogs (#208) +- [8682f3a](https://github.com/kubedb/installer/commit/8682f3a) Update MySQL catalogs (#235) +- [a32f766](https://github.com/kubedb/installer/commit/a32f766) Update Elasticsearch versions (#234) +- [435fc07](https://github.com/kubedb/installer/commit/435fc07) Update chart description +- [f7bebec](https://github.com/kubedb/installer/commit/f7bebec) Add kubedb-crds chart (#236) +- [26397fc](https://github.com/kubedb/installer/commit/26397fc) Skip generating YAMLs not needed for install command (#233) +- [5788701](https://github.com/kubedb/installer/commit/5788701) Update repository config (#232) +- [89b21eb](https://github.com/kubedb/installer/commit/89b21eb) Add statefulsets/finalizers to ClusterRole (#230) +- [d282443](https://github.com/kubedb/installer/commit/d282443) Cleanup CI workflow (#231) +- [cf01dd6](https://github.com/kubedb/installer/commit/cf01dd6) Update repository config (#229) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.1.0](https://github.com/kubedb/mariadb/releases/tag/v0.1.0) + +- [146c9b87](https://github.com/kubedb/mariadb/commit/146c9b87) Prepare for release v0.1.0 (#56) +- [808ff4cd](https://github.com/kubedb/mariadb/commit/808ff4cd) Pass stash addon 
info to AppBinding (#55) +- [8e77c251](https://github.com/kubedb/mariadb/commit/8e77c251) Removed recommended version check from validator (#54) +- [4139a7b3](https://github.com/kubedb/mariadb/commit/4139a7b3) Update for release Stash@v2021.03.08 (#53) +- [fabdbce0](https://github.com/kubedb/mariadb/commit/fabdbce0) Update Makefile +- [9c4ee6e8](https://github.com/kubedb/mariadb/commit/9c4ee6e8) Implement MariaDB operator (#42) +- [2c3ad2c0](https://github.com/kubedb/mariadb/commit/2c3ad2c0) Fix install command in Makefile (#50) +- [2d57e20d](https://github.com/kubedb/mariadb/commit/2d57e20d) Update repository config (#48) +- [f2f8c646](https://github.com/kubedb/mariadb/commit/f2f8c646) Update repository config (#47) +- [3f120133](https://github.com/kubedb/mariadb/commit/3f120133) Update Kubernetes v1.18.9 dependencies (#46) +- [ede125ba](https://github.com/kubedb/mariadb/commit/ede125ba) Update repository config (#45) +- [d173d7a9](https://github.com/kubedb/mariadb/commit/d173d7a9) Update Kubernetes v1.18.9 dependencies (#44) +- [b22bc09e](https://github.com/kubedb/mariadb/commit/b22bc09e) Update Kubernetes v1.18.9 dependencies (#43) +- [3d63b2fa](https://github.com/kubedb/mariadb/commit/3d63b2fa) Update repository config (#40) +- [8440ebde](https://github.com/kubedb/mariadb/commit/8440ebde) Update repository config (#39) +- [24dfaf7d](https://github.com/kubedb/mariadb/commit/24dfaf7d) Update Kubernetes v1.18.9 dependencies (#37) +- [923f1e88](https://github.com/kubedb/mariadb/commit/923f1e88) Update for release Stash@v2021.01.21 (#36) +- [e5a1f271](https://github.com/kubedb/mariadb/commit/e5a1f271) Update repository config (#35) +- [b60c7fb6](https://github.com/kubedb/mariadb/commit/b60c7fb6) Update repository config (#34) +- [a9361a2f](https://github.com/kubedb/mariadb/commit/a9361a2f) Update KubeDB api (#33) +- [1b66913c](https://github.com/kubedb/mariadb/commit/1b66913c) Update KubeDB api (#32) +- [9b530888](https://github.com/kubedb/mariadb/commit/9b530888) Update KubeDB api (#30) +- [a5a2b4b8](https://github.com/kubedb/mariadb/commit/a5a2b4b8) Update KubeDB api (#29) +- [7642dfdf](https://github.com/kubedb/mariadb/commit/7642dfdf) Update KubeDB api (#28) +- [561e1da9](https://github.com/kubedb/mariadb/commit/561e1da9) Delete e2e tests moved to kubedb/test repo (#27) +- [1f772e07](https://github.com/kubedb/mariadb/commit/1f772e07) Update KubeDB api (#25) +- [7f18e249](https://github.com/kubedb/mariadb/commit/7f18e249) Fix annotations passing to AppBinding (#24) +- [40e5e1d6](https://github.com/kubedb/mariadb/commit/40e5e1d6) Initialize mapper +- [6b6be5d7](https://github.com/kubedb/mariadb/commit/6b6be5d7) Change offshoot selector labels to standard k8s app labels (#23) +- [8e88e863](https://github.com/kubedb/mariadb/commit/8e88e863) Update KubeDB api (#22) +- [ab55d3f3](https://github.com/kubedb/mariadb/commit/ab55d3f3) Update KubeDB api (#21) +- [d256ae83](https://github.com/kubedb/mariadb/commit/d256ae83) Use basic-auth secret type for auth secret (#20) +- [8988ecbe](https://github.com/kubedb/mariadb/commit/8988ecbe) Update KubeDB api (#19) +- [cb9264eb](https://github.com/kubedb/mariadb/commit/cb9264eb) Update for release Stash@v2020.12.17 (#18) +- [92a4a353](https://github.com/kubedb/mariadb/commit/92a4a353) Update KubeDB api (#17) +- [f95e6bd8](https://github.com/kubedb/mariadb/commit/f95e6bd8) Update KubeDB api (#16) +- [4ce3e1fe](https://github.com/kubedb/mariadb/commit/4ce3e1fe) Update KubeDB api (#15) +- [9c2d3e8f](https://github.com/kubedb/mariadb/commit/9c2d3e8f) Update 
Kubernetes v1.18.9 dependencies (#14) +- [57f57bcc](https://github.com/kubedb/mariadb/commit/57f57bcc) Update KubeDB api (#13) +- [a0e4100d](https://github.com/kubedb/mariadb/commit/a0e4100d) Update KubeDB api (#12) +- [6e3159b0](https://github.com/kubedb/mariadb/commit/6e3159b0) Update KubeDB api (#11) +- [04adaa56](https://github.com/kubedb/mariadb/commit/04adaa56) Update Kubernetes v1.18.9 dependencies (#10) +- [34c40bf6](https://github.com/kubedb/mariadb/commit/34c40bf6) Update e2e workflow (#9) +- [c95cb8e7](https://github.com/kubedb/mariadb/commit/c95cb8e7) Update KubeDB api (#8) +- [cefbd5e6](https://github.com/kubedb/mariadb/commit/cefbd5e6) Format shell scripts (#7) +- [3fbc312e](https://github.com/kubedb/mariadb/commit/3fbc312e) Update KubeDB api (#6) +- [6dfefd95](https://github.com/kubedb/mariadb/commit/6dfefd95) Update KubeDB api (#5) +- [b0e0bc48](https://github.com/kubedb/mariadb/commit/b0e0bc48) Update repository config (#4) +- [ddac5279](https://github.com/kubedb/mariadb/commit/ddac5279) Update readme +- [3aebb7b1](https://github.com/kubedb/mariadb/commit/3aebb7b1) Fix serviceTemplate inline json (#2) +- [bda7cb60](https://github.com/kubedb/mariadb/commit/bda7cb60) Rename to MariaDB (#3) +- [aa216cf5](https://github.com/kubedb/mariadb/commit/aa216cf5) Prepare for release v0.1.1 (#134) +- [d43b87a3](https://github.com/kubedb/mariadb/commit/d43b87a3) Update Kubernetes v1.18.9 dependencies (#133) +- [1a354dba](https://github.com/kubedb/mariadb/commit/1a354dba) Update KubeDB api (#132) +- [808366cc](https://github.com/kubedb/mariadb/commit/808366cc) Update Kubernetes v1.18.9 dependencies (#131) +- [adb44379](https://github.com/kubedb/mariadb/commit/adb44379) Update KubeDB api (#130) +- [6d6188de](https://github.com/kubedb/mariadb/commit/6d6188de) Update for release Stash@v2020.11.06 (#129) +- [8d3eaa37](https://github.com/kubedb/mariadb/commit/8d3eaa37) Update Kubernetes v1.18.9 dependencies (#128) +- [5f7253b6](https://github.com/kubedb/mariadb/commit/5f7253b6) Update KubeDB api (#126) +- [43f10d83](https://github.com/kubedb/mariadb/commit/43f10d83) Update KubeDB api (#125) +- [91940395](https://github.com/kubedb/mariadb/commit/91940395) Update for release Stash@v2020.10.30 (#124) +- [eba69286](https://github.com/kubedb/mariadb/commit/eba69286) Update KubeDB api (#123) +- [a4dd87ba](https://github.com/kubedb/mariadb/commit/a4dd87ba) Update for release Stash@v2020.10.29 (#122) +- [3b2593ce](https://github.com/kubedb/mariadb/commit/3b2593ce) Prepare for release v0.1.0 (#121) +- [ae82716f](https://github.com/kubedb/mariadb/commit/ae82716f) Prepare for release v0.1.0-rc.2 (#120) +- [4ac07f08](https://github.com/kubedb/mariadb/commit/4ac07f08) Prepare for release v0.1.0-rc.1 (#119) +- [397607a3](https://github.com/kubedb/mariadb/commit/397607a3) Prepare for release v0.1.0-beta.6 (#118) +- [a3b7642d](https://github.com/kubedb/mariadb/commit/a3b7642d) Create SRV records for governing service (#117) +- [9866a420](https://github.com/kubedb/mariadb/commit/9866a420) Prepare for release v0.1.0-beta.5 (#116) +- [f92081d1](https://github.com/kubedb/mariadb/commit/f92081d1) Create separate governing service for each database (#115) +- [6010b189](https://github.com/kubedb/mariadb/commit/6010b189) Update KubeDB api (#114) +- [95b57c72](https://github.com/kubedb/mariadb/commit/95b57c72) Update readme +- [14b2f1b2](https://github.com/kubedb/mariadb/commit/14b2f1b2) Prepare for release v0.1.0-beta.4 (#113) +- [eff1d265](https://github.com/kubedb/mariadb/commit/eff1d265) Update KubeDB api (#112) 
+- [a2878d4a](https://github.com/kubedb/mariadb/commit/a2878d4a) Update Kubernetes v1.18.9 dependencies (#111) +- [51f0d104](https://github.com/kubedb/mariadb/commit/51f0d104) Update KubeDB api (#110) +- [fcf5343b](https://github.com/kubedb/mariadb/commit/fcf5343b) Update for release Stash@v2020.10.21 (#109) +- [9fe68d43](https://github.com/kubedb/mariadb/commit/9fe68d43) Fix init validator (#107) +- [1c528cff](https://github.com/kubedb/mariadb/commit/1c528cff) Update KubeDB api (#108) +- [99d23f3d](https://github.com/kubedb/mariadb/commit/99d23f3d) Update KubeDB api (#106) +- [d0807640](https://github.com/kubedb/mariadb/commit/d0807640) Update Kubernetes v1.18.9 dependencies (#105) +- [bac7705b](https://github.com/kubedb/mariadb/commit/bac7705b) Update KubeDB api (#104) +- [475aabd5](https://github.com/kubedb/mariadb/commit/475aabd5) Update KubeDB api (#103) +- [60f7e5a9](https://github.com/kubedb/mariadb/commit/60f7e5a9) Update KubeDB api (#102) +- [84a97ced](https://github.com/kubedb/mariadb/commit/84a97ced) Update KubeDB api (#101) +- [d4a7b7c5](https://github.com/kubedb/mariadb/commit/d4a7b7c5) Update Kubernetes v1.18.9 dependencies (#100) +- [b818a4c5](https://github.com/kubedb/mariadb/commit/b818a4c5) Update KubeDB api (#99) +- [03df7739](https://github.com/kubedb/mariadb/commit/03df7739) Update KubeDB api (#98) +- [2f3ce0e6](https://github.com/kubedb/mariadb/commit/2f3ce0e6) Update KubeDB api (#96) +- [94e009e8](https://github.com/kubedb/mariadb/commit/94e009e8) Update repository config (#95) +- [fc61d440](https://github.com/kubedb/mariadb/commit/fc61d440) Update repository config (#94) +- [35f5b2bb](https://github.com/kubedb/mariadb/commit/35f5b2bb) Update repository config (#93) +- [d01e39dd](https://github.com/kubedb/mariadb/commit/d01e39dd) Initialize statefulset watcher from cmd/server/options.go (#92) +- [41bf932f](https://github.com/kubedb/mariadb/commit/41bf932f) Update KubeDB api (#91) +- [da92a1f3](https://github.com/kubedb/mariadb/commit/da92a1f3) Update Kubernetes v1.18.9 dependencies (#90) +- [554beafb](https://github.com/kubedb/mariadb/commit/554beafb) Publish docker images to ghcr.io (#89) +- [4c7031e1](https://github.com/kubedb/mariadb/commit/4c7031e1) Update KubeDB api (#88) +- [418c767a](https://github.com/kubedb/mariadb/commit/418c767a) Update KubeDB api (#87) +- [94eef91e](https://github.com/kubedb/mariadb/commit/94eef91e) Update KubeDB api (#86) +- [f3c2a360](https://github.com/kubedb/mariadb/commit/f3c2a360) Update KubeDB api (#85) +- [107bb6a6](https://github.com/kubedb/mariadb/commit/107bb6a6) Update repository config (#84) +- [938e64bc](https://github.com/kubedb/mariadb/commit/938e64bc) Cleanup monitoring spec api (#83) +- [deeaad8f](https://github.com/kubedb/mariadb/commit/deeaad8f) Use conditions to handle database initialization (#80) +- [798c3ddc](https://github.com/kubedb/mariadb/commit/798c3ddc) Update Kubernetes v1.18.9 dependencies (#82) +- [16c72ba6](https://github.com/kubedb/mariadb/commit/16c72ba6) Updated the exporter port and service (#81) +- [9314faf1](https://github.com/kubedb/mariadb/commit/9314faf1) Update for release Stash@v2020.09.29 (#79) +- [6cb53efc](https://github.com/kubedb/mariadb/commit/6cb53efc) Update Kubernetes v1.18.9 dependencies (#78) +- [fd2b8cdd](https://github.com/kubedb/mariadb/commit/fd2b8cdd) Update Kubernetes v1.18.9 dependencies (#76) +- [9d1038db](https://github.com/kubedb/mariadb/commit/9d1038db) Update repository config (#75) +- [41a05a44](https://github.com/kubedb/mariadb/commit/41a05a44) Update repository config 
(#74) +- [eccd2acd](https://github.com/kubedb/mariadb/commit/eccd2acd) Update Kubernetes v1.18.9 dependencies (#73) +- [27635f1c](https://github.com/kubedb/mariadb/commit/27635f1c) Update Kubernetes v1.18.3 dependencies (#72) +- [792326c7](https://github.com/kubedb/mariadb/commit/792326c7) Use common event recorder (#71) +- [0ff583b8](https://github.com/kubedb/mariadb/commit/0ff583b8) Prepare for release v0.1.0-beta.3 (#70) +- [627bc039](https://github.com/kubedb/mariadb/commit/627bc039) Use new `spec.init` section (#69) +- [f79e4771](https://github.com/kubedb/mariadb/commit/f79e4771) Update Kubernetes v1.18.3 dependencies (#68) +- [257954c2](https://github.com/kubedb/mariadb/commit/257954c2) Add license verifier (#67) +- [e06eec6b](https://github.com/kubedb/mariadb/commit/e06eec6b) Update for release Stash@v2020.09.16 (#66) +- [29901348](https://github.com/kubedb/mariadb/commit/29901348) Update Kubernetes v1.18.3 dependencies (#65) +- [02d5bfde](https://github.com/kubedb/mariadb/commit/02d5bfde) Use background deletion policy +- [6e6d8b5b](https://github.com/kubedb/mariadb/commit/6e6d8b5b) Update Kubernetes v1.18.3 dependencies (#63) +- [7601a237](https://github.com/kubedb/mariadb/commit/7601a237) Use AppsCode Community License (#62) +- [4d1a2424](https://github.com/kubedb/mariadb/commit/4d1a2424) Update Kubernetes v1.18.3 dependencies (#61) +- [471b6def](https://github.com/kubedb/mariadb/commit/471b6def) Prepare for release v0.1.0-beta.2 (#60) +- [9423a70f](https://github.com/kubedb/mariadb/commit/9423a70f) Update release.yml +- [85d1d036](https://github.com/kubedb/mariadb/commit/85d1d036) Use updated apis (#59) +- [6811b8dc](https://github.com/kubedb/mariadb/commit/6811b8dc) Update Kubernetes v1.18.3 dependencies (#53) +- [4212d2a0](https://github.com/kubedb/mariadb/commit/4212d2a0) Update Kubernetes v1.18.3 dependencies (#52) +- [659d646c](https://github.com/kubedb/mariadb/commit/659d646c) Update Kubernetes v1.18.3 dependencies (#51) +- [a868e0c3](https://github.com/kubedb/mariadb/commit/a868e0c3) Update Kubernetes v1.18.3 dependencies (#50) +- [162e6ca4](https://github.com/kubedb/mariadb/commit/162e6ca4) Update Kubernetes v1.18.3 dependencies (#49) +- [a7fa1fbf](https://github.com/kubedb/mariadb/commit/a7fa1fbf) Update Kubernetes v1.18.3 dependencies (#48) +- [b6a4583f](https://github.com/kubedb/mariadb/commit/b6a4583f) Remove dependency on enterprise operator (#47) +- [a8909b38](https://github.com/kubedb/mariadb/commit/a8909b38) Allow configuring k8s & db version in e2e tests (#46) +- [4d79d26e](https://github.com/kubedb/mariadb/commit/4d79d26e) Update to Kubernetes v1.18.3 (#45) +- [189f3212](https://github.com/kubedb/mariadb/commit/189f3212) Trigger e2e tests on /ok-to-test command (#44) +- [a037bd03](https://github.com/kubedb/mariadb/commit/a037bd03) Update to Kubernetes v1.18.3 (#43) +- [33cabdf3](https://github.com/kubedb/mariadb/commit/33cabdf3) Update to Kubernetes v1.18.3 (#42) +- [28b9fc0f](https://github.com/kubedb/mariadb/commit/28b9fc0f) Prepare for release v0.1.0-beta.1 (#41) +- [fb4f5444](https://github.com/kubedb/mariadb/commit/fb4f5444) Update for release Stash@v2020.07.09-beta.0 (#39) +- [ad221aa2](https://github.com/kubedb/mariadb/commit/ad221aa2) include Makefile.env +- [841ec855](https://github.com/kubedb/mariadb/commit/841ec855) Allow customizing chart registry (#38) +- [bb608980](https://github.com/kubedb/mariadb/commit/bb608980) Update License (#37) +- [cf8cd2fa](https://github.com/kubedb/mariadb/commit/cf8cd2fa) Update for release Stash@v2020.07.08-beta.0 
(#36) +- [7b28c4b9](https://github.com/kubedb/mariadb/commit/7b28c4b9) Update to Kubernetes v1.18.3 (#35) +- [848ff94a](https://github.com/kubedb/mariadb/commit/848ff94a) Update ci.yml +- [d124dd6a](https://github.com/kubedb/mariadb/commit/d124dd6a) Load stash version from .env file for make (#34) +- [1de40e1d](https://github.com/kubedb/mariadb/commit/1de40e1d) Update update-release-tracker.sh +- [7a4503be](https://github.com/kubedb/mariadb/commit/7a4503be) Update update-release-tracker.sh +- [ad0dfaf8](https://github.com/kubedb/mariadb/commit/ad0dfaf8) Add script to update release tracker on pr merge (#33) +- [aaca6bd9](https://github.com/kubedb/mariadb/commit/aaca6bd9) Update .kodiak.toml +- [9a495724](https://github.com/kubedb/mariadb/commit/9a495724) Various fixes (#32) +- [9b6c9a53](https://github.com/kubedb/mariadb/commit/9b6c9a53) Update to Kubernetes v1.18.3 (#31) +- [67912547](https://github.com/kubedb/mariadb/commit/67912547) Update to Kubernetes v1.18.3 +- [fc8ce4cc](https://github.com/kubedb/mariadb/commit/fc8ce4cc) Create .kodiak.toml +- [8aba5ef2](https://github.com/kubedb/mariadb/commit/8aba5ef2) Use CRD v1 for Kubernetes >= 1.16 (#30) +- [e81d2b4c](https://github.com/kubedb/mariadb/commit/e81d2b4c) Update to Kubernetes v1.18.3 (#29) +- [2a32730a](https://github.com/kubedb/mariadb/commit/2a32730a) Fix e2e tests (#28) +- [a79626d9](https://github.com/kubedb/mariadb/commit/a79626d9) Update stash install commands +- [52fc2059](https://github.com/kubedb/mariadb/commit/52fc2059) Use recommended kubernetes app labels (#27) +- [93dc10ec](https://github.com/kubedb/mariadb/commit/93dc10ec) Update crazy-max/ghaction-docker-buildx flag +- [ce5717e2](https://github.com/kubedb/mariadb/commit/ce5717e2) Revendor kubedb.dev/apimachinery@master (#26) +- [c1ca649d](https://github.com/kubedb/mariadb/commit/c1ca649d) Pass annotations from CRD to AppBinding (#25) +- [f327cc01](https://github.com/kubedb/mariadb/commit/f327cc01) Trigger the workflow on push or pull request +- [02432393](https://github.com/kubedb/mariadb/commit/02432393) Update CHANGELOG.md +- [a89dbc55](https://github.com/kubedb/mariadb/commit/a89dbc55) Use stash.appscode.dev/apimachinery@v0.9.0-rc.6 (#24) +- [e69742de](https://github.com/kubedb/mariadb/commit/e69742de) Update for percona-xtradb standalone restoresession (#23) +- [958877a1](https://github.com/kubedb/mariadb/commit/958877a1) Various fixes (#21) +- [fb0d7a35](https://github.com/kubedb/mariadb/commit/fb0d7a35) Update kubernetes client-go to 1.16.3 (#20) +- [293fe9a4](https://github.com/kubedb/mariadb/commit/293fe9a4) Fix default make command +- [39358e3b](https://github.com/kubedb/mariadb/commit/39358e3b) Use charts to install operator (#19) +- [6c5b3395](https://github.com/kubedb/mariadb/commit/6c5b3395) Several fixes and update tests (#18) +- [84ff139f](https://github.com/kubedb/mariadb/commit/84ff139f) Various Makefile improvements (#16) +- [e2737f65](https://github.com/kubedb/mariadb/commit/e2737f65) Remove EnableStatusSubresource (#17) +- [fb886b07](https://github.com/kubedb/mariadb/commit/fb886b07) Run e2e tests using GitHub actions (#12) +- [35b155d9](https://github.com/kubedb/mariadb/commit/35b155d9) Validate DBVersionSpecs and fixed broken build (#15) +- [67794bd9](https://github.com/kubedb/mariadb/commit/67794bd9) Update go.yml +- [f7666354](https://github.com/kubedb/mariadb/commit/f7666354) Various changes for Percona XtraDB (#13) +- [ceb7ba67](https://github.com/kubedb/mariadb/commit/ceb7ba67) Enable GitHub actions +- 
[f5a112af](https://github.com/kubedb/mariadb/commit/f5a112af) Refactor for ProxySQL Integration (#11) +- [26602049](https://github.com/kubedb/mariadb/commit/26602049) Revendor +- [71957d40](https://github.com/kubedb/mariadb/commit/71957d40) Rename from perconaxtradb to percona-xtradb (#10) +- [b526ccd8](https://github.com/kubedb/mariadb/commit/b526ccd8) Set database version in AppBinding (#7) +- [336e7203](https://github.com/kubedb/mariadb/commit/336e7203) Percona XtraDB Cluster support (#9) +- [71a42f7a](https://github.com/kubedb/mariadb/commit/71a42f7a) Don't set annotation to AppBinding (#8) +- [282298cb](https://github.com/kubedb/mariadb/commit/282298cb) Fix UpsertDatabaseAnnotation() function (#4) +- [2ab9dddf](https://github.com/kubedb/mariadb/commit/2ab9dddf) Add license header to Makefiles (#6) +- [df135c08](https://github.com/kubedb/mariadb/commit/df135c08) Add install, uninstall and purge command in Makefile (#3) +- [73d3a845](https://github.com/kubedb/mariadb/commit/73d3a845) Update .gitignore +- [59a4e754](https://github.com/kubedb/mariadb/commit/59a4e754) Add Makefile (#2) +- [f3551ddc](https://github.com/kubedb/mariadb/commit/f3551ddc) Rename package path (#1) +- [56a241d6](https://github.com/kubedb/mariadb/commit/56a241d6) Use explicit IP whitelist instead of automatic IP whitelist (#151) +- [9f0b5ca3](https://github.com/kubedb/mariadb/commit/9f0b5ca3) Update to k8s 1.14.0 client libraries using go.mod (#147) +- [73ad7c30](https://github.com/kubedb/mariadb/commit/73ad7c30) Update changelog +- [ccc36b5c](https://github.com/kubedb/mariadb/commit/ccc36b5c) Update README.md +- [9769e8e1](https://github.com/kubedb/mariadb/commit/9769e8e1) Start next dev cycle +- [a3fa468a](https://github.com/kubedb/mariadb/commit/a3fa468a) Prepare release 0.5.0 +- [6d8862de](https://github.com/kubedb/mariadb/commit/6d8862de) Mysql Group Replication tests (#146) +- [49544e55](https://github.com/kubedb/mariadb/commit/49544e55) Mysql Group Replication (#144) +- [a85d4b44](https://github.com/kubedb/mariadb/commit/a85d4b44) Revendor dependencies +- [9c538460](https://github.com/kubedb/mariadb/commit/9c538460) Changed Role to exclude psp without name (#143) +- [6cace93b](https://github.com/kubedb/mariadb/commit/6cace93b) Modify mutator validator names (#142) +- [da0c19b9](https://github.com/kubedb/mariadb/commit/da0c19b9) Update changelog +- [b79c80d6](https://github.com/kubedb/mariadb/commit/b79c80d6) Start next dev cycle +- [838d9459](https://github.com/kubedb/mariadb/commit/838d9459) Prepare release 0.4.0 +- [bf0f2c14](https://github.com/kubedb/mariadb/commit/bf0f2c14) Added PSP names and init container image in testing framework (#141) +- [3d227570](https://github.com/kubedb/mariadb/commit/3d227570) Added PSP support for mySQL (#137) +- [7b766657](https://github.com/kubedb/mariadb/commit/7b766657) Don't inherit app.kubernetes.io labels from CRD into offshoots (#140) +- [29e23470](https://github.com/kubedb/mariadb/commit/29e23470) Support for init container (#139) +- [3e1556f6](https://github.com/kubedb/mariadb/commit/3e1556f6) Add role label to stats service (#138) +- [ee078af9](https://github.com/kubedb/mariadb/commit/ee078af9) Update changelog +- [978f1139](https://github.com/kubedb/mariadb/commit/978f1139) Update Kubernetes client libraries to 1.13.0 release (#136) +- [821f23d1](https://github.com/kubedb/mariadb/commit/821f23d1) Start next dev cycle +- [678b26aa](https://github.com/kubedb/mariadb/commit/678b26aa) Prepare release 0.3.0 +- 
[40ad7a23](https://github.com/kubedb/mariadb/commit/40ad7a23) Initial RBAC support: create and use K8s service account for MySQL (#134) +- [98f03387](https://github.com/kubedb/mariadb/commit/98f03387) Revendor dependencies (#135) +- [dfe92615](https://github.com/kubedb/mariadb/commit/dfe92615) Revendor dependencies : Retry Failed Scheduler Snapshot (#133) +- [71f8a350](https://github.com/kubedb/mariadb/commit/71f8a350) Added ephemeral StorageType support (#132) +- [0a6b6e46](https://github.com/kubedb/mariadb/commit/0a6b6e46) Added support of MySQL 8.0.14 (#131) +- [99e57a9e](https://github.com/kubedb/mariadb/commit/99e57a9e) Use PVC spec from snapshot if provided (#130) +- [61497be6](https://github.com/kubedb/mariadb/commit/61497be6) Revendored and updated tests for 'Prevent prefix matching of multiple snapshots' (#129) +- [7eafe088](https://github.com/kubedb/mariadb/commit/7eafe088) Add certificate health checker (#128) +- [973ec416](https://github.com/kubedb/mariadb/commit/973ec416) Update E2E test: Env update is not restricted anymore (#127) +- [339975ff](https://github.com/kubedb/mariadb/commit/339975ff) Fix AppBinding (#126) +- [62050a72](https://github.com/kubedb/mariadb/commit/62050a72) Update changelog +- [2d454043](https://github.com/kubedb/mariadb/commit/2d454043) Prepare release 0.2.0 +- [6941ea59](https://github.com/kubedb/mariadb/commit/6941ea59) Reuse event recorder (#125) +- [b77e66c4](https://github.com/kubedb/mariadb/commit/b77e66c4) OSM binary upgraded in mysql-tools (#123) +- [c9228086](https://github.com/kubedb/mariadb/commit/c9228086) Revendor dependencies (#124) +- [97837120](https://github.com/kubedb/mariadb/commit/97837120) Test for faulty snapshot (#122) +- [c3e995b6](https://github.com/kubedb/mariadb/commit/c3e995b6) Start next dev cycle +- [8a4f3b13](https://github.com/kubedb/mariadb/commit/8a4f3b13) Prepare release 0.2.0-rc.2 +- [79942191](https://github.com/kubedb/mariadb/commit/79942191) Upgrade database secret keys (#121) +- [1747fdf5](https://github.com/kubedb/mariadb/commit/1747fdf5) Ignore mutation of fields to default values during update (#120) +- [d902d588](https://github.com/kubedb/mariadb/commit/d902d588) Support configuration options for exporter sidecar (#119) +- [dd7c3f44](https://github.com/kubedb/mariadb/commit/dd7c3f44) Use flags.DumpAll (#118) +- [bc1ef05b](https://github.com/kubedb/mariadb/commit/bc1ef05b) Start next dev cycle +- [9d33c1a0](https://github.com/kubedb/mariadb/commit/9d33c1a0) Prepare release 0.2.0-rc.1 +- [b076e141](https://github.com/kubedb/mariadb/commit/b076e141) Apply cleanup (#117) +- [7dc5641f](https://github.com/kubedb/mariadb/commit/7dc5641f) Set periodic analytics (#116) +- [90ea6acc](https://github.com/kubedb/mariadb/commit/90ea6acc) Introduce AppBinding support (#115) +- [a882d76a](https://github.com/kubedb/mariadb/commit/a882d76a) Fix Analytics (#114) +- [0961009c](https://github.com/kubedb/mariadb/commit/0961009c) Error out from cron job for deprecated dbversion (#113) +- [da1f4e27](https://github.com/kubedb/mariadb/commit/da1f4e27) Add CRDs without observation when operator starts (#112) +- [0a754d2f](https://github.com/kubedb/mariadb/commit/0a754d2f) Update changelog +- [b09bc6e1](https://github.com/kubedb/mariadb/commit/b09bc6e1) Start next dev cycle +- [0d467ccb](https://github.com/kubedb/mariadb/commit/0d467ccb) Prepare release 0.2.0-rc.0 +- [c757007a](https://github.com/kubedb/mariadb/commit/c757007a) Merge commit 'cc6607a3589a79a5e61bb198d370ea0ae30b9d09' +- 
[ddfe4be1](https://github.com/kubedb/mariadb/commit/ddfe4be1) Support custom user password for backup (#111) +- [8c84ba20](https://github.com/kubedb/mariadb/commit/8c84ba20) Support providing resources for monitoring container (#110) +- [7bcfbc48](https://github.com/kubedb/mariadb/commit/7bcfbc48) Update kubernetes client libraries to 1.12.0 (#109) +- [145bba2b](https://github.com/kubedb/mariadb/commit/145bba2b) Add validation webhook xray (#108) +- [6da1887f](https://github.com/kubedb/mariadb/commit/6da1887f) Various Fixes (#107) +- [111519e9](https://github.com/kubedb/mariadb/commit/111519e9) Merge ports from service template (#105) +- [38147ef1](https://github.com/kubedb/mariadb/commit/38147ef1) Replace doNotPause with TerminationPolicy = DoNotTerminate (#104) +- [e28ebc47](https://github.com/kubedb/mariadb/commit/e28ebc47) Pass resources to NamespaceValidator (#103) +- [aed12bf5](https://github.com/kubedb/mariadb/commit/aed12bf5) Various fixes (#102) +- [3d372ef6](https://github.com/kubedb/mariadb/commit/3d372ef6) Support Lifecycle hook and container probes (#101) +- [b6ef6887](https://github.com/kubedb/mariadb/commit/b6ef6887) Check if Kubernetes version is supported before running operator (#100) +- [d89e7783](https://github.com/kubedb/mariadb/commit/d89e7783) Update package alias (#99) +- [f0b44b3a](https://github.com/kubedb/mariadb/commit/f0b44b3a) Start next dev cycle +- [a79ff03b](https://github.com/kubedb/mariadb/commit/a79ff03b) Prepare release 0.2.0-beta.1 +- [0d8d3cca](https://github.com/kubedb/mariadb/commit/0d8d3cca) Revendor api (#98) +- [2f850243](https://github.com/kubedb/mariadb/commit/2f850243) Fix tests (#97) +- [4ced0bfe](https://github.com/kubedb/mariadb/commit/4ced0bfe) Revendor api for catalog apigroup (#96) +- [e7695400](https://github.com/kubedb/mariadb/commit/e7695400) Update changelog +- [8e358aea](https://github.com/kubedb/mariadb/commit/8e358aea) Use --pull flag with docker build (#20) (#95) +- [d2a97d90](https://github.com/kubedb/mariadb/commit/d2a97d90) Merge commit '16c769ee4686576f172a6b79a10d25bfd79ca4a4' +- [d1fe8a8a](https://github.com/kubedb/mariadb/commit/d1fe8a8a) Start next dev cycle +- [04eb9bb5](https://github.com/kubedb/mariadb/commit/04eb9bb5) Prepare release 0.2.0-beta.0 +- [9dfea960](https://github.com/kubedb/mariadb/commit/9dfea960) Pass extra args to tools.sh (#93) +- [47dd3cad](https://github.com/kubedb/mariadb/commit/47dd3cad) Don't try to wipe out Snapshot data for Local backend (#92) +- [9c4d485b](https://github.com/kubedb/mariadb/commit/9c4d485b) Add missing alt-tag docker folder mysql-tools images (#91) +- [be72f784](https://github.com/kubedb/mariadb/commit/be72f784) Use suffix for updated DBImage & Stop working for deprecated *Versions (#90) +- [05c8f14d](https://github.com/kubedb/mariadb/commit/05c8f14d) Search used secrets within same namespace of DB object (#89) +- [0d94c946](https://github.com/kubedb/mariadb/commit/0d94c946) Support Termination Policy (#88) +- [8775ddf7](https://github.com/kubedb/mariadb/commit/8775ddf7) Update builddeps.sh +- [796c93da](https://github.com/kubedb/mariadb/commit/796c93da) Revendor k8s.io/apiserver (#87) +- [5a1e3f57](https://github.com/kubedb/mariadb/commit/5a1e3f57) Revendor kubernetes-1.11.3 (#86) +- [809a3c49](https://github.com/kubedb/mariadb/commit/809a3c49) Support UpdateStrategy (#84) +- [372c52ef](https://github.com/kubedb/mariadb/commit/372c52ef) Add TerminationPolicy for databases (#83) +- [c01b55e8](https://github.com/kubedb/mariadb/commit/c01b55e8) Revendor api (#82) +- 
[5e196b95](https://github.com/kubedb/mariadb/commit/5e196b95) Use IntHash as status.observedGeneration (#81) +- [2da3bb1b](https://github.com/kubedb/mariadb/commit/2da3bb1b) fix github status (#80) +- [121d0a98](https://github.com/kubedb/mariadb/commit/121d0a98) Update pipeline (#79) +- [532e3137](https://github.com/kubedb/mariadb/commit/532e3137) Fix E2E test for minikube (#78) +- [0f107815](https://github.com/kubedb/mariadb/commit/0f107815) Update pipeline (#77) +- [851679e2](https://github.com/kubedb/mariadb/commit/851679e2) Migrate MySQL (#75) +- [0b997855](https://github.com/kubedb/mariadb/commit/0b997855) Use official exporter image (#74) +- [702d5736](https://github.com/kubedb/mariadb/commit/702d5736) Fix uninstall for concourse (#70) +- [9ee88bd2](https://github.com/kubedb/mariadb/commit/9ee88bd2) Update status.ObservedGeneration for failure phase (#73) +- [559cdb6a](https://github.com/kubedb/mariadb/commit/559cdb6a) Keep track of ObservedGenerationHash (#72) +- [61c8b898](https://github.com/kubedb/mariadb/commit/61c8b898) Use NewObservableHandler (#71) +- [421274dc](https://github.com/kubedb/mariadb/commit/421274dc) Merge commit '887037c7e36289e3135dda99346fccc7e2ce303b' +- [6a41d9bc](https://github.com/kubedb/mariadb/commit/6a41d9bc) Fix uninstall for concourse (#69) +- [f1af09db](https://github.com/kubedb/mariadb/commit/f1af09db) Update README.md +- [bf3f1823](https://github.com/kubedb/mariadb/commit/bf3f1823) Revise immutable spec fields (#68) +- [26adec3b](https://github.com/kubedb/mariadb/commit/26adec3b) Merge commit '5f83049fc01dc1d0709ac0014d6f3a0f74a39417' +- [31a97820](https://github.com/kubedb/mariadb/commit/31a97820) Support passing args via PodTemplate (#67) +- [60f4ee23](https://github.com/kubedb/mariadb/commit/60f4ee23) Introduce storageType : ephemeral (#66) +- [bfd3fcd6](https://github.com/kubedb/mariadb/commit/bfd3fcd6) Add support for running tests on cncf cluster (#63) +- [fba47b19](https://github.com/kubedb/mariadb/commit/fba47b19) Merge commit 'e010cbb302c8d59d4cf69dd77085b046ff423b78' +- [6be96ce0](https://github.com/kubedb/mariadb/commit/6be96ce0) Revendor api (#65) +- [0f629ab3](https://github.com/kubedb/mariadb/commit/0f629ab3) Keep track of observedGeneration in status (#64) +- [c9a9596f](https://github.com/kubedb/mariadb/commit/c9a9596f) Separate StatsService for monitoring (#62) +- [62854641](https://github.com/kubedb/mariadb/commit/62854641) Use MySQLVersion for MySQL images (#61) +- [3c170c56](https://github.com/kubedb/mariadb/commit/3c170c56) Use updated crd spec (#60) +- [873c285e](https://github.com/kubedb/mariadb/commit/873c285e) Rename OffshootLabels to OffshootSelectors (#59) +- [2fd02169](https://github.com/kubedb/mariadb/commit/2fd02169) Revendor api (#58) +- [a127d6cd](https://github.com/kubedb/mariadb/commit/a127d6cd) Use kmodules monitoring and objectstore api (#57) +- [2f79a038](https://github.com/kubedb/mariadb/commit/2f79a038) Support custom configuration (#52) +- [49c67f00](https://github.com/kubedb/mariadb/commit/49c67f00) Merge commit '44e6d4985d93556e39ddcc4677ada5437fc5be64' +- [fb28bc6c](https://github.com/kubedb/mariadb/commit/fb28bc6c) Refactor concourse scripts (#56) +- [4de4ced1](https://github.com/kubedb/mariadb/commit/4de4ced1) Fix command `./hack/make.py test e2e` (#55) +- [3082123e](https://github.com/kubedb/mariadb/commit/3082123e) Set generated binary name to my-operator (#54) +- [5698f314](https://github.com/kubedb/mariadb/commit/5698f314) Don't add admission/v1beta1 group as a prioritized version (#53) +- 
[696135d5](https://github.com/kubedb/mariadb/commit/696135d5) Fix travis build (#48) +- [c519ef89](https://github.com/kubedb/mariadb/commit/c519ef89) Format shell script (#51) +- [c93e2f40](https://github.com/kubedb/mariadb/commit/c93e2f40) Enable status subresource for crds (#50) +- [edd951ca](https://github.com/kubedb/mariadb/commit/edd951ca) Update client-go to v8.0.0 (#49) +- [520597a6](https://github.com/kubedb/mariadb/commit/520597a6) Merge commit '71850e2c90cda8fc588b7dedb340edf3d316baea' +- [f1549e95](https://github.com/kubedb/mariadb/commit/f1549e95) Support ENV variables in CRDs (#46) +- [67f37780](https://github.com/kubedb/mariadb/commit/67f37780) Updated osm version to 0.7.1 (#47) +- [10e309c0](https://github.com/kubedb/mariadb/commit/10e309c0) Prepare release 0.1.0 +- [62a8fbbd](https://github.com/kubedb/mariadb/commit/62a8fbbd) Fixed missing error return (#45) +- [8c05bb83](https://github.com/kubedb/mariadb/commit/8c05bb83) Revendor dependencies (#44) +- [ca811a2e](https://github.com/kubedb/mariadb/commit/ca811a2e) Fix release script (#43) +- [b79541f6](https://github.com/kubedb/mariadb/commit/b79541f6) Add changelog (#42) +- [a2d13c82](https://github.com/kubedb/mariadb/commit/a2d13c82) Concourse (#41) +- [95b2186e](https://github.com/kubedb/mariadb/commit/95b2186e) Fixed kubeconfig plugin for Cloud Providers && Storage is required for MySQL (#40) +- [37762093](https://github.com/kubedb/mariadb/commit/37762093) Refactored E2E testing to support E2E testing with admission webhook in cloud (#38) +- [b6fe72ca](https://github.com/kubedb/mariadb/commit/b6fe72ca) Remove lost+found directory before initializing mysql (#39) +- [18ebb959](https://github.com/kubedb/mariadb/commit/18ebb959) Skip delete requests for empty resources (#37) +- [eeb7add0](https://github.com/kubedb/mariadb/commit/eeb7add0) Don't panic if admission options is nil (#36) +- [ccb59db0](https://github.com/kubedb/mariadb/commit/ccb59db0) Disable admission controllers for webhook server (#35) +- [b1c6c149](https://github.com/kubedb/mariadb/commit/b1c6c149) Separate ApiGroup for Mutating and Validating webhook && upgraded osm to 0.7.0 (#34) +- [b1890f7c](https://github.com/kubedb/mariadb/commit/b1890f7c) Update client-go to 7.0.0 (#33) +- [08c81726](https://github.com/kubedb/mariadb/commit/08c81726) Added update script for mysql-tools:8 (#32) +- [4bbe6c9f](https://github.com/kubedb/mariadb/commit/4bbe6c9f) Added support of mysql:5.7 (#31) +- [e657f512](https://github.com/kubedb/mariadb/commit/e657f512) Add support for one informer and N-eventHandler for snapshot, dormantDB and Job (#30) +- [bbcd48d6](https://github.com/kubedb/mariadb/commit/bbcd48d6) Use metrics from kube apiserver (#29) +- [1687e197](https://github.com/kubedb/mariadb/commit/1687e197) Bundle webhook server and Use SharedInformerFactory (#28) +- [cd0efc00](https://github.com/kubedb/mariadb/commit/cd0efc00) Move MySQL AdmissionWebhook packages into MySQL repository (#27) +- [46065e18](https://github.com/kubedb/mariadb/commit/46065e18) Use mysql:8.0.3 image as mysql:8.0 (#26) +- [1b73529f](https://github.com/kubedb/mariadb/commit/1b73529f) Update README.md +- [62eaa397](https://github.com/kubedb/mariadb/commit/62eaa397) Update README.md +- [c53704c7](https://github.com/kubedb/mariadb/commit/c53704c7) Remove Docker pull count +- [b9ec877e](https://github.com/kubedb/mariadb/commit/b9ec877e) Add travis yaml (#25) +- [ade3571c](https://github.com/kubedb/mariadb/commit/ade3571c) Start next dev cycle +- 
[b4b749df](https://github.com/kubedb/mariadb/commit/b4b749df) Prepare release 0.1.0-beta.2 +- [4d46d95d](https://github.com/kubedb/mariadb/commit/4d46d95d) Migrating to apps/v1 (#23) +- [5ee1ac8c](https://github.com/kubedb/mariadb/commit/5ee1ac8c) Update validation (#22) +- [dd023c50](https://github.com/kubedb/mariadb/commit/dd023c50) Fix dormantDB matching: pass same type to Equal method (#21) +- [37a1e4fd](https://github.com/kubedb/mariadb/commit/37a1e4fd) Use official code generator scripts (#20) +- [485d3d7c](https://github.com/kubedb/mariadb/commit/485d3d7c) Fixed dormantdb matching & Raised throttling time & Fixed MySQL version Checking (#19) +- [6db2ae8d](https://github.com/kubedb/mariadb/commit/6db2ae8d) Prepare release 0.1.0-beta.1 +- [ebbfec2f](https://github.com/kubedb/mariadb/commit/ebbfec2f) converted to k8s 1.9 & Improved InitSpec in DormantDB & Added support for Job watcher & Improved Tests (#17) +- [a484e0e5](https://github.com/kubedb/mariadb/commit/a484e0e5) Fixed logger, analytics and removed rbac stuff (#16) +- [7aa2d1d2](https://github.com/kubedb/mariadb/commit/7aa2d1d2) Add rbac stuffs for mysql-exporter (#15) +- [078098c8](https://github.com/kubedb/mariadb/commit/078098c8) Review Mysql docker images and Fixed monitoring (#14) +- [6877108a](https://github.com/kubedb/mariadb/commit/6877108a) Update README.md +- [1f84a5da](https://github.com/kubedb/mariadb/commit/1f84a5da) Start next dev cycle +- [2f1e4b7d](https://github.com/kubedb/mariadb/commit/2f1e4b7d) Prepare release 0.1.0-beta.0 +- [dce1e88e](https://github.com/kubedb/mariadb/commit/dce1e88e) Add release script +- [60ed55cb](https://github.com/kubedb/mariadb/commit/60ed55cb) Rename ms-operator to my-operator (#13) +- [5451d166](https://github.com/kubedb/mariadb/commit/5451d166) Fix Analytics and pass client-id as ENV to Snapshot Job (#12) +- [788ae178](https://github.com/kubedb/mariadb/commit/788ae178) update docker image validation (#11) +- [c966efd5](https://github.com/kubedb/mariadb/commit/c966efd5) Add docker-registry and WorkQueue (#10) +- [be340103](https://github.com/kubedb/mariadb/commit/be340103) Set client id for analytics (#9) +- [ca11f683](https://github.com/kubedb/mariadb/commit/ca11f683) Fix CRD Registration (#8) +- [2f95c13d](https://github.com/kubedb/mariadb/commit/2f95c13d) Update issue repo link +- [6fffa713](https://github.com/kubedb/mariadb/commit/6fffa713) Update pkg paths to kubedb org (#7) +- [2d4d5c44](https://github.com/kubedb/mariadb/commit/2d4d5c44) Assign default Prometheus Monitoring Port (#6) +- [a7595613](https://github.com/kubedb/mariadb/commit/a7595613) Add Snapshot Backup, Restore and Backup-Scheduler (#4) +- [17a782c6](https://github.com/kubedb/mariadb/commit/17a782c6) Update Dockerfile +- [e92bfec9](https://github.com/kubedb/mariadb/commit/e92bfec9) Add mysql-util docker image (#5) +- [2a4b25ac](https://github.com/kubedb/mariadb/commit/2a4b25ac) Mysql db - Initializing (#2) +- [cbfbc878](https://github.com/kubedb/mariadb/commit/cbfbc878) Update README.md +- [01cab651](https://github.com/kubedb/mariadb/commit/01cab651) Update README.md +- [0aa81cdf](https://github.com/kubedb/mariadb/commit/0aa81cdf) Use client-go 5.x +- [3de10d7f](https://github.com/kubedb/mariadb/commit/3de10d7f) Update ./hack folder (#3) +- [46f05b1f](https://github.com/kubedb/mariadb/commit/46f05b1f) Add skeleton for mysql (#1) +- [73147dba](https://github.com/kubedb/mariadb/commit/73147dba) Merge commit 'be70502b4993171bbad79d2ff89a9844f1c24caa' as 'hack/libbuild' + + + +## 
[kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.10.0](https://github.com/kubedb/memcached/releases/tag/v0.10.0) + +- [58cdf64a](https://github.com/kubedb/memcached/commit/58cdf64a) Prepare for release v0.10.0 (#291) +- [13e5a1fb](https://github.com/kubedb/memcached/commit/13e5a1fb) Update KubeDB api (#290) +- [390739ad](https://github.com/kubedb/memcached/commit/390739ad) Update db container security context (#289) +- [7bf492f1](https://github.com/kubedb/memcached/commit/7bf492f1) Update KubeDB api (#288) +- [d074bf23](https://github.com/kubedb/memcached/commit/d074bf23) Fix make install (#287) +- [ee747948](https://github.com/kubedb/memcached/commit/ee747948) Update repository config (#285) +- [e19e53fe](https://github.com/kubedb/memcached/commit/e19e53fe) Update repository config (#284) +- [e7764dcf](https://github.com/kubedb/memcached/commit/e7764dcf) Update Kubernetes v1.18.9 dependencies (#283) +- [b491d1d9](https://github.com/kubedb/memcached/commit/b491d1d9) Update repository config (#282) +- [beaa42b1](https://github.com/kubedb/memcached/commit/beaa42b1) Update Kubernetes v1.18.9 dependencies (#281) +- [25e0c0a5](https://github.com/kubedb/memcached/commit/25e0c0a5) Update Kubernetes v1.18.9 dependencies (#280) +- [a4a6b2b8](https://github.com/kubedb/memcached/commit/a4a6b2b8) Update repository config (#279) +- [c3b1154b](https://github.com/kubedb/memcached/commit/c3b1154b) Update repository config (#278) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.10.0](https://github.com/kubedb/mongodb/releases/tag/v0.10.0) + +- [e14d90b3](https://github.com/kubedb/mongodb/commit/e14d90b3) Prepare for release v0.10.0 (#382) +- [48e752f3](https://github.com/kubedb/mongodb/commit/48e752f3) Update for release Stash@v2021.03.08 (#381) +- [03c8ccb4](https://github.com/kubedb/mongodb/commit/03c8ccb4) Update KubeDB api (#380) +- [bda470f8](https://github.com/kubedb/mongodb/commit/bda470f8) Update db container security context (#379) +- [86bc54a8](https://github.com/kubedb/mongodb/commit/86bc54a8) Update KubeDB api (#378) +- [a406dfca](https://github.com/kubedb/mongodb/commit/a406dfca) Fix make install command (#377) +- [17121476](https://github.com/kubedb/mongodb/commit/17121476) Update install command (#376) +- [05fd7b77](https://github.com/kubedb/mongodb/commit/05fd7b77) Pass stash addon info to AppBinding (#374) +- [787861a3](https://github.com/kubedb/mongodb/commit/787861a3) Create TLS user in `$external` database (#366) +- [dc9cef47](https://github.com/kubedb/mongodb/commit/dc9cef47) Update Kubernetes v1.18.9 dependencies (#375) +- [e8081471](https://github.com/kubedb/mongodb/commit/e8081471) Update Kubernetes v1.18.9 dependencies (#373) +- [612f7350](https://github.com/kubedb/mongodb/commit/612f7350) Update repository config (#372) +- [94410f92](https://github.com/kubedb/mongodb/commit/94410f92) Update repository config (#371) +- [d10b9b03](https://github.com/kubedb/mongodb/commit/d10b9b03) Update Kubernetes v1.18.9 dependencies (#370) +- [132172b4](https://github.com/kubedb/mongodb/commit/132172b4) Update repository config (#369) +- [94fa1536](https://github.com/kubedb/mongodb/commit/94fa1536) #818 MongoDB IPv6 support (#365) +- [9614d777](https://github.com/kubedb/mongodb/commit/9614d777) Update Kubernetes v1.18.9 dependencies (#368) +- [054c7312](https://github.com/kubedb/mongodb/commit/054c7312) Update Kubernetes v1.18.9 dependencies (#367) +- [02bed305](https://github.com/kubedb/mongodb/commit/02bed305) Update repository config (#364) +- 
[ac0e9a51](https://github.com/kubedb/mongodb/commit/ac0e9a51) Update repository config (#363) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.10.0](https://github.com/kubedb/mysql/releases/tag/v0.10.0) + +- [3c97ea11](https://github.com/kubedb/mysql/commit/3c97ea11) Prepare for release v0.10.0 (#375) +- [a7c9b3dc](https://github.com/kubedb/mysql/commit/a7c9b3dc) Inject "--set-gtid-purged=OFF" in backup Task params for clustered MySQL (#374) +- [4d1eba85](https://github.com/kubedb/mysql/commit/4d1eba85) Update for release Stash@v2021.03.08 (#373) +- [36d53c97](https://github.com/kubedb/mysql/commit/36d53c97) Fix default set binary log expire (#372) +- [9118c66e](https://github.com/kubedb/mysql/commit/9118c66e) Update KubeDB api (#371) +- [d323daae](https://github.com/kubedb/mysql/commit/d323daae) Update variable name +- [6fad227c](https://github.com/kubedb/mysql/commit/6fad227c) Update db container security context (#370) +- [9a570f2e](https://github.com/kubedb/mysql/commit/9a570f2e) Update KubeDB api (#369) +- [80e4b857](https://github.com/kubedb/mysql/commit/80e4b857) Fix appbinding type meta (#368) +- [6bc063b7](https://github.com/kubedb/mysql/commit/6bc063b7) Fix install command in Makefile (#367) +- [3400d8c0](https://github.com/kubedb/mysql/commit/3400d8c0) Pass stash addon info to AppBinding (#364) +- [2ddc20c4](https://github.com/kubedb/mysql/commit/2ddc20c4) Add ca bundle to AppBinding (#362) +- [fb55f0e4](https://github.com/kubedb/mysql/commit/fb55f0e4) Purge executed binary log after 3 days by default (#352) +- [92cd744c](https://github.com/kubedb/mysql/commit/92cd744c) Remove `baseServerID` from mysql cr (#356) +- [e99b4e51](https://github.com/kubedb/mysql/commit/e99b4e51) Fix updating mysql status condition when db is not online (#355) +- [d5527967](https://github.com/kubedb/mysql/commit/d5527967) Update repository config (#363) +- [970db7e8](https://github.com/kubedb/mysql/commit/970db7e8) Update repository config (#361) +- [077a4b44](https://github.com/kubedb/mysql/commit/077a4b44) Update Kubernetes v1.18.9 dependencies (#360) +- [7c577664](https://github.com/kubedb/mysql/commit/7c577664) Update repository config (#359) +- [1039210c](https://github.com/kubedb/mysql/commit/1039210c) Update Kubernetes v1.18.9 dependencies (#358) +- [27a7fab8](https://github.com/kubedb/mysql/commit/27a7fab8) Update Kubernetes v1.18.9 dependencies (#357) +- [b94283e9](https://github.com/kubedb/mysql/commit/b94283e9) Update repository config (#354) +- [78af88a4](https://github.com/kubedb/mysql/commit/78af88a4) Update repository config (#353) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.17.0](https://github.com/kubedb/operator/releases/tag/v0.17.0) + +- [fa0cb596](https://github.com/kubedb/operator/commit/fa0cb596) Prepare for release v0.17.0 (#399) +- [46576385](https://github.com/kubedb/operator/commit/46576385) Update KubeDB api (#397) +- [6f0c1887](https://github.com/kubedb/operator/commit/6f0c1887) Add MariaDB support to kubedb/operator (#396) +- [970e29d7](https://github.com/kubedb/operator/commit/970e29d7) Update repository config (#393) +- [728b320e](https://github.com/kubedb/operator/commit/728b320e) Update repository config (#392) +- [b0f2a1c3](https://github.com/kubedb/operator/commit/b0f2a1c3) Update Kubernetes v1.18.9 dependencies (#391) +- [8f31d09c](https://github.com/kubedb/operator/commit/8f31d09c) Update repository config (#390) +- [12dbdb2d](https://github.com/kubedb/operator/commit/12dbdb2d) Update Kubernetes v1.18.9 
dependencies (#389) +- [e3a7e911](https://github.com/kubedb/operator/commit/e3a7e911) Update Kubernetes v1.18.9 dependencies (#388) +- [ebff29a4](https://github.com/kubedb/operator/commit/ebff29a4) Update repository config (#387) +- [65c6529f](https://github.com/kubedb/operator/commit/65c6529f) Update repository config (#386) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.4.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.4.0) + +- [2fcad9f8](https://github.com/kubedb/percona-xtradb/commit/2fcad9f8) Prepare for release v0.4.0 (#190) +- [5e925447](https://github.com/kubedb/percona-xtradb/commit/5e925447) Update for release Stash@v2021.03.08 (#189) +- [43546c4a](https://github.com/kubedb/percona-xtradb/commit/43546c4a) Update KubeDB api (#188) +- [86cd32ae](https://github.com/kubedb/percona-xtradb/commit/86cd32ae) Update db container security context (#187) +- [efe459a3](https://github.com/kubedb/percona-xtradb/commit/efe459a3) Update KubeDB api (#186) +- [4cd31b92](https://github.com/kubedb/percona-xtradb/commit/4cd31b92) Fix make install (#185) +- [105b4ca5](https://github.com/kubedb/percona-xtradb/commit/105b4ca5) Fix install command in Makefile (#184) +- [be699bcb](https://github.com/kubedb/percona-xtradb/commit/be699bcb) Pass stash addon info to AppBinding (#182) +- [431bfad8](https://github.com/kubedb/percona-xtradb/commit/431bfad8) Update repository config (#181) +- [37953474](https://github.com/kubedb/percona-xtradb/commit/37953474) Update repository config (#180) +- [387795d2](https://github.com/kubedb/percona-xtradb/commit/387795d2) Update Kubernetes v1.18.9 dependencies (#179) +- [ccf8ee25](https://github.com/kubedb/percona-xtradb/commit/ccf8ee25) Update repository config (#178) +- [9f61328a](https://github.com/kubedb/percona-xtradb/commit/9f61328a) Update Kubernetes v1.18.9 dependencies (#177) +- [9241cd63](https://github.com/kubedb/percona-xtradb/commit/9241cd63) Update Kubernetes v1.18.9 dependencies (#176) +- [3687b603](https://github.com/kubedb/percona-xtradb/commit/3687b603) Update repository config (#175) +- [a8a83f93](https://github.com/kubedb/percona-xtradb/commit/a8a83f93) Update repository config (#174) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.1.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.1.0) + +- [ebb5c70](https://github.com/kubedb/pg-coordinator/commit/ebb5c70) Prepare for release v0.1.0 (#13) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.4.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.4.0) + +- [b68b46c7](https://github.com/kubedb/pgbouncer/commit/b68b46c7) Prepare for release v0.4.0 (#152) +- [efd337fe](https://github.com/kubedb/pgbouncer/commit/efd337fe) Update KubeDB api (#151) +- [c649bc9b](https://github.com/kubedb/pgbouncer/commit/c649bc9b) Update db container security context (#150) +- [6c7da627](https://github.com/kubedb/pgbouncer/commit/6c7da627) Update KubeDB api (#149) +- [a6254d15](https://github.com/kubedb/pgbouncer/commit/a6254d15) Fix make install (#148) +- [fcdcbe00](https://github.com/kubedb/pgbouncer/commit/fcdcbe00) Update repository config (#146) +- [7e1f30ef](https://github.com/kubedb/pgbouncer/commit/7e1f30ef) Update repository config (#145) +- [eed4411c](https://github.com/kubedb/pgbouncer/commit/eed4411c) Update Kubernetes v1.18.9 dependencies (#144) +- [2f3d4363](https://github.com/kubedb/pgbouncer/commit/2f3d4363) Update repository config (#143) +- 
[951bb00e](https://github.com/kubedb/pgbouncer/commit/951bb00e) Update Kubernetes v1.18.9 dependencies (#142) +- [13f63fe3](https://github.com/kubedb/pgbouncer/commit/13f63fe3) Update Kubernetes v1.18.9 dependencies (#141) +- [b80a350c](https://github.com/kubedb/pgbouncer/commit/b80a350c) Update repository config (#140) +- [1ae2b26c](https://github.com/kubedb/pgbouncer/commit/1ae2b26c) Update repository config (#139) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.17.0](https://github.com/kubedb/postgres/releases/tag/v0.17.0) + +- [b47e7f4b](https://github.com/kubedb/postgres/commit/b47e7f4b) Prepare for release v0.17.0 (#481) +- [a73ac849](https://github.com/kubedb/postgres/commit/a73ac849) Pass stash addon info to AppBinding (#480) +- [d47d78ea](https://github.com/kubedb/postgres/commit/d47d78ea) Added support for TimescaleDB (#479) +- [6ac94ae6](https://github.com/kubedb/postgres/commit/6ac94ae6) Added support for Official Postgres Images (#478) +- [0506cb76](https://github.com/kubedb/postgres/commit/0506cb76) Update for release Stash@v2021.03.08 (#477) +- [5d004ff4](https://github.com/kubedb/postgres/commit/5d004ff4) Update Makefile +- [eb84fc88](https://github.com/kubedb/postgres/commit/eb84fc88) TLS support for postgres & Status condition update (#474) +- [a6a365dd](https://github.com/kubedb/postgres/commit/a6a365dd) Fix install command (#473) +- [004b2b8c](https://github.com/kubedb/postgres/commit/004b2b8c) Fix install command in Makefile (#472) +- [6c714c92](https://github.com/kubedb/postgres/commit/6c714c92) Fix install command +- [33eb6d74](https://github.com/kubedb/postgres/commit/33eb6d74) Update repository config (#470) +- [90f48417](https://github.com/kubedb/postgres/commit/90f48417) Update repository config (#469) +- [aa0f0760](https://github.com/kubedb/postgres/commit/aa0f0760) Update Kubernetes v1.18.9 dependencies (#468) +- [43f953d9](https://github.com/kubedb/postgres/commit/43f953d9) Update repository config (#467) +- [8247bcb6](https://github.com/kubedb/postgres/commit/8247bcb6) Update Kubernetes v1.18.9 dependencies (#466) +- [619c8903](https://github.com/kubedb/postgres/commit/619c8903) Update Kubernetes v1.18.9 dependencies (#465) +- [f2998147](https://github.com/kubedb/postgres/commit/f2998147) Update repository config (#464) +- [93d466be](https://github.com/kubedb/postgres/commit/93d466be) Update repository config (#463) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.4.0](https://github.com/kubedb/proxysql/releases/tag/v0.4.0) + +- [6e8fb4a1](https://github.com/kubedb/proxysql/commit/6e8fb4a1) Prepare for release v0.4.0 (#169) +- [77bafd23](https://github.com/kubedb/proxysql/commit/77bafd23) Update for release Stash@v2021.03.08 (#168) +- [7bff702a](https://github.com/kubedb/proxysql/commit/7bff702a) Update KubeDB api (#167) +- [7fa81242](https://github.com/kubedb/proxysql/commit/7fa81242) Update db container security context (#166) +- [c218aa7e](https://github.com/kubedb/proxysql/commit/c218aa7e) Update KubeDB api (#165) +- [2705c4e0](https://github.com/kubedb/proxysql/commit/2705c4e0) Fix make install (#164) +- [8498b254](https://github.com/kubedb/proxysql/commit/8498b254) Update Kubernetes v1.18.9 dependencies (#163) +- [fa003df3](https://github.com/kubedb/proxysql/commit/fa003df3) Update repository config (#162) +- [eff1530f](https://github.com/kubedb/proxysql/commit/eff1530f) Update repository config (#161) +- [863abf38](https://github.com/kubedb/proxysql/commit/863abf38) Update Kubernetes v1.18.9 
dependencies (#160) +- [70f8d51d](https://github.com/kubedb/proxysql/commit/70f8d51d) Update repository config (#159) +- [0641bc35](https://github.com/kubedb/proxysql/commit/0641bc35) Update Kubernetes v1.18.9 dependencies (#158) +- [a95d45e3](https://github.com/kubedb/proxysql/commit/a95d45e3) Update Kubernetes v1.18.9 dependencies (#157) +- [2229b43f](https://github.com/kubedb/proxysql/commit/2229b43f) Update repository config (#156) +- [a36856a6](https://github.com/kubedb/proxysql/commit/a36856a6) Update repository config (#155) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.10.0](https://github.com/kubedb/redis/releases/tag/v0.10.0) + +- [fbb31240](https://github.com/kubedb/redis/commit/fbb31240) Prepare for release v0.10.0 (#314) +- [7ed160b4](https://github.com/kubedb/redis/commit/7ed160b4) Update KubeDB api (#313) +- [232d206e](https://github.com/kubedb/redis/commit/232d206e) Update db container security context (#312) +- [09084d0c](https://github.com/kubedb/redis/commit/09084d0c) Update KubeDB api (#311) +- [62f3cef7](https://github.com/kubedb/redis/commit/62f3cef7) Fix appbinding type meta (#310) +- [bed4e87d](https://github.com/kubedb/redis/commit/bed4e87d) Change redis config structure (#231) +- [3eb9a5b5](https://github.com/kubedb/redis/commit/3eb9a5b5) Update Redis Conditions (#250) +- [df65bfe8](https://github.com/kubedb/redis/commit/df65bfe8) Pass stash addon info to AppBinding (#308) +- [1a4b3fe2](https://github.com/kubedb/redis/commit/1a4b3fe2) Update repository config (#307) +- [fcab4120](https://github.com/kubedb/redis/commit/fcab4120) Update repository config (#306) +- [ffa4a9ba](https://github.com/kubedb/redis/commit/ffa4a9ba) Update Kubernetes v1.18.9 dependencies (#305) +- [5afb498e](https://github.com/kubedb/redis/commit/5afb498e) Update repository config (#304) +- [38e93cb9](https://github.com/kubedb/redis/commit/38e93cb9) Update Kubernetes v1.18.9 dependencies (#303) +- [f3083d8c](https://github.com/kubedb/redis/commit/f3083d8c) Update Kubernetes v1.18.9 dependencies (#302) +- [878b4f7e](https://github.com/kubedb/redis/commit/878b4f7e) Update repository config (#301) +- [d3a2e333](https://github.com/kubedb/redis/commit/d3a2e333) Update repository config (#300) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.4.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.4.0) + +- [78195c3](https://github.com/kubedb/replication-mode-detector/commit/78195c3) Prepare for release v0.4.0 (#132) +- [3f1dc9c](https://github.com/kubedb/replication-mode-detector/commit/3f1dc9c) Update KubeDB api (#131) +- [31591ab](https://github.com/kubedb/replication-mode-detector/commit/31591ab) Update repository config (#128) +- [606a9ba](https://github.com/kubedb/replication-mode-detector/commit/606a9ba) Update repository config (#127) +- [69048ba](https://github.com/kubedb/replication-mode-detector/commit/69048ba) Update Kubernetes v1.18.9 dependencies (#126) +- [ae0857b](https://github.com/kubedb/replication-mode-detector/commit/ae0857b) Update Kubernetes v1.18.9 dependencies (#125) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.2.0](https://github.com/kubedb/tests/releases/tag/v0.2.0) + +- [d515b35](https://github.com/kubedb/tests/commit/d515b35) Prepare for release v0.2.0 (#109) +- [8af80df](https://github.com/kubedb/tests/commit/8af80df) Remove parameters in clustered MySQL backup tests (#108) +- [34d9ed0](https://github.com/kubedb/tests/commit/34d9ed0) Add Stash 
backup tests for Elasticsearch (#86) +- [0ae044b](https://github.com/kubedb/tests/commit/0ae044b) Add e2e-tests for Elasticsearch Reconfigure TLS (#98) +- [0ecf747](https://github.com/kubedb/tests/commit/0ecf747) Add test for redis reconfiguration (#43) +- [051e74f](https://github.com/kubedb/tests/commit/051e74f) MariaDB Test with Backup Recovery (#96) +- [84f93a2](https://github.com/kubedb/tests/commit/84f93a2) Update KubeDB api (#107) +- [341b130](https://github.com/kubedb/tests/commit/341b130) Add `ElasticsearchAutoscaler` e2e test (#93) +- [b09219c](https://github.com/kubedb/tests/commit/b09219c) Add Stash Backup & Restore test for MySQL (#102) +- [d094303](https://github.com/kubedb/tests/commit/d094303) Test for MySQL Reconfigure, TLS-Reconfigure and VolumeExpansion (#62) +- [049cfb6](https://github.com/kubedb/tests/commit/049cfb6) Fix failed test for MySQL (#103) +- [f84b277](https://github.com/kubedb/tests/commit/f84b277) Update Kubernetes v1.18.9 dependencies (#105) +- [32e88cf](https://github.com/kubedb/tests/commit/32e88cf) Update repository config (#104) +- [b20b2d9](https://github.com/kubedb/tests/commit/b20b2d9) Update repository config (#101) +- [e720f80](https://github.com/kubedb/tests/commit/e720f80) Update Kubernetes v1.18.9 dependencies (#100) +- [25a6cdd](https://github.com/kubedb/tests/commit/25a6cdd) Add MongoDB ReconfigureTLS Test (#97) +- [1814c42](https://github.com/kubedb/tests/commit/1814c42) Update Elasticsearch go-client (#94) +- [9849e81](https://github.com/kubedb/tests/commit/9849e81) Update Kubernetes v1.18.9 dependencies (#99) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.03.17.md b/content/docs/v2024.1.31/CHANGELOG-v2021.03.17.md new file mode 100644 index 0000000000..5f8fb2c96d --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.03.17.md @@ -0,0 +1,209 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.03.17 + name: Changelog-v2021.03.17 + parent: welcome + weight: 20210317 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.03.17/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.03.17/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.03.17 (2021-03-19) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.2.1](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.2.1) + +- [731b46c](https://github.com/appscode/kubedb-autoscaler/commit/731b46c) Prepare for release v0.2.1 (#19) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.4.1](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.4.1) + +- [2665db85](https://github.com/appscode/kubedb-enterprise/commit/2665db85) Prepare for release v0.4.1 (#158) +- [a3d8a50d](https://github.com/appscode/kubedb-enterprise/commit/a3d8a50d) Fix Various Issues (#156) +- [19896fdf](https://github.com/appscode/kubedb-enterprise/commit/19896fdf) Add individual certificate issuer (#155) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.17.1](https://github.com/kubedb/apimachinery/releases/tag/v0.17.1) + +- [12d0f980](https://github.com/kubedb/apimachinery/commit/12d0f980) Add secretName to Elasticsearch userSpec (#725) +- 
[b2345eb8](https://github.com/kubedb/apimachinery/commit/b2345eb8) Add printable Distribution field for DB Versions (#723) +- [3fc568ac](https://github.com/kubedb/apimachinery/commit/3fc568ac) Add default organizational unit (#726) +- [e96fc066](https://github.com/kubedb/apimachinery/commit/e96fc066) Update for release Stash@v2021.03.11 (#724) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.17.1](https://github.com/kubedb/cli/releases/tag/v0.17.1) + +- [6b41bfb1](https://github.com/kubedb/cli/commit/6b41bfb1) Prepare for release v0.17.1 (#596) +- [ace78977](https://github.com/kubedb/cli/commit/ace78977) Update for release Stash@v2021.03.11 (#595) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.17.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.17.1) + +- [ed002c9d](https://github.com/kubedb/elasticsearch/commit/ed002c9d) Prepare for release v0.17.1 (#486) +- [4a1978ff](https://github.com/kubedb/elasticsearch/commit/4a1978ff) Use user provided secret for Elasticsearch internal users (#485) +- [821d5b65](https://github.com/kubedb/elasticsearch/commit/821d5b65) Update for release Stash@v2021.03.11 (#484) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v0.17.1](https://github.com/kubedb/installer/releases/tag/v0.17.1) + +- [4a1edf0](https://github.com/kubedb/installer/commit/4a1edf0) Prepare for release v0.17.1 (#289) +- [602be5c](https://github.com/kubedb/installer/commit/602be5c) Default enable mutating webhook for KubeDB Enterprise (#288) +- [26b5379](https://github.com/kubedb/installer/commit/26b5379) Add Elasticsearch and Redis OpsRequest Validator (#287) +- [18344ee](https://github.com/kubedb/installer/commit/18344ee) Update perconaxtradb task names +- [599831f](https://github.com/kubedb/installer/commit/599831f) Add MongoDBOpsRequest Validator (#286) +- [5bb89cd](https://github.com/kubedb/installer/commit/5bb89cd) Update kubedb chart values (#285) +- [ad07333](https://github.com/kubedb/installer/commit/ad07333) Allow overriding official registry in catalog (#284) +- [b1da884](https://github.com/kubedb/installer/commit/b1da884) Update stash task names (#282) +- [30a168b](https://github.com/kubedb/installer/commit/30a168b) Pass registry parameter from unified chart (#283) +- [322d15e](https://github.com/kubedb/installer/commit/322d15e) Use stash catalog from installer repo (#281) +- [818f50f](https://github.com/kubedb/installer/commit/818f50f) Remove namespace from db version crs (#280) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.1.1](https://github.com/kubedb/mariadb/releases/tag/v0.1.1) + +- [0dc99818](https://github.com/kubedb/mariadb/commit/0dc99818) Prepare for release v0.1.1 (#58) +- [f09dda5a](https://github.com/kubedb/mariadb/commit/f09dda5a) Update for release Stash@v2021.03.11 (#57) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.10.1](https://github.com/kubedb/memcached/releases/tag/v0.10.1) + +- [72ae2ba4](https://github.com/kubedb/memcached/commit/72ae2ba4) Prepare for release v0.10.1 (#292) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.10.1](https://github.com/kubedb/mongodb/releases/tag/v0.10.1) + +- [21486cc6](https://github.com/kubedb/mongodb/commit/21486cc6) Prepare for release v0.10.1 (#385) +- [afdf4296](https://github.com/kubedb/mongodb/commit/afdf4296) Update for release Stash@v2021.03.11 (#383) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### 
[v0.10.1](https://github.com/kubedb/mysql/releases/tag/v0.10.1) + +- [4f487d21](https://github.com/kubedb/mysql/commit/4f487d21) Prepare for release v0.10.1 (#378) +- [9c5a24d8](https://github.com/kubedb/mysql/commit/9c5a24d8) Update for release Stash@v2021.03.11 (#376) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.17.1](https://github.com/kubedb/operator/releases/tag/v0.17.1) + +- [5f73f990](https://github.com/kubedb/operator/commit/5f73f990) Prepare for release v0.17.1 (#400) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.4.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.4.1) + +- [7208b811](https://github.com/kubedb/percona-xtradb/commit/7208b811) Prepare for release v0.4.1 (#192) +- [7d3513b2](https://github.com/kubedb/percona-xtradb/commit/7d3513b2) Update for release Stash@v2021.03.11 (#191) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.1.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.1.1) + +- [aa770b0](https://github.com/kubedb/pg-coordinator/commit/aa770b0) Prepare for release v0.1.1 (#14) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.4.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.4.1) + +- [babd54a8](https://github.com/kubedb/pgbouncer/commit/babd54a8) Prepare for release v0.4.1 (#153) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.17.1](https://github.com/kubedb/postgres/releases/tag/v0.17.1) + +- [24d9e117](https://github.com/kubedb/postgres/commit/24d9e117) Prepare for release v0.17.1 (#483) +- [6d1b8b6b](https://github.com/kubedb/postgres/commit/6d1b8b6b) Update for release Stash@v2021.03.11 (#482) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.4.1](https://github.com/kubedb/proxysql/releases/tag/v0.4.1) + +- [8ba592d2](https://github.com/kubedb/proxysql/commit/8ba592d2) Prepare for release v0.4.1 (#171) +- [a3db83e3](https://github.com/kubedb/proxysql/commit/a3db83e3) Update for release Stash@v2021.03.11 (#170) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.10.1](https://github.com/kubedb/redis/releases/tag/v0.10.1) + +- [baef9fd9](https://github.com/kubedb/redis/commit/baef9fd9) Prepare for release v0.10.1 (#315) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.4.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.4.1) + +- [b115b51](https://github.com/kubedb/replication-mode-detector/commit/b115b51) Prepare for release v0.4.1 (#133) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.2.1](https://github.com/kubedb/tests/releases/tag/v0.2.1) + +- [eca46b5](https://github.com/kubedb/tests/commit/eca46b5) Prepare for release v0.2.1 (#111) +- [a58b54f](https://github.com/kubedb/tests/commit/a58b54f) Don't explicitly set RestoreSession labels (#110) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.04.16.md b/content/docs/v2024.1.31/CHANGELOG-v2021.04.16.md new file mode 100644 index 0000000000..6483596dfb --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.04.16.md @@ -0,0 +1,298 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.04.16 + name: Changelog-v2021.04.16 + parent: welcome + weight: 20210416 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.04.16/ +aliases: +- 
/docs/v2024.1.31/CHANGELOG-v2021.04.16/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.04.16 (2021-04-16) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.3.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.3.0) + +- [335c87a](https://github.com/appscode/kubedb-autoscaler/commit/335c87a) Prepare for release v0.3.0 (#20) +- [e90c615](https://github.com/appscode/kubedb-autoscaler/commit/e90c615) Use license-verifier v0.8.1 +- [432ea8d](https://github.com/appscode/kubedb-autoscaler/commit/432ea8d) Use license verifier v0.8.0 +- [e6293b0](https://github.com/appscode/kubedb-autoscaler/commit/e6293b0) Update license verifier +- [573e940](https://github.com/appscode/kubedb-autoscaler/commit/573e940) Fix spelling + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.5.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.5.0) + +- [a6eccd35](https://github.com/appscode/kubedb-enterprise/commit/a6eccd35) Prepare for release v0.5.0 (#175) +- [a4af7e22](https://github.com/appscode/kubedb-enterprise/commit/a4af7e22) Fix wait for backup logic (#172) +- [5ef4fd8c](https://github.com/appscode/kubedb-enterprise/commit/5ef4fd8c) Fix nil pointer exception while updating MongoDB configSecret (#173) +- [b9ee5297](https://github.com/appscode/kubedb-enterprise/commit/b9ee5297) Pause `BackupConfiguration` and Wait for `BackupSession` & `RestoreSession` to complete (#168) +- [7064e346](https://github.com/appscode/kubedb-enterprise/commit/7064e346) Fix various issues for MongoDBOpsRequest (#169) +- [adf174e7](https://github.com/appscode/kubedb-enterprise/commit/adf174e7) Add Ops Request Phase `Pending` (#166) +- [355d1b1e](https://github.com/appscode/kubedb-enterprise/commit/355d1b1e) Fix panic for MongoDB (#167) +- [df672de0](https://github.com/appscode/kubedb-enterprise/commit/df672de0) Add HostNetwork and DNSPolicy to new StatefulSet (#171) +- [7a279d7b](https://github.com/appscode/kubedb-enterprise/commit/7a279d7b) Add Elasticsearch statefulSet reconciler (#161) +- [0fbd67b6](https://github.com/appscode/kubedb-enterprise/commit/0fbd67b6) Updated MustCertSecretName to GetCertSecretName (#162) +- [7e6a0d78](https://github.com/appscode/kubedb-enterprise/commit/7e6a0d78) Remove panic from Postgres (#170) +- [ae7f27bb](https://github.com/appscode/kubedb-enterprise/commit/ae7f27bb) Use license-verifier v0.8.1 +- [e3ff9160](https://github.com/appscode/kubedb-enterprise/commit/e3ff9160) Elasticsearch: Return default certificate secret name if missing (#165) +- [824a2d80](https://github.com/appscode/kubedb-enterprise/commit/824a2d80) Use license verifier v0.8.0 +- [40ec97e9](https://github.com/appscode/kubedb-enterprise/commit/40ec97e9) Update license verifier +- [c2757fb3](https://github.com/appscode/kubedb-enterprise/commit/c2757fb3) Fix spelling +- [4c41bc1e](https://github.com/appscode/kubedb-enterprise/commit/4c41bc1e) Don't activate namespace validator + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.18.0](https://github.com/kubedb/apimachinery/releases/tag/v0.18.0) + +- [fdf2681b](https://github.com/kubedb/apimachinery/commit/fdf2681b) Remove some panics from MongoDB (#734) +- [4bc52e52](https://github.com/kubedb/apimachinery/commit/4bc52e52) Add Ops Request Phase 
`Pending` (#736) +- [62e6b324](https://github.com/kubedb/apimachinery/commit/62e6b324) Add timeout for MongoDBOpsrequest (#738) +- [15a029cc](https://github.com/kubedb/apimachinery/commit/15a029cc) Check for all Stash CRDs before starting Start controller (#740) +- [b9da5117](https://github.com/kubedb/apimachinery/commit/b9da5117) Add backup condition constants for ops requests (#737) +- [989d8200](https://github.com/kubedb/apimachinery/commit/989d8200) Remove Panic from MariaDB (#731) +- [b7b2a28c](https://github.com/kubedb/apimachinery/commit/b7b2a28c) Remove panic from Postgres (#739) +- [264c0872](https://github.com/kubedb/apimachinery/commit/264c0872) Add IsIP helper +- [feebf1d8](https://github.com/kubedb/apimachinery/commit/feebf1d8) Add pod identity for cluster configuration (#729) +- [1b58e82b](https://github.com/kubedb/apimachinery/commit/1b58e82b) Add SecurityContext to ElasticsearchVersion CRD (#733) +- [b3c20afc](https://github.com/kubedb/apimachinery/commit/b3c20afc) Update for release Stash@v2021.04.07 (#735) +- [ced2341f](https://github.com/kubedb/apimachinery/commit/ced2341f) Return default cert-secret name if missing (#730) +- [ecf77001](https://github.com/kubedb/apimachinery/commit/ecf77001) Rename Features to SecurityContext in Postgres Version Spec (#732) +- [e5917a15](https://github.com/kubedb/apimachinery/commit/e5917a15) Rename RunAsAny to RunAsAnyNonRoot in PostgresVersion +- [5060058c](https://github.com/kubedb/apimachinery/commit/5060058c) Add Custom UID Options for Postgres (#728) +- [ed221fe1](https://github.com/kubedb/apimachinery/commit/ed221fe1) Fix spelling + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.18.0](https://github.com/kubedb/cli/releases/tag/v0.18.0) + +- [a1a424ba](https://github.com/kubedb/cli/commit/a1a424ba) Prepare for release v0.18.0 (#598) +- [c8bec973](https://github.com/kubedb/cli/commit/c8bec973) Fix spelling + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.18.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.18.0) + +- [2da042ce](https://github.com/kubedb/elasticsearch/commit/2da042ce) Prepare for release v0.18.0 (#490) +- [e398bd53](https://github.com/kubedb/elasticsearch/commit/e398bd53) Add statefulSet reconciler (#488) +- [2944f5e5](https://github.com/kubedb/elasticsearch/commit/2944f5e5) Use license-verifier v0.8.1 +- [8d7177ab](https://github.com/kubedb/elasticsearch/commit/8d7177ab) Add support for custom UID for Elasticsearch (#489) +- [71d1fac3](https://github.com/kubedb/elasticsearch/commit/71d1fac3) Use license verifier v0.8.0 +- [fb5ee170](https://github.com/kubedb/elasticsearch/commit/fb5ee170) Update license verifier +- [f166bf3a](https://github.com/kubedb/elasticsearch/commit/f166bf3a) Update stash make targets (#487) +- [39d5be0f](https://github.com/kubedb/elasticsearch/commit/39d5be0f) Fix spelling + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.04.16](https://github.com/kubedb/installer/releases/tag/v2021.04.16) + +- [a5f3c63](https://github.com/kubedb/installer/commit/a5f3c63) Prepare for release v2021.04.16 (#299) +- [6aa0d8b](https://github.com/kubedb/installer/commit/6aa0d8b) Update MongoDB init container image (#298) +- [7407a80](https://github.com/kubedb/installer/commit/7407a80) Add `poddisruptionbudgets` and backup permissions for KubeDB Enterprise (#297) +- [c12845b](https://github.com/kubedb/installer/commit/c12845b) Add mariadb-init-docker Image (#296) +- 
[39c7a94](https://github.com/kubedb/installer/commit/39c7a94) Add MySQL init container images to catalog (#291) +- [2272f27](https://github.com/kubedb/installer/commit/2272f27) Add support for Elasticsearch v7.12.0 (#295) +- [eebcbac](https://github.com/kubedb/installer/commit/eebcbac) Update installer schema +- [8619834](https://github.com/kubedb/installer/commit/8619834) Allow passing registry fqdn (#294) +- [4435651](https://github.com/kubedb/installer/commit/4435651) Custom UID for Postgres (#293) +- [1658fb2](https://github.com/kubedb/installer/commit/1658fb2) Fix spelling + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.2.0](https://github.com/kubedb/mariadb/releases/tag/v0.2.0) + +- [db6efa46](https://github.com/kubedb/mariadb/commit/db6efa46) Prepare for release v0.2.0 (#65) +- [01575e35](https://github.com/kubedb/mariadb/commit/01575e35) Updated validator for requireSSL field. (#61) +- [585f1873](https://github.com/kubedb/mariadb/commit/585f1873) Introduced MariaDB init-container (#62) +- [821c3688](https://github.com/kubedb/mariadb/commit/821c3688) Updated MustCertSecretName to GetCertSecretName (#64) +- [5d41c58a](https://github.com/kubedb/mariadb/commit/5d41c58a) Add POD_IP env variable (#63) +- [11e56c19](https://github.com/kubedb/mariadb/commit/11e56c19) Use license-verifier v0.8.1 +- [f7d6c516](https://github.com/kubedb/mariadb/commit/f7d6c516) Use license verifier v0.8.0 +- [3cfc4979](https://github.com/kubedb/mariadb/commit/3cfc4979) Update license verifier +- [60e8e7a3](https://github.com/kubedb/mariadb/commit/60e8e7a3) Update stash make targets (#60) +- [9424f4be](https://github.com/kubedb/mariadb/commit/9424f4be) Fix spelling + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.11.0](https://github.com/kubedb/memcached/releases/tag/v0.11.0) + +- [4f391d75](https://github.com/kubedb/memcached/commit/4f391d75) Prepare for release v0.11.0 (#293) +- [294fb730](https://github.com/kubedb/memcached/commit/294fb730) Use license-verifier v0.8.1 +- [717a5c06](https://github.com/kubedb/memcached/commit/717a5c06) Use license verifier v0.8.0 +- [37f7bba6](https://github.com/kubedb/memcached/commit/37f7bba6) Update license verifier +- [4a6fea4d](https://github.com/kubedb/memcached/commit/4a6fea4d) Fix spelling + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.11.0](https://github.com/kubedb/mongodb/releases/tag/v0.11.0) + +- [c4d7444c](https://github.com/kubedb/mongodb/commit/c4d7444c) Prepare for release v0.11.0 (#390) +- [26fbc21b](https://github.com/kubedb/mongodb/commit/26fbc21b) Use `IPv6EnabledInKernel` (#389) +- [0221339a](https://github.com/kubedb/mongodb/commit/0221339a) Selectively enable binding IPv6 address (#388) +- [7a53a0bc](https://github.com/kubedb/mongodb/commit/7a53a0bc) Remove panic (#387) +- [87623d58](https://github.com/kubedb/mongodb/commit/87623d58) Introduce NodeReconciler (#384) +- [891aac47](https://github.com/kubedb/mongodb/commit/891aac47) Use license-verifier v0.8.1 +- [0722ad6d](https://github.com/kubedb/mongodb/commit/0722ad6d) Use license verifier v0.8.0 +- [f8522304](https://github.com/kubedb/mongodb/commit/f8522304) Update license verifier +- [dab6babc](https://github.com/kubedb/mongodb/commit/dab6babc) Update stash make targets (#386) +- [b18cbac5](https://github.com/kubedb/mongodb/commit/b18cbac5) Fix spelling + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.11.0](https://github.com/kubedb/mysql/releases/tag/v0.11.0) + +- 
[d7e33a0c](https://github.com/kubedb/mysql/commit/d7e33a0c) Prepare for release v0.11.0 (#382) +- [782b3613](https://github.com/kubedb/mysql/commit/782b3613) Add podIP to pod env (#381) +- [f0166dbe](https://github.com/kubedb/mysql/commit/f0166dbe) Always pass -address-type to peer-finder (#380) +- [df770ff2](https://github.com/kubedb/mysql/commit/df770ff2) Use license-verifier v0.8.1 +- [d610fddc](https://github.com/kubedb/mysql/commit/d610fddc) Add support for using official mysql image (#377) +- [a99510f2](https://github.com/kubedb/mysql/commit/a99510f2) Use license verifier v0.8.0 +- [100dd336](https://github.com/kubedb/mysql/commit/100dd336) Update license verifier +- [1bdbe4ed](https://github.com/kubedb/mysql/commit/1bdbe4ed) Update stash make targets (#379) +- [40f9a2f2](https://github.com/kubedb/mysql/commit/40f9a2f2) Fix spelling + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.18.0](https://github.com/kubedb/operator/releases/tag/v0.18.0) + +- [5c2cb8b2](https://github.com/kubedb/operator/commit/5c2cb8b2) Prepare for release v0.18.0 (#402) +- [5ca8913b](https://github.com/kubedb/operator/commit/5ca8913b) Use license-verifier v0.8.1 + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.5.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.5.0) + +- [06e1c3fb](https://github.com/kubedb/percona-xtradb/commit/06e1c3fb) Prepare for release v0.5.0 (#194) +- [ecba4c64](https://github.com/kubedb/percona-xtradb/commit/ecba4c64) Use license-verifier v0.8.1 +- [9d59d002](https://github.com/kubedb/percona-xtradb/commit/9d59d002) Use license verifier v0.8.0 +- [6f924248](https://github.com/kubedb/percona-xtradb/commit/6f924248) Update license verifier +- [e1055e9b](https://github.com/kubedb/percona-xtradb/commit/e1055e9b) Update stash make targets (#193) +- [febdf8de](https://github.com/kubedb/percona-xtradb/commit/febdf8de) Fix spelling + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.2.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.2.0) + +- [b5311e9](https://github.com/kubedb/pg-coordinator/commit/b5311e9) Prepare for release v0.2.0 (#16) +- [db687a1](https://github.com/kubedb/pg-coordinator/commit/db687a1) Add Support for Custom UID (#15) +- [1f923a4](https://github.com/kubedb/pg-coordinator/commit/1f923a4) Fix spelling + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.5.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.5.0) + +- [1b32fbbb](https://github.com/kubedb/pgbouncer/commit/1b32fbbb) Prepare for release v0.5.0 (#154) +- [d5102b66](https://github.com/kubedb/pgbouncer/commit/d5102b66) Use license-verifier v0.8.1 +- [30e3e2f9](https://github.com/kubedb/pgbouncer/commit/30e3e2f9) Use license verifier v0.8.0 +- [3c2833db](https://github.com/kubedb/pgbouncer/commit/3c2833db) Update license verifier +- [06463c97](https://github.com/kubedb/pgbouncer/commit/06463c97) Fix spelling + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.18.0](https://github.com/kubedb/postgres/releases/tag/v0.18.0) + +- [e0b70d83](https://github.com/kubedb/postgres/commit/e0b70d83) Prepare for release v0.18.0 (#489) +- [a3ffea16](https://github.com/kubedb/postgres/commit/a3ffea16) remove panic from postgres (#488) +- [09adb390](https://github.com/kubedb/postgres/commit/09adb390) Remove wait-group from postgres operator (#487) +- [ae8b87da](https://github.com/kubedb/postgres/commit/ae8b87da) Use license-verifier v0.8.1 +- 
[77f220b8](https://github.com/kubedb/postgres/commit/77f220b8) Update KubeDB api (#486) +- [b0234c4b](https://github.com/kubedb/postgres/commit/b0234c4b) Add Custom-UID Support for Debian Images (#485) +- [fdf4d2df](https://github.com/kubedb/postgres/commit/fdf4d2df) Use license verifier v0.8.0 +- [dd59f9b1](https://github.com/kubedb/postgres/commit/dd59f9b1) Update license verifier +- [43fd0c33](https://github.com/kubedb/postgres/commit/43fd0c33) Update stash make targets (#484) +- [8632e4c5](https://github.com/kubedb/postgres/commit/8632e4c5) Fix spelling + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.5.0](https://github.com/kubedb/proxysql/releases/tag/v0.5.0) + +- [8cd69666](https://github.com/kubedb/proxysql/commit/8cd69666) Prepare for release v0.5.0 (#172) +- [7cc0781a](https://github.com/kubedb/proxysql/commit/7cc0781a) Use license-verifier v0.8.1 +- [296e14f0](https://github.com/kubedb/proxysql/commit/296e14f0) Use license verifier v0.8.0 +- [2fd9f4e5](https://github.com/kubedb/proxysql/commit/2fd9f4e5) Update license verifier +- [7fb0a67f](https://github.com/kubedb/proxysql/commit/7fb0a67f) Fix spelling + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.11.0](https://github.com/kubedb/redis/releases/tag/v0.11.0) + +- [f08b8987](https://github.com/kubedb/redis/commit/f08b8987) Prepare for release v0.11.0 (#316) +- [02347918](https://github.com/kubedb/redis/commit/02347918) Use license-verifier v0.8.1 +- [fc33c657](https://github.com/kubedb/redis/commit/fc33c657) Use license verifier v0.8.0 +- [1cd12234](https://github.com/kubedb/redis/commit/1cd12234) Update license verifier +- [5ba20810](https://github.com/kubedb/redis/commit/5ba20810) Fix spelling + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.5.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.5.0) + +- [ee224f1](https://github.com/kubedb/replication-mode-detector/commit/ee224f1) Prepare for release v0.5.0 (#135) +- [8293c27](https://github.com/kubedb/replication-mode-detector/commit/8293c27) Add comparing host with podIP or DNS for MySQL (#134) +- [f608626](https://github.com/kubedb/replication-mode-detector/commit/f608626) Fix mysql query for getting primary member "ONLINE" (#124) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.3.0](https://github.com/kubedb/tests/releases/tag/v0.3.0) + +- [1d230e5](https://github.com/kubedb/tests/commit/1d230e5) Prepare for release v0.3.0 (#115) +- [a2148b0](https://github.com/kubedb/tests/commit/a2148b0) Rename `MustCertSecretName` to `GetCertSecretName` (#113) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.06.21-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2021.06.21-rc.0.md new file mode 100644 index 0000000000..07f07e88c7 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.06.21-rc.0.md @@ -0,0 +1,369 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.06.21-rc.0 + name: Changelog-v2021.06.21-rc.0 + parent: welcome + weight: 20210621 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.06.21-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.06.21-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 
+--- + +# KubeDB v2021.06.21-rc.0 (2021-06-22) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.4.0-rc.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.4.0-rc.0) + +- [fe37e94](https://github.com/appscode/kubedb-autoscaler/commit/fe37e94) Prepare for release v0.4.0-rc.0 (#25) +- [f81237c](https://github.com/appscode/kubedb-autoscaler/commit/f81237c) Update audit lib (#24) +- [ed8c87b](https://github.com/appscode/kubedb-autoscaler/commit/ed8c87b) Send audit events if analytics enabled +- [c0a03d5](https://github.com/appscode/kubedb-autoscaler/commit/c0a03d5) Create auditor if license file is provided (#23) +- [2775227](https://github.com/appscode/kubedb-autoscaler/commit/2775227) Publish audit events (#22) +- [636d3a7](https://github.com/appscode/kubedb-autoscaler/commit/636d3a7) Use kglog helper +- [6a64bb1](https://github.com/appscode/kubedb-autoscaler/commit/6a64bb1) Use k8s 1.21.0 toolchain (#21) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.6.0-rc.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.6.0-rc.0) + +- [97749287](https://github.com/appscode/kubedb-enterprise/commit/97749287) Prepare for release v0.6.0-rc.0 (#199) +- [401cfc86](https://github.com/appscode/kubedb-enterprise/commit/401cfc86) Update audit lib (#197) +- [4378d35a](https://github.com/appscode/kubedb-enterprise/commit/4378d35a) Add MariaDB OpsReq [Restart, Upgrade, Scaling, Volume Expansion, Reconfigure Custom Config] (#179) +- [f879e934](https://github.com/appscode/kubedb-enterprise/commit/f879e934) Postgres Ops Req (Upgrade, Horizontal, Vertical, Volume Expansion, Reconfigure, Reconfigure TLS, Restart) (#193) +- [79b51d25](https://github.com/appscode/kubedb-enterprise/commit/79b51d25) Skip stash checks if stash CRD doesn't exist (#196) +- [3efc4ee8](https://github.com/appscode/kubedb-enterprise/commit/3efc4ee8) Refactor MongoDB Scale Down Shard (#189) +- [64962f36](https://github.com/appscode/kubedb-enterprise/commit/64962f36) Add timeout for Elasticsearch ops request (#183) +- [4ed736b8](https://github.com/appscode/kubedb-enterprise/commit/4ed736b8) Send audit events if analytics enabled +- [498ef67b](https://github.com/appscode/kubedb-enterprise/commit/498ef67b) Create auditor if license file is provided (#195) +- [a61965cc](https://github.com/appscode/kubedb-enterprise/commit/a61965cc) Publish audit events (#194) +- [cdc0ee37](https://github.com/appscode/kubedb-enterprise/commit/cdc0ee37) Fix log level issue with klog (#187) +- [356c6965](https://github.com/appscode/kubedb-enterprise/commit/356c6965) Use kglog helper +- [d7248cfd](https://github.com/appscode/kubedb-enterprise/commit/d7248cfd) Update Kubernetes toolchain to v1.21.0 (#181) +- [b8493083](https://github.com/appscode/kubedb-enterprise/commit/b8493083) Only restart the changed pods while VerticalScaling Elasticsearch (#174) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.19.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.19.0-rc.0) + +- [c885fc2d](https://github.com/kubedb/apimachinery/commit/c885fc2d) Postgres DB Container's RunAsGroup As FSGroup (#769) +- [29cb0260](https://github.com/kubedb/apimachinery/commit/29cb0260) Add fixes to helper method (#768) +- [b20b40c2](https://github.com/kubedb/apimachinery/commit/b20b40c2) Use Stash v2021.06.23 +- [e98fb31f](https://github.com/kubedb/apimachinery/commit/e98fb31f) Update audit event publisher (#767) +- 
[81e26637](https://github.com/kubedb/apimachinery/commit/81e26637) Add MariaDB Constants (#766) +- [532b6982](https://github.com/kubedb/apimachinery/commit/532b6982) Update Elasticsearch API to support various node roles including hot-warm-cold (#764) +- [a9979e15](https://github.com/kubedb/apimachinery/commit/a9979e15) Update for release Stash@v2021.6.18 (#765) +- [d20c46a2](https://github.com/kubedb/apimachinery/commit/d20c46a2) Fix locking in ResourceMapper +- [3a597982](https://github.com/kubedb/apimachinery/commit/3a597982) Send audit events if analytics enabled +- [27cc118e](https://github.com/kubedb/apimachinery/commit/27cc118e) Add auditor to shared Controller (#761) +- [eb13a94f](https://github.com/kubedb/apimachinery/commit/eb13a94f) Rename TimeoutSeconds to Timeout in MongoDBOpsRequest (#759) +- [29627ec6](https://github.com/kubedb/apimachinery/commit/29627ec6) Add timeout for each step of ES ops request (#742) +- [cc6b9690](https://github.com/kubedb/apimachinery/commit/cc6b9690) Add MariaDB OpsRequest Types (#743) +- [6fb2646e](https://github.com/kubedb/apimachinery/commit/6fb2646e) Update default resource limits for databases (#755) +- [161b3fe3](https://github.com/kubedb/apimachinery/commit/161b3fe3) Add UpdateMariaDBOpsRequestStatus function (#727) +- [98cd75f0](https://github.com/kubedb/apimachinery/commit/98cd75f0) Add Fields, Constant, Func For Ops Request Postgres (#758) +- [722656b7](https://github.com/kubedb/apimachinery/commit/722656b7) Add Innodb Group Replication Mode (#750) +- [eb8e5883](https://github.com/kubedb/apimachinery/commit/eb8e5883) Replace go-bindata with //go:embed (#753) +- [df570f7b](https://github.com/kubedb/apimachinery/commit/df570f7b) Add HealthCheckInterval constant (#752) +- [e982e590](https://github.com/kubedb/apimachinery/commit/e982e590) Use kglog helper +- [e725873d](https://github.com/kubedb/apimachinery/commit/e725873d) Fix tests (#749) +- [11d1c306](https://github.com/kubedb/apimachinery/commit/11d1c306) Cleanup dependencies +- [7030bd8f](https://github.com/kubedb/apimachinery/commit/7030bd8f) Update crds +- [766fa11f](https://github.com/kubedb/apimachinery/commit/766fa11f) Update Kubernetes toolchain to v1.21.0 (#746) +- [12014667](https://github.com/kubedb/apimachinery/commit/12014667) Add Elasticsearch vertical scaling constants (#741) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.19.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.19.0-rc.0) + +- [b367d2a5](https://github.com/kubedb/cli/commit/b367d2a5) Prepare for release v0.19.0-rc.0 (#609) +- [b9214d6a](https://github.com/kubedb/cli/commit/b9214d6a) Use Kubernetes 1.21.1 toolchain (#608) +- [36866cf5](https://github.com/kubedb/cli/commit/36866cf5) Use kglog helper +- [e4ee9973](https://github.com/kubedb/cli/commit/e4ee9973) Cleanup dependencies (#607) +- [07999fc2](https://github.com/kubedb/cli/commit/07999fc2) Use Kubernetes v1.21.0 toolchain (#606) +- [05e3b7e5](https://github.com/kubedb/cli/commit/05e3b7e5) Use Kubernetes v1.21.0 toolchain (#605) +- [44f4188e](https://github.com/kubedb/cli/commit/44f4188e) Use Kubernetes v1.21.0 toolchain (#604) +- [82cd8399](https://github.com/kubedb/cli/commit/82cd8399) Use Kubernetes v1.21.0 toolchain (#603) +- [998506cd](https://github.com/kubedb/cli/commit/998506cd) Use Kubernetes v1.21.0 toolchain (#602) +- [4ff64f94](https://github.com/kubedb/cli/commit/4ff64f94) Use Kubernetes v1.21.0 toolchain (#601) +- [19b257f1](https://github.com/kubedb/cli/commit/19b257f1) Update Kubernetes toolchain to v1.21.0 (#600) + + + 
+## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.19.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.19.0-rc.0) + +- [aed0fcb4](https://github.com/kubedb/elasticsearch/commit/aed0fcb4) Prepare for release v0.19.0-rc.0 (#502) +- [630d6940](https://github.com/kubedb/elasticsearch/commit/630d6940) Update audit lib (#501) +- [df4c9a0d](https://github.com/kubedb/elasticsearch/commit/df4c9a0d) Do not create user credentials when security is disabled (#500) +- [3b656b57](https://github.com/kubedb/elasticsearch/commit/3b656b57) Add support for various node roles for ElasticStack (#499) +- [64133cb6](https://github.com/kubedb/elasticsearch/commit/64133cb6) Send audit events if analytics enabled +- [21caa38f](https://github.com/kubedb/elasticsearch/commit/21caa38f) Create auditor if license file is provided (#498) +- [8319ba70](https://github.com/kubedb/elasticsearch/commit/8319ba70) Publish audit events (#497) +- [5f08d1b2](https://github.com/kubedb/elasticsearch/commit/5f08d1b2) Skip health check for halted DB (#494) +- [6a23d464](https://github.com/kubedb/elasticsearch/commit/6a23d464) Disable flow control if api is not enabled (#495) +- [a23c5481](https://github.com/kubedb/elasticsearch/commit/a23c5481) Fix log level issue with klog (#496) +- [38dbddda](https://github.com/kubedb/elasticsearch/commit/38dbddda) Limit health checker go-routine for specific DB object (#491) +- [0aefd5f7](https://github.com/kubedb/elasticsearch/commit/0aefd5f7) Use kglog helper +- [03255078](https://github.com/kubedb/elasticsearch/commit/03255078) Cleanup glog dependency +- [57bb1bf1](https://github.com/kubedb/elasticsearch/commit/57bb1bf1) Update dependencies +- [69fdfde7](https://github.com/kubedb/elasticsearch/commit/69fdfde7) Update Kubernetes toolchain to v1.21.0 (#492) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.06.21-rc.0](https://github.com/kubedb/installer/releases/tag/v2021.06.21-rc.0) + +- [823feb3](https://github.com/kubedb/installer/commit/823feb3) Prepare for release v2021.06.21-rc.0 (#315) +- [946dc13](https://github.com/kubedb/installer/commit/946dc13) Use Stash v2021.06.23 +- [77a54a1](https://github.com/kubedb/installer/commit/77a54a1) Use Kubernetes 1.21.1 toolchain (#314) +- [2b15157](https://github.com/kubedb/installer/commit/2b15157) Add support for Elasticsearch v7.13.2 (#313) +- [a11d7d0](https://github.com/kubedb/installer/commit/a11d7d0) Support MongoDB Version 4.4.6 (#312) +- [4c79e1a](https://github.com/kubedb/installer/commit/4c79e1a) Update Elasticsearch versions to support various node roles (#308) +- [8e52114](https://github.com/kubedb/installer/commit/8e52114) Update for release Stash@v2021.6.18 (#311) +- [95aa010](https://github.com/kubedb/installer/commit/95aa010) Update to MariaDB init docker version 0.2.0 (#310) +- [1659b91](https://github.com/kubedb/installer/commit/1659b91) Fix: Update Ops Request yaml for Reconfigure TLS in Postgres (#307) +- [b2a806b](https://github.com/kubedb/installer/commit/b2a806b) Use mongodb-exporter v0.20.4 (#305) +- [12e720a](https://github.com/kubedb/installer/commit/12e720a) Update Kubernetes toolchain to v1.21.0 (#302) +- [3ff3bc3](https://github.com/kubedb/installer/commit/3ff3bc3) Add monitoring values to global chart (#301) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.3.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.3.0-rc.0) + +- [9b982f74](https://github.com/kubedb/mariadb/commit/9b982f74) Prepare for release v0.3.0-rc.0 (#77) 
+- [0ad0022c](https://github.com/kubedb/mariadb/commit/0ad0022c) Update audit lib (#75) +- [501a2e61](https://github.com/kubedb/mariadb/commit/501a2e61) Update custom config mount path for MariaDB Cluster (#59) +- [d00cf65b](https://github.com/kubedb/mariadb/commit/d00cf65b) Separate Reconcile functionality in a new function ReconcileNode (#68) +- [e9239d4f](https://github.com/kubedb/mariadb/commit/e9239d4f) Limit Go routines in Health Checker (#73) +- [d695adf1](https://github.com/kubedb/mariadb/commit/d695adf1) Send audit events if analytics enabled (#74) +- [070a0f79](https://github.com/kubedb/mariadb/commit/070a0f79) Create auditor if license file is provided (#72) +- [fc9046c3](https://github.com/kubedb/mariadb/commit/fc9046c3) Publish audit events (#71) +- [3a1f08a9](https://github.com/kubedb/mariadb/commit/3a1f08a9) Fix log level issue with klog for MariaDB (#70) +- [b6075e5d](https://github.com/kubedb/mariadb/commit/b6075e5d) Use kglog helper +- [f510e375](https://github.com/kubedb/mariadb/commit/f510e375) Use klog/v2 +- [c009905e](https://github.com/kubedb/mariadb/commit/c009905e) Update Kubernetes toolchain to v1.21.0 (#66) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.12.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.12.0-rc.0) + +- [99ab26b5](https://github.com/kubedb/memcached/commit/99ab26b5) Prepare for release v0.12.0-rc.0 (#299) +- [213807d5](https://github.com/kubedb/memcached/commit/213807d5) Update audit lib (#298) +- [29054b5b](https://github.com/kubedb/memcached/commit/29054b5b) Send audit events if analytics enabled (#297) +- [a4888446](https://github.com/kubedb/memcached/commit/a4888446) Publish audit events (#296) +- [236d6108](https://github.com/kubedb/memcached/commit/236d6108) Use kglog helper +- [7ffe5c73](https://github.com/kubedb/memcached/commit/7ffe5c73) Use klog/v2 +- [fb34645b](https://github.com/kubedb/memcached/commit/fb34645b) Update Kubernetes toolchain to v1.21.0 (#294) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.12.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.12.0-rc.0) + +- [11eb6ee8](https://github.com/kubedb/mongodb/commit/11eb6ee8) Prepare for release v0.12.0-rc.0 (#400) +- [dbf5cd16](https://github.com/kubedb/mongodb/commit/dbf5cd16) Update audit lib (#399) +- [a55bf1d5](https://github.com/kubedb/mongodb/commit/a55bf1d5) Limit go routine in health check (#394) +- [0a61c733](https://github.com/kubedb/mongodb/commit/0a61c733) Update TLS args for Exporter (#395) +- [80d3fec2](https://github.com/kubedb/mongodb/commit/80d3fec2) Send audit events if analytics enabled (#398) +- [8ac51d7e](https://github.com/kubedb/mongodb/commit/8ac51d7e) Create auditor if license file is provided (#397) +- [c6c4b380](https://github.com/kubedb/mongodb/commit/c6c4b380) Publish audit events (#396) +- [e261937a](https://github.com/kubedb/mongodb/commit/e261937a) Fix log level issue with klog (#393) +- [426afbfc](https://github.com/kubedb/mongodb/commit/426afbfc) Use kglog helper +- [24b7976c](https://github.com/kubedb/mongodb/commit/24b7976c) Use klog/v2 +- [0ace005d](https://github.com/kubedb/mongodb/commit/0ace005d) Update Kubernetes toolchain to v1.21.0 (#391) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.12.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.12.0-rc.0) + +- [9533c528](https://github.com/kubedb/mysql/commit/9533c528) Prepare for release v0.12.0-rc.0 (#392) +- [f0313b17](https://github.com/kubedb/mysql/commit/f0313b17) Limit Health Checker 
goroutines (#385) +- [ab601a28](https://github.com/kubedb/mysql/commit/ab601a28) Use gomodules.xyz/password-generator v0.2.7 +- [782362db](https://github.com/kubedb/mysql/commit/782362db) Update audit library (#390) +- [1d36bacb](https://github.com/kubedb/mysql/commit/1d36bacb) Send audit events if analytics enabled (#389) +- [55a903a3](https://github.com/kubedb/mysql/commit/55a903a3) Create auditor if license file is provided (#388) +- [dc6f6ea5](https://github.com/kubedb/mysql/commit/dc6f6ea5) Publish audit events (#387) +- [75bd1a1c](https://github.com/kubedb/mysql/commit/75bd1a1c) Fix log level issue with klog for mysql (#386) +- [1014a393](https://github.com/kubedb/mysql/commit/1014a393) Use kglog helper +- [728fa299](https://github.com/kubedb/mysql/commit/728fa299) Use klog/v2 +- [80581df4](https://github.com/kubedb/mysql/commit/80581df4) Update Kubernetes toolchain to v1.21.0 (#383) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.19.0-rc.0](https://github.com/kubedb/operator/releases/tag/v0.19.0-rc.0) + +- [08daa22a](https://github.com/kubedb/operator/commit/08daa22a) Prepare for release v0.19.0-rc.0 (#407) +- [203ffa38](https://github.com/kubedb/operator/commit/203ffa38) Update audit lib (#406) +- [704a774f](https://github.com/kubedb/operator/commit/704a774f) Send audit events if analytics enabled (#405) +- [7e8f1be0](https://github.com/kubedb/operator/commit/7e8f1be0) Stop using gomodules.xyz/version +- [49d7d7f2](https://github.com/kubedb/operator/commit/49d7d7f2) Publish audit events (#404) +- [820d7372](https://github.com/kubedb/operator/commit/820d7372) Use kglog helper +- [396ae75f](https://github.com/kubedb/operator/commit/396ae75f) Update Kubernetes toolchain to v1.21.0 (#403) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.6.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.6.0-rc.0) + +- [870e08df](https://github.com/kubedb/percona-xtradb/commit/870e08df) Prepare for release v0.6.0-rc.0 (#201) +- [f163f637](https://github.com/kubedb/percona-xtradb/commit/f163f637) Update audit lib (#200) +- [c42c3401](https://github.com/kubedb/percona-xtradb/commit/c42c3401) Send audit events if analytics enabled (#199) +- [e2ce3664](https://github.com/kubedb/percona-xtradb/commit/e2ce3664) Create auditor if license file is provided (#198) +- [3e85edb2](https://github.com/kubedb/percona-xtradb/commit/3e85edb2) Publish audit events (#197) +- [6f23031c](https://github.com/kubedb/percona-xtradb/commit/6f23031c) Use kglog helper +- [cc0e270a](https://github.com/kubedb/percona-xtradb/commit/cc0e270a) Use klog/v2 +- [a44e3347](https://github.com/kubedb/percona-xtradb/commit/a44e3347) Update Kubernetes toolchain to v1.21.0 (#195) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.3.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.3.0-rc.0) + +- [3ca5f67](https://github.com/kubedb/pg-coordinator/commit/3ca5f67) Prepare for release v0.3.0-rc.0 (#25) +- [4ef7d95](https://github.com/kubedb/pg-coordinator/commit/4ef7d95) Update Client TLS Path for Postgres (#24) +- [7208199](https://github.com/kubedb/pg-coordinator/commit/7208199) Raft Version Update And Ops Request Fix (#23) +- [5adb304](https://github.com/kubedb/pg-coordinator/commit/5adb304) Use klog/v2 (#19) +- [a9b3f16](https://github.com/kubedb/pg-coordinator/commit/a9b3f16) Use klog/v2 + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### 
[v0.6.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.6.0-rc.0) + +- [cbba6969](https://github.com/kubedb/pgbouncer/commit/cbba6969) Prepare for release v0.6.0-rc.0 (#161) +- [bc6428cd](https://github.com/kubedb/pgbouncer/commit/bc6428cd) Update audit lib (#160) +- [442f0635](https://github.com/kubedb/pgbouncer/commit/442f0635) Send audit events if analytics enabled (#159) +- [2ebaf4bb](https://github.com/kubedb/pgbouncer/commit/2ebaf4bb) Create auditor if license file is provided (#158) +- [4e3f115d](https://github.com/kubedb/pgbouncer/commit/4e3f115d) Publish audit events (#157) +- [1ed2f883](https://github.com/kubedb/pgbouncer/commit/1ed2f883) Use kglog helper +- [870cf108](https://github.com/kubedb/pgbouncer/commit/870cf108) Use klog/v2 +- [11c2ac03](https://github.com/kubedb/pgbouncer/commit/11c2ac03) Update Kubernetes toolchain to v1.21.0 (#155) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.19.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.19.0-rc.0) + +- [06fcab6e](https://github.com/kubedb/postgres/commit/06fcab6e) Prepare for release v0.19.0-rc.0 (#508) +- [5c0e0fa2](https://github.com/kubedb/postgres/commit/5c0e0fa2) Run All DB Pod's Container with Custom-UID (#507) +- [9496dadf](https://github.com/kubedb/postgres/commit/9496dadf) Update audit lib (#506) +- [d51cdfdd](https://github.com/kubedb/postgres/commit/d51cdfdd) Limit Health Check for Postgres (#504) +- [24851ba8](https://github.com/kubedb/postgres/commit/24851ba8) Send audit events if analytics enabled (#505) +- [faecf01d](https://github.com/kubedb/postgres/commit/faecf01d) Create auditor if license file is provided (#503) +- [8d4bf26b](https://github.com/kubedb/postgres/commit/8d4bf26b) Stop using gomodules.xyz/version (#501) +- [906c678e](https://github.com/kubedb/postgres/commit/906c678e) Publish audit events (#500) +- [c6afe209](https://github.com/kubedb/postgres/commit/c6afe209) Fix: Log Level Issue with klog (#496) +- [2a910034](https://github.com/kubedb/postgres/commit/2a910034) Use kglog helper +- [a4e685d6](https://github.com/kubedb/postgres/commit/a4e685d6) Use klog/v2 +- [ee9a9d15](https://github.com/kubedb/postgres/commit/ee9a9d15) Update Kubernetes toolchain to v1.21.0 (#492) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.6.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.6.0-rc.0) + +- [ba5ec48b](https://github.com/kubedb/proxysql/commit/ba5ec48b) Prepare for release v0.6.0-rc.0 (#179) +- [9770fa0d](https://github.com/kubedb/proxysql/commit/9770fa0d) Update audit lib (#178) +- [3e307411](https://github.com/kubedb/proxysql/commit/3e307411) Send audit events if analytics enabled (#177) +- [790b57ed](https://github.com/kubedb/proxysql/commit/790b57ed) Create auditor if license file is provided (#176) +- [6e6c9ba1](https://github.com/kubedb/proxysql/commit/6e6c9ba1) Publish audit events (#175) +- [df2937ed](https://github.com/kubedb/proxysql/commit/df2937ed) Use kglog helper +- [2ca12e48](https://github.com/kubedb/proxysql/commit/2ca12e48) Use klog/v2 +- [3796f730](https://github.com/kubedb/proxysql/commit/3796f730) Update Kubernetes toolchain to v1.21.0 (#173) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.12.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.12.0-rc.0) + +- [0c15054c](https://github.com/kubedb/redis/commit/0c15054c) Prepare for release v0.12.0-rc.0 (#324) +- [5a5ec318](https://github.com/kubedb/redis/commit/5a5ec318) Update audit lib (#323) +- 
[6673f940](https://github.com/kubedb/redis/commit/6673f940) Limit Health Check go-routine Redis (#321) +- [e945029e](https://github.com/kubedb/redis/commit/e945029e) Send audit events if analytics enabled (#322) +- [3715ff10](https://github.com/kubedb/redis/commit/3715ff10) Create auditor if license file is provided (#320) +- [9d5d90a9](https://github.com/kubedb/redis/commit/9d5d90a9) Add auditor handler +- [5004f56c](https://github.com/kubedb/redis/commit/5004f56c) Publish audit events (#319) +- [146b3863](https://github.com/kubedb/redis/commit/146b3863) Use kglog helper +- [71d8ced8](https://github.com/kubedb/redis/commit/71d8ced8) Use klog/v2 +- [4900a564](https://github.com/kubedb/redis/commit/4900a564) Update Kubernetes toolchain to v1.21.0 (#317) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.6.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.6.0-rc.0) + +- [1382382](https://github.com/kubedb/replication-mode-detector/commit/1382382) Prepare for release v0.6.0-rc.0 (#143) +- [feba070](https://github.com/kubedb/replication-mode-detector/commit/feba070) Remove glog dependency +- [fd757b4](https://github.com/kubedb/replication-mode-detector/commit/fd757b4) Use kglog helper +- [8ba20a3](https://github.com/kubedb/replication-mode-detector/commit/8ba20a3) Update repository config (#142) +- [eece885](https://github.com/kubedb/replication-mode-detector/commit/eece885) Use klog/v2 +- [e30c050](https://github.com/kubedb/replication-mode-detector/commit/e30c050) Use Kubernetes v1.21.0 toolchain (#140) +- [8e7b7c2](https://github.com/kubedb/replication-mode-detector/commit/8e7b7c2) Use Kubernetes v1.21.0 toolchain (#139) +- [6bceb2f](https://github.com/kubedb/replication-mode-detector/commit/6bceb2f) Use Kubernetes v1.21.0 toolchain (#138) +- [0fe720e](https://github.com/kubedb/replication-mode-detector/commit/0fe720e) Use Kubernetes v1.21.0 toolchain (#137) +- [8c54b2a](https://github.com/kubedb/replication-mode-detector/commit/8c54b2a) Update Kubernetes toolchain to v1.21.0 (#136) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.4.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.4.0-rc.0) + +- [b6b4be3](https://github.com/kubedb/tests/commit/b6b4be3) Prepare for release v0.4.0-rc.0 (#124) +- [62e6b50](https://github.com/kubedb/tests/commit/62e6b50) Fix locking in ResourceMapper (#123) +- [a855fab](https://github.com/kubedb/tests/commit/a855fab) Update dependencies (#122) +- [7d5b1a4](https://github.com/kubedb/tests/commit/7d5b1a4) Use kglog helper +- [a08eee4](https://github.com/kubedb/tests/commit/a08eee4) Use klog/v2 +- [ed1afd4](https://github.com/kubedb/tests/commit/ed1afd4) Use Kubernetes v1.21.0 toolchain (#120) +- [ccb54f1](https://github.com/kubedb/tests/commit/ccb54f1) Use Kubernetes v1.21.0 toolchain (#119) +- [2a6f06d](https://github.com/kubedb/tests/commit/2a6f06d) Use Kubernetes v1.21.0 toolchain (#118) +- [7fb99f7](https://github.com/kubedb/tests/commit/7fb99f7) Use Kubernetes v1.21.0 toolchain (#117) +- [aaa0647](https://github.com/kubedb/tests/commit/aaa0647) Update Kubernetes toolchain to v1.21.0 (#116) +- [79d815d](https://github.com/kubedb/tests/commit/79d815d) Fix Elasticsearch status check while creating the client (#114) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.06.23.md b/content/docs/v2024.1.31/CHANGELOG-v2021.06.23.md new file mode 100644 index 0000000000..b4377add46 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.06.23.md 
@@ -0,0 +1,398 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.06.23 + name: Changelog-v2021.06.23 + parent: welcome + weight: 20210623 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.06.23/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.06.23/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.06.23 (2021-06-23) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.4.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.4.0) + +- [93e27c4](https://github.com/appscode/kubedb-autoscaler/commit/93e27c4) Prepare for release v0.4.0 (#27) +- [e7e6c98](https://github.com/appscode/kubedb-autoscaler/commit/e7e6c98) Disable api priority and fairness feature for webhook server (#26) +- [fe37e94](https://github.com/appscode/kubedb-autoscaler/commit/fe37e94) Prepare for release v0.4.0-rc.0 (#25) +- [f81237c](https://github.com/appscode/kubedb-autoscaler/commit/f81237c) Update audit lib (#24) +- [ed8c87b](https://github.com/appscode/kubedb-autoscaler/commit/ed8c87b) Send audit events if analytics enabled +- [c0a03d5](https://github.com/appscode/kubedb-autoscaler/commit/c0a03d5) Create auditor if license file is provided (#23) +- [2775227](https://github.com/appscode/kubedb-autoscaler/commit/2775227) Publish audit events (#22) +- [636d3a7](https://github.com/appscode/kubedb-autoscaler/commit/636d3a7) Use kglog helper +- [6a64bb1](https://github.com/appscode/kubedb-autoscaler/commit/6a64bb1) Use k8s 1.21.0 toolchain (#21) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.6.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.6.0) + +- [4b2195b7](https://github.com/appscode/kubedb-enterprise/commit/4b2195b7) Prepare for release v0.6.0 (#202) +- [cde1a54c](https://github.com/appscode/kubedb-enterprise/commit/cde1a54c) Improve Elasticsearch version upgrade with reconciler (#201) +- [c4c2d3c9](https://github.com/appscode/kubedb-enterprise/commit/c4c2d3c9) Use NSS_Wrapper for Pg_Upgrade Command (#200) +- [97749287](https://github.com/appscode/kubedb-enterprise/commit/97749287) Prepare for release v0.6.0-rc.0 (#199) +- [401cfc86](https://github.com/appscode/kubedb-enterprise/commit/401cfc86) Update audit lib (#197) +- [4378d35a](https://github.com/appscode/kubedb-enterprise/commit/4378d35a) Add MariaDB OpsReq [Restart, Upgrade, Scaling, Volume Expansion, Reconfigure Custom Config] (#179) +- [f879e934](https://github.com/appscode/kubedb-enterprise/commit/f879e934) Postgres Ops Req (Upgrade, Horizontal, Vertical, Volume Expansion, Reconfigure, Reconfigure TLS, Restart) (#193) +- [79b51d25](https://github.com/appscode/kubedb-enterprise/commit/79b51d25) Skip stash checks if stash CRD doesn't exist (#196) +- [3efc4ee8](https://github.com/appscode/kubedb-enterprise/commit/3efc4ee8) Refactor MongoDB Scale Down Shard (#189) +- [64962f36](https://github.com/appscode/kubedb-enterprise/commit/64962f36) Add timeout for Elasticsearch ops request (#183) +- [4ed736b8](https://github.com/appscode/kubedb-enterprise/commit/4ed736b8) Send audit events if analytics enabled +- [498ef67b](https://github.com/appscode/kubedb-enterprise/commit/498ef67b) Create auditor if license 
file is provided (#195) +- [a61965cc](https://github.com/appscode/kubedb-enterprise/commit/a61965cc) Publish audit events (#194) +- [cdc0ee37](https://github.com/appscode/kubedb-enterprise/commit/cdc0ee37) Fix log level issue with klog (#187) +- [356c6965](https://github.com/appscode/kubedb-enterprise/commit/356c6965) Use kglog helper +- [d7248cfd](https://github.com/appscode/kubedb-enterprise/commit/d7248cfd) Update Kubernetes toolchain to v1.21.0 (#181) +- [b8493083](https://github.com/appscode/kubedb-enterprise/commit/b8493083) Only restart the changed pods while VerticalScaling Elasticsearch (#174) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.19.0](https://github.com/kubedb/apimachinery/releases/tag/v0.19.0) + +- [7cecea8e](https://github.com/kubedb/apimachinery/commit/7cecea8e) Add docs badge +- [c885fc2d](https://github.com/kubedb/apimachinery/commit/c885fc2d) Postgres DB Container's RunAsGroup As FSGroup (#769) +- [29cb0260](https://github.com/kubedb/apimachinery/commit/29cb0260) Add fixes to helper method (#768) +- [b20b40c2](https://github.com/kubedb/apimachinery/commit/b20b40c2) Use Stash v2021.06.23 +- [e98fb31f](https://github.com/kubedb/apimachinery/commit/e98fb31f) Update audit event publisher (#767) +- [81e26637](https://github.com/kubedb/apimachinery/commit/81e26637) Add MariaDB Constants (#766) +- [532b6982](https://github.com/kubedb/apimachinery/commit/532b6982) Update Elasticsearch API to support various node roles including hot-warm-cold (#764) +- [a9979e15](https://github.com/kubedb/apimachinery/commit/a9979e15) Update for release Stash@v2021.6.18 (#765) +- [d20c46a2](https://github.com/kubedb/apimachinery/commit/d20c46a2) Fix locking in ResourceMapper +- [3a597982](https://github.com/kubedb/apimachinery/commit/3a597982) Send audit events if analytics enabled +- [27cc118e](https://github.com/kubedb/apimachinery/commit/27cc118e) Add auditor to shared Controller (#761) +- [eb13a94f](https://github.com/kubedb/apimachinery/commit/eb13a94f) Rename TimeoutSeconds to Timeout in MongoDBOpsRequest (#759) +- [29627ec6](https://github.com/kubedb/apimachinery/commit/29627ec6) Add timeout for each step of ES ops request (#742) +- [cc6b9690](https://github.com/kubedb/apimachinery/commit/cc6b9690) Add MariaDB OpsRequest Types (#743) +- [6fb2646e](https://github.com/kubedb/apimachinery/commit/6fb2646e) Update default resource limits for databases (#755) +- [161b3fe3](https://github.com/kubedb/apimachinery/commit/161b3fe3) Add UpdateMariaDBOpsRequestStatus function (#727) +- [98cd75f0](https://github.com/kubedb/apimachinery/commit/98cd75f0) Add Fields, Constant, Func For Ops Request Postgres (#758) +- [722656b7](https://github.com/kubedb/apimachinery/commit/722656b7) Add Innodb Group Replication Mode (#750) +- [eb8e5883](https://github.com/kubedb/apimachinery/commit/eb8e5883) Replace go-bindata with //go:embed (#753) +- [df570f7b](https://github.com/kubedb/apimachinery/commit/df570f7b) Add HealthCheckInterval constant (#752) +- [e982e590](https://github.com/kubedb/apimachinery/commit/e982e590) Use kglog helper +- [e725873d](https://github.com/kubedb/apimachinery/commit/e725873d) Fix tests (#749) +- [11d1c306](https://github.com/kubedb/apimachinery/commit/11d1c306) Cleanup dependencies +- [7030bd8f](https://github.com/kubedb/apimachinery/commit/7030bd8f) Update crds +- [766fa11f](https://github.com/kubedb/apimachinery/commit/766fa11f) Update Kubernetes toolchain to v1.21.0 (#746) +- [12014667](https://github.com/kubedb/apimachinery/commit/12014667) 
Add Elasticsearch vertical scaling constants (#741) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.19.0](https://github.com/kubedb/cli/releases/tag/v0.19.0) + +- [2b394bba](https://github.com/kubedb/cli/commit/2b394bba) Prepare for release v0.19.0 (#610) +- [b367d2a5](https://github.com/kubedb/cli/commit/b367d2a5) Prepare for release v0.19.0-rc.0 (#609) +- [b9214d6a](https://github.com/kubedb/cli/commit/b9214d6a) Use Kubernetes 1.21.1 toolchain (#608) +- [36866cf5](https://github.com/kubedb/cli/commit/36866cf5) Use kglog helper +- [e4ee9973](https://github.com/kubedb/cli/commit/e4ee9973) Cleanup dependencies (#607) +- [07999fc2](https://github.com/kubedb/cli/commit/07999fc2) Use Kubernetes v1.21.0 toolchain (#606) +- [05e3b7e5](https://github.com/kubedb/cli/commit/05e3b7e5) Use Kubernetes v1.21.0 toolchain (#605) +- [44f4188e](https://github.com/kubedb/cli/commit/44f4188e) Use Kubernetes v1.21.0 toolchain (#604) +- [82cd8399](https://github.com/kubedb/cli/commit/82cd8399) Use Kubernetes v1.21.0 toolchain (#603) +- [998506cd](https://github.com/kubedb/cli/commit/998506cd) Use Kubernetes v1.21.0 toolchain (#602) +- [4ff64f94](https://github.com/kubedb/cli/commit/4ff64f94) Use Kubernetes v1.21.0 toolchain (#601) +- [19b257f1](https://github.com/kubedb/cli/commit/19b257f1) Update Kubernetes toolchain to v1.21.0 (#600) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.19.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.19.0) + +- [a38490f9](https://github.com/kubedb/elasticsearch/commit/a38490f9) Prepare for release v0.19.0 (#503) +- [aed0fcb4](https://github.com/kubedb/elasticsearch/commit/aed0fcb4) Prepare for release v0.19.0-rc.0 (#502) +- [630d6940](https://github.com/kubedb/elasticsearch/commit/630d6940) Update audit lib (#501) +- [df4c9a0d](https://github.com/kubedb/elasticsearch/commit/df4c9a0d) Do not create user credentials when security is disabled (#500) +- [3b656b57](https://github.com/kubedb/elasticsearch/commit/3b656b57) Add support for various node roles for ElasticStack (#499) +- [64133cb6](https://github.com/kubedb/elasticsearch/commit/64133cb6) Send audit events if analytics enabled +- [21caa38f](https://github.com/kubedb/elasticsearch/commit/21caa38f) Create auditor if license file is provided (#498) +- [8319ba70](https://github.com/kubedb/elasticsearch/commit/8319ba70) Publish audit events (#497) +- [5f08d1b2](https://github.com/kubedb/elasticsearch/commit/5f08d1b2) Skip health check for halted DB (#494) +- [6a23d464](https://github.com/kubedb/elasticsearch/commit/6a23d464) Disable flow control if api is not enabled (#495) +- [a23c5481](https://github.com/kubedb/elasticsearch/commit/a23c5481) Fix log level issue with klog (#496) +- [38dbddda](https://github.com/kubedb/elasticsearch/commit/38dbddda) Limit health checker go-routine for specific DB object (#491) +- [0aefd5f7](https://github.com/kubedb/elasticsearch/commit/0aefd5f7) Use kglog helper +- [03255078](https://github.com/kubedb/elasticsearch/commit/03255078) Cleanup glog dependency +- [57bb1bf1](https://github.com/kubedb/elasticsearch/commit/57bb1bf1) Update dependencies +- [69fdfde7](https://github.com/kubedb/elasticsearch/commit/69fdfde7) Update Kubernetes toolchain to v1.21.0 (#492) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.06.23](https://github.com/kubedb/installer/releases/tag/v2021.06.23) + +- [334e8b4](https://github.com/kubedb/installer/commit/334e8b4) Prepare for release v2021.06.23 (#317) +- 
[823feb3](https://github.com/kubedb/installer/commit/823feb3) Prepare for release v2021.06.21-rc.0 (#315) +- [946dc13](https://github.com/kubedb/installer/commit/946dc13) Use Stash v2021.06.23 +- [77a54a1](https://github.com/kubedb/installer/commit/77a54a1) Use Kubernetes 1.21.1 toolchain (#314) +- [2b15157](https://github.com/kubedb/installer/commit/2b15157) Add support for Elasticsearch v7.13.2 (#313) +- [a11d7d0](https://github.com/kubedb/installer/commit/a11d7d0) Support MongoDB Version 4.4.6 (#312) +- [4c79e1a](https://github.com/kubedb/installer/commit/4c79e1a) Update Elasticsearch versions to support various node roles (#308) +- [8e52114](https://github.com/kubedb/installer/commit/8e52114) Update for release Stash@v2021.6.18 (#311) +- [95aa010](https://github.com/kubedb/installer/commit/95aa010) Update to MariaDB init docker version 0.2.0 (#310) +- [1659b91](https://github.com/kubedb/installer/commit/1659b91) Fix: Update Ops Request yaml for Reconfigure TLS in Postgres (#307) +- [b2a806b](https://github.com/kubedb/installer/commit/b2a806b) Use mongodb-exporter v0.20.4 (#305) +- [12e720a](https://github.com/kubedb/installer/commit/12e720a) Update Kubernetes toolchain to v1.21.0 (#302) +- [3ff3bc3](https://github.com/kubedb/installer/commit/3ff3bc3) Add monitoring values to global chart (#301) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.3.0](https://github.com/kubedb/mariadb/releases/tag/v0.3.0) + +- [189cc352](https://github.com/kubedb/mariadb/commit/189cc352) Prepare for release v0.3.0 (#78) +- [9b982f74](https://github.com/kubedb/mariadb/commit/9b982f74) Prepare for release v0.3.0-rc.0 (#77) +- [0ad0022c](https://github.com/kubedb/mariadb/commit/0ad0022c) Update audit lib (#75) +- [501a2e61](https://github.com/kubedb/mariadb/commit/501a2e61) Update custom config mount path for MariaDB Cluster (#59) +- [d00cf65b](https://github.com/kubedb/mariadb/commit/d00cf65b) Separate Reconcile functionality in a new function ReconcileNode (#68) +- [e9239d4f](https://github.com/kubedb/mariadb/commit/e9239d4f) Limit Go routines in Health Checker (#73) +- [d695adf1](https://github.com/kubedb/mariadb/commit/d695adf1) Send audit events if analytics enabled (#74) +- [070a0f79](https://github.com/kubedb/mariadb/commit/070a0f79) Create auditor if license file is provided (#72) +- [fc9046c3](https://github.com/kubedb/mariadb/commit/fc9046c3) Publish audit events (#71) +- [3a1f08a9](https://github.com/kubedb/mariadb/commit/3a1f08a9) Fix log level issue with klog for MariaDB (#70) +- [b6075e5d](https://github.com/kubedb/mariadb/commit/b6075e5d) Use kglog helper +- [f510e375](https://github.com/kubedb/mariadb/commit/f510e375) Use klog/v2 +- [c009905e](https://github.com/kubedb/mariadb/commit/c009905e) Update Kubernetes toolchain to v1.21.0 (#66) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.12.0](https://github.com/kubedb/memcached/releases/tag/v0.12.0) + +- [9c2c58d7](https://github.com/kubedb/memcached/commit/9c2c58d7) Prepare for release v0.12.0 (#301) +- [604d95db](https://github.com/kubedb/memcached/commit/604d95db) Disable api priority and fairness feature for webhook server (#300) +- [99ab26b5](https://github.com/kubedb/memcached/commit/99ab26b5) Prepare for release v0.12.0-rc.0 (#299) +- [213807d5](https://github.com/kubedb/memcached/commit/213807d5) Update audit lib (#298) +- [29054b5b](https://github.com/kubedb/memcached/commit/29054b5b) Send audit events if analytics enabled (#297) +- 
[a4888446](https://github.com/kubedb/memcached/commit/a4888446) Publish audit events (#296) +- [236d6108](https://github.com/kubedb/memcached/commit/236d6108) Use kglog helper +- [7ffe5c73](https://github.com/kubedb/memcached/commit/7ffe5c73) Use klog/v2 +- [fb34645b](https://github.com/kubedb/memcached/commit/fb34645b) Update Kubernetes toolchain to v1.21.0 (#294) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.12.0](https://github.com/kubedb/mongodb/releases/tag/v0.12.0) + +- [06b04a8c](https://github.com/kubedb/mongodb/commit/06b04a8c) Prepare for release v0.12.0 (#402) +- [ae4e0cd1](https://github.com/kubedb/mongodb/commit/ae4e0cd1) Fix mongodb exporter error (#401) +- [11eb6ee8](https://github.com/kubedb/mongodb/commit/11eb6ee8) Prepare for release v0.12.0-rc.0 (#400) +- [dbf5cd16](https://github.com/kubedb/mongodb/commit/dbf5cd16) Update audit lib (#399) +- [a55bf1d5](https://github.com/kubedb/mongodb/commit/a55bf1d5) Limit go routine in health check (#394) +- [0a61c733](https://github.com/kubedb/mongodb/commit/0a61c733) Update TLS args for Exporter (#395) +- [80d3fec2](https://github.com/kubedb/mongodb/commit/80d3fec2) Send audit events if analytics enabled (#398) +- [8ac51d7e](https://github.com/kubedb/mongodb/commit/8ac51d7e) Create auditor if license file is provided (#397) +- [c6c4b380](https://github.com/kubedb/mongodb/commit/c6c4b380) Publish audit events (#396) +- [e261937a](https://github.com/kubedb/mongodb/commit/e261937a) Fix log level issue with klog (#393) +- [426afbfc](https://github.com/kubedb/mongodb/commit/426afbfc) Use kglog helper +- [24b7976c](https://github.com/kubedb/mongodb/commit/24b7976c) Use klog/v2 +- [0ace005d](https://github.com/kubedb/mongodb/commit/0ace005d) Update Kubernetes toolchain to v1.21.0 (#391) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.12.0](https://github.com/kubedb/mysql/releases/tag/v0.12.0) + +- [5fb0bf79](https://github.com/kubedb/mysql/commit/5fb0bf79) Prepare for release v0.12.0 (#393) +- [9533c528](https://github.com/kubedb/mysql/commit/9533c528) Prepare for release v0.12.0-rc.0 (#392) +- [f0313b17](https://github.com/kubedb/mysql/commit/f0313b17) Limit Health Checker goroutines (#385) +- [ab601a28](https://github.com/kubedb/mysql/commit/ab601a28) Use gomodules.xyz/password-generator v0.2.7 +- [782362db](https://github.com/kubedb/mysql/commit/782362db) Update audit library (#390) +- [1d36bacb](https://github.com/kubedb/mysql/commit/1d36bacb) Send audit events if analytics enabled (#389) +- [55a903a3](https://github.com/kubedb/mysql/commit/55a903a3) Create auditor if license file is provided (#388) +- [dc6f6ea5](https://github.com/kubedb/mysql/commit/dc6f6ea5) Publish audit events (#387) +- [75bd1a1c](https://github.com/kubedb/mysql/commit/75bd1a1c) Fix log level issue with klog for mysql (#386) +- [1014a393](https://github.com/kubedb/mysql/commit/1014a393) Use kglog helper +- [728fa299](https://github.com/kubedb/mysql/commit/728fa299) Use klog/v2 +- [80581df4](https://github.com/kubedb/mysql/commit/80581df4) Update Kubernetes toolchain to v1.21.0 (#383) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.19.0](https://github.com/kubedb/operator/releases/tag/v0.19.0) + +- [cd2b14ca](https://github.com/kubedb/operator/commit/cd2b14ca) Prepare for release v0.19.0 (#409) +- [e48d8929](https://github.com/kubedb/operator/commit/e48d8929) Disable api priority and fairness feature for webhook server (#408) +- [08daa22a](https://github.com/kubedb/operator/commit/08daa22a) 
Prepare for release v0.19.0-rc.0 (#407) +- [203ffa38](https://github.com/kubedb/operator/commit/203ffa38) Update audit lib (#406) +- [704a774f](https://github.com/kubedb/operator/commit/704a774f) Send audit events if analytics enabled (#405) +- [7e8f1be0](https://github.com/kubedb/operator/commit/7e8f1be0) Stop using gomodules.xyz/version +- [49d7d7f2](https://github.com/kubedb/operator/commit/49d7d7f2) Publish audit events (#404) +- [820d7372](https://github.com/kubedb/operator/commit/820d7372) Use kglog helper +- [396ae75f](https://github.com/kubedb/operator/commit/396ae75f) Update Kubernetes toolchain to v1.21.0 (#403) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.6.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.6.0) + +- [318d259d](https://github.com/kubedb/percona-xtradb/commit/318d259d) Prepare for release v0.6.0 (#203) +- [181a7d71](https://github.com/kubedb/percona-xtradb/commit/181a7d71) Disable api priority and fairness feature for webhook server (#202) +- [870e08df](https://github.com/kubedb/percona-xtradb/commit/870e08df) Prepare for release v0.6.0-rc.0 (#201) +- [f163f637](https://github.com/kubedb/percona-xtradb/commit/f163f637) Update audit lib (#200) +- [c42c3401](https://github.com/kubedb/percona-xtradb/commit/c42c3401) Send audit events if analytics enabled (#199) +- [e2ce3664](https://github.com/kubedb/percona-xtradb/commit/e2ce3664) Create auditor if license file is provided (#198) +- [3e85edb2](https://github.com/kubedb/percona-xtradb/commit/3e85edb2) Publish audit events (#197) +- [6f23031c](https://github.com/kubedb/percona-xtradb/commit/6f23031c) Use kglog helper +- [cc0e270a](https://github.com/kubedb/percona-xtradb/commit/cc0e270a) Use klog/v2 +- [a44e3347](https://github.com/kubedb/percona-xtradb/commit/a44e3347) Update Kubernetes toolchain to v1.21.0 (#195) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.3.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.3.0) + +- [d0c24fa](https://github.com/kubedb/pg-coordinator/commit/d0c24fa) Prepare for release v0.3.0 (#26) +- [3ca5f67](https://github.com/kubedb/pg-coordinator/commit/3ca5f67) Prepare for release v0.3.0-rc.0 (#25) +- [4ef7d95](https://github.com/kubedb/pg-coordinator/commit/4ef7d95) Update Client TLS Path for Postgres (#24) +- [7208199](https://github.com/kubedb/pg-coordinator/commit/7208199) Raft Version Update And Ops Request Fix (#23) +- [5adb304](https://github.com/kubedb/pg-coordinator/commit/5adb304) Use klog/v2 (#19) +- [a9b3f16](https://github.com/kubedb/pg-coordinator/commit/a9b3f16) Use klog/v2 + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.6.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.6.0) + +- [3ab2f55a](https://github.com/kubedb/pgbouncer/commit/3ab2f55a) Prepare for release v0.6.0 (#163) +- [ae89b9a6](https://github.com/kubedb/pgbouncer/commit/ae89b9a6) Disable api priority and fairness feature for webhook server (#162) +- [cbba6969](https://github.com/kubedb/pgbouncer/commit/cbba6969) Prepare for release v0.6.0-rc.0 (#161) +- [bc6428cd](https://github.com/kubedb/pgbouncer/commit/bc6428cd) Update audit lib (#160) +- [442f0635](https://github.com/kubedb/pgbouncer/commit/442f0635) Send audit events if analytics enabled (#159) +- [2ebaf4bb](https://github.com/kubedb/pgbouncer/commit/2ebaf4bb) Create auditor if license file is provided (#158) +- [4e3f115d](https://github.com/kubedb/pgbouncer/commit/4e3f115d) Publish audit events (#157) +- 
[1ed2f883](https://github.com/kubedb/pgbouncer/commit/1ed2f883) Use kglog helper +- [870cf108](https://github.com/kubedb/pgbouncer/commit/870cf108) Use klog/v2 +- [11c2ac03](https://github.com/kubedb/pgbouncer/commit/11c2ac03) Update Kubernetes toolchain to v1.21.0 (#155) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.19.0](https://github.com/kubedb/postgres/releases/tag/v0.19.0) + +- [d10c5e40](https://github.com/kubedb/postgres/commit/d10c5e40) Prepare for release v0.19.0 (#509) +- [06fcab6e](https://github.com/kubedb/postgres/commit/06fcab6e) Prepare for release v0.19.0-rc.0 (#508) +- [5c0e0fa2](https://github.com/kubedb/postgres/commit/5c0e0fa2) Run All DB Pod's Container with Custom-UID (#507) +- [9496dadf](https://github.com/kubedb/postgres/commit/9496dadf) Update audit lib (#506) +- [d51cdfdd](https://github.com/kubedb/postgres/commit/d51cdfdd) Limit Health Check for Postgres (#504) +- [24851ba8](https://github.com/kubedb/postgres/commit/24851ba8) Send audit events if analytics enabled (#505) +- [faecf01d](https://github.com/kubedb/postgres/commit/faecf01d) Create auditor if license file is provided (#503) +- [8d4bf26b](https://github.com/kubedb/postgres/commit/8d4bf26b) Stop using gomodules.xyz/version (#501) +- [906c678e](https://github.com/kubedb/postgres/commit/906c678e) Publish audit events (#500) +- [c6afe209](https://github.com/kubedb/postgres/commit/c6afe209) Fix: Log Level Issue with klog (#496) +- [2a910034](https://github.com/kubedb/postgres/commit/2a910034) Use kglog helper +- [a4e685d6](https://github.com/kubedb/postgres/commit/a4e685d6) Use klog/v2 +- [ee9a9d15](https://github.com/kubedb/postgres/commit/ee9a9d15) Update Kubernetes toolchain to v1.21.0 (#492) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.6.0](https://github.com/kubedb/proxysql/releases/tag/v0.6.0) + +- [08e892e2](https://github.com/kubedb/proxysql/commit/08e892e2) Prepare for release v0.6.0 (#181) +- [ecd32aea](https://github.com/kubedb/proxysql/commit/ecd32aea) Disable api priority and fairness feature for webhook server (#180) +- [ba5ec48b](https://github.com/kubedb/proxysql/commit/ba5ec48b) Prepare for release v0.6.0-rc.0 (#179) +- [9770fa0d](https://github.com/kubedb/proxysql/commit/9770fa0d) Update audit lib (#178) +- [3e307411](https://github.com/kubedb/proxysql/commit/3e307411) Send audit events if analytics enabled (#177) +- [790b57ed](https://github.com/kubedb/proxysql/commit/790b57ed) Create auditor if license file is provided (#176) +- [6e6c9ba1](https://github.com/kubedb/proxysql/commit/6e6c9ba1) Publish audit events (#175) +- [df2937ed](https://github.com/kubedb/proxysql/commit/df2937ed) Use kglog helper +- [2ca12e48](https://github.com/kubedb/proxysql/commit/2ca12e48) Use klog/v2 +- [3796f730](https://github.com/kubedb/proxysql/commit/3796f730) Update Kubernetes toolchain to v1.21.0 (#173) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.12.0](https://github.com/kubedb/redis/releases/tag/v0.12.0) + +- [a29ff99d](https://github.com/kubedb/redis/commit/a29ff99d) Prepare for release v0.12.0 (#326) +- [a1392dee](https://github.com/kubedb/redis/commit/a1392dee) Disable api priority and fairness feature for webhook server (#325) +- [0c15054c](https://github.com/kubedb/redis/commit/0c15054c) Prepare for release v0.12.0-rc.0 (#324) +- [5a5ec318](https://github.com/kubedb/redis/commit/5a5ec318) Update audit lib (#323) +- [6673f940](https://github.com/kubedb/redis/commit/6673f940) Limit Health Check go-routine Redis (#321) 
+- [e945029e](https://github.com/kubedb/redis/commit/e945029e) Send audit events if analytics enabled (#322) +- [3715ff10](https://github.com/kubedb/redis/commit/3715ff10) Create auditor if license file is provided (#320) +- [9d5d90a9](https://github.com/kubedb/redis/commit/9d5d90a9) Add auditor handler +- [5004f56c](https://github.com/kubedb/redis/commit/5004f56c) Publish audit events (#319) +- [146b3863](https://github.com/kubedb/redis/commit/146b3863) Use kglog helper +- [71d8ced8](https://github.com/kubedb/redis/commit/71d8ced8) Use klog/v2 +- [4900a564](https://github.com/kubedb/redis/commit/4900a564) Update Kubernetes toolchain to v1.21.0 (#317) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.6.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.6.0) + +- [c1af00f](https://github.com/kubedb/replication-mode-detector/commit/c1af00f) Prepare for release v0.6.0 (#144) +- [1382382](https://github.com/kubedb/replication-mode-detector/commit/1382382) Prepare for release v0.6.0-rc.0 (#143) +- [feba070](https://github.com/kubedb/replication-mode-detector/commit/feba070) Remove glog dependency +- [fd757b4](https://github.com/kubedb/replication-mode-detector/commit/fd757b4) Use kglog helper +- [8ba20a3](https://github.com/kubedb/replication-mode-detector/commit/8ba20a3) Update repository config (#142) +- [eece885](https://github.com/kubedb/replication-mode-detector/commit/eece885) Use klog/v2 +- [e30c050](https://github.com/kubedb/replication-mode-detector/commit/e30c050) Use Kubernetes v1.21.0 toolchain (#140) +- [8e7b7c2](https://github.com/kubedb/replication-mode-detector/commit/8e7b7c2) Use Kubernetes v1.21.0 toolchain (#139) +- [6bceb2f](https://github.com/kubedb/replication-mode-detector/commit/6bceb2f) Use Kubernetes v1.21.0 toolchain (#138) +- [0fe720e](https://github.com/kubedb/replication-mode-detector/commit/0fe720e) Use Kubernetes v1.21.0 toolchain (#137) +- [8c54b2a](https://github.com/kubedb/replication-mode-detector/commit/8c54b2a) Update Kubernetes toolchain to v1.21.0 (#136) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.4.0](https://github.com/kubedb/tests/releases/tag/v0.4.0) + +- [c6f1adc](https://github.com/kubedb/tests/commit/c6f1adc) Prepare for release v0.4.0 (#125) +- [b6b4be3](https://github.com/kubedb/tests/commit/b6b4be3) Prepare for release v0.4.0-rc.0 (#124) +- [62e6b50](https://github.com/kubedb/tests/commit/62e6b50) Fix locking in ResourceMapper (#123) +- [a855fab](https://github.com/kubedb/tests/commit/a855fab) Update dependencies (#122) +- [7d5b1a4](https://github.com/kubedb/tests/commit/7d5b1a4) Use kglog helper +- [a08eee4](https://github.com/kubedb/tests/commit/a08eee4) Use klog/v2 +- [ed1afd4](https://github.com/kubedb/tests/commit/ed1afd4) Use Kubernetes v1.21.0 toolchain (#120) +- [ccb54f1](https://github.com/kubedb/tests/commit/ccb54f1) Use Kubernetes v1.21.0 toolchain (#119) +- [2a6f06d](https://github.com/kubedb/tests/commit/2a6f06d) Use Kubernetes v1.21.0 toolchain (#118) +- [7fb99f7](https://github.com/kubedb/tests/commit/7fb99f7) Use Kubernetes v1.21.0 toolchain (#117) +- [aaa0647](https://github.com/kubedb/tests/commit/aaa0647) Update Kubernetes toolchain to v1.21.0 (#116) +- [79d815d](https://github.com/kubedb/tests/commit/79d815d) Fix Elasticsearch status check while creating the client (#114) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.08.23.md b/content/docs/v2024.1.31/CHANGELOG-v2021.08.23.md new file mode 100644 index 
0000000000..3bad536754 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.08.23.md @@ -0,0 +1,341 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.08.23 + name: Changelog-v2021.08.23 + parent: welcome + weight: 20210823 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.08.23/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.08.23/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.08.23 (2021-08-23) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.5.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.5.0) + +- [7cc44de](https://github.com/appscode/kubedb-autoscaler/commit/7cc44de) Prepare for release v0.5.0 (#31) +- [4df9212](https://github.com/appscode/kubedb-autoscaler/commit/4df9212) Restrict Community Edition to demo namespace (#30) +- [02afaf3](https://github.com/appscode/kubedb-autoscaler/commit/02afaf3) Update repository config (#29) +- [9cdb19f](https://github.com/appscode/kubedb-autoscaler/commit/9cdb19f) Update dependencies (#28) +- [faa4b65](https://github.com/appscode/kubedb-autoscaler/commit/faa4b65) Remove repetitive 403 errors from validator and mutators +- [17f9798](https://github.com/appscode/kubedb-autoscaler/commit/17f9798) Stop using deprecated api kind in k8s 1.22 + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.7.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.7.0) + +- [fdb9dbbd](https://github.com/appscode/kubedb-enterprise/commit/fdb9dbbd) Prepare for release v0.7.0 (#211) +- [6d6bcdba](https://github.com/appscode/kubedb-enterprise/commit/6d6bcdba) Fix MongoDB log verbosity (#208) +- [b9924447](https://github.com/appscode/kubedb-enterprise/commit/b9924447) Add Auth Secret support in Redis (#210) +- [28876f9e](https://github.com/appscode/kubedb-enterprise/commit/28876f9e) Restrict Community Edition to demo namespace (#209) +- [ccdca5bb](https://github.com/appscode/kubedb-enterprise/commit/ccdca5bb) Update repository config (#206) +- [bea1f2b4](https://github.com/appscode/kubedb-enterprise/commit/bea1f2b4) Update dependencies (#205) +- [f1dc4980](https://github.com/appscode/kubedb-enterprise/commit/f1dc4980) Remove repetitive 403 errors from validator and mutators +- [e6e56cf4](https://github.com/appscode/kubedb-enterprise/commit/e6e56cf4) Remove Panic for Redis (#204) +- [2b0fd429](https://github.com/appscode/kubedb-enterprise/commit/2b0fd429) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.20.0](https://github.com/kubedb/apimachinery/releases/tag/v0.20.0) + +- [c8377128](https://github.com/kubedb/apimachinery/commit/c8377128) Add constant for Redis (#783) +- [86bf3060](https://github.com/kubedb/apimachinery/commit/86bf3060) Remove Auth secret after Deletion of Redis (#781) +- [a982c412](https://github.com/kubedb/apimachinery/commit/a982c412) Add KubeDB distribution for Elasticsearch images (#782) +- [a8eb885c](https://github.com/kubedb/apimachinery/commit/a8eb885c) Revert back to old structure +- [a3581bc6](https://github.com/kubedb/apimachinery/commit/a3581bc6) Move secret and statefulset informer to Controller (#780) 
+- [3327d4c1](https://github.com/kubedb/apimachinery/commit/3327d4c1) Remove WatchNamespace from Config +- [5465bfd4](https://github.com/kubedb/apimachinery/commit/5465bfd4) Restrict watchers to a namespace (#779) +- [d5e158ef](https://github.com/kubedb/apimachinery/commit/d5e158ef) Add support for Elasticsearch secure settings (#777) +- [e6ff1fd6](https://github.com/kubedb/apimachinery/commit/e6ff1fd6) Update documentation for enableSSL field (#778) +- [f011f893](https://github.com/kubedb/apimachinery/commit/f011f893) Update repository config (#776) +- [ba3edad2](https://github.com/kubedb/apimachinery/commit/ba3edad2) Update repository config (#775) +- [2c8e7ea7](https://github.com/kubedb/apimachinery/commit/2c8e7ea7) Test crds (#774) +- [89c85a98](https://github.com/kubedb/apimachinery/commit/89c85a98) Remove panic for Redis (#773) +- [24bc990a](https://github.com/kubedb/apimachinery/commit/24bc990a) Update dependencies +- [7fe9731b](https://github.com/kubedb/apimachinery/commit/7fe9731b) Only generate crd v1 yamls (#772) +- [1ce46330](https://github.com/kubedb/apimachinery/commit/1ce46330) Rename Opendistro for Elasticsearch performance analyzer port name (#770) +- [1c1e6ef0](https://github.com/kubedb/apimachinery/commit/1c1e6ef0) Allow customizing resource settings for pg coordinator container (#771) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.20.0](https://github.com/kubedb/cli/releases/tag/v0.20.0) + +- [d5f1dbd4](https://github.com/kubedb/cli/commit/d5f1dbd4) Prepare for release v0.20.0 (#619) +- [2e84e662](https://github.com/kubedb/cli/commit/2e84e662) Add `show-credentials` commands (#618) +- [79ccacd2](https://github.com/kubedb/cli/commit/79ccacd2) Add pause/resume support (#597) +- [e1d6e1ef](https://github.com/kubedb/cli/commit/e1d6e1ef) Add `restart` command (#599) +- [3fa6c894](https://github.com/kubedb/cli/commit/3fa6c894) Update dependencies (#617) +- [998ec275](https://github.com/kubedb/cli/commit/998ec275) Update dependencies (#616) +- [231371de](https://github.com/kubedb/cli/commit/231371de) Update dependencies (#615) +- [654dd914](https://github.com/kubedb/cli/commit/654dd914) Update dependencies (#614) +- [3e1f519a](https://github.com/kubedb/cli/commit/3e1f519a) Update dependencies (#613) +- [1a7cdbc7](https://github.com/kubedb/cli/commit/1a7cdbc7) Update dependencies (#611) +- [2fcf8e6d](https://github.com/kubedb/cli/commit/2fcf8e6d) Stop using deprecated api kinds in k8s 1.22 + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.20.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.20.0) + +- [0568ed61](https://github.com/kubedb/elasticsearch/commit/0568ed61) Prepare for release v0.20.0 (#511) +- [da28bf06](https://github.com/kubedb/elasticsearch/commit/da28bf06) Add support for Hot-Warm Clustering for OpenDistro of Elasticsearch (#506) +- [5859b723](https://github.com/kubedb/elasticsearch/commit/5859b723) Add support for secure settings (#509) +- [e9478903](https://github.com/kubedb/elasticsearch/commit/e9478903) Restrict Community Edition to demo namespace (#510) +- [7e8d91c5](https://github.com/kubedb/elasticsearch/commit/7e8d91c5) Update repository config (#508) +- [dc5d7fed](https://github.com/kubedb/elasticsearch/commit/dc5d7fed) Update dependencies (#507) +- [f472c1ea](https://github.com/kubedb/elasticsearch/commit/f472c1ea) Stop using deprecated api kinds in k8s 1.22 +- [ee479fa6](https://github.com/kubedb/elasticsearch/commit/ee479fa6) Only points to the ingest nodes from stats service (#505) +- 
[6b576eb9](https://github.com/kubedb/elasticsearch/commit/6b576eb9) Fix repetitive patch-ing (#504) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.08.23](https://github.com/kubedb/installer/releases/tag/v2021.08.23) + +- [2a48613](https://github.com/kubedb/installer/commit/2a48613) Prepare for release v2021.08.23 (#345) +- [e2feb7c](https://github.com/kubedb/installer/commit/e2feb7c) Add support for Elasticsearch v7.14.0 and pre-built images with snapshot plugins (#344) +- [e62be83](https://github.com/kubedb/installer/commit/e62be83) Use mongodb official images (#343) +- [aaf9f13](https://github.com/kubedb/installer/commit/aaf9f13) Update dependencies (#341) +- [f852ebc](https://github.com/kubedb/installer/commit/f852ebc) Update dependencies (#340) +- [799620d](https://github.com/kubedb/installer/commit/799620d) Fix metrics for resource calculation (#338) +- [2aa7aba](https://github.com/kubedb/installer/commit/2aa7aba) Move metrics to its own chart (#337) +- [62e359d](https://github.com/kubedb/installer/commit/62e359d) Update MongoDB resource metrics (#336) +- [575332d](https://github.com/kubedb/installer/commit/575332d) Add redis cluster metrics (#333) +- [d339405](https://github.com/kubedb/installer/commit/d339405) Update repository config (#335) +- [13a4438](https://github.com/kubedb/installer/commit/13a4438) Update repository config (#334) +- [4b25281](https://github.com/kubedb/installer/commit/4b25281) Update repository config (#332) +- [2225d30](https://github.com/kubedb/installer/commit/2225d30) Update dependencies (#331) +- [5aaf750](https://github.com/kubedb/installer/commit/5aaf750) Update function names in metrics configuration +- [d64a4e4](https://github.com/kubedb/installer/commit/d64a4e4) Add metrics config for redis (#326) +- [9268c0f](https://github.com/kubedb/installer/commit/9268c0f) Rename functions in metrics configuration (#330) +- [c4be2d2](https://github.com/kubedb/installer/commit/c4be2d2) Add elasticsearch metrics configurations (#322) +- [c313657](https://github.com/kubedb/installer/commit/c313657) Add MongoDB metrics configurations (#321) +- [c8aa742](https://github.com/kubedb/installer/commit/c8aa742) Add metrics config for mysql (#325) +- [2354cab](https://github.com/kubedb/installer/commit/2354cab) Add MariaDB metrics configurations (#324) +- [8e42490](https://github.com/kubedb/installer/commit/8e42490) Add postgres metrics configurations (#323) +- [fd2deb8](https://github.com/kubedb/installer/commit/fd2deb8) Remove etcd from catalog (#329) +- [344f45a](https://github.com/kubedb/installer/commit/344f45a) Update Elasticsearch exporter images (#320) +- [bc18aed](https://github.com/kubedb/installer/commit/bc18aed) Update chart docs +- [d43e394](https://github.com/kubedb/installer/commit/d43e394) Update kubedb chart dependencies via Makefile (#328) +- [574fc22](https://github.com/kubedb/installer/commit/574fc22) Stop using deprecated api kinds in 1.22 (#327) +- [d957f4d](https://github.com/kubedb/installer/commit/d957f4d) Sort crd yamls by GK +- [930bacf](https://github.com/kubedb/installer/commit/930bacf) Merge metrics chart into crds +- [c4b8659](https://github.com/kubedb/installer/commit/c4b8659) Pass image pull secrets to cleaner images (#319) +- [5f391d8](https://github.com/kubedb/installer/commit/5f391d8) Rename user-roles.yaml to metrics-user-roles.yaml +- [892e6da](https://github.com/kubedb/installer/commit/892e6da) Add kubedb-metrics chart (#318) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### 
[v0.4.0](https://github.com/kubedb/mariadb/releases/tag/v0.4.0) + +- [55924eb9](https://github.com/kubedb/mariadb/commit/55924eb9) Prepare for release v0.4.0 (#85) +- [355da6ee](https://github.com/kubedb/mariadb/commit/355da6ee) Restrict Community Edition to demo namespace (#84) +- [b284cde6](https://github.com/kubedb/mariadb/commit/b284cde6) Update repository config (#83) +- [8d25306b](https://github.com/kubedb/mariadb/commit/8d25306b) Update repository config (#82) +- [c7c999a0](https://github.com/kubedb/mariadb/commit/c7c999a0) Update dependencies (#81) +- [7ac44469](https://github.com/kubedb/mariadb/commit/7ac44469) Fix repetitive patch issue in MariaDB (#79) +- [24fe8f76](https://github.com/kubedb/mariadb/commit/24fe8f76) Stop using deprecated api kinds in k8s 1.22 + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.13.0](https://github.com/kubedb/memcached/releases/tag/v0.13.0) + +- [98287a7b](https://github.com/kubedb/memcached/commit/98287a7b) Prepare for release v0.13.0 (#306) +- [5ab5673d](https://github.com/kubedb/memcached/commit/5ab5673d) Restrict Community Edition to demo namespace (#305) +- [399bc04e](https://github.com/kubedb/memcached/commit/399bc04e) Update repository config (#304) +- [95694c2f](https://github.com/kubedb/memcached/commit/95694c2f) Update repository config (#303) +- [6fb5b271](https://github.com/kubedb/memcached/commit/6fb5b271) Update dependencies (#302) +- [2d3550fc](https://github.com/kubedb/memcached/commit/2d3550fc) Stop using api kinds deprecated in k8s 1.22 + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.13.0](https://github.com/kubedb/mongodb/releases/tag/v0.13.0) + +- [34999e07](https://github.com/kubedb/mongodb/commit/34999e07) Prepare for release v0.13.0 (#409) +- [9dadff03](https://github.com/kubedb/mongodb/commit/9dadff03) Restrict Community Edition to demo namespace (#408) +- [20e8d6d7](https://github.com/kubedb/mongodb/commit/20e8d6d7) Update repository config (#407) +- [410dc379](https://github.com/kubedb/mongodb/commit/410dc379) Update repository config (#406) +- [ba8ea791](https://github.com/kubedb/mongodb/commit/ba8ea791) Update dependencies (#404) +- [fb5f5257](https://github.com/kubedb/mongodb/commit/fb5f5257) Stop using api kinds deprecated in k8s 1.22 +- [e38b4aa5](https://github.com/kubedb/mongodb/commit/e38b4aa5) Fix repetitive patch-ing (#403) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.13.0](https://github.com/kubedb/mysql/releases/tag/v0.13.0) + +- [2700a224](https://github.com/kubedb/mysql/commit/2700a224) Prepare for release v0.13.0 (#399) +- [83b1b7e9](https://github.com/kubedb/mysql/commit/83b1b7e9) Restrict Community Edition to demo namespace (#398) +- [1a0dcd47](https://github.com/kubedb/mysql/commit/1a0dcd47) Update repository config (#397) +- [d59061fd](https://github.com/kubedb/mysql/commit/d59061fd) Update repository config (#396) +- [3066cd54](https://github.com/kubedb/mysql/commit/3066cd54) Update dependencies (#395) +- [ef8b78d1](https://github.com/kubedb/mysql/commit/ef8b78d1) Fix Repetitive Patch Issue MySQL (#394) +- [99bcc275](https://github.com/kubedb/mysql/commit/99bcc275) Stop using api kinds removed in k8s 1.22 + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.20.0](https://github.com/kubedb/operator/releases/tag/v0.20.0) + +- [c24b636e](https://github.com/kubedb/operator/commit/c24b636e) Prepare for release v0.20.0 (#416) +- [7f8b0526](https://github.com/kubedb/operator/commit/7f8b0526) Restrict Community 
Edition to demo namespace (#415) +- [a6759d58](https://github.com/kubedb/operator/commit/a6759d58) Remove FOSSA link +- [cc01486f](https://github.com/kubedb/operator/commit/cc01486f) Update repository config (#414) +- [042f8feb](https://github.com/kubedb/operator/commit/042f8feb) Update repository config (#413) +- [2a803113](https://github.com/kubedb/operator/commit/2a803113) Update dependencies (#412) +- [bf047c47](https://github.com/kubedb/operator/commit/bf047c47) Remove dependency on github.com/satori/go.uuid (#411) +- [614b930a](https://github.com/kubedb/operator/commit/614b930a) Remove repetitive 403 errors from validator and mutators +- [03d053e7](https://github.com/kubedb/operator/commit/03d053e7) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.7.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.7.0) + +- [0bcb7e8c](https://github.com/kubedb/percona-xtradb/commit/0bcb7e8c) Prepare for release v0.7.0 (#208) +- [5c94a9b0](https://github.com/kubedb/percona-xtradb/commit/5c94a9b0) Restrict Community Edition to demo namespace (#207) +- [bd6740df](https://github.com/kubedb/percona-xtradb/commit/bd6740df) Update repository config (#206) +- [67fcc570](https://github.com/kubedb/percona-xtradb/commit/67fcc570) Update repository config (#205) +- [2dfcfd3f](https://github.com/kubedb/percona-xtradb/commit/2dfcfd3f) Update dependencies (#204) +- [1a0aa385](https://github.com/kubedb/percona-xtradb/commit/1a0aa385) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.4.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.4.0) + +- [d5509cd](https://github.com/kubedb/pg-coordinator/commit/d5509cd) Prepare for release v0.4.0 (#32) +- [d2361d3](https://github.com/kubedb/pg-coordinator/commit/d2361d3) Update dependencies (#31) +- [7d74cb5](https://github.com/kubedb/pg-coordinator/commit/7d74cb5) Update dependencies (#30) +- [6d88615](https://github.com/kubedb/pg-coordinator/commit/6d88615) Update repository config (#29) +- [2c6007f](https://github.com/kubedb/pg-coordinator/commit/2c6007f) Update repository config (#28) +- [71cfe14](https://github.com/kubedb/pg-coordinator/commit/71cfe14) Update dependencies (#27) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.7.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.7.0) + +- [94075616](https://github.com/kubedb/pgbouncer/commit/94075616) Prepare for release v0.7.0 (#168) +- [38e7bbef](https://github.com/kubedb/pgbouncer/commit/38e7bbef) Restrict Community Edition to demo namespace (#167) +- [5ba4e255](https://github.com/kubedb/pgbouncer/commit/5ba4e255) Update repository config (#166) +- [f73bf32e](https://github.com/kubedb/pgbouncer/commit/f73bf32e) Update repository config (#165) +- [70b6363b](https://github.com/kubedb/pgbouncer/commit/70b6363b) Update dependencies (#164) +- [e452aef3](https://github.com/kubedb/pgbouncer/commit/e452aef3) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.20.0](https://github.com/kubedb/postgres/releases/tag/v0.20.0) + +- [d3eec559](https://github.com/kubedb/postgres/commit/d3eec559) Prepare for release v0.20.0 (#515) +- [ea11e98d](https://github.com/kubedb/postgres/commit/ea11e98d) Restrict Community Edition to demo namespace (#514) +- [d3d17bfc](https://github.com/kubedb/postgres/commit/d3d17bfc) Update repository config (#513) +- 
[698c72b5](https://github.com/kubedb/postgres/commit/698c72b5) Update repository config (#512) +- [4bd33547](https://github.com/kubedb/postgres/commit/4bd33547) Update dependencies (#511) +- [39f294df](https://github.com/kubedb/postgres/commit/39f294df) Stop using api versions deprecated in k8s 1.22 +- [6cb08c0f](https://github.com/kubedb/postgres/commit/6cb08c0f) Fix Continuous Statefulset Patching (#510) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.7.0](https://github.com/kubedb/proxysql/releases/tag/v0.7.0) + +- [8e3d6afb](https://github.com/kubedb/proxysql/commit/8e3d6afb) Prepare for release v0.7.0 (#187) +- [8c569877](https://github.com/kubedb/proxysql/commit/8c569877) Restrict Community Edition to demo namespace (#186) +- [4510a5e2](https://github.com/kubedb/proxysql/commit/4510a5e2) Restrict Community Edition to demo namespace (#185) +- [967480a1](https://github.com/kubedb/proxysql/commit/967480a1) Update repository config (#184) +- [9fd454dc](https://github.com/kubedb/proxysql/commit/9fd454dc) Update repository config (#183) +- [2c2dbba7](https://github.com/kubedb/proxysql/commit/2c2dbba7) Update dependencies (#182) +- [44f352d0](https://github.com/kubedb/proxysql/commit/44f352d0) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.13.0](https://github.com/kubedb/redis/releases/tag/v0.13.0) + +- [4e331b2c](https://github.com/kubedb/redis/commit/4e331b2c) Prepare for release v0.13.0 (#334) +- [cefba860](https://github.com/kubedb/redis/commit/cefba860) Add Auth Secret in Redis (#333) +- [b412a862](https://github.com/kubedb/redis/commit/b412a862) Restrict Community Edition to demo namespace (#332) +- [765c767c](https://github.com/kubedb/redis/commit/765c767c) Restrict Community Edition to demo namespace (#331) +- [f5b0af55](https://github.com/kubedb/redis/commit/f5b0af55) Update repository config (#330) +- [7004f37f](https://github.com/kubedb/redis/commit/7004f37f) Update repository config (#329) +- [bbe8a4fe](https://github.com/kubedb/redis/commit/bbe8a4fe) Update dependencies (#328) +- [4bd2a852](https://github.com/kubedb/redis/commit/4bd2a852) Remove panic And Fix Repetitive Statefulset patch (#327) +- [d941d122](https://github.com/kubedb/redis/commit/d941d122) Stop using api versions removed in k8s 1.22 + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.7.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.7.0) + +- [5acb289](https://github.com/kubedb/replication-mode-detector/commit/5acb289) Prepare for release v0.7.0 (#153) +- [779ca5f](https://github.com/kubedb/replication-mode-detector/commit/779ca5f) Update dependencies (#152) +- [07e8917](https://github.com/kubedb/replication-mode-detector/commit/07e8917) Update dependencies (#151) +- [7184fc6](https://github.com/kubedb/replication-mode-detector/commit/7184fc6) Update dependencies (#150) +- [fe3d8e3](https://github.com/kubedb/replication-mode-detector/commit/fe3d8e3) Update dependencies (#149) +- [cc1b4e9](https://github.com/kubedb/replication-mode-detector/commit/cc1b4e9) Update dependencies (#148) +- [e80918a](https://github.com/kubedb/replication-mode-detector/commit/e80918a) Update repository config (#147) +- [b17c5e4](https://github.com/kubedb/replication-mode-detector/commit/b17c5e4) Update dependencies (#145) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.5.0](https://github.com/kubedb/tests/releases/tag/v0.5.0) + +- 
[8526856](https://github.com/kubedb/tests/commit/8526856) Prepare for release v0.5.0 (#133) +- [9d517da](https://github.com/kubedb/tests/commit/9d517da) Update dependencies (#132) +- [66c9042](https://github.com/kubedb/tests/commit/66c9042) Update dependencies (#131) +- [712e2e3](https://github.com/kubedb/tests/commit/712e2e3) Update dependencies (#130) +- [b92a730](https://github.com/kubedb/tests/commit/b92a730) Update dependencies (#129) +- [2c4bd02](https://github.com/kubedb/tests/commit/2c4bd02) Update dependencies (#128) +- [f8bb956](https://github.com/kubedb/tests/commit/f8bb956) Update repository config (#127) +- [0d2e0d6](https://github.com/kubedb/tests/commit/0d2e0d6) Update dependencies (#126) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.09.09.md b/content/docs/v2024.1.31/CHANGELOG-v2021.09.09.md new file mode 100644 index 0000000000..1e2401d7c7 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.09.09.md @@ -0,0 +1,293 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.09.09 + name: Changelog-v2021.09.09 + parent: welcome + weight: 20210909 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.09.09/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.09.09/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.09.09 (2021-09-09) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.6.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.6.0) + +- [8ab6437](https://github.com/appscode/kubedb-autoscaler/commit/8ab6437) Prepare for release v0.6.0 (#35) +- [b517f2f](https://github.com/appscode/kubedb-autoscaler/commit/b517f2f) Update dependencies (#34) +- [0bc78a1](https://github.com/appscode/kubedb-autoscaler/commit/0bc78a1) Update repository config (#33) +- [8cd4758](https://github.com/appscode/kubedb-autoscaler/commit/8cd4758) Update dependencies (#32) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.8.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.8.0) + +- [4495e7a6](https://github.com/appscode/kubedb-enterprise/commit/4495e7a6) Prepare for release v0.8.0 (#222) +- [74a74d0c](https://github.com/appscode/kubedb-enterprise/commit/74a74d0c) Update dependencies (#221) +- [784a2d3a](https://github.com/appscode/kubedb-enterprise/commit/784a2d3a) Fix: Ensure replica running before starting OpsReq Postgres (#203) +- [2993ba08](https://github.com/appscode/kubedb-enterprise/commit/2993ba08) MongoDB Add feature compatibility version (#218) +- [ba21d8a4](https://github.com/appscode/kubedb-enterprise/commit/ba21d8a4) Use reconciler in mongodb upgrade (#219) +- [4c07fb13](https://github.com/appscode/kubedb-enterprise/commit/4c07fb13) Always stop the ticker channel in run parallel function (#212) +- [af99f555](https://github.com/appscode/kubedb-enterprise/commit/af99f555) Update dependencies (#216) +- [19d3b43e](https://github.com/appscode/kubedb-enterprise/commit/19d3b43e) Update dependencies (#215) +- [d3e07d34](https://github.com/appscode/kubedb-enterprise/commit/d3e07d34) Update repository config (#214) +- [9ce72e12](https://github.com/appscode/kubedb-enterprise/commit/9ce72e12) Update dependencies (#213) + + + +## 
[kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.21.0](https://github.com/kubedb/apimachinery/releases/tag/v0.21.0) + +- [83d57466](https://github.com/kubedb/apimachinery/commit/83d57466) Add CoordinatorDefaultResources variable +- [9ae6c818](https://github.com/kubedb/apimachinery/commit/9ae6c818) Remove resources from Postgres leader election config (#798) +- [76460f61](https://github.com/kubedb/apimachinery/commit/76460f61) Add Constants for Volumes (#796) +- [51ccad08](https://github.com/kubedb/apimachinery/commit/51ccad08) Add Coordinator to spec (#797) +- [1f545e61](https://github.com/kubedb/apimachinery/commit/1f545e61) Add MariaDBVersionCoordinator field in MariaDBVersions (#763) +- [ca19310d](https://github.com/kubedb/apimachinery/commit/ca19310d) Update constants for Postgres (#792) +- [992070f1](https://github.com/kubedb/apimachinery/commit/992070f1) Update the default pg-coordinator params (#791) +- [9d9bc758](https://github.com/kubedb/apimachinery/commit/9d9bc758) Fix conversion v1alpha1 <-> v1alpha2 (#790) +- [2a901017](https://github.com/kubedb/apimachinery/commit/2a901017) Fix v1alpha1 <-> v1alpha2 conversion (#789) +- [e6708c20](https://github.com/kubedb/apimachinery/commit/e6708c20) Add conversion code for v1alpha1 to v1alpha2 api (#788) +- [92ccb751](https://github.com/kubedb/apimachinery/commit/92ccb751) Update repository config (#787) +- [f6c82136](https://github.com/kubedb/apimachinery/commit/f6c82136) Fix build +- [356e942f](https://github.com/kubedb/apimachinery/commit/356e942f) Validate monitoring agent type (#786) +- [32f03647](https://github.com/kubedb/apimachinery/commit/32f03647) Use formatted logs (#785) +- [a1617830](https://github.com/kubedb/apimachinery/commit/a1617830) Enable ES volume expansion ops request for all node roles +- [c8889bb7](https://github.com/kubedb/apimachinery/commit/c8889bb7) Enable ES ops request for every node role type (#784) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.21.0](https://github.com/kubedb/cli/releases/tag/v0.21.0) + +- [79ac24b5](https://github.com/kubedb/cli/commit/79ac24b5) Prepare for release v0.21.0 (#627) +- [0d5c3c4d](https://github.com/kubedb/cli/commit/0d5c3c4d) Update dependencies (#626) +- [c1d8053d](https://github.com/kubedb/cli/commit/c1d8053d) Don't show error in pause/resume when stash is not installed (#625) +- [91e53231](https://github.com/kubedb/cli/commit/91e53231) Fix --namespace flag parsing (#624) +- [5f18ef12](https://github.com/kubedb/cli/commit/5f18ef12) Update repository config (#623) +- [34e61e39](https://github.com/kubedb/cli/commit/34e61e39) Update dependencies (#622) +- [e29f8aca](https://github.com/kubedb/cli/commit/e29f8aca) Update dependencies (#621) +- [cee7dfb9](https://github.com/kubedb/cli/commit/cee7dfb9) Update dependencies (#620) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.21.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.21.0) + +- [c6927525](https://github.com/kubedb/elasticsearch/commit/c6927525) Prepare for release v0.21.0 (#518) +- [7505e9f6](https://github.com/kubedb/elasticsearch/commit/7505e9f6) Update dependencies (#517) +- [22bc1e6b](https://github.com/kubedb/elasticsearch/commit/22bc1e6b) Use CreateJSONMergePatch (#515) +- [039f67e1](https://github.com/kubedb/elasticsearch/commit/039f67e1) Update repository config (#514) +- [b670f85e](https://github.com/kubedb/elasticsearch/commit/b670f85e) Fix panic for unknown monitoring agent type (#513) + + + +## 
[kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.09.09](https://github.com/kubedb/installer/releases/tag/v2021.09.09) + +- [4e4e872](https://github.com/kubedb/installer/commit/4e4e872) Prepare for release v2021.09.09 (#356) +- [ffc283d](https://github.com/kubedb/installer/commit/ffc283d) Update spec for a given api kind (#355) +- [3cd6181](https://github.com/kubedb/installer/commit/3cd6181) Update dependencies (#354) +- [dfbe558](https://github.com/kubedb/installer/commit/dfbe558) Add MariaDB Coordinator | Add MariaDB Version 10.6.4 (#342) +- [711dcab](https://github.com/kubedb/installer/commit/711dcab) Add MongoDB 5.0.2 (#353) +- [0b2b3de](https://github.com/kubedb/installer/commit/0b2b3de) Update dependencies +- [5b64c67](https://github.com/kubedb/installer/commit/5b64c67) Check metrics configuration (#351) +- [67ace3d](https://github.com/kubedb/installer/commit/67ace3d) Add permission to set finalizers on services (#350) +- [bf1d4bc](https://github.com/kubedb/installer/commit/bf1d4bc) Update repository config (#349) +- [01d9f50](https://github.com/kubedb/installer/commit/01d9f50) Test schema validity for deprecated catalog versions (#348) +- [8641529](https://github.com/kubedb/installer/commit/8641529) Add redis:6.2.5 (#347) +- [0c819fb](https://github.com/kubedb/installer/commit/0c819fb) Update dependencies (#346) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.5.0](https://github.com/kubedb/mariadb/releases/tag/v0.5.0) + +- [7650716e](https://github.com/kubedb/mariadb/commit/7650716e) Prepare for release v0.5.0 (#90) +- [ce261d96](https://github.com/kubedb/mariadb/commit/ce261d96) Add mariadb coordinator (#80) +- [f33bf030](https://github.com/kubedb/mariadb/commit/f33bf030) Update dependencies (#89) +- [5e487ff1](https://github.com/kubedb/mariadb/commit/5e487ff1) Update repository config (#88) +- [69d22af2](https://github.com/kubedb/mariadb/commit/69d22af2) Fix panic for unknown monitoring agent type (#87) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.1.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.1.0) + +- [ad75278](https://github.com/kubedb/mariadb-coordinator/commit/ad75278) Prepare for release v0.1.0 (#9) +- [140c663](https://github.com/kubedb/mariadb-coordinator/commit/140c663) Update dependencies (#8) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.14.0](https://github.com/kubedb/memcached/releases/tag/v0.14.0) + +- [794b7e50](https://github.com/kubedb/memcached/commit/794b7e50) Prepare for release v0.14.0 (#310) +- [6988937d](https://github.com/kubedb/memcached/commit/6988937d) Update dependencies (#309) +- [a86e09c2](https://github.com/kubedb/memcached/commit/a86e09c2) Update repository config (#308) +- [f3d4edb9](https://github.com/kubedb/memcached/commit/f3d4edb9) Fix panic for unknown monitoring agent type (#307) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.14.0](https://github.com/kubedb/mongodb/releases/tag/v0.14.0) + +- [7bceef7f](https://github.com/kubedb/mongodb/commit/7bceef7f) Prepare for release v0.14.0 (#415) +- [2d8ee793](https://github.com/kubedb/mongodb/commit/2d8ee793) Apply coordinator resources (#414) +- [d13347fe](https://github.com/kubedb/mongodb/commit/d13347fe) Update dependencies (#413) +- [fab26227](https://github.com/kubedb/mongodb/commit/fab26227) Update repository config (#412) +- [f9caff04](https://github.com/kubedb/mongodb/commit/f9caff04) Fix panic for unknown monitoring agent 
type (#411) +- [9c875e8e](https://github.com/kubedb/mongodb/commit/9c875e8e) Fix `mongos` pods running before `shard` and `configserver` pods (#410) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.14.0](https://github.com/kubedb/mysql/releases/tag/v0.14.0) + +- [2d7cd78c](https://github.com/kubedb/mysql/commit/2d7cd78c) Prepare for release v0.14.0 (#405) +- [a172908d](https://github.com/kubedb/mysql/commit/a172908d) Apply coordinator resources (#404) +- [66cc6c27](https://github.com/kubedb/mysql/commit/66cc6c27) Update dependencies (#403) +- [09c235d6](https://github.com/kubedb/mysql/commit/09c235d6) Update repository config (#401) +- [8cfc97e0](https://github.com/kubedb/mysql/commit/8cfc97e0) Fix panic for unknown monitoring agent type (#400) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.21.0](https://github.com/kubedb/operator/releases/tag/v0.21.0) + +- [dcd23db0](https://github.com/kubedb/operator/commit/dcd23db0) Prepare for release v0.21.0 (#421) +- [6deaba7d](https://github.com/kubedb/operator/commit/6deaba7d) Update dependencies (#420) +- [61b4ca63](https://github.com/kubedb/operator/commit/61b4ca63) Update repository config (#418) +- [7be4f87b](https://github.com/kubedb/operator/commit/7be4f87b) Fix panic for unknown monitoring agent type (#417) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.8.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.8.0) + +- [0dbbece3](https://github.com/kubedb/percona-xtradb/commit/0dbbece3) Prepare for release v0.8.0 (#212) +- [7c460aec](https://github.com/kubedb/percona-xtradb/commit/7c460aec) Update dependencies (#211) +- [6c9c5722](https://github.com/kubedb/percona-xtradb/commit/6c9c5722) Update repository config (#210) +- [df6b7a92](https://github.com/kubedb/percona-xtradb/commit/df6b7a92) Fix panic for unknown monitoring agent type (#209) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.5.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.5.0) + +- [394fe7d](https://github.com/kubedb/pg-coordinator/commit/394fe7d) Prepare for release v0.5.0 (#37) +- [21ab0a0](https://github.com/kubedb/pg-coordinator/commit/21ab0a0) Update dependencies (#36) +- [c879536](https://github.com/kubedb/pg-coordinator/commit/c879536) Fix Rewind And Memory leak Issues (#35) +- [9b88399](https://github.com/kubedb/pg-coordinator/commit/9b88399) Update repository config (#34) +- [267936b](https://github.com/kubedb/pg-coordinator/commit/267936b) Update dependencies (#33) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.8.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.8.0) + +- [0ace6b51](https://github.com/kubedb/pgbouncer/commit/0ace6b51) Prepare for release v0.8.0 (#172) +- [a68ff554](https://github.com/kubedb/pgbouncer/commit/a68ff554) Update dependencies (#171) +- [007f90dc](https://github.com/kubedb/pgbouncer/commit/007f90dc) Update repository config (#170) +- [7efab716](https://github.com/kubedb/pgbouncer/commit/7efab716) Fix panic for unknown monitoring agent type (#169) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.21.0](https://github.com/kubedb/postgres/releases/tag/v0.21.0) + +- [57bfcc31](https://github.com/kubedb/postgres/commit/57bfcc31) Prepare for release v0.21.0 (#521) +- [f4c77294](https://github.com/kubedb/postgres/commit/f4c77294) Use coordinator resources (#520) +- [4e2c73e9](https://github.com/kubedb/postgres/commit/4e2c73e9) Update dependencies 
(#519) +- [1dbb66c0](https://github.com/kubedb/postgres/commit/1dbb66c0) Add Constants for ENV variables (#518) +- [48814901](https://github.com/kubedb/postgres/commit/48814901) Update repository config (#517) +- [9ea604b8](https://github.com/kubedb/postgres/commit/9ea604b8) Fix panic for unknown monitoring agent type (#516) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.8.0](https://github.com/kubedb/proxysql/releases/tag/v0.8.0) + +- [30c8a53a](https://github.com/kubedb/proxysql/commit/30c8a53a) Prepare for release v0.8.0 (#191) +- [ff88f431](https://github.com/kubedb/proxysql/commit/ff88f431) Update dependencies (#190) +- [f995a4d7](https://github.com/kubedb/proxysql/commit/f995a4d7) Update repository config (#189) +- [87f33dde](https://github.com/kubedb/proxysql/commit/87f33dde) Fix panic for unknown monitoring agent type (#188) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.14.0](https://github.com/kubedb/redis/releases/tag/v0.14.0) + +- [052e1175](https://github.com/kubedb/redis/commit/052e1175) Prepare for release v0.14.0 (#339) +- [5909586e](https://github.com/kubedb/redis/commit/5909586e) Update dependencies (#338) +- [316e713f](https://github.com/kubedb/redis/commit/316e713f) Update repository config (#337) +- [02251943](https://github.com/kubedb/redis/commit/02251943) Fix panic for unknown monitoring agent type (#336) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.8.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.8.0) + +- [414ecab](https://github.com/kubedb/replication-mode-detector/commit/414ecab) Prepare for release v0.8.0 (#159) +- [73dd783](https://github.com/kubedb/replication-mode-detector/commit/73dd783) Update dependencies (#158) +- [79dd52e](https://github.com/kubedb/replication-mode-detector/commit/79dd52e) Update repository config (#157) +- [0d5c498](https://github.com/kubedb/replication-mode-detector/commit/0d5c498) Update dependencies (#156) +- [1a2da83](https://github.com/kubedb/replication-mode-detector/commit/1a2da83) Update dependencies (#155) +- [789087d](https://github.com/kubedb/replication-mode-detector/commit/789087d) Update dependencies (#154) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.6.0](https://github.com/kubedb/tests/releases/tag/v0.6.0) + +- [282a366](https://github.com/kubedb/tests/commit/282a366) Prepare for release v0.6.0 (#140) +- [058265a](https://github.com/kubedb/tests/commit/058265a) Update dependencies (#139) +- [230e72e](https://github.com/kubedb/tests/commit/230e72e) Update repository config (#138) +- [fd27e56](https://github.com/kubedb/tests/commit/fd27e56) Update dependencies (#137) +- [4314462](https://github.com/kubedb/tests/commit/4314462) Update dependencies (#136) +- [35f80a5](https://github.com/kubedb/tests/commit/35f80a5) Update dependencies (#135) +- [cb1e72f](https://github.com/kubedb/tests/commit/cb1e72f) Update dependencies (#134) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.09.30.md b/content/docs/v2024.1.31/CHANGELOG-v2021.09.30.md new file mode 100644 index 0000000000..bdcaeb0f86 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.09.30.md @@ -0,0 +1,233 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.09.30 + name: Changelog-v2021.09.30 + parent: welcome + weight: 20210930 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: 
/docs/v2024.1.31/welcome/changelog-v2021.09.30/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.09.30/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.09.30 (2021-09-25) + + +## [appscode/kubedb-autoscaler](https://github.com/appscode/kubedb-autoscaler) + +### [v0.7.0](https://github.com/appscode/kubedb-autoscaler/releases/tag/v0.7.0) + +- [af632ae](https://github.com/appscode/kubedb-autoscaler/commit/af632ae) Prepare for release v0.7.0 (#37) +- [45418c3](https://github.com/appscode/kubedb-autoscaler/commit/45418c3) Log warning if Community License is used with non-demo namespace (#36) + + + +## [appscode/kubedb-enterprise](https://github.com/appscode/kubedb-enterprise) + +### [v0.9.0](https://github.com/appscode/kubedb-enterprise/releases/tag/v0.9.0) + +- [19e4ac7b](https://github.com/appscode/kubedb-enterprise/commit/19e4ac7b) Prepare for release v0.9.0 (#227) +- [59c76bb4](https://github.com/appscode/kubedb-enterprise/commit/59c76bb4) Add MongoDB Offline Volume Expansion (#224) +- [064b2294](https://github.com/appscode/kubedb-enterprise/commit/064b2294) Fix MariaDB Reconfigure TLS OpsReq early Successful status (#225) +- [51a832d0](https://github.com/appscode/kubedb-enterprise/commit/51a832d0) Add TLS support for Redis Sentinel (#223) +- [b0717568](https://github.com/appscode/kubedb-enterprise/commit/b0717568) Log warning if Community License is used with non-demo namespace (#226) + + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.22.0](https://github.com/kubedb/apimachinery/releases/tag/v0.22.0) + +- [e85fa1ff](https://github.com/kubedb/apimachinery/commit/e85fa1ff) Add `VolumeExpansionMode` in mongodb ops request (#800) +- [61d61e39](https://github.com/kubedb/apimachinery/commit/61d61e39) Add reconfigure TLS constants (#801) +- [84659c4a](https://github.com/kubedb/apimachinery/commit/84659c4a) Add RedisConfiguration struct for redis AppBinding (#799) +- [67978809](https://github.com/kubedb/apimachinery/commit/67978809) Add Redis Sentinel CRD resources (#794) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.22.0](https://github.com/kubedb/cli/releases/tag/v0.22.0) + +- [c9c04dae](https://github.com/kubedb/cli/commit/c9c04dae) Prepare for release v0.22.0 (#629) +- [08a16994](https://github.com/kubedb/cli/commit/08a16994) Log warning if Community License is used with non-demo namespace (#628) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.22.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.22.0) + +- [7c8e13f0](https://github.com/kubedb/elasticsearch/commit/7c8e13f0) Prepare for release v0.22.0 (#520) +- [31719087](https://github.com/kubedb/elasticsearch/commit/31719087) Log warning if Community License is used with non-demo namespace (#519) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.09.30](https://github.com/kubedb/installer/releases/tag/v2021.09.30) + +- [b578c69](https://github.com/kubedb/installer/commit/b578c69) Prepare for release v2021.09.30 (#360) +- [3861233](https://github.com/kubedb/installer/commit/3861233) Use kubedb/mariadb-init:0.4.0 (#359) +- [dc6a9e6](https://github.com/kubedb/installer/commit/dc6a9e6) Add support for Redis Sentinel (#352) +- [f74874a](https://github.com/kubedb/installer/commit/f74874a) Log warning if Community License is 
used with non-demo namespace (#358) +- [2d4da88](https://github.com/kubedb/installer/commit/2d4da88) MySQL 5.7.35 and 8.0.26 (#357) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.6.0](https://github.com/kubedb/mariadb/releases/tag/v0.6.0) + +- [088b0250](https://github.com/kubedb/mariadb/commit/088b0250) Prepare for release v0.6.0 (#94) +- [303ceb70](https://github.com/kubedb/mariadb/commit/303ceb70) Fixed security context and governing service patch issue. (#92) +- [6b1b2d59](https://github.com/kubedb/mariadb/commit/6b1b2d59) Log warning if Community License is used with non-demo namespace (#93) +- [1dc9b573](https://github.com/kubedb/mariadb/commit/1dc9b573) Fix coordinator run (#91) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.2.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.2.0) + +- [8f1b7df](https://github.com/kubedb/mariadb-coordinator/commit/8f1b7df) Prepare for release v0.2.0 (#12) +- [dff9b13](https://github.com/kubedb/mariadb-coordinator/commit/dff9b13) Restart mysqld Process if missing (#10) +- [3e0e0de](https://github.com/kubedb/mariadb-coordinator/commit/3e0e0de) Log warning if Community License is used with non-demo namespace (#11) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.15.0](https://github.com/kubedb/memcached/releases/tag/v0.15.0) + +- [d97a6ae2](https://github.com/kubedb/memcached/commit/d97a6ae2) Prepare for release v0.15.0 (#312) +- [5851a2b7](https://github.com/kubedb/memcached/commit/5851a2b7) Log warning if Community License is used with non-demo namespace (#311) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.15.0](https://github.com/kubedb/mongodb/releases/tag/v0.15.0) + +- [19550583](https://github.com/kubedb/mongodb/commit/19550583) Prepare for release v0.15.0 (#417) +- [c52519d9](https://github.com/kubedb/mongodb/commit/c52519d9) Log warning if Community License is used with non-demo namespace (#416) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.15.0](https://github.com/kubedb/mysql/releases/tag/v0.15.0) + +- [883b3e77](https://github.com/kubedb/mysql/commit/883b3e77) Prepare for release v0.15.0 (#408) +- [4f44fc94](https://github.com/kubedb/mysql/commit/4f44fc94) Log warning if Community License is used with non-demo namespace (#407) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.22.0](https://github.com/kubedb/operator/releases/tag/v0.22.0) + +- [ae6b5279](https://github.com/kubedb/operator/commit/ae6b5279) Prepare for release v0.22.0 (#423) +- [0ec444fe](https://github.com/kubedb/operator/commit/0ec444fe) Log warning if Community License is used with non-demo namespace (#422) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.9.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.9.0) + +- [2d6cbda4](https://github.com/kubedb/percona-xtradb/commit/2d6cbda4) Prepare for release v0.9.0 (#214) +- [db098256](https://github.com/kubedb/percona-xtradb/commit/db098256) Log warning if Community License is used with non-demo namespace (#213) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.6.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.6.0) + +- [41b6632](https://github.com/kubedb/pg-coordinator/commit/41b6632) Prepare for release v0.6.0 (#39) +- [6399040](https://github.com/kubedb/pg-coordinator/commit/6399040) Log warning if Community License is used with non-demo 
namespace (#38) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.9.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.9.0) + +- [af21e1a9](https://github.com/kubedb/pgbouncer/commit/af21e1a9) Prepare for release v0.9.0 (#174) +- [6e6fac64](https://github.com/kubedb/pgbouncer/commit/6e6fac64) Log warning if Community License is used with non-demo namespace (#173) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.22.0](https://github.com/kubedb/postgres/releases/tag/v0.22.0) + +- [94a3e32f](https://github.com/kubedb/postgres/commit/94a3e32f) Prepare for release v0.22.0 (#523) +- [d578389f](https://github.com/kubedb/postgres/commit/d578389f) Log warning if Community License is used with non-demo namespace (#522) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.9.0](https://github.com/kubedb/proxysql/releases/tag/v0.9.0) + +- [e5535aa0](https://github.com/kubedb/proxysql/commit/e5535aa0) Prepare for release v0.9.0 (#193) +- [80d1fc3c](https://github.com/kubedb/proxysql/commit/80d1fc3c) Log warning if Community License is used with non-demo namespace (#192) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.15.0](https://github.com/kubedb/redis/releases/tag/v0.15.0) + +- [264f2548](https://github.com/kubedb/redis/commit/264f2548) Prepare for release v0.15.0 (#341) +- [ad483730](https://github.com/kubedb/redis/commit/ad483730) Add support for Redis Sentinel mode (#335) +- [31eb43fb](https://github.com/kubedb/redis/commit/31eb43fb) Log warning if Community License is used with non-demo namespace (#340) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.1.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.1.0) + +- [7c4149e](https://github.com/kubedb/redis-coordinator/commit/7c4149e) Prepare for release v0.1.0 (#2) +- [86e0464](https://github.com/kubedb/redis-coordinator/commit/86e0464) Update Sentinel client (#1) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.9.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.9.0) + +- [498053f](https://github.com/kubedb/replication-mode-detector/commit/498053f) Prepare for release v0.9.0 (#161) +- [6cd3bd4](https://github.com/kubedb/replication-mode-detector/commit/6cd3bd4) Log warning if Community License is used with non-demo namespace (#160) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.7.0](https://github.com/kubedb/tests/releases/tag/v0.7.0) + +- [bb86cc7](https://github.com/kubedb/tests/commit/bb86cc7) Prepare for release v0.7.0 (#142) +- [5ff6dff](https://github.com/kubedb/tests/commit/5ff6dff) Log warning if Community License is used with non-demo namespace (#141) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.11.18.md b/content/docs/v2024.1.31/CHANGELOG-v2021.11.18.md new file mode 100644 index 0000000000..c02cf09c74 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.11.18.md @@ -0,0 +1,571 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.11.18 + name: Changelog-v2021.11.18 + parent: welcome + weight: 20211118 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.11.18/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.11.18/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + 
provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.11.18 (2021-11-17) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.23.0](https://github.com/kubedb/apimachinery/releases/tag/v0.23.0) + +- [ff3a4175](https://github.com/kubedb/apimachinery/commit/ff3a4175) Update repository config (#819) +- [1969d04c](https://github.com/kubedb/apimachinery/commit/1969d04c) Remove EnableAnalytics (#818) +- [1222a1d6](https://github.com/kubedb/apimachinery/commit/1222a1d6) Add pod and workload controller label support (#817) +- [1cef6837](https://github.com/kubedb/apimachinery/commit/1cef6837) Allow vertical scaling Coordinator (#816) +- [90d46474](https://github.com/kubedb/apimachinery/commit/90d46474) Add distribution tags for KubeDB (#815) +- [24d44217](https://github.com/kubedb/apimachinery/commit/24d44217) Update default resource for pg-coordinator (#813) +- [807280ce](https://github.com/kubedb/apimachinery/commit/807280ce) Add `applyConfig` in MongoDBOpsRequest for custom configuration (#811) +- [6f31cb6a](https://github.com/kubedb/apimachinery/commit/6f31cb6a) Add support for OpenSearch (#810) +- [b9f7eadd](https://github.com/kubedb/apimachinery/commit/b9f7eadd) Stop using storage.k8s.io/v1beta1 deprecated in k8s 1.22 (#814) +- [73f09b6b](https://github.com/kubedb/apimachinery/commit/73f09b6b) Add support for reconfigure Elasticsearch (#793) +- [997836d3](https://github.com/kubedb/apimachinery/commit/997836d3) Add Redis Constants for Config files (#812) +- [d0176524](https://github.com/kubedb/apimachinery/commit/d0176524) Remove statefulSetOrdinal from MySQL ops request (#809) +- [ad8c1f78](https://github.com/kubedb/apimachinery/commit/ad8c1f78) Update MySQLClusterMode constants (#808) +- [8ce33b18](https://github.com/kubedb/apimachinery/commit/8ce33b18) Update deps +- [d8ea50ce](https://github.com/kubedb/apimachinery/commit/d8ea50ce) Update for release Stash@v2021.10.11 (#807) +- [4937544e](https://github.com/kubedb/apimachinery/commit/4937544e) Add MySQL Router constant (#805) +- [ee093b4d](https://github.com/kubedb/apimachinery/commit/ee093b4d) Add new distribution values (#806) +- [47b42be0](https://github.com/kubedb/apimachinery/commit/47b42be0) Add support for MySQL coordinator (#803) +- [d1f74f0b](https://github.com/kubedb/apimachinery/commit/d1f74f0b) Update repository config (#804) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.8.0](https://github.com/kubedb/autoscaler/releases/tag/v0.8.0) + +- [85440f50](https://github.com/kubedb/autoscaler/commit/85440f50) Prepare for release v0.8.0 (#52) +- [5f59bb99](https://github.com/kubedb/autoscaler/commit/5f59bb99) Update kmodules.xyz/monitoring-agent-api (#51) +- [dcaa9d9d](https://github.com/kubedb/autoscaler/commit/dcaa9d9d) Update repository config (#50) +- [a9c755f1](https://github.com/kubedb/autoscaler/commit/a9c755f1) Use DisableAnalytics flag from license (#49) +- [a97521e7](https://github.com/kubedb/autoscaler/commit/a97521e7) Update license-verifier (#48) +- [7cda0e3b](https://github.com/kubedb/autoscaler/commit/7cda0e3b) Support custom pod and controller labels (#47) +- [99b2710c](https://github.com/kubedb/autoscaler/commit/99b2710c) Fix mongodb shard autoscaling issue (#46) +- [3302e496](https://github.com/kubedb/autoscaler/commit/3302e496) Merge recommended resource with current resource (#45) +- [7f6e3994](https://github.com/kubedb/autoscaler/commit/7f6e3994) Update dependencies (#44) 
+- [0ab54377](https://github.com/kubedb/autoscaler/commit/0ab54377) Fix satori/go.uuid security vulnerability (#43) +- [898e4497](https://github.com/kubedb/autoscaler/commit/898e4497) Fix jwt-go security vulnerability (#42) +- [39e647ff](https://github.com/kubedb/autoscaler/commit/39e647ff) Fix jwt-go security vulnerability (#41) +- [e898b195](https://github.com/kubedb/autoscaler/commit/e898b195) Use nats.go v1.13.0 (#39) +- [dc0b7b32](https://github.com/kubedb/autoscaler/commit/dc0b7b32) Setup SiteInfo publisher (#40) +- [de10d221](https://github.com/kubedb/autoscaler/commit/de10d221) Update dependencies to publish SiteInfo (#38) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.23.0](https://github.com/kubedb/cli/releases/tag/v0.23.0) + +- [176feb8f](https://github.com/kubedb/cli/commit/176feb8f) Prepare for release v0.23.0 (#641) +- [c10cf297](https://github.com/kubedb/cli/commit/c10cf297) Update kmodules.xyz/monitoring-agent-api (#640) +- [6e5e3e57](https://github.com/kubedb/cli/commit/6e5e3e57) Use DisableAnalytics flag from license (#639) +- [3ca8fbf6](https://github.com/kubedb/cli/commit/3ca8fbf6) Update license-verifier (#638) +- [9ba88756](https://github.com/kubedb/cli/commit/9ba88756) Support custom pod and controller labels (#637) +- [67ae9ed4](https://github.com/kubedb/cli/commit/67ae9ed4) Update dependencies (#636) +- [58159a19](https://github.com/kubedb/cli/commit/58159a19) Fix satori/go.uuid security vulnerability (#635) +- [1350b7f4](https://github.com/kubedb/cli/commit/1350b7f4) Fix jwt-go security vulnerability (#634) +- [1e783e44](https://github.com/kubedb/cli/commit/1e783e44) Fix jwt-go security vulnerability (#633) +- [fe1a9aeb](https://github.com/kubedb/cli/commit/fe1a9aeb) Fix jwt-go security vulnerability (#632) +- [a79d2705](https://github.com/kubedb/cli/commit/a79d2705) Update dependencies to publish SiteInfo (#631) +- [6e9be4c3](https://github.com/kubedb/cli/commit/6e9be4c3) Update dependencies to publish SiteInfo (#630) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.23.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.23.0) + +- [d653718f](https://github.com/kubedb/elasticsearch/commit/d653718f) Prepare for release v0.23.0 (#540) +- [a15c804d](https://github.com/kubedb/elasticsearch/commit/a15c804d) Update kmodules.xyz/monitoring-agent-api (#539) +- [0778ff4f](https://github.com/kubedb/elasticsearch/commit/0778ff4f) Remove global variable for preconditions (#538) +- [bd084ade](https://github.com/kubedb/elasticsearch/commit/bd084ade) Update repository config (#537) +- [180f2c47](https://github.com/kubedb/elasticsearch/commit/180f2c47) Remove docs folder +- [a97b7131](https://github.com/kubedb/elasticsearch/commit/a97b7131) Update docs +- [16895e92](https://github.com/kubedb/elasticsearch/commit/16895e92) Use DisableAnalytics flag from license (#536) +- [62c2d28f](https://github.com/kubedb/elasticsearch/commit/62c2d28f) Update license-verifier (#535) +- [c70caee3](https://github.com/kubedb/elasticsearch/commit/c70caee3) Add pod, services and workload-controller(sts) label support (#532) +- [0e5aeb59](https://github.com/kubedb/elasticsearch/commit/0e5aeb59) Add support for OpenSearch (#529) +- [fd527ccb](https://github.com/kubedb/elasticsearch/commit/fd527ccb) Update dependencies (#531) +- [b2cf3e9f](https://github.com/kubedb/elasticsearch/commit/b2cf3e9f) Always create admin certs if the cluster security is enabled (#516) +- [aae6bc29](https://github.com/kubedb/elasticsearch/commit/aae6bc29) Fix 
satori/go.uuid security vulnerability (#530) +- [50aa1c9e](https://github.com/kubedb/elasticsearch/commit/50aa1c9e) Fix jwt-go security vulnerability (#528) +- [4b6ebc0c](https://github.com/kubedb/elasticsearch/commit/4b6ebc0c) Fix jwt-go security vulnerability (#527) +- [9d87c0a7](https://github.com/kubedb/elasticsearch/commit/9d87c0a7) Use nats.go v1.13.0 (#526) +- [cc4811ef](https://github.com/kubedb/elasticsearch/commit/cc4811ef) Setup SiteInfo publisher (#525) +- [00feb65a](https://github.com/kubedb/elasticsearch/commit/00feb65a) Update dependencies to publish SiteInfo (#524) +- [a7f4137f](https://github.com/kubedb/elasticsearch/commit/a7f4137f) Update dependencies to publish SiteInfo (#523) +- [7e3c63fd](https://github.com/kubedb/elasticsearch/commit/7e3c63fd) Collect metrics from all types of Elasticsearch nodes (#521) + + + +## [kubedb/enterprise](https://github.com/kubedb/enterprise) + +### [v0.10.0](https://github.com/kubedb/enterprise/releases/tag/v0.10.0) + +- [3214618e](https://github.com/kubedb/enterprise/commit/3214618e) Prepare for release v0.10.0 (#251) +- [41c5b619](https://github.com/kubedb/enterprise/commit/41c5b619) Update kmodules.xyz/monitoring-agent-api (#250) +- [64252ddb](https://github.com/kubedb/enterprise/commit/64252ddb) Remove global variable for preconditions (#249) +- [aa368d82](https://github.com/kubedb/enterprise/commit/aa368d82) Update repository config (#248) +- [9bcfee5a](https://github.com/kubedb/enterprise/commit/9bcfee5a) Fix semver checking. (#247) +- [91ea48f1](https://github.com/kubedb/enterprise/commit/91ea48f1) Update docs +- [34560f43](https://github.com/kubedb/enterprise/commit/34560f43) Use DisableAnalytics flag from license (#246) +- [fac8c82b](https://github.com/kubedb/enterprise/commit/fac8c82b) Update license-verifier (#245) +- [2d3839af](https://github.com/kubedb/enterprise/commit/2d3839af) Support custom pod and controller labels (#244) +- [83488e1c](https://github.com/kubedb/enterprise/commit/83488e1c) Add backup permission for mysql replication user (#243) +- [0dd46f8f](https://github.com/kubedb/enterprise/commit/0dd46f8f) Add support for reconfigure Elasticsearch (#220) +- [2a980832](https://github.com/kubedb/enterprise/commit/2a980832) Use `kubedb.dev/db-client-go` for mongodb (#241) +- [69d59b2c](https://github.com/kubedb/enterprise/commit/69d59b2c) Update mongodb vertical scaling logic (#240) +- [e9b227b7](https://github.com/kubedb/enterprise/commit/e9b227b7) Update Redis Reconfigure Ops Request (#236) +- [9342c052](https://github.com/kubedb/enterprise/commit/9342c052) Add support for mongodb reconfigure replicaSet config (#235) +- [330c3be1](https://github.com/kubedb/enterprise/commit/330c3be1) Fix upgrade opsrequest for mysql coordinator (#229) +- [f64f7d29](https://github.com/kubedb/enterprise/commit/f64f7d29) Update dependencies (#239) +- [60b2d128](https://github.com/kubedb/enterprise/commit/60b2d128) Update xorm dependency (#238) +- [fdcd91d3](https://github.com/kubedb/enterprise/commit/fdcd91d3) Fix satori/go.uuid security vulnerability (#237) +- [62cb9918](https://github.com/kubedb/enterprise/commit/62cb9918) Fix jwt-go security vulnerability (#234) +- [6af2b5a5](https://github.com/kubedb/enterprise/commit/6af2b5a5) Fix jwt-go security vulnerability (#233) +- [527eb0e4](https://github.com/kubedb/enterprise/commit/527eb0e4) Fix: major and minor Upgrade issue for Postgres Debian images (#232) +- [e8b5d6ae](https://github.com/kubedb/enterprise/commit/e8b5d6ae) Use nats.go v1.13.0 (#231) +- 
[80c6c4ec](https://github.com/kubedb/enterprise/commit/80c6c4ec) Setup SiteInfo publisher (#230) +- [b9d9d37f](https://github.com/kubedb/enterprise/commit/b9d9d37f) Update dependencies to publish SiteInfo (#228) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.11.18](https://github.com/kubedb/installer/releases/tag/v2021.11.18) + +- [3fe4c869](https://github.com/kubedb/installer/commit/3fe4c869) Prepare for release v2021.11.18 (#395) +- [b7321269](https://github.com/kubedb/installer/commit/b7321269) Use mysqld Exporter Image with custom query support (#390) +- [5372ab67](https://github.com/kubedb/installer/commit/5372ab67) Add new Postgres versions in catalog (#394) +- [09621a10](https://github.com/kubedb/installer/commit/09621a10) Update kmodules.xyz/monitoring-agent-api (#393) +- [23f4d5c1](https://github.com/kubedb/installer/commit/23f4d5c1) Update repository config (#392) +- [3fa119af](https://github.com/kubedb/installer/commit/3fa119af) Add labels to license related rolebindings & secrets (#391) +- [9e18ab72](https://github.com/kubedb/installer/commit/9e18ab72) Add MySQL 5.7.36, 8.0.17, 8.0.27 (#378) +- [5c8d2854](https://github.com/kubedb/installer/commit/5c8d2854) Remove --enable-analytics flag (#389) +- [589c93f8](https://github.com/kubedb/installer/commit/589c93f8) Update license-verifier (#388) +- [f1f19f47](https://github.com/kubedb/installer/commit/f1f19f47) Fix regression in #386 (#387) +- [ddd3e9f3](https://github.com/kubedb/installer/commit/ddd3e9f3) Change installer namespace to kubedb (#386) +- [64e95827](https://github.com/kubedb/installer/commit/64e95827) Support OpenSearch 1.1.0 and update elasticsearch-exporter image to 1.3.0 (#384) +- [6cf27bc1](https://github.com/kubedb/installer/commit/6cf27bc1) Update Postgres-init Image Version to v0.4.0 (#382) +- [d5db17f8](https://github.com/kubedb/installer/commit/d5db17f8) Update crds +- [0dbc20cb](https://github.com/kubedb/installer/commit/0dbc20cb) Update crds +- [c11c29f6](https://github.com/kubedb/installer/commit/c11c29f6) Update dependencies (#381) +- [1ef34de3](https://github.com/kubedb/installer/commit/1ef34de3) Update Redis Init Image version for Custom Config fixes (#380) +- [cb48c46b](https://github.com/kubedb/installer/commit/cb48c46b) Fix satori/go.uuid security vulnerability (#379) +- [f39255c3](https://github.com/kubedb/installer/commit/f39255c3) Add innodb and coordinator support (#371) +- [53ab0b1b](https://github.com/kubedb/installer/commit/53ab0b1b) Fix jwt-go security vulnerability (#377) +- [01ae1087](https://github.com/kubedb/installer/commit/01ae1087) Fix jwt-go security vulnerability (#376) +- [b17e22a0](https://github.com/kubedb/installer/commit/b17e22a0) Add fields to MySQL Metrics (#375) +- [b4e4d317](https://github.com/kubedb/installer/commit/b4e4d317) Add New Postgres versions (#374) +- [19094b7a](https://github.com/kubedb/installer/commit/19094b7a) Update crds +- [85e02d5b](https://github.com/kubedb/installer/commit/85e02d5b) Add mongodb `5.0.3` (#372) +- [e055d79a](https://github.com/kubedb/installer/commit/e055d79a) Mark versions using Official docker images as Official Distro (#373) +- [3e45fb54](https://github.com/kubedb/installer/commit/3e45fb54) Add SiteInfo publisher permission +- [df913373](https://github.com/kubedb/installer/commit/df913373) Update dependencies to publish SiteInfo (#369) +- [a1cc057f](https://github.com/kubedb/installer/commit/a1cc057f) Add fields to redis-metrics (#366) +- [bf243169](https://github.com/kubedb/installer/commit/bf243169) Add 
fields to MariaDB Metrics (#370) +- [b53d5cc3](https://github.com/kubedb/installer/commit/b53d5cc3) Add v14.0 in Postgres catalog (#368) +- [e3ec7f67](https://github.com/kubedb/installer/commit/e3ec7f67) Add redis sentinel metrics configuration (#367) +- [1881967c](https://github.com/kubedb/installer/commit/1881967c) Update various kubedb metrics and metric labels (#364) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.7.0](https://github.com/kubedb/mariadb/releases/tag/v0.7.0) + +- [05707163](https://github.com/kubedb/mariadb/commit/05707163) Prepare for release v0.7.0 (#112) +- [2818eb2b](https://github.com/kubedb/mariadb/commit/2818eb2b) Update kmodules.xyz/monitoring-agent-api (#111) +- [4580ebd5](https://github.com/kubedb/mariadb/commit/4580ebd5) Remove global variable for preconditions (#110) +- [8223c352](https://github.com/kubedb/mariadb/commit/8223c352) Update repository config (#109) +- [8be974a6](https://github.com/kubedb/mariadb/commit/8be974a6) Remove docs +- [0279fa08](https://github.com/kubedb/mariadb/commit/0279fa08) Update docs +- [45cbdb9e](https://github.com/kubedb/mariadb/commit/45cbdb9e) Use DisableAnalytics flag from license (#108) +- [0d4ae537](https://github.com/kubedb/mariadb/commit/0d4ae537) Update license-verifier (#107) +- [92626beb](https://github.com/kubedb/mariadb/commit/92626beb) Support custom pod, service, and controller(sts) labels (#105) +- [afd25e04](https://github.com/kubedb/mariadb/commit/afd25e04) Update dependencies (#104) +- [297c7cdb](https://github.com/kubedb/mariadb/commit/297c7cdb) Update xorm dependency (#103) +- [fc99578b](https://github.com/kubedb/mariadb/commit/fc99578b) Fix satori/go.uuid security vulnerability (#102) +- [43236638](https://github.com/kubedb/mariadb/commit/43236638) Fix jwt-go security vulnerability (#101) +- [247e1413](https://github.com/kubedb/mariadb/commit/247e1413) Fix jwt-go security vulnerability (#100) +- [1ef0690d](https://github.com/kubedb/mariadb/commit/1ef0690d) Use nats.go v1.13.0 (#99) +- [2a067c0b](https://github.com/kubedb/mariadb/commit/2a067c0b) Setup SiteInfo publisher (#98) +- [72c93bb2](https://github.com/kubedb/mariadb/commit/72c93bb2) Update dependencies to publish SiteInfo (#97) +- [2e17fbb6](https://github.com/kubedb/mariadb/commit/2e17fbb6) Update dependencies to publish SiteInfo (#96) +- [8b091ae3](https://github.com/kubedb/mariadb/commit/8b091ae3) Update repository config (#95) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.3.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.3.0) + +- [d229ad1](https://github.com/kubedb/mariadb-coordinator/commit/d229ad1) Prepare for release v0.3.0 (#24) +- [04ac158](https://github.com/kubedb/mariadb-coordinator/commit/04ac158) Update kmodules.xyz/monitoring-agent-api (#23) +- [b1836cd](https://github.com/kubedb/mariadb-coordinator/commit/b1836cd) Update repository config (#22) +- [670cce7](https://github.com/kubedb/mariadb-coordinator/commit/670cce7) Use DisableAnalytics flag from license (#21) +- [b2149b3](https://github.com/kubedb/mariadb-coordinator/commit/b2149b3) Update license-verifier (#20) +- [43e2907](https://github.com/kubedb/mariadb-coordinator/commit/43e2907) Support custom pod and controller labels (#19) +- [054ad28](https://github.com/kubedb/mariadb-coordinator/commit/054ad28) Update dependencies (#18) +- [73b094a](https://github.com/kubedb/mariadb-coordinator/commit/73b094a) Update xorm dependency (#17) +- 
[d401ce6](https://github.com/kubedb/mariadb-coordinator/commit/d401ce6) Fix satori/go.uuid security vulnerability (#16) +- [fbbec4b](https://github.com/kubedb/mariadb-coordinator/commit/fbbec4b) Fix jwt-go security vulnerability (#15) +- [bf9222c](https://github.com/kubedb/mariadb-coordinator/commit/bf9222c) Fix jwt-go security vulnerability (#14) +- [dbac458](https://github.com/kubedb/mariadb-coordinator/commit/dbac458) Use nats.go v1.13.0 (#13) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.16.0](https://github.com/kubedb/memcached/releases/tag/v0.16.0) + +- [f1131b24](https://github.com/kubedb/memcached/commit/f1131b24) Prepare for release v0.16.0 (#327) +- [9a48dfb4](https://github.com/kubedb/memcached/commit/9a48dfb4) Update kmodules.xyz/monitoring-agent-api (#326) +- [eedff52b](https://github.com/kubedb/memcached/commit/eedff52b) Remove global variable for preconditions (#325) +- [7e9aa7cb](https://github.com/kubedb/memcached/commit/7e9aa7cb) Update repository config (#324) +- [83d8990b](https://github.com/kubedb/memcached/commit/83d8990b) Remove docs +- [75c6aaae](https://github.com/kubedb/memcached/commit/75c6aaae) Update docs +- [d44def4b](https://github.com/kubedb/memcached/commit/d44def4b) Use DisableAnalytics flag from license (#323) +- [f1ac7471](https://github.com/kubedb/memcached/commit/f1ac7471) Update license-verifier (#322) +- [7c395019](https://github.com/kubedb/memcached/commit/7c395019) Support custom pod, service, and controller labels (#321) +- [b138b898](https://github.com/kubedb/memcached/commit/b138b898) Update dependencies (#320) +- [789dd6f7](https://github.com/kubedb/memcached/commit/789dd6f7) Fix satori/go.uuid security vulnerability (#319) +- [37d03918](https://github.com/kubedb/memcached/commit/37d03918) Fix jwt-go security vulnerability (#318) +- [27e097a3](https://github.com/kubedb/memcached/commit/27e097a3) Fix jwt-go security vulnerability (#317) +- [8fe76024](https://github.com/kubedb/memcached/commit/8fe76024) Use nats.go v1.13.0 (#316) +- [1e1443e0](https://github.com/kubedb/memcached/commit/1e1443e0) Update dependencies to publish SiteInfo (#315) +- [5c4569d2](https://github.com/kubedb/memcached/commit/5c4569d2) Update dependencies to publish SiteInfo (#314) +- [912ec127](https://github.com/kubedb/memcached/commit/912ec127) Update repository config (#313) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.16.0](https://github.com/kubedb/mongodb/releases/tag/v0.16.0) + +- [c72e7335](https://github.com/kubedb/mongodb/commit/c72e7335) Prepare for release v0.16.0 (#437) +- [43ac7699](https://github.com/kubedb/mongodb/commit/43ac7699) Update kmodules.xyz/monitoring-agent-api (#436) +- [4ad8f28c](https://github.com/kubedb/mongodb/commit/4ad8f28c) Remove global variable for preconditions (#435) +- [e009f4ec](https://github.com/kubedb/mongodb/commit/e009f4ec) Update repository config (#434) +- [02cc1e50](https://github.com/kubedb/mongodb/commit/02cc1e50) Remove docs +- [e24969a1](https://github.com/kubedb/mongodb/commit/e24969a1) Use DisableAnalytics flag from license (#433) +- [8dc342e6](https://github.com/kubedb/mongodb/commit/8dc342e6) Update license-verifier (#432) +- [ecfb1583](https://github.com/kubedb/mongodb/commit/ecfb1583) Support custom pod and controller labels (#431) +- [a0550a93](https://github.com/kubedb/mongodb/commit/a0550a93) Add pod, statefulSet and service labels support (#430) +- [6ac1a182](https://github.com/kubedb/mongodb/commit/6ac1a182) Use `kubedb.dev/db-client-go` (#429) +- 
[8b2ed1c6](https://github.com/kubedb/mongodb/commit/8b2ed1c6) Add support for ReplicaSet configuration (#426) +- [07a2f120](https://github.com/kubedb/mongodb/commit/07a2f120) Update dependencies (#428) +- [f3f206f8](https://github.com/kubedb/mongodb/commit/f3f206f8) Fix satori/go.uuid security vulnerability (#427) +- [5c5c669b](https://github.com/kubedb/mongodb/commit/5c5c669b) Set owner reference to the secrets created by the operator (#425) +- [17ea4294](https://github.com/kubedb/mongodb/commit/17ea4294) Fix jwt-go security vulnerability (#424) +- [6a0dccf3](https://github.com/kubedb/mongodb/commit/6a0dccf3) Fix jwt-go security vulnerability (#423) +- [db40027d](https://github.com/kubedb/mongodb/commit/db40027d) Use nats.go v1.13.0 (#422) +- [473928f4](https://github.com/kubedb/mongodb/commit/473928f4) Setup SiteInfo publisher (#421) +- [b9ce138a](https://github.com/kubedb/mongodb/commit/b9ce138a) Update dependencies to publish SiteInfo (#420) +- [fff26a96](https://github.com/kubedb/mongodb/commit/fff26a96) Update dependencies to publish SiteInfo (#419) +- [41f3ccd9](https://github.com/kubedb/mongodb/commit/41f3ccd9) Update repository config (#418) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.16.0](https://github.com/kubedb/mysql/releases/tag/v0.16.0) + +- [0680eeb3](https://github.com/kubedb/mysql/commit/0680eeb3) Prepare for release v0.16.0 (#429) +- [375760f3](https://github.com/kubedb/mysql/commit/375760f3) Export Group Replication stats in Exporter Container (#425) +- [2b5af248](https://github.com/kubedb/mysql/commit/2b5af248) Update kmodules.xyz/monitoring-agent-api (#428) +- [57f7cf60](https://github.com/kubedb/mysql/commit/57f7cf60) Remove global variable for preconditions (#427) +- [d47d0e39](https://github.com/kubedb/mysql/commit/d47d0e39) Update repository config (#426) +- [8847d166](https://github.com/kubedb/mysql/commit/8847d166) Update dependencies +- [646da2c8](https://github.com/kubedb/mysql/commit/646da2c8) Remove docs +- [eca0cfd5](https://github.com/kubedb/mysql/commit/eca0cfd5) Use DisableAnalytics flag from license (#424) +- [86d7a80d](https://github.com/kubedb/mysql/commit/86d7a80d) Update license-verifier (#423) +- [de8696fc](https://github.com/kubedb/mysql/commit/de8696fc) Add support for custom pod, service, and controller(sts) labels (#420) +- [87e3ea31](https://github.com/kubedb/mysql/commit/87e3ea31) Update entry point command for mysql router. 
(#422) +- [73178faa](https://github.com/kubedb/mysql/commit/73178faa) Add support for MySQL Coordinator (#406) +- [0075cf98](https://github.com/kubedb/mysql/commit/0075cf98) Update dependencies (#418) +- [4188a194](https://github.com/kubedb/mysql/commit/4188a194) Fix satori/go.uuid security vulnerability (#417) +- [569e220f](https://github.com/kubedb/mysql/commit/569e220f) Fix jwt-go security vulnerability (#416) +- [be5be397](https://github.com/kubedb/mysql/commit/be5be397) Restrict group replicas for size 2 in Validator (#402) +- [4dbb18f3](https://github.com/kubedb/mysql/commit/4dbb18f3) Fix jwt-go security vulnerability (#414) +- [f4b0bb43](https://github.com/kubedb/mysql/commit/f4b0bb43) Use nats.go v1.13.0 (#413) +- [c5eefa7e](https://github.com/kubedb/mysql/commit/c5eefa7e) Setup SiteInfo publisher (#412) +- [8157ec8f](https://github.com/kubedb/mysql/commit/8157ec8f) Update dependencies to publish SiteInfo (#411) +- [808dbd85](https://github.com/kubedb/mysql/commit/808dbd85) Update dependencies to publish SiteInfo (#410) +- [a949af00](https://github.com/kubedb/mysql/commit/a949af00) Update repository config (#409) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.1.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.1.0) + +- [51cf61d](https://github.com/kubedb/mysql-coordinator/commit/51cf61d) Prepare for release v0.1.0 (#18) +- [104431b](https://github.com/kubedb/mysql-coordinator/commit/104431b) Prepare for release v0.1.0 (#17) +- [1cd379d](https://github.com/kubedb/mysql-coordinator/commit/1cd379d) Update kmodules.xyz/monitoring-agent-api (#16) +- [e85255b](https://github.com/kubedb/mysql-coordinator/commit/e85255b) Update repository config (#15) +- [d7f6193](https://github.com/kubedb/mysql-coordinator/commit/d7f6193) Use DisableAnalytics flag from license (#14) +- [c0a51bb](https://github.com/kubedb/mysql-coordinator/commit/c0a51bb) Update license-verifier (#13) +- [d624835](https://github.com/kubedb/mysql-coordinator/commit/d624835) Support custom pod and controller labels (#12) +- [d3bc5ba](https://github.com/kubedb/mysql-coordinator/commit/d3bc5ba) Add sleep for now to avoid the joining problem (#11) +- [e44f9d1](https://github.com/kubedb/mysql-coordinator/commit/e44f9d1) Update dependencies (#10) +- [653e357](https://github.com/kubedb/mysql-coordinator/commit/653e357) Update xorm.io dependency (#9) +- [772ebbd](https://github.com/kubedb/mysql-coordinator/commit/772ebbd) Fix satori/go.uuid security vulnerability (#8) +- [50fdeee](https://github.com/kubedb/mysql-coordinator/commit/50fdeee) Fix jwt-go security vulnerability (#7) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.1.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.1.0) + +- [e16b07e](https://github.com/kubedb/mysql-router-init/commit/e16b07e) Update repository config (#13) +- [1d14631](https://github.com/kubedb/mysql-router-init/commit/1d14631) Monitor mysql router process id and restart it if closed. 
(#11) +- [e36615e](https://github.com/kubedb/mysql-router-init/commit/e36615e) Support custom pod and controller labels (#12) +- [48829ef](https://github.com/kubedb/mysql-router-init/commit/48829ef) Fix satori/go.uuid security vulnerability (#10) +- [5f363e8](https://github.com/kubedb/mysql-router-init/commit/5f363e8) Fix jwt-go security vulnerability (#9) +- [41b0fb7](https://github.com/kubedb/mysql-router-init/commit/41b0fb7) Update deps +- [51fc22e](https://github.com/kubedb/mysql-router-init/commit/51fc22e) Fix jwt-go security vulnerability (#8) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.23.0](https://github.com/kubedb/operator/releases/tag/v0.23.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.10.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.10.0) + +- [99ac8dca](https://github.com/kubedb/percona-xtradb/commit/99ac8dca) Prepare for release v0.10.0 (#230) +- [5b90ae92](https://github.com/kubedb/percona-xtradb/commit/5b90ae92) Update kmodules.xyz/monitoring-agent-api (#229) +- [13edd56c](https://github.com/kubedb/percona-xtradb/commit/13edd56c) Remove global variable for preconditions (#228) +- [29b4a103](https://github.com/kubedb/percona-xtradb/commit/29b4a103) Update repository config (#227) +- [56b7d005](https://github.com/kubedb/percona-xtradb/commit/56b7d005) Remove docs +- [87f94bb7](https://github.com/kubedb/percona-xtradb/commit/87f94bb7) Use DisableAnalytics flag from license (#226) +- [2f92a7d0](https://github.com/kubedb/percona-xtradb/commit/2f92a7d0) Update license-verifier (#225) +- [11db9761](https://github.com/kubedb/percona-xtradb/commit/11db9761) Update audit and license-verifier version (#223) +- [4026e363](https://github.com/kubedb/percona-xtradb/commit/4026e363) Add pod, statefulSet and service labels support (#224) +- [eb09a518](https://github.com/kubedb/percona-xtradb/commit/eb09a518) Fix satori/go.uuid security vulnerability (#222) +- [0b6063c4](https://github.com/kubedb/percona-xtradb/commit/0b6063c4) Fix jwt-go security vulnerability (#221) +- [ba344a97](https://github.com/kubedb/percona-xtradb/commit/ba344a97) Fix jwt-go security vulnerability (#220) +- [9d3c6e65](https://github.com/kubedb/percona-xtradb/commit/9d3c6e65) Use nats.go v1.13.0 (#219) +- [7dbb955f](https://github.com/kubedb/percona-xtradb/commit/7dbb955f) Setup SiteInfo publisher (#218) +- [eab16c22](https://github.com/kubedb/percona-xtradb/commit/eab16c22) Update dependencies to publish SiteInfo (#217) +- [31e773dd](https://github.com/kubedb/percona-xtradb/commit/31e773dd) Update dependencies to publish SiteInfo (#216) +- [5a2ff511](https://github.com/kubedb/percona-xtradb/commit/5a2ff511) Update repository config (#215) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.7.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.7.0) + +- [e81fa81](https://github.com/kubedb/pg-coordinator/commit/e81fa81) Prepare for release v0.7.0 (#54) +- [7c49a84](https://github.com/kubedb/pg-coordinator/commit/7c49a84) Update kmodules.xyz/monitoring-agent-api (#53) +- [aed68ec](https://github.com/kubedb/pg-coordinator/commit/aed68ec) Update repository config (#52) +- [b052255](https://github.com/kubedb/pg-coordinator/commit/b052255) Fix: Raft log corrupted issue (#51) +- [9413347](https://github.com/kubedb/pg-coordinator/commit/9413347) Use DisableAnalytics flag from license (#50) +- [2fe1bfc](https://github.com/kubedb/pg-coordinator/commit/2fe1bfc) Update license-verifier (#49) +- 
[d6f9afd](https://github.com/kubedb/pg-coordinator/commit/d6f9afd) Support custom pod and controller labels (#48) +- [fb2b48c](https://github.com/kubedb/pg-coordinator/commit/fb2b48c) Postgres Server Restart If Sig-Killed (#44) +- [ab85e39](https://github.com/kubedb/pg-coordinator/commit/ab85e39) Print logs at Debug level +- [9b65232](https://github.com/kubedb/pg-coordinator/commit/9b65232) Log timestamp from zap logger used in raft (#47) +- [6d3eb77](https://github.com/kubedb/pg-coordinator/commit/6d3eb77) Update xorm dependency (#46) +- [b77df43](https://github.com/kubedb/pg-coordinator/commit/b77df43) Fix satori/go.uuid security vulnerability (#45) +- [3cd9cc4](https://github.com/kubedb/pg-coordinator/commit/3cd9cc4) Fix jwt-go security vulnerability (#43) +- [bd2356d](https://github.com/kubedb/pg-coordinator/commit/bd2356d) Fix: Postgres server single user mode start for bullseye image (#42) +- [0c8c18d](https://github.com/kubedb/pg-coordinator/commit/0c8c18d) Update dependencies to publish SiteInfo (#40) +- [06ee14c](https://github.com/kubedb/pg-coordinator/commit/06ee14c) Add support for Postgres version v14.0 (#41) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.10.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.10.0) + +- [e12cc8a9](https://github.com/kubedb/pgbouncer/commit/e12cc8a9) Prepare for release v0.10.0 (#190) +- [1e7d783e](https://github.com/kubedb/pgbouncer/commit/1e7d783e) Update kmodules.xyz/monitoring-agent-api (#189) +- [6e08b78b](https://github.com/kubedb/pgbouncer/commit/6e08b78b) Update repository config (#187) +- [ecd28729](https://github.com/kubedb/pgbouncer/commit/ecd28729) Remove global variable for preconditions (#188) +- [e8ad1227](https://github.com/kubedb/pgbouncer/commit/e8ad1227) Remove docs +- [3a2a4143](https://github.com/kubedb/pgbouncer/commit/3a2a4143) Use DisableAnalytics flag from license (#186) +- [308c521f](https://github.com/kubedb/pgbouncer/commit/308c521f) Update license-verifier (#185) +- [a3eb245d](https://github.com/kubedb/pgbouncer/commit/a3eb245d) Update audit and license-verifier version (#184) +- [236cec3c](https://github.com/kubedb/pgbouncer/commit/236cec3c) Support custom pod, service and controller(sts) labels (#183) +- [8a935075](https://github.com/kubedb/pgbouncer/commit/8a935075) Stop using beta apis +- [6f2bce67](https://github.com/kubedb/pgbouncer/commit/6f2bce67) Fix satori/go.uuid security vulnerability (#182) +- [51676d8e](https://github.com/kubedb/pgbouncer/commit/51676d8e) Fix jwt-go security vulnerability (#181) +- [ac2bbd35](https://github.com/kubedb/pgbouncer/commit/ac2bbd35) Fix jwt-go security vulnerability (#180) +- [01c0adc9](https://github.com/kubedb/pgbouncer/commit/01c0adc9) Use nats.go v1.13.0 (#179) +- [3260a07d](https://github.com/kubedb/pgbouncer/commit/3260a07d) Setup SiteInfo publisher (#178) +- [36353a42](https://github.com/kubedb/pgbouncer/commit/36353a42) Update dependencies to publish SiteInfo (#176) +- [ce4fdfc1](https://github.com/kubedb/pgbouncer/commit/ce4fdfc1) Update repository config (#175) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.23.0](https://github.com/kubedb/postgres/releases/tag/v0.23.0) + +- [b9b2521a](https://github.com/kubedb/postgres/commit/b9b2521a) Prepare for release v0.23.0 (#540) +- [6f98f884](https://github.com/kubedb/postgres/commit/6f98f884) Update kmodules.xyz/monitoring-agent-api (#539) +- [015dd315](https://github.com/kubedb/postgres/commit/015dd315) Update repository config (#537) +- 
[1ce33dd4](https://github.com/kubedb/postgres/commit/1ce33dd4) Remove global variable for preconditions (#538) +- [967b1bd5](https://github.com/kubedb/postgres/commit/967b1bd5) Remove docs +- [63585d4d](https://github.com/kubedb/postgres/commit/63585d4d) Use DisableAnalytics flag from license (#536) +- [8030b449](https://github.com/kubedb/postgres/commit/8030b449) Update license-verifier (#535) +- [30407273](https://github.com/kubedb/postgres/commit/30407273) Add pod, services, and pod-controller(sts) labels support (#533) +- [55c626a2](https://github.com/kubedb/postgres/commit/55c626a2) Add Raft client Port In Primary Service (#530) +- [a1a4bdb3](https://github.com/kubedb/postgres/commit/a1a4bdb3) Stop using beta api +- [e0e2a3e4](https://github.com/kubedb/postgres/commit/e0e2a3e4) Update xorm.io/xorm dependency (#532) +- [e6aacd05](https://github.com/kubedb/postgres/commit/e6aacd05) Fix satori/go.uuid security vulnerability (#531) +- [140226f7](https://github.com/kubedb/postgres/commit/140226f7) Fix jwt-go security vulnerability (#529) +- [31e9df33](https://github.com/kubedb/postgres/commit/31e9df33) Fix jwt-go security vulnerability (#528) +- [70fb383a](https://github.com/kubedb/postgres/commit/70fb383a) Use nats.go v1.13.0 (#527) +- [77d43f95](https://github.com/kubedb/postgres/commit/77d43f95) Setup SiteInfo publisher (#526) +- [8755bde2](https://github.com/kubedb/postgres/commit/8755bde2) Update dependencies to publish SiteInfo (#525) +- [feb81410](https://github.com/kubedb/postgres/commit/feb81410) Update repository config (#524) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.10.0](https://github.com/kubedb/proxysql/releases/tag/v0.10.0) + +- [88940863](https://github.com/kubedb/proxysql/commit/88940863) Prepare for release v0.10.0 (#207) +- [66ce0801](https://github.com/kubedb/proxysql/commit/66ce0801) Update kmodules.xyz/monitoring-agent-api (#206) +- [21b59886](https://github.com/kubedb/proxysql/commit/21b59886) Remove global variable for preconditions (#205) +- [884e3915](https://github.com/kubedb/proxysql/commit/884e3915) Update repository config (#204) +- [81c11592](https://github.com/kubedb/proxysql/commit/81c11592) Remove docs +- [271bc5af](https://github.com/kubedb/proxysql/commit/271bc5af) Use DisableAnalytics flag from license (#203) +- [4710a672](https://github.com/kubedb/proxysql/commit/4710a672) Update license-verifier (#202) +- [229ba8c7](https://github.com/kubedb/proxysql/commit/229ba8c7) Support custom pod, service and controller(sts) labels (#201) +- [3c915f61](https://github.com/kubedb/proxysql/commit/3c915f61) Update dependencies (#200) +- [7ce88a70](https://github.com/kubedb/proxysql/commit/7ce88a70) Fix jwt-go security vulnerability (#199) +- [bb2c78e8](https://github.com/kubedb/proxysql/commit/bb2c78e8) Fix jwt-go security vulnerability (#198) +- [2764f4c7](https://github.com/kubedb/proxysql/commit/2764f4c7) Use nats.go v1.13.0 (#197) +- [b06f614b](https://github.com/kubedb/proxysql/commit/b06f614b) Update dependencies to publish SiteInfo (#196) +- [6a067416](https://github.com/kubedb/proxysql/commit/6a067416) Update dependencies to publish SiteInfo (#195) +- [5f1ce0f2](https://github.com/kubedb/proxysql/commit/5f1ce0f2) Update repository config (#194) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.16.0](https://github.com/kubedb/redis/releases/tag/v0.16.0) + +- [c3986f47](https://github.com/kubedb/redis/commit/c3986f47) Prepare for release v0.16.0 (#362) +- 
[158af05f](https://github.com/kubedb/redis/commit/158af05f) Update kmodules.xyz/monitoring-agent-api (#361) +- [4cc13143](https://github.com/kubedb/redis/commit/4cc13143) Remove global variable for preconditions (#360) +- [16011733](https://github.com/kubedb/redis/commit/16011733) Update repository config (#359) +- [eea15b8a](https://github.com/kubedb/redis/commit/eea15b8a) Fix: Sentinel and Redis In Different Namespaces (#358) +- [38b28c4e](https://github.com/kubedb/redis/commit/38b28c4e) Remove docs +- [32b2565a](https://github.com/kubedb/redis/commit/32b2565a) Use DisableAnalytics flag from license (#357) +- [27d7f428](https://github.com/kubedb/redis/commit/27d7f428) Update license-verifier (#356) +- [c00f72bf](https://github.com/kubedb/redis/commit/c00f72bf) Update audit and license-verifier version (#354) +- [7aebec13](https://github.com/kubedb/redis/commit/7aebec13) Add pod, statefulSet and service labels support (#355) +- [2f09ae66](https://github.com/kubedb/redis/commit/2f09ae66) Fix: resolve panic issue when sentinelRef is Null or empty (#353) +- [016fc0ff](https://github.com/kubedb/redis/commit/016fc0ff) Redis Custom Config issue (#351) +- [09d750ac](https://github.com/kubedb/redis/commit/09d750ac) Update dependencies (#352) +- [4ac3e812](https://github.com/kubedb/redis/commit/4ac3e812) Fix jwt-go security vulnerability (#350) +- [4f7fd873](https://github.com/kubedb/redis/commit/4f7fd873) Fix jwt-go security vulnerability (#349) +- [f86d4fb1](https://github.com/kubedb/redis/commit/f86d4fb1) Fix: Redis Panic issue for sentinel (#348) +- [7b1c53a6](https://github.com/kubedb/redis/commit/7b1c53a6) Use nats.go v1.13.0 (#347) +- [7d9017e8](https://github.com/kubedb/redis/commit/7d9017e8) Setup SiteInfo publisher (#346) +- [34c98fc3](https://github.com/kubedb/redis/commit/34c98fc3) Update dependencies to publish SiteInfo (#345) +- [1831c5b7](https://github.com/kubedb/redis/commit/1831c5b7) Update dependencies to publish SiteInfo (#344) +- [4798de2d](https://github.com/kubedb/redis/commit/4798de2d) Update repository config (#343) +- [a14cd630](https://github.com/kubedb/redis/commit/a14cd630) Fix: Redis monitoring port (#342) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.2.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.2.0) + +- [8b9d7eb](https://github.com/kubedb/redis-coordinator/commit/8b9d7eb) Prepare for release v0.2.0 (#13) +- [3399280](https://github.com/kubedb/redis-coordinator/commit/3399280) Update kmodules.xyz/monitoring-agent-api (#12) +- [eb51783](https://github.com/kubedb/redis-coordinator/commit/eb51783) Update repository config (#11) +- [fff31b5](https://github.com/kubedb/redis-coordinator/commit/fff31b5) Use DisableAnalytics flag from license (#10) +- [f2b347c](https://github.com/kubedb/redis-coordinator/commit/f2b347c) Update license-verifier (#9) +- [361e3f7](https://github.com/kubedb/redis-coordinator/commit/361e3f7) Support custom pod and controller labels (#8) +- [ad486b9](https://github.com/kubedb/redis-coordinator/commit/ad486b9) Update dependencies (#7) +- [560e04d](https://github.com/kubedb/redis-coordinator/commit/560e04d) Fix satori/go.uuid security vulnerability (#6) +- [a0bd03b](https://github.com/kubedb/redis-coordinator/commit/a0bd03b) Fix jwt-go security vulnerability (#5) +- [6a1f913](https://github.com/kubedb/redis-coordinator/commit/6a1f913) Fix jwt-go security vulnerability (#4) +- [8f1418e](https://github.com/kubedb/redis-coordinator/commit/8f1418e) Use nats.go v1.13.0 (#3) + + + +## 
[kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.10.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.10.0) + +- [bde31cc8](https://github.com/kubedb/replication-mode-detector/commit/bde31cc8) Prepare for release v0.10.0 (#175) +- [93abeed6](https://github.com/kubedb/replication-mode-detector/commit/93abeed6) Update kmodules.xyz/monitoring-agent-api (#174) +- [78bff385](https://github.com/kubedb/replication-mode-detector/commit/78bff385) Update repository config (#173) +- [6578cc86](https://github.com/kubedb/replication-mode-detector/commit/6578cc86) Use DisableAnalytics flag from license (#172) +- [b99f779b](https://github.com/kubedb/replication-mode-detector/commit/b99f779b) Update license-verifier (#171) +- [a62adbd0](https://github.com/kubedb/replication-mode-detector/commit/a62adbd0) Support custom pod and controller labels (#170) +- [afb2bfd9](https://github.com/kubedb/replication-mode-detector/commit/afb2bfd9) Update dependencies (#169) +- [9b65b2c5](https://github.com/kubedb/replication-mode-detector/commit/9b65b2c5) Update xorm dependency (#168) +- [a2427a67](https://github.com/kubedb/replication-mode-detector/commit/a2427a67) Fix satori/go.uuid security vulnerability (#167) +- [0a0163ca](https://github.com/kubedb/replication-mode-detector/commit/0a0163ca) Fix jwt-go security vulnerability (#166) +- [4f69c8c3](https://github.com/kubedb/replication-mode-detector/commit/4f69c8c3) Fix jwt-go security vulnerability (#165) +- [1005f1a2](https://github.com/kubedb/replication-mode-detector/commit/1005f1a2) Update dependencies to publish SiteInfo (#164) +- [9ac9d09e](https://github.com/kubedb/replication-mode-detector/commit/9ac9d09e) Update dependencies to publish SiteInfo (#163) +- [c55ae055](https://github.com/kubedb/replication-mode-detector/commit/c55ae055) Update repository config (#162) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.8.0](https://github.com/kubedb/tests/releases/tag/v0.8.0) + +- [50f414a9](https://github.com/kubedb/tests/commit/50f414a9) Prepare for release v0.8.0 (#156) +- [25e5c5b6](https://github.com/kubedb/tests/commit/25e5c5b6) Update kmodules.xyz/monitoring-agent-api (#155) +- [951d17ca](https://github.com/kubedb/tests/commit/951d17ca) Update repository config (#154) +- [d0988abf](https://github.com/kubedb/tests/commit/d0988abf) Use DisableAnalytics flag from license (#153) +- [7aea8907](https://github.com/kubedb/tests/commit/7aea8907) Update license-verifier (#152) +- [637ae1a0](https://github.com/kubedb/tests/commit/637ae1a0) Support custom pod and controller labels (#151) +- [45223290](https://github.com/kubedb/tests/commit/45223290) Update dependencies (#150) +- [fcef9222](https://github.com/kubedb/tests/commit/fcef9222) Fix satori/go.uuid security vulnerability (#149) +- [1d308fc7](https://github.com/kubedb/tests/commit/1d308fc7) Fix jwt-go security vulnerability (#148) +- [9c764d48](https://github.com/kubedb/tests/commit/9c764d48) Fix jwt-go security vulnerability (#147) +- [cef34499](https://github.com/kubedb/tests/commit/cef34499) Fix jwt-go security vulnerability (#146) +- [2c1d6094](https://github.com/kubedb/tests/commit/2c1d6094) Update dependencies to publish SiteInfo (#145) +- [443c8390](https://github.com/kubedb/tests/commit/443c8390) Update dependencies to publish SiteInfo (#144) +- [5f71478f](https://github.com/kubedb/tests/commit/5f71478f) Update repository config (#143) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2021.11.24.md 
b/content/docs/v2024.1.31/CHANGELOG-v2021.11.24.md new file mode 100644 index 0000000000..452864c25c --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2021.11.24.md @@ -0,0 +1,274 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2021.11.24 + name: Changelog-v2021.11.24 + parent: welcome + weight: 20211124 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2021.11.24/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2021.11.24/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2021.11.24 (2021-11-24) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.24.0](https://github.com/kubedb/apimachinery/releases/tag/v0.24.0) + +- [fe94664c](https://github.com/kubedb/apimachinery/commit/fe94664c) Fix UI crd schema +- [5904fd60](https://github.com/kubedb/apimachinery/commit/5904fd60) Fix UI type pluralization +- [cd6d64c9](https://github.com/kubedb/apimachinery/commit/cd6d64c9) Generate crd yamls for UI types (#823) +- [3ffb45c3](https://github.com/kubedb/apimachinery/commit/3ffb45c3) Remove unused ui apis (#822) +- [444a0862](https://github.com/kubedb/apimachinery/commit/444a0862) Update dependencies +- [a4234dc5](https://github.com/kubedb/apimachinery/commit/a4234dc5) Add UI api types (#821) +- [fcda9c60](https://github.com/kubedb/apimachinery/commit/fcda9c60) Add helper function to filter database pods (#820) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.9.0](https://github.com/kubedb/autoscaler/releases/tag/v0.9.0) + +- [2beca453](https://github.com/kubedb/autoscaler/commit/2beca453) Prepare for release v0.9.0 (#55) +- [52ea3990](https://github.com/kubedb/autoscaler/commit/52ea3990) Fix SiteInfo publishing (#54) +- [17177d27](https://github.com/kubedb/autoscaler/commit/17177d27) Update dependencies (#53) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.24.0](https://github.com/kubedb/cli/releases/tag/v0.24.0) + +- [17f1637c](https://github.com/kubedb/cli/commit/17f1637c) Prepare for release v0.24.0 (#643) +- [a8b26225](https://github.com/kubedb/cli/commit/a8b26225) Update dependencies (#642) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.24.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.24.0) + +- [4209d6b7](https://github.com/kubedb/elasticsearch/commit/4209d6b7) Prepare for release v0.24.0 (#544) +- [33c76558](https://github.com/kubedb/elasticsearch/commit/33c76558) Provide node roles to config-merger init-container (#542) +- [bef5bc1d](https://github.com/kubedb/elasticsearch/commit/bef5bc1d) Fix SiteInfo publishing (#543) +- [e877751e](https://github.com/kubedb/elasticsearch/commit/e877751e) Add support for SearchGuard 7.14 (#534) +- [04175cef](https://github.com/kubedb/elasticsearch/commit/04175cef) Update dependencies (#541) + + + +## [kubedb/enterprise](https://github.com/kubedb/enterprise) + +### [v0.11.0](https://github.com/kubedb/enterprise/releases/tag/v0.11.0) + +- [cb4ff7a6](https://github.com/kubedb/enterprise/commit/cb4ff7a6) Prepare for release v0.11.0 (#254) +- [069d7c3d](https://github.com/kubedb/enterprise/commit/069d7c3d) Fix SiteInfo publishing (#253) +- [e556b56e](https://github.com/kubedb/enterprise/commit/e556b56e) Update 
dependencies (#252) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2021.11.24](https://github.com/kubedb/installer/releases/tag/v2021.11.24) + +- [b6b172d5](https://github.com/kubedb/installer/commit/b6b172d5) Prepare for release v2021.11.24 (#403) +- [b4ecd09e](https://github.com/kubedb/installer/commit/b4ecd09e) Update repository config (#402) +- [8ca4fae4](https://github.com/kubedb/installer/commit/8ca4fae4) Add support for SearchGuard 7.14.2 (#383) +- [cabbd39d](https://github.com/kubedb/installer/commit/cabbd39d) Update dependencies (#401) +- [57183edd](https://github.com/kubedb/installer/commit/57183edd) Update kubedb-opscenter chart dependencies via make file +- [d63d8d20](https://github.com/kubedb/installer/commit/d63d8d20) Add kubedb-opscenter chart (#400) +- [44cdc13c](https://github.com/kubedb/installer/commit/44cdc13c) Update ui-server chart +- [62b2390c](https://github.com/kubedb/installer/commit/62b2390c) Don't import UI crds (#399) +- [2008841f](https://github.com/kubedb/installer/commit/2008841f) Fix permissions for the ui-server chart +- [e6586325](https://github.com/kubedb/installer/commit/e6586325) Grant permission to watch rbac resources (#398) +- [56cdf5f8](https://github.com/kubedb/installer/commit/56cdf5f8) Add kubedb-ui-server chart (#396) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.8.0](https://github.com/kubedb/mariadb/releases/tag/v0.8.0) + +- [c90ff2a2](https://github.com/kubedb/mariadb/commit/c90ff2a2) Prepare for release v0.8.0 (#115) +- [176a2468](https://github.com/kubedb/mariadb/commit/176a2468) Fix SiteInfo publishing (#114) +- [04216631](https://github.com/kubedb/mariadb/commit/04216631) Update dependencies (#113) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.4.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.4.0) + +- [c4f422f](https://github.com/kubedb/mariadb-coordinator/commit/c4f422f) Prepare for release v0.4.0 (#27) +- [5776558](https://github.com/kubedb/mariadb-coordinator/commit/5776558) Fix SiteInfo publishing (#26) +- [9142160](https://github.com/kubedb/mariadb-coordinator/commit/9142160) Update dependencies (#25) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.17.0](https://github.com/kubedb/memcached/releases/tag/v0.17.0) + +- [4f426b96](https://github.com/kubedb/memcached/commit/4f426b96) Prepare for release v0.17.0 (#330) +- [67e974bc](https://github.com/kubedb/memcached/commit/67e974bc) Fix SiteInfo publishing (#329) +- [5feae6ab](https://github.com/kubedb/memcached/commit/5feae6ab) Update dependencies (#328) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.17.0](https://github.com/kubedb/mongodb/releases/tag/v0.17.0) + +- [d842a13d](https://github.com/kubedb/mongodb/commit/d842a13d) Prepare for release v0.17.0 (#440) +- [16fc38d1](https://github.com/kubedb/mongodb/commit/16fc38d1) Fix SiteInfo publishing (#439) +- [036de72e](https://github.com/kubedb/mongodb/commit/036de72e) Update dependencies (#438) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.17.0](https://github.com/kubedb/mysql/releases/tag/v0.17.0) + +- [ae4b36c9](https://github.com/kubedb/mysql/commit/ae4b36c9) Prepare for release v0.17.0 (#435) +- [af98af2e](https://github.com/kubedb/mysql/commit/af98af2e) Fix SiteInfo publishing (#434) +- [8af7f2ed](https://github.com/kubedb/mysql/commit/8af7f2ed) Update dependencies (#433) +- 
[16e20d5f](https://github.com/kubedb/mysql/commit/16e20d5f) Fix MySQL app binding URL (#432) +- [36b10c3b](https://github.com/kubedb/mysql/commit/36b10c3b) Update health checker to use only the database pods (#431) +- [0fa954cc](https://github.com/kubedb/mysql/commit/0fa954cc) Ensure health checker uses only the database pods (#430) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.2.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.2.0) + +- [587bc66](https://github.com/kubedb/mysql-coordinator/commit/587bc66) Prepare for release v0.2.0 (#21) +- [126c117](https://github.com/kubedb/mysql-coordinator/commit/126c117) Fix SiteInfo publishing (#20) +- [cec1517](https://github.com/kubedb/mysql-coordinator/commit/cec1517) Update dependencies (#19) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.2.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.2.0) + + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.24.0](https://github.com/kubedb/operator/releases/tag/v0.24.0) + +- [3657964d](https://github.com/kubedb/operator/commit/3657964d) Prepare for release v0.24.0 (#438) +- [e8ca9fdd](https://github.com/kubedb/operator/commit/e8ca9fdd) Fix SiteInfo publishing (#437) +- [0a153b6d](https://github.com/kubedb/operator/commit/0a153b6d) Update dependencies (#436) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.11.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.11.0) + +- [b3aa8b5b](https://github.com/kubedb/percona-xtradb/commit/b3aa8b5b) Prepare for release v0.11.0 (#233) +- [59444b38](https://github.com/kubedb/percona-xtradb/commit/59444b38) Fix SiteInfo publishing (#232) +- [448884b4](https://github.com/kubedb/percona-xtradb/commit/448884b4) Update dependencies (#231) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.8.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.8.0) + +- [35e2b4f](https://github.com/kubedb/pg-coordinator/commit/35e2b4f) Prepare for release v0.8.0 (#57) +- [68cbc96](https://github.com/kubedb/pg-coordinator/commit/68cbc96) Update dependencies (#56) +- [a09b701](https://github.com/kubedb/pg-coordinator/commit/a09b701) Check if pods are controlled by kubedb statefulset (#55) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.11.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.11.0) + +- [bc1b2498](https://github.com/kubedb/pgbouncer/commit/bc1b2498) Prepare for release v0.11.0 (#193) +- [fdd843a5](https://github.com/kubedb/pgbouncer/commit/fdd843a5) Fix SiteInfo publishing (#192) +- [766caf5a](https://github.com/kubedb/pgbouncer/commit/766caf5a) Update dependencies (#191) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.24.0](https://github.com/kubedb/postgres/releases/tag/v0.24.0) + +- [cff2c9ca](https://github.com/kubedb/postgres/commit/cff2c9ca) Prepare for release v0.24.0 (#544) +- [8fa72022](https://github.com/kubedb/postgres/commit/8fa72022) Fix SiteInfo publishing (#543) +- [849a67ef](https://github.com/kubedb/postgres/commit/849a67ef) Update dependencies (#542) +- [c5d14298](https://github.com/kubedb/postgres/commit/c5d14298) Ensure health checker uses only the database pods (#541) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.11.0](https://github.com/kubedb/proxysql/releases/tag/v0.11.0) + +- [532e00d3](https://github.com/kubedb/proxysql/commit/532e00d3) 
Prepare for release v0.11.0 (#210) +- [cf7cc989](https://github.com/kubedb/proxysql/commit/cf7cc989) Fix SiteInfo publishing (#209) +- [c10cb631](https://github.com/kubedb/proxysql/commit/c10cb631) Update dependencies (#208) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.17.0](https://github.com/kubedb/redis/releases/tag/v0.17.0) + +- [73c3be89](https://github.com/kubedb/redis/commit/73c3be89) Prepare for release v0.17.0 (#366) +- [89394093](https://github.com/kubedb/redis/commit/89394093) Fix SiteInfo publishing (#365) +- [f3146a45](https://github.com/kubedb/redis/commit/f3146a45) Update dependencies (#364) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.3.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.3.0) + +- [62a268b](https://github.com/kubedb/redis-coordinator/commit/62a268b) Prepare for release v0.3.0 (#16) +- [851d915](https://github.com/kubedb/redis-coordinator/commit/851d915) Fix SiteInfo publishing (#15) +- [25fa001](https://github.com/kubedb/redis-coordinator/commit/25fa001) Update dependencies (#14) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.11.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.11.0) + +- [d475b0e1](https://github.com/kubedb/replication-mode-detector/commit/d475b0e1) Prepare for release v0.11.0 (#177) +- [439578e7](https://github.com/kubedb/replication-mode-detector/commit/439578e7) Update dependencies (#176) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.9.0](https://github.com/kubedb/tests/releases/tag/v0.9.0) + +- [e459024c](https://github.com/kubedb/tests/commit/e459024c) Prepare for release v0.9.0 (#158) +- [dabca0ad](https://github.com/kubedb/tests/commit/dabca0ad) Update dependencies (#157) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.02.22.md b/content/docs/v2024.1.31/CHANGELOG-v2022.02.22.md new file mode 100644 index 0000000000..e95fb35747 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.02.22.md @@ -0,0 +1,541 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.02.22 + name: Changelog-v2022.02.22 + parent: welcome + weight: 20220222 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.02.22/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.02.22/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.02.22 (2022-02-18) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.25.0](https://github.com/kubedb/apimachinery/releases/tag/v0.25.0) + +- [4ed35401](https://github.com/kubedb/apimachinery/commit/4ed35401) Add Elasticsearch dashboard helper methods and constants (#860) +- [8c9cb4b7](https://github.com/kubedb/apimachinery/commit/8c9cb4b7) fix mysqldatabase validator webhook (#870) +- [6be2000e](https://github.com/kubedb/apimachinery/commit/6be2000e) Add Schema Manager for Postgres (#854) +- [fa5b5267](https://github.com/kubedb/apimachinery/commit/fa5b5267) Add helper method for MySQL (#871) +- [161fcef7](https://github.com/kubedb/apimachinery/commit/161fcef7) Remove RedisDatabase crd (#869) +- [f7217890](https://github.com/kubedb/apimachinery/commit/f7217890) Use admission/v1 api 
types (#868) +- [02c89901](https://github.com/kubedb/apimachinery/commit/02c89901) Cancel concurrent CI runs for same pr/commit (#867) +- [c6db524e](https://github.com/kubedb/apimachinery/commit/c6db524e) Cancel concurrent CI runs for same pr/commit (#866) +- [f1d3fa44](https://github.com/kubedb/apimachinery/commit/f1d3fa44) Remove Enable***Webhook fields from common Config (#865) +- [f0d84187](https://github.com/kubedb/apimachinery/commit/f0d84187) Add ES constants: ElasticsearchJavaOptsEnv (#864) +- [90877a3d](https://github.com/kubedb/apimachinery/commit/90877a3d) Add disableAuth Support in Redis (#863) +- [da0fea34](https://github.com/kubedb/apimachinery/commit/da0fea34) Add support to configure JVM heap in terms of percentage (#861) +- [5d665ff1](https://github.com/kubedb/apimachinery/commit/5d665ff1) Add doubleOptIn helpers; Change 'Successful' to 'Current' (#856) +- [1ff8a60c](https://github.com/kubedb/apimachinery/commit/1ff8a60c) Fix dashboard api and webhook helper function (#852) +- [fdad1ab2](https://github.com/kubedb/apimachinery/commit/fdad1ab2) Convert configmap for redis +- [b2887180](https://github.com/kubedb/apimachinery/commit/b2887180) Update repository config (#855) +- [713bb229](https://github.com/kubedb/apimachinery/commit/713bb229) Make dashboard & dashboardInitContainer fields optional (#853) +- [fc35bc33](https://github.com/kubedb/apimachinery/commit/fc35bc33) Add Constants for MariaDB ApplyConfig OpsReq (#851) +- [18a94e28](https://github.com/kubedb/apimachinery/commit/18a94e28) Update constants for Elasticsearch horizontal scaling (#849) +- [c327bc75](https://github.com/kubedb/apimachinery/commit/c327bc75) Add helper method for Mysql Read Replica (#848) +- [967a2137](https://github.com/kubedb/apimachinery/commit/967a2137) Add common condition-related constants & GetPhase function (#845) +- [e6e6d092](https://github.com/kubedb/apimachinery/commit/e6e6d092) Add dashboard image in ElasticsearchVersion (#824) +- [13b91fde](https://github.com/kubedb/apimachinery/commit/13b91fde) Add helper method for MySQL Read Replica (#847) +- [dfb7dd5c](https://github.com/kubedb/apimachinery/commit/dfb7dd5c) Add `ApplyConfig` on MariaDB Reconfigure Ops Request (#846) +- [449b6d64](https://github.com/kubedb/apimachinery/commit/449b6d64) Use lower case letters +- [ee91f91a](https://github.com/kubedb/apimachinery/commit/ee91f91a) Fix typo in package name (#844) +- [8a260d9a](https://github.com/kubedb/apimachinery/commit/8a260d9a) Add Config Generator for Reconfigure (#835) +- [55f68a75](https://github.com/kubedb/apimachinery/commit/55f68a75) Add support for MySQL Read Only Replica (#827) +- [c2d563c4](https://github.com/kubedb/apimachinery/commit/c2d563c4) Fix linter error +- [de55a914](https://github.com/kubedb/apimachinery/commit/de55a914) Add Timeout on MySQLOpsRequestSpec (#825) +- [6894aa4d](https://github.com/kubedb/apimachinery/commit/6894aa4d) Update Volume Expansion Mode Name in Storage Autoscaler (#843) +- [4834d9c2](https://github.com/kubedb/apimachinery/commit/4834d9c2) Add dashboard and schema-manager apis (#841) +- [1bfbd8a8](https://github.com/kubedb/apimachinery/commit/1bfbd8a8) Add VolumeExpansion Mode in PostgresOpsRequest (#842) +- [9831faf3](https://github.com/kubedb/apimachinery/commit/9831faf3) Add UpdateVersion ops request type (#838) +- [113972a8](https://github.com/kubedb/apimachinery/commit/113972a8) Add EnforceFsGroup field in Postgres Spec (#839) +- [ff687f61](https://github.com/kubedb/apimachinery/commit/ff687f61) Add Changes for MariaDB Offline Volume 
Expansion and MariaDB AutoScaler (#834) +- [e438a2d8](https://github.com/kubedb/apimachinery/commit/e438a2d8) Fix spelling +- [a90c72c7](https://github.com/kubedb/apimachinery/commit/a90c72c7) Rename ***Overview api types to ***Insight (#840) +- [f3a216bf](https://github.com/kubedb/apimachinery/commit/f3a216bf) Add support of offline volume expansion for Elasticsearch (#826) +- [77708728](https://github.com/kubedb/apimachinery/commit/77708728) Update repository config (#837) +- [de96ed84](https://github.com/kubedb/apimachinery/commit/de96ed84) Add mongodb reprovision ops request (#829) +- [d35fa391](https://github.com/kubedb/apimachinery/commit/d35fa391) Add EphemeralStorage in MongoDB (#828) +- [24b06131](https://github.com/kubedb/apimachinery/commit/24b06131) Add constant for mongodb `configuration.js` (#830) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.10.0](https://github.com/kubedb/autoscaler/releases/tag/v0.10.0) + +- [5b5cab07](https://github.com/kubedb/autoscaler/commit/5b5cab07) Prepare for release v0.10.0 (#75) +- [1da577b7](https://github.com/kubedb/autoscaler/commit/1da577b7) Add MariaDB Autoscaler | Add expandMode field on Autoscaler (#58) +- [292b4a17](https://github.com/kubedb/autoscaler/commit/292b4a17) Fix typo (#74) +- [f3a518ce](https://github.com/kubedb/autoscaler/commit/f3a518ce) Add suffix to webhook resource (#73) +- [e5683679](https://github.com/kubedb/autoscaler/commit/e5683679) Allow partially installing webhook server (#72) +- [dc1e1a19](https://github.com/kubedb/autoscaler/commit/dc1e1a19) Fix AdmissionReview api version (#71) +- [8baf503a](https://github.com/kubedb/autoscaler/commit/8baf503a) Update dependencies +- [11935336](https://github.com/kubedb/autoscaler/commit/11935336) Add make uninstall & purge targets +- [e6bd0c08](https://github.com/kubedb/autoscaler/commit/e6bd0c08) Fix commands (#69) +- [11aab741](https://github.com/kubedb/autoscaler/commit/11aab741) Cancel concurrent CI runs for same pr/commit (#68) +- [a38279f3](https://github.com/kubedb/autoscaler/commit/a38279f3) Fix linter error (#67) +- [749cca26](https://github.com/kubedb/autoscaler/commit/749cca26) Update dependencies (#66) +- [10510c7f](https://github.com/kubedb/autoscaler/commit/10510c7f) Cancel concurrent CI runs for same pr/commit (#65) +- [f372ee10](https://github.com/kubedb/autoscaler/commit/f372ee10) Introduce separate commands for operator and webhook (#64) +- [5a8b7e36](https://github.com/kubedb/autoscaler/commit/5a8b7e36) Use stash.appscode.dev/apimachinery@v0.18.0 (#63) +- [0252232d](https://github.com/kubedb/autoscaler/commit/0252232d) Update UID generation for GenericResource (#62) +- [8bf7600b](https://github.com/kubedb/autoscaler/commit/8bf7600b) Fix mongodb inMemory shard Autoscaler (#61) +- [4d1c2222](https://github.com/kubedb/autoscaler/commit/4d1c2222) Update SiteInfo (#60) +- [93c5cbaf](https://github.com/kubedb/autoscaler/commit/93c5cbaf) Generate GenericResource +- [21fed0b2](https://github.com/kubedb/autoscaler/commit/21fed0b2) Publish GenericResource (#59) +- [4b57f902](https://github.com/kubedb/autoscaler/commit/4b57f902) Recover from panic in reconcilers (#57) +- [678eab33](https://github.com/kubedb/autoscaler/commit/678eab33) Use Go 1.17 module format +- [bff8d517](https://github.com/kubedb/autoscaler/commit/bff8d517) Update package module path + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.25.0](https://github.com/kubedb/cli/releases/tag/v0.25.0) + +- [d22f9b86](https://github.com/kubedb/cli/commit/d22f9b86) 
+ +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.10.0](https://github.com/kubedb/autoscaler/releases/tag/v0.10.0) + +- [5b5cab07](https://github.com/kubedb/autoscaler/commit/5b5cab07) Prepare for release v0.10.0 (#75) +- [1da577b7](https://github.com/kubedb/autoscaler/commit/1da577b7) Add MariaDB Autoscaler | Add expandMode field on Autoscaler (#58) +- [292b4a17](https://github.com/kubedb/autoscaler/commit/292b4a17) Fix typo (#74) +- [f3a518ce](https://github.com/kubedb/autoscaler/commit/f3a518ce) Add suffix to webhook resource (#73) +- [e5683679](https://github.com/kubedb/autoscaler/commit/e5683679) Allow partially installing webhook server (#72) +- [dc1e1a19](https://github.com/kubedb/autoscaler/commit/dc1e1a19) Fix AdmissionReview api version (#71) +- [8baf503a](https://github.com/kubedb/autoscaler/commit/8baf503a) Update dependencies +- [11935336](https://github.com/kubedb/autoscaler/commit/11935336) Add make uninstall & purge targets +- [e6bd0c08](https://github.com/kubedb/autoscaler/commit/e6bd0c08) Fix commands (#69) +- [11aab741](https://github.com/kubedb/autoscaler/commit/11aab741) Cancel concurrent CI runs for same pr/commit (#68) +- [a38279f3](https://github.com/kubedb/autoscaler/commit/a38279f3) Fix linter error (#67) +- [749cca26](https://github.com/kubedb/autoscaler/commit/749cca26) Update dependencies (#66) +- [10510c7f](https://github.com/kubedb/autoscaler/commit/10510c7f) Cancel concurrent CI runs for same pr/commit (#65) +- [f372ee10](https://github.com/kubedb/autoscaler/commit/f372ee10) Introduce separate commands for operator and webhook (#64) +- [5a8b7e36](https://github.com/kubedb/autoscaler/commit/5a8b7e36) Use stash.appscode.dev/apimachinery@v0.18.0 (#63) +- [0252232d](https://github.com/kubedb/autoscaler/commit/0252232d) Update UID generation for GenericResource (#62) +- [8bf7600b](https://github.com/kubedb/autoscaler/commit/8bf7600b) Fix mongodb inMemory shard Autoscaler (#61) +- [4d1c2222](https://github.com/kubedb/autoscaler/commit/4d1c2222) Update SiteInfo (#60) +- [93c5cbaf](https://github.com/kubedb/autoscaler/commit/93c5cbaf) Generate GenericResource +- [21fed0b2](https://github.com/kubedb/autoscaler/commit/21fed0b2) Publish GenericResource (#59) +- [4b57f902](https://github.com/kubedb/autoscaler/commit/4b57f902) Recover from panic in reconcilers (#57) +- [678eab33](https://github.com/kubedb/autoscaler/commit/678eab33) Use Go 1.17 module format +- [bff8d517](https://github.com/kubedb/autoscaler/commit/bff8d517) Update package module path + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.25.0](https://github.com/kubedb/cli/releases/tag/v0.25.0) + +- [d22f9b86](https://github.com/kubedb/cli/commit/d22f9b86) Prepare for release v0.25.0 (#654) +- [829e5d49](https://github.com/kubedb/cli/commit/829e5d49) Cancel concurrent CI runs for same pr/commit (#653) +- [2366e0fc](https://github.com/kubedb/cli/commit/2366e0fc) Update dependencies (#652) +- [3a4e8d6a](https://github.com/kubedb/cli/commit/3a4e8d6a) Cancel concurrent CI runs for same pr/commit (#651) +- [21d910b6](https://github.com/kubedb/cli/commit/21d910b6) Use Go 1.17 module format (#650) +- [3972a064](https://github.com/kubedb/cli/commit/3972a064) Use stash.appscode.dev/apimachinery@v0.18.0 (#649) +- [287d32bc](https://github.com/kubedb/cli/commit/287d32bc) Update SiteInfo (#648) +- [7aacbc4e](https://github.com/kubedb/cli/commit/7aacbc4e) Publish GenericResource (#647) +- [926af73f](https://github.com/kubedb/cli/commit/926af73f) Release cli for darwin/arm64 (#646) +- [f575f520](https://github.com/kubedb/cli/commit/f575f520) Recover from panic in reconcilers (#645) +- [5ebd64b6](https://github.com/kubedb/cli/commit/5ebd64b6) Update for release Stash@v2021.11.24 (#644) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.1.0](https://github.com/kubedb/dashboard/releases/tag/v0.1.0) + +- [dc2c5cd](https://github.com/kubedb/dashboard/commit/dc2c5cd) Prepare for release v0.1.0 (#15) +- [9444404](https://github.com/kubedb/dashboard/commit/9444404) Cancel concurrent CI runs for same pr/commit (#14) +- [19a0cc3](https://github.com/kubedb/dashboard/commit/19a0cc3) Add support for config-merger init container (#13) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.25.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.25.0) + +- [c5725973](https://github.com/kubedb/elasticsearch/commit/c5725973) Prepare for release v0.25.0 (#565) +- [d7535c40](https://github.com/kubedb/elasticsearch/commit/d7535c40) Add support for Elasticsearch:5.6.16-searchguard (#564) +- [0c348e3d](https://github.com/kubedb/elasticsearch/commit/0c348e3d) Add suffix to webhook resource (#563) +- [72a19921](https://github.com/kubedb/elasticsearch/commit/72a19921) Allow partially installing webhook server (#562) +- [fc6fd671](https://github.com/kubedb/elasticsearch/commit/fc6fd671) Fix AdmissionReview api version (#561) +- [99cac224](https://github.com/kubedb/elasticsearch/commit/99cac224) Fix commands (#559) +- [db3a0ef1](https://github.com/kubedb/elasticsearch/commit/db3a0ef1) Cancel concurrent CI runs for same pr/commit (#558) +- [6fd7c0df](https://github.com/kubedb/elasticsearch/commit/6fd7c0df) Update dependencies (#557) +- [847fe9c4](https://github.com/kubedb/elasticsearch/commit/847fe9c4) Cancel concurrent CI runs for same pr/commit (#555) +- [96a85825](https://github.com/kubedb/elasticsearch/commit/96a85825) Introduce separate commands for operator and webhook (#554) +- [7781b596](https://github.com/kubedb/elasticsearch/commit/7781b596) Use stash.appscode.dev/apimachinery@v0.18.0 (#553) +- [e8d411b4](https://github.com/kubedb/elasticsearch/commit/e8d411b4) Update UID generation for GenericResource (#552) +- [eb2f7d24](https://github.com/kubedb/elasticsearch/commit/eb2f7d24) Add support for JVM heap size in terms of percentage (#551) +- [bbb0d8c5](https://github.com/kubedb/elasticsearch/commit/bbb0d8c5) Update SiteInfo (#550) +- [feaf7f2a](https://github.com/kubedb/elasticsearch/commit/feaf7f2a) Generate GenericResource +- [ef1cc55e](https://github.com/kubedb/elasticsearch/commit/ef1cc55e) Publish GenericResource (#549) +- [f1b3203b](https://github.com/kubedb/elasticsearch/commit/f1b3203b) Revert PRODUCT_NAME
in makefile (#548) +- [30c80c26](https://github.com/kubedb/elasticsearch/commit/30c80c26) Fix resource patching issue in upsertContainer func (#547) +- [5ab262d3](https://github.com/kubedb/elasticsearch/commit/5ab262d3) Fix service ExternalTrafficPolicy repetitive patch issue (#546) +- [85231bcc](https://github.com/kubedb/elasticsearch/commit/85231bcc) Recover from panic in reconcilers (#545) + + + +## [kubedb/enterprise](https://github.com/kubedb/enterprise) + +### [v0.12.0](https://github.com/kubedb/enterprise/releases/tag/v0.12.0) + + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.02.22](https://github.com/kubedb/installer/releases/tag/v2022.02.22) + + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.9.0](https://github.com/kubedb/mariadb/releases/tag/v0.9.0) + +- [6a778dd9](https://github.com/kubedb/mariadb/commit/6a778dd9) Prepare for release v0.9.0 (#135) +- [b4a30c99](https://github.com/kubedb/mariadb/commit/b4a30c99) added-all +- [c6a5cc23](https://github.com/kubedb/mariadb/commit/c6a5cc23) Update validator webhook gvr +- [7d2d3c91](https://github.com/kubedb/mariadb/commit/7d2d3c91) Add suffix to webhook resource (#134) +- [0636c331](https://github.com/kubedb/mariadb/commit/0636c331) Allow partially installing webhook server (#133) +- [a735bbfb](https://github.com/kubedb/mariadb/commit/a735bbfb) Fix AdmissionReview api version (#132) +- [d29518d2](https://github.com/kubedb/mariadb/commit/d29518d2) Fix commands (#130) +- [d04f9e9a](https://github.com/kubedb/mariadb/commit/d04f9e9a) Cancel concurrent CI runs for same pr/commit (#129) +- [8c266208](https://github.com/kubedb/mariadb/commit/8c266208) Update dependencies (#128) +- [4871392b](https://github.com/kubedb/mariadb/commit/4871392b) Cancel concurrent CI runs for same pr/commit (#126) +- [34ff3c21](https://github.com/kubedb/mariadb/commit/34ff3c21) Introduce separate commands for operator and webhook (#125) +- [09286968](https://github.com/kubedb/mariadb/commit/09286968) Use stash.appscode.dev/apimachinery@v0.18.0 (#124) +- [c4f75cfd](https://github.com/kubedb/mariadb/commit/c4f75cfd) Update UID generation for GenericResource (#123) +- [bbdd36d6](https://github.com/kubedb/mariadb/commit/bbdd36d6) Update SiteInfo (#121) +- [10ff7827](https://github.com/kubedb/mariadb/commit/10ff7827) Generate GenericResource +- [cf8ac7fe](https://github.com/kubedb/mariadb/commit/cf8ac7fe) Publish GenericResource (#120) +- [a8d68263](https://github.com/kubedb/mariadb/commit/a8d68263) Allow database service account to get DB object from coordinator (#117) +- [5710c30c](https://github.com/kubedb/mariadb/commit/5710c30c) Revert product name on Makefile (#119) +- [62ec6717](https://github.com/kubedb/mariadb/commit/62ec6717) Update kubedb-community chart name to kubedb-provisioner (#118) +- [e909bd19](https://github.com/kubedb/mariadb/commit/e909bd19) Recover from panic in reconcilers (#116) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.5.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.5.0) + +- [c22dc23](https://github.com/kubedb/mariadb-coordinator/commit/c22dc23) Prepare for release v0.5.0 (#35) +- [c55747c](https://github.com/kubedb/mariadb-coordinator/commit/c55747c) Cancel concurrent CI runs for same pr/commit (#34) +- [aff152f](https://github.com/kubedb/mariadb-coordinator/commit/aff152f) Update dependencies (#33) +- [3ee0cd0](https://github.com/kubedb/mariadb-coordinator/commit/3ee0cd0) Cancel concurrent CI runs for same 
pr/commit (#32) +- [9e0e94c](https://github.com/kubedb/mariadb-coordinator/commit/9e0e94c) Update SiteInfo (#31) +- [f4a25a9](https://github.com/kubedb/mariadb-coordinator/commit/f4a25a9) Publish GenericResource (#30) +- [b086548](https://github.com/kubedb/mariadb-coordinator/commit/b086548) Get ReplicaCount from DB object when StatefulSet isNotFound (#29) +- [548249c](https://github.com/kubedb/mariadb-coordinator/commit/548249c) Recover from panic in reconcilers (#28) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.18.0](https://github.com/kubedb/memcached/releases/tag/v0.18.0) + +- [134b2f2d](https://github.com/kubedb/memcached/commit/134b2f2d) Prepare for release v0.18.0 (#345) +- [f0f5c8b4](https://github.com/kubedb/memcached/commit/f0f5c8b4) Add suffix to webhook resource (#344) +- [87d70155](https://github.com/kubedb/memcached/commit/87d70155) Allow partially installing webhook server (#343) +- [5949b5d6](https://github.com/kubedb/memcached/commit/5949b5d6) Fix AdmissionReview api version (#342) +- [31f19773](https://github.com/kubedb/memcached/commit/31f19773) Fix commands (#340) +- [bc8f831e](https://github.com/kubedb/memcached/commit/bc8f831e) Cancel concurrent CI runs for same pr/commit (#339) +- [dd6ce3a7](https://github.com/kubedb/memcached/commit/dd6ce3a7) Update dependencies (#338) +- [0fb58107](https://github.com/kubedb/memcached/commit/0fb58107) Cancel concurrent CI runs for same pr/commit (#337) +- [54c0f656](https://github.com/kubedb/memcached/commit/54c0f656) Introduce separate commands for operator and webhook (#336) +- [930593a8](https://github.com/kubedb/memcached/commit/930593a8) Use stash.appscode.dev/apimachinery@v0.18.0 (#335) +- [61742791](https://github.com/kubedb/memcached/commit/61742791) Update UID generation for GenericResource (#334) +- [bdba4e60](https://github.com/kubedb/memcached/commit/bdba4e60) Update SiteInfo (#333) +- [7fc04444](https://github.com/kubedb/memcached/commit/7fc04444) Generate GenericResource +- [8eb438f6](https://github.com/kubedb/memcached/commit/8eb438f6) Publish GenericResource (#332) +- [247d2c99](https://github.com/kubedb/memcached/commit/247d2c99) Recover from panic in reconcilers (#331) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.18.0](https://github.com/kubedb/mongodb/releases/tag/v0.18.0) + +- [7a354d6c](https://github.com/kubedb/mongodb/commit/7a354d6c) Prepare for release v0.18.0 (#462) +- [46b5f2a7](https://github.com/kubedb/mongodb/commit/46b5f2a7) Add suffix to webhook resource (#461) +- [8db1061d](https://github.com/kubedb/mongodb/commit/8db1061d) Allow partially installing webhook server (#460) +- [d21b4c46](https://github.com/kubedb/mongodb/commit/d21b4c46) Fix AdmissionReview api version (#458) +- [85ae88c1](https://github.com/kubedb/mongodb/commit/85ae88c1) Fix commands (#456) +- [e528b327](https://github.com/kubedb/mongodb/commit/e528b327) Cancel concurrent CI runs for same pr/commit (#455) +- [111d3d88](https://github.com/kubedb/mongodb/commit/111d3d88) Update dependencies (#454) +- [417ca61e](https://github.com/kubedb/mongodb/commit/417ca61e) Cancel concurrent CI runs for same pr/commit (#453) +- [2aacc8b2](https://github.com/kubedb/mongodb/commit/2aacc8b2) Introduce separate commands for operator and webhook (#452) +- [aaa48967](https://github.com/kubedb/mongodb/commit/aaa48967) Use stash.appscode.dev/apimachinery@v0.18.0 (#451) +- [b4f039b7](https://github.com/kubedb/mongodb/commit/b4f039b7) Update UID generation for GenericResource (#450) +- 
[537f1d2a](https://github.com/kubedb/mongodb/commit/537f1d2a) Fix shard health check (#448) +- [787cd3b0](https://github.com/kubedb/mongodb/commit/787cd3b0) Update SiteInfo (#447) +- [2d7b4b0e](https://github.com/kubedb/mongodb/commit/2d7b4b0e) Generate GenericResource +- [7df41e17](https://github.com/kubedb/mongodb/commit/7df41e17) Publish GenericResource (#446) +- [4eaa7aa2](https://github.com/kubedb/mongodb/commit/4eaa7aa2) Add configuration for ephemeral storage (#442) +- [28c8967d](https://github.com/kubedb/mongodb/commit/28c8967d) Add read/write health check (#443) +- [921451d7](https://github.com/kubedb/mongodb/commit/921451d7) Add support to apply `configuration.js` (#445) +- [027e307a](https://github.com/kubedb/mongodb/commit/027e307a) Update Reconcile method (#444) +- [9d609162](https://github.com/kubedb/mongodb/commit/9d609162) Recover from panic in reconcilers (#441) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.18.0](https://github.com/kubedb/mysql/releases/tag/v0.18.0) + +- [12b89a3e](https://github.com/kubedb/mysql/commit/12b89a3e) Prepare for release v0.18.0 (#455) +- [03df7640](https://github.com/kubedb/mysql/commit/03df7640) Add Support for MySQL Read Replica (#439) +- [abfa3adc](https://github.com/kubedb/mysql/commit/abfa3adc) Use component specific webhook install command +- [f42185e5](https://github.com/kubedb/mysql/commit/f42185e5) Add suffix to webhook resource (#454) +- [b8c47c15](https://github.com/kubedb/mysql/commit/b8c47c15) Fix AdmissionReview api version (#453) +- [e987420a](https://github.com/kubedb/mysql/commit/e987420a) Fix commands (#451) +- [db3a06de](https://github.com/kubedb/mysql/commit/db3a06de) Cancel concurrent CI runs for same pr/commit (#450) +- [4a4f156e](https://github.com/kubedb/mysql/commit/4a4f156e) Update dependencies (#449) +- [b7209b20](https://github.com/kubedb/mysql/commit/b7209b20) Cancel concurrent CI runs for same pr/commit (#448) +- [f6214514](https://github.com/kubedb/mysql/commit/f6214514) Introduce separate commands for operator and webhook (#447) +- [97ba973b](https://github.com/kubedb/mysql/commit/97ba973b) Use stash.appscode.dev/apimachinery@v0.18.0 (#446) +- [668f50ff](https://github.com/kubedb/mysql/commit/668f50ff) Update UID generation for GenericResource (#445) +- [ac411e95](https://github.com/kubedb/mysql/commit/ac411e95) Remove coordinator container for standalone instance (#443) +- [99441193](https://github.com/kubedb/mysql/commit/99441193) Update SiteInfo (#444) +- [a248a8e2](https://github.com/kubedb/mysql/commit/a248a8e2) Generate GenericResource +- [1e7e681b](https://github.com/kubedb/mysql/commit/1e7e681b) Publish GenericResource (#442) +- [2cca63b8](https://github.com/kubedb/mysql/commit/2cca63b8) Rename MySQLClusterModeGroupReplication to MySQLModeGroupReplication (#441) +- [a25b9a4c](https://github.com/kubedb/mysql/commit/a25b9a4c) Pass --set-gtid-purged=off to stash for innodb cluster.
(#437) +- [e2533c3a](https://github.com/kubedb/mysql/commit/e2533c3a) Recover from panic in reconcilers (#436) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.3.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.3.0) + +- [5784a32](https://github.com/kubedb/mysql-coordinator/commit/5784a32) Prepare for release v0.3.0 (#29) +- [9d7d210](https://github.com/kubedb/mysql-coordinator/commit/9d7d210) Cancel concurrent CI runs for same pr/commit (#28) +- [8c0afbd](https://github.com/kubedb/mysql-coordinator/commit/8c0afbd) Update dependencies (#27) +- [99284f0](https://github.com/kubedb/mysql-coordinator/commit/99284f0) Cancel concurrent CI runs for same pr/commit (#26) +- [bccd960](https://github.com/kubedb/mysql-coordinator/commit/bccd960) Update SiteInfo (#25) +- [d7c0d30](https://github.com/kubedb/mysql-coordinator/commit/d7c0d30) Publish GenericResource (#24) +- [4ffaf21](https://github.com/kubedb/mysql-coordinator/commit/4ffaf21) Recover from panic in reconcilers (#22) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.3.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.3.0) + +- [32d8ac8](https://github.com/kubedb/mysql-router-init/commit/32d8ac8) Cancel concurrent CI runs for same pr/commit (#16) +- [c01384d](https://github.com/kubedb/mysql-router-init/commit/c01384d) Cancel concurrent CI runs for same pr/commit (#15) +- [febef73](https://github.com/kubedb/mysql-router-init/commit/febef73) Publish GenericResource (#14) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.25.0](https://github.com/kubedb/operator/releases/tag/v0.25.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.12.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.12.0) + +- [d01b2914](https://github.com/kubedb/percona-xtradb/commit/d01b2914) Prepare for release v0.12.0 (#249) +- [6f821254](https://github.com/kubedb/percona-xtradb/commit/6f821254) Add suffix to webhook resource (#248) +- [0be0d287](https://github.com/kubedb/percona-xtradb/commit/0be0d287) Allow partially installing webhook server (#247) +- [a867d68c](https://github.com/kubedb/percona-xtradb/commit/a867d68c) Fix AdmissionReview api version (#246) +- [15c1e045](https://github.com/kubedb/percona-xtradb/commit/15c1e045) Fix commands (#244) +- [3c12213b](https://github.com/kubedb/percona-xtradb/commit/3c12213b) Cancel concurrent CI runs for same pr/commit (#243) +- [1458bd2b](https://github.com/kubedb/percona-xtradb/commit/1458bd2b) Update dependencies (#242) +- [675cd747](https://github.com/kubedb/percona-xtradb/commit/675cd747) Cancel concurrent CI runs for same pr/commit (#241) +- [3c5f5df0](https://github.com/kubedb/percona-xtradb/commit/3c5f5df0) Introduce separate commands for operator and webhook (#240) +- [cb4dd867](https://github.com/kubedb/percona-xtradb/commit/cb4dd867) Use stash.appscode.dev/apimachinery@v0.18.0 (#239) +- [b8bd01a9](https://github.com/kubedb/percona-xtradb/commit/b8bd01a9) Update UID generation for GenericResource (#238) +- [e6b35455](https://github.com/kubedb/percona-xtradb/commit/e6b35455) Update SiteInfo (#236) +- [473b9ba6](https://github.com/kubedb/percona-xtradb/commit/473b9ba6) Generate GenericResource +- [28321621](https://github.com/kubedb/percona-xtradb/commit/28321621) Publish GenericResource (#235) +- [94984a32](https://github.com/kubedb/percona-xtradb/commit/94984a32) Recover from panic in reconcilers (#234) + + + +## 
[kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.9.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.9.0) + +- [eb50dc2](https://github.com/kubedb/pg-coordinator/commit/eb50dc2) Prepare for release v0.9.0 (#66) +- [d27428b](https://github.com/kubedb/pg-coordinator/commit/d27428b) Cancel concurrent CI runs for same pr/commit (#65) +- [7beba31](https://github.com/kubedb/pg-coordinator/commit/7beba31) Update dependencies (#64) +- [feed8e5](https://github.com/kubedb/pg-coordinator/commit/feed8e5) Cancel concurrent CI runs for same pr/commit (#63) +- [d509ec3](https://github.com/kubedb/pg-coordinator/commit/d509ec3) Update SiteInfo (#62) +- [dfa09ba](https://github.com/kubedb/pg-coordinator/commit/dfa09ba) Publish GenericResource (#61) +- [3a850da](https://github.com/kubedb/pg-coordinator/commit/3a850da) Fix custom Auth secret issues (#60) +- [5cdea8c](https://github.com/kubedb/pg-coordinator/commit/5cdea8c) Use Postgres CR to get replica count (#59) +- [1070903](https://github.com/kubedb/pg-coordinator/commit/1070903) Recover from panic in reconcilers (#58) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.12.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.12.0) + +- [a244100d](https://github.com/kubedb/pgbouncer/commit/a244100d) Prepare for release v0.12.0 (#208) +- [3571411a](https://github.com/kubedb/pgbouncer/commit/3571411a) Add suffix to webhook resource (#207) +- [8d13a7bc](https://github.com/kubedb/pgbouncer/commit/8d13a7bc) Allow partially installing webhook server (#206) +- [05098834](https://github.com/kubedb/pgbouncer/commit/05098834) Fix AdmissionReview api version (#205) +- [117c33a7](https://github.com/kubedb/pgbouncer/commit/117c33a7) Fix commands (#203) +- [876c86d6](https://github.com/kubedb/pgbouncer/commit/876c86d6) Cancel concurrent CI runs for same pr/commit (#202) +- [d23c8939](https://github.com/kubedb/pgbouncer/commit/d23c8939) Update dependencies (#201) +- [3e1ed897](https://github.com/kubedb/pgbouncer/commit/3e1ed897) Cancel concurrent CI runs for same pr/commit (#200) +- [6ab49fde](https://github.com/kubedb/pgbouncer/commit/6ab49fde) Introduce separate commands for operator and webhook (#199) +- [aa1e2c7f](https://github.com/kubedb/pgbouncer/commit/aa1e2c7f) Use stash.appscode.dev/apimachinery@v0.18.0 (#198) +- [b602f703](https://github.com/kubedb/pgbouncer/commit/b602f703) Update UID generation for GenericResource (#197) +- [7acd55f4](https://github.com/kubedb/pgbouncer/commit/7acd55f4) Update SiteInfo (#196) +- [504f39d7](https://github.com/kubedb/pgbouncer/commit/504f39d7) Generate GenericResource +- [e4aaec6c](https://github.com/kubedb/pgbouncer/commit/e4aaec6c) Publish GenericResource (#195) +- [fe1b6138](https://github.com/kubedb/pgbouncer/commit/fe1b6138) Recover from panic in reconcilers (#194) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.25.0](https://github.com/kubedb/postgres/releases/tag/v0.25.0) + +- [7b764c0b](https://github.com/kubedb/postgres/commit/7b764c0b) Prepare for release v0.25.0 (#562) +- [3cc55bf0](https://github.com/kubedb/postgres/commit/3cc55bf0) Add suffix to webhook resource (#561) +- [59393ddb](https://github.com/kubedb/postgres/commit/59393ddb) Allow partially installing webhook server (#560) +- [a4eaa7af](https://github.com/kubedb/postgres/commit/a4eaa7af) Fix AdmissionReview api version (#559) +- [bc82ff36](https://github.com/kubedb/postgres/commit/bc82ff36) Fix commands (#557) +- 
[b4eaa521](https://github.com/kubedb/postgres/commit/b4eaa521) Cancel concurrent CI runs for same pr/commit (#556) +- [b43419f3](https://github.com/kubedb/postgres/commit/b43419f3) Update dependencies (#555) +- [3212a076](https://github.com/kubedb/postgres/commit/3212a076) Cancel concurrent CI runs for same pr/commit (#554) +- [578f48f1](https://github.com/kubedb/postgres/commit/578f48f1) Introduce separate commands for operator and webhook (#552) +- [124489ce](https://github.com/kubedb/postgres/commit/124489ce) Use stash.appscode.dev/apimachinery@v0.18.0 (#553) +- [6af28e8f](https://github.com/kubedb/postgres/commit/6af28e8f) Update UID generation for GenericResource (#551) +- [824b4a89](https://github.com/kubedb/postgres/commit/824b4a89) Update SiteInfo (#550) +- [2d8e23ed](https://github.com/kubedb/postgres/commit/2d8e23ed) Generate GenericResource +- [a933d0fb](https://github.com/kubedb/postgres/commit/a933d0fb) Publish GenericResource (#549) +- [2fbb7c8b](https://github.com/kubedb/postgres/commit/2fbb7c8b) Enforce FsGroup and add permission to get Postgres CR from coordinator (#547) +- [cdf23fcb](https://github.com/kubedb/postgres/commit/cdf23fcb) Fix: remove func SetDefaultResourceLimits call (#548) +- [adf84055](https://github.com/kubedb/postgres/commit/adf84055) Recover from panic in reconcilers (#545) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.12.0](https://github.com/kubedb/proxysql/releases/tag/v0.12.0) + +- [0bff10e8](https://github.com/kubedb/proxysql/commit/0bff10e8) Prepare for release v0.12.0 (#224) +- [4781caf4](https://github.com/kubedb/proxysql/commit/4781caf4) Fix AdmissionReview api version +- [b9d175c3](https://github.com/kubedb/proxysql/commit/b9d175c3) Add suffix to webhook resource (#223) +- [9935ddf2](https://github.com/kubedb/proxysql/commit/9935ddf2) Allow partially installing webhook server (#222) +- [31e15e52](https://github.com/kubedb/proxysql/commit/31e15e52) Create namespace if not present in install commands +- [15139595](https://github.com/kubedb/proxysql/commit/15139595) Fix commands (#220) +- [dbbf3ba2](https://github.com/kubedb/proxysql/commit/dbbf3ba2) Cancel concurrent CI runs for same pr/commit (#219) +- [85c46c87](https://github.com/kubedb/proxysql/commit/85c46c87) Update dependencies (#218) +- [ee41ced8](https://github.com/kubedb/proxysql/commit/ee41ced8) Cancel concurrent CI runs for same pr/commit (#217) +- [635f6b9b](https://github.com/kubedb/proxysql/commit/635f6b9b) Introduce separate commands for operator and webhook (#216) +- [056cfac6](https://github.com/kubedb/proxysql/commit/056cfac6) Use stash.appscode.dev/apimachinery@v0.18.0 (#215) +- [335460c7](https://github.com/kubedb/proxysql/commit/335460c7) Update UID generation for GenericResource (#214) +- [2148e1d7](https://github.com/kubedb/proxysql/commit/2148e1d7) Update SiteInfo (#213) +- [1a903feb](https://github.com/kubedb/proxysql/commit/1a903feb) Generate GenericResource +- [d7ec8b90](https://github.com/kubedb/proxysql/commit/d7ec8b90) Publish GenericResource (#212) +- [62769ef2](https://github.com/kubedb/proxysql/commit/62769ef2) Recover from panic in reconcilers (#211) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.18.0](https://github.com/kubedb/redis/releases/tag/v0.18.0) + +- [f506b8ad](https://github.com/kubedb/redis/commit/f506b8ad) Prepare for release v0.18.0 (#386) +- [ca77cd30](https://github.com/kubedb/redis/commit/ca77cd30) Fix: Multiple Redis Cluster with same name Monitored by Sentinel (#385) +- 
[ee2c5d31](https://github.com/kubedb/redis/commit/ee2c5d31) Fix AdmissionReview api version (#384) +- [02db9598](https://github.com/kubedb/redis/commit/02db9598) Add DisableAuth Support For Redis and Sentinel (#372) +- [64751b05](https://github.com/kubedb/redis/commit/64751b05) Fix: health checker for Redis Cluster Mode (#363) +- [10be0855](https://github.com/kubedb/redis/commit/10be0855) Add suffix to webhook resource (#383) +- [c5a2e86f](https://github.com/kubedb/redis/commit/c5a2e86f) Allow partially installing webhook server (#382) +- [46216979](https://github.com/kubedb/redis/commit/46216979) Change command name +- [dd1afb75](https://github.com/kubedb/redis/commit/dd1afb75) Fix admission api alias +- [ed61d9fa](https://github.com/kubedb/redis/commit/ed61d9fa) Fix commands (#379) +- [066c65a5](https://github.com/kubedb/redis/commit/066c65a5) Cancel concurrent CI runs for same pr/commit (#380) +- [63f58773](https://github.com/kubedb/redis/commit/63f58773) Install webhook server chart (#378) +- [4a3be0c8](https://github.com/kubedb/redis/commit/4a3be0c8) Update dependencies (#377) +- [4340c91e](https://github.com/kubedb/redis/commit/4340c91e) Cancel concurrent CI runs for same pr/commit (#376) +- [1719bf95](https://github.com/kubedb/redis/commit/1719bf95) Introduce separate commands for operator and webhook (#375) +- [aceab546](https://github.com/kubedb/redis/commit/aceab546) Use stash.appscode.dev/apimachinery@v0.18.0 (#374) +- [73283002](https://github.com/kubedb/redis/commit/73283002) Update UID generation for GenericResource (#373) +- [2c23c89b](https://github.com/kubedb/redis/commit/2c23c89b) Update SiteInfo (#371) +- [efd0041f](https://github.com/kubedb/redis/commit/efd0041f) Generate GenericResource +- [0e9a3244](https://github.com/kubedb/redis/commit/0e9a3244) Publish GenericResource (#370) +- [b4deca3e](https://github.com/kubedb/redis/commit/b4deca3e) Fix: Volume-Exp Permission Issue from Validator (#369) +- [ae1384d8](https://github.com/kubedb/redis/commit/ae1384d8) Add Container name when exec into pod for clustering (#368) +- [83dcec6d](https://github.com/kubedb/redis/commit/83dcec6d) Recover from panic in reconcilers (#367) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.4.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.4.0) + +- [a2adbd9](https://github.com/kubedb/redis-coordinator/commit/a2adbd9) Prepare for release v0.4.0 (#25) +- [7ab65d2](https://github.com/kubedb/redis-coordinator/commit/7ab65d2) Fix: Multiple Redis cluster with same name for Sentinel Monitoring (#24) +- [94043db](https://github.com/kubedb/redis-coordinator/commit/94043db) Disable redis auth (#20) +- [4a5c2e6](https://github.com/kubedb/redis-coordinator/commit/4a5c2e6) Cancel concurrent CI runs for same pr/commit (#23) +- [a207e38](https://github.com/kubedb/redis-coordinator/commit/a207e38) Update dependencies (#22) +- [cedef27](https://github.com/kubedb/redis-coordinator/commit/cedef27) Use Go 1.17 module format +- [335b4f6](https://github.com/kubedb/redis-coordinator/commit/335b4f6) Cancel concurrent CI runs for same pr/commit (#21) +- [17a7a07](https://github.com/kubedb/redis-coordinator/commit/17a7a07) Update SiteInfo (#19) +- [6f6013d](https://github.com/kubedb/redis-coordinator/commit/6f6013d) Publish GenericResource (#18) +- [3785029](https://github.com/kubedb/redis-coordinator/commit/3785029) Recover from panic in reconcilers (#17) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### 
[v0.12.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.12.0) + +- [86ec5d2a](https://github.com/kubedb/replication-mode-detector/commit/86ec5d2a) Prepare for release v0.12.0 (#184) +- [21cf5fe5](https://github.com/kubedb/replication-mode-detector/commit/21cf5fe5) Cancel concurrent CI runs for same pr/commit (#183) +- [c8a693ba](https://github.com/kubedb/replication-mode-detector/commit/c8a693ba) Update dependencies (#182) +- [31268557](https://github.com/kubedb/replication-mode-detector/commit/31268557) Cancel concurrent CI runs for same pr/commit (#181) +- [c471f782](https://github.com/kubedb/replication-mode-detector/commit/c471f782) Update SiteInfo (#180) +- [301a0b0c](https://github.com/kubedb/replication-mode-detector/commit/301a0b0c) Publish GenericResource (#179) +- [157723f2](https://github.com/kubedb/replication-mode-detector/commit/157723f2) Recover from panic in reconcilers (#178) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.1.0](https://github.com/kubedb/schema-manager/releases/tag/v0.1.0) + +- [27cbd85f](https://github.com/kubedb/schema-manager/commit/27cbd85f) Prepare for release v0.1.0 (#21) +- [e599be46](https://github.com/kubedb/schema-manager/commit/e599be46) Add Schema-Manager support for PostgreSQL (#12) +- [c0b7b037](https://github.com/kubedb/schema-manager/commit/c0b7b037) Reflect stash-v2022.02.22 related changes for MongoDB (#13) +- [cb194e15](https://github.com/kubedb/schema-manager/commit/cb194e15) Add stash support for mysqldatabase (#19) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.10.0](https://github.com/kubedb/tests/releases/tag/v0.10.0) + +- [72008dce](https://github.com/kubedb/tests/commit/72008dce) Prepare for release v0.10.0 (#169) +- [9f48d54c](https://github.com/kubedb/tests/commit/9f48d54c) Cancel concurrent CI runs for same pr/commit (#168) +- [39fb2faa](https://github.com/kubedb/tests/commit/39fb2faa) Update dependencies (#167) +- [82bef4de](https://github.com/kubedb/tests/commit/82bef4de) Update dependencies (#166) +- [3de40073](https://github.com/kubedb/tests/commit/3de40073) Use stash.appscode.dev/apimachinery@v0.18.0 (#165) +- [02695e02](https://github.com/kubedb/tests/commit/02695e02) Update SiteInfo (#164) +- [917f979c](https://github.com/kubedb/tests/commit/917f979c) Update dependencies (#163) +- [bd430a5e](https://github.com/kubedb/tests/commit/bd430a5e) Recover from panic in reconcilers (#159) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.1.0](https://github.com/kubedb/ui-server/releases/tag/v0.1.0) + +- [62f41a7](https://github.com/kubedb/ui-server/commit/62f41a7) Prepare for release v0.1.0 (#25) +- [004104c](https://github.com/kubedb/ui-server/commit/004104c) Cancel concurrent CI runs for same pr/commit (#24) +- [757f36a](https://github.com/kubedb/ui-server/commit/757f36a) Update uid generation (#23) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.1.0](https://github.com/kubedb/webhook-server/releases/tag/v0.1.0) + +- [70336af](https://github.com/kubedb/webhook-server/commit/70336af) Prepare for release v0.1.0 (#11) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.03.28.md b/content/docs/v2024.1.31/CHANGELOG-v2022.03.28.md new file mode 100644 index 0000000000..8182ccbac9 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.03.28.md @@ -0,0 +1,341 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: 
changelog-kubedb-v2022.03.28 + name: Changelog-v2022.03.28 + parent: welcome + weight: 20220328 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.03.28/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.03.28/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.03.28 (2022-03-29) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.26.0](https://github.com/kubedb/apimachinery/releases/tag/v0.26.0) + +- [3a3c03c0](https://github.com/kubedb/apimachinery/commit/3a3c03c0) Update dependencies +- [8b0203db](https://github.com/kubedb/apimachinery/commit/8b0203db) Use Go 1.18 (#877) +- [890903bc](https://github.com/kubedb/apimachinery/commit/890903bc) Use Go 1.18 (#876) +- [1a6317fe](https://github.com/kubedb/apimachinery/commit/1a6317fe) Revise KubeDB UI APIs (#874) +- [a4fb0558](https://github.com/kubedb/apimachinery/commit/a4fb0558) Cancel concurrent CI runs for same pr/commit (#873) +- [1bcca116](https://github.com/kubedb/apimachinery/commit/1bcca116) Fix GH security warnings (#872) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.11.0](https://github.com/kubedb/autoscaler/releases/tag/v0.11.0) + +- [465b5184](https://github.com/kubedb/autoscaler/commit/465b5184) Prepare for release v0.11.0 (#80) +- [3648f032](https://github.com/kubedb/autoscaler/commit/3648f032) Prepare Go for private repos +- [9acbe1db](https://github.com/kubedb/autoscaler/commit/9acbe1db) Update dependencies (#79) +- [aa737abc](https://github.com/kubedb/autoscaler/commit/aa737abc) Use Go 1.18 (#77) +- [72ef8b7f](https://github.com/kubedb/autoscaler/commit/72ef8b7f) make fmt (#76) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.26.0](https://github.com/kubedb/cli/releases/tag/v0.26.0) + +- [9df8f777](https://github.com/kubedb/cli/commit/9df8f777) Prepare for release v0.26.0 (#661) +- [1c9535b7](https://github.com/kubedb/cli/commit/1c9535b7) Update dependencies (#659) +- [d406d03c](https://github.com/kubedb/cli/commit/d406d03c) Update dependencies (#658) +- [5da61dc0](https://github.com/kubedb/cli/commit/5da61dc0) Use Go 1.18 (#656) +- [07350a91](https://github.com/kubedb/cli/commit/07350a91) Update dependencies (#657) +- [3f690cbd](https://github.com/kubedb/cli/commit/3f690cbd) Cancel concurrent CI runs for same pr/commit (#655) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.2.0](https://github.com/kubedb/dashboard/releases/tag/v0.2.0) + +- [2835cca](https://github.com/kubedb/dashboard/commit/2835cca) Prepare for release v0.2.0 (#23) +- [d617285](https://github.com/kubedb/dashboard/commit/d617285) Update dependencies (#21) +- [24fa106](https://github.com/kubedb/dashboard/commit/24fa106) Update dependencies (#20) +- [3b7474e](https://github.com/kubedb/dashboard/commit/3b7474e) Use Go 1.18 (#19) +- [be57ba3](https://github.com/kubedb/dashboard/commit/be57ba3) Use Go 1.18 (#18) +- [205f5fa](https://github.com/kubedb/dashboard/commit/205f5fa) Cancel concurrent CI runs for same pr/commit (#16) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.26.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.26.0) + +- [f5712d4b](https://github.com/kubedb/elasticsearch/commit/f5712d4b) Prepare for release v0.26.0 (#572) +- 
[b524d625](https://github.com/kubedb/elasticsearch/commit/b524d625) Update dependencies (#570) +- [0803823b](https://github.com/kubedb/elasticsearch/commit/0803823b) Use Go 1.18 (#568) +- [d01eeff9](https://github.com/kubedb/elasticsearch/commit/d01eeff9) Cancel concurrent CI runs for same pr/commit (#567) +- [d1317ca8](https://github.com/kubedb/elasticsearch/commit/d1317ca8) Include Hot, Warm, Cold, Frozen, and Content nodes in health check (#566) + + + +## [kubedb/enterprise](https://github.com/kubedb/enterprise) + +### [v0.13.0](https://github.com/kubedb/enterprise/releases/tag/v0.13.0) + +- [47203610](https://github.com/kubedb/enterprise/commit/47203610) Prepare for release v0.13.0 (#294) +- [7b530123](https://github.com/kubedb/enterprise/commit/7b530123) Update dependencies (#293) +- [0ba4949a](https://github.com/kubedb/enterprise/commit/0ba4949a) Use Go 1.18 (#292) +- [1fe69052](https://github.com/kubedb/enterprise/commit/1fe69052) make fmt (#290) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.03.28](https://github.com/kubedb/installer/releases/tag/v2022.03.28) + +- [f0a666c6](https://github.com/kubedb/installer/commit/f0a666c6) Prepare for release v2022.03.28 (#473) +- [97fb6dd1](https://github.com/kubedb/installer/commit/97fb6dd1) Update dependencies (#472) +- [87429a6b](https://github.com/kubedb/installer/commit/87429a6b) Fix installer schema (#464) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.10.0](https://github.com/kubedb/mariadb/releases/tag/v0.10.0) + +- [122b15e0](https://github.com/kubedb/mariadb/commit/122b15e0) Prepare for release v0.10.0 (#143) +- [fe6bf7ef](https://github.com/kubedb/mariadb/commit/fe6bf7ef) Update dependencies (#141) +- [44bef045](https://github.com/kubedb/mariadb/commit/44bef045) Use Go 1.18 (#140) +- [db9e7add](https://github.com/kubedb/mariadb/commit/db9e7add) Add URL on ClientConfig Spec of Appbinding (#122) +- [d9fa0ef5](https://github.com/kubedb/mariadb/commit/d9fa0ef5) Cancel concurrent CI runs for same pr/commit (#137) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.6.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.6.0) + +- [7463ebd](https://github.com/kubedb/mariadb-coordinator/commit/7463ebd) Prepare for release v0.6.0 (#40) +- [5ac5014](https://github.com/kubedb/mariadb-coordinator/commit/5ac5014) Update dependencies (#39) +- [bf99143](https://github.com/kubedb/mariadb-coordinator/commit/bf99143) Use Go 1.18 (#38) +- [e9d6432](https://github.com/kubedb/mariadb-coordinator/commit/e9d6432) make fmt (#36) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.19.0](https://github.com/kubedb/memcached/releases/tag/v0.19.0) + +- [c9719b95](https://github.com/kubedb/memcached/commit/c9719b95) Prepare for release v0.19.0 (#351) +- [54052038](https://github.com/kubedb/memcached/commit/54052038) Update dependencies (#349) +- [5cdc9943](https://github.com/kubedb/memcached/commit/5cdc9943) Use Go 1.18 (#348) +- [522393e1](https://github.com/kubedb/memcached/commit/522393e1) Cancel concurrent CI runs for same pr/commit (#346) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.19.0](https://github.com/kubedb/mongodb/releases/tag/v0.19.0) + +- [7f10dbbd](https://github.com/kubedb/mongodb/commit/7f10dbbd) Prepare for release v0.19.0 (#469) +- [25737b62](https://github.com/kubedb/mongodb/commit/25737b62) Update dependencies (#467) +- [fa61e31b](https://github.com/kubedb/mongodb/commit/fa61e31b)
Remove forked go.mongodb.org/mongo-driver (#466) +- [ee7cb5e0](https://github.com/kubedb/mongodb/commit/ee7cb5e0) Use Go 1.18 (#465) +- [b4922308](https://github.com/kubedb/mongodb/commit/b4922308) Cancel concurrent CI runs for same pr/commit (#463) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.19.0](https://github.com/kubedb/mysql/releases/tag/v0.19.0) + +- [0ef8f9a4](https://github.com/kubedb/mysql/commit/0ef8f9a4) Prepare for release v0.19.0 (#461) +- [bc50e8e9](https://github.com/kubedb/mysql/commit/bc50e8e9) Update dependencies (#459) +- [7fc2588a](https://github.com/kubedb/mysql/commit/7fc2588a) Use Go 1.18 (#458) +- [7d5f5cdf](https://github.com/kubedb/mysql/commit/7d5f5cdf) Cancel concurrent CI runs for same pr/commit (#456) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.4.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.4.0) + +- [002fa02](https://github.com/kubedb/mysql-coordinator/commit/002fa02) Prepare for release v0.4.0 (#34) +- [9588c58](https://github.com/kubedb/mysql-coordinator/commit/9588c58) Update dependencies (#33) +- [68519c0](https://github.com/kubedb/mysql-coordinator/commit/68519c0) Use Go 1.18 (#32) +- [4051a5e](https://github.com/kubedb/mysql-coordinator/commit/4051a5e) Update dependencies (#31) +- [0209ce9](https://github.com/kubedb/mysql-coordinator/commit/0209ce9) make fmt (#30) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.4.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.4.0) + +- [d4689b7](https://github.com/kubedb/mysql-router-init/commit/d4689b7) Update dependencies (#19) +- [eedbddb](https://github.com/kubedb/mysql-router-init/commit/eedbddb) Use Go 1.18 (#18) +- [9d81774](https://github.com/kubedb/mysql-router-init/commit/9d81774) make fmt (#17) + + + +## [kubedb/operator](https://github.com/kubedb/operator) + +### [v0.26.0](https://github.com/kubedb/operator/releases/tag/v0.26.0) + +- [93adeabf](https://github.com/kubedb/operator/commit/93adeabf) Prepare for release v0.26.0 (#463) +- [f27e41f5](https://github.com/kubedb/operator/commit/f27e41f5) Update dependencies (#461) +- [3e00bbce](https://github.com/kubedb/operator/commit/3e00bbce) Update dependencies (#460) +- [a600ffd7](https://github.com/kubedb/operator/commit/a600ffd7) Use Go 1.18 (#459) +- [81cdbb25](https://github.com/kubedb/operator/commit/81cdbb25) Use Go 1.18 (#458) +- [1a65782b](https://github.com/kubedb/operator/commit/1a65782b) Remove vendor folder +- [e02bd5bd](https://github.com/kubedb/operator/commit/e02bd5bd) Cancel concurrent CI runs for same pr/commit (#457) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.13.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.13.0) + +- [dfb0d0fa](https://github.com/kubedb/percona-xtradb/commit/dfb0d0fa) Prepare for release v0.13.0 (#255) +- [82053ab2](https://github.com/kubedb/percona-xtradb/commit/82053ab2) Update dependencies (#253) +- [7627f703](https://github.com/kubedb/percona-xtradb/commit/7627f703) Use Go 1.18 (#251) +- [98eb8513](https://github.com/kubedb/percona-xtradb/commit/98eb8513) Cancel concurrent CI runs for same pr/commit (#250) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.10.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.10.0) + +- [db28fcc](https://github.com/kubedb/pg-coordinator/commit/db28fcc) Prepare for release v0.10.0 (#72) +- 
[fb2e657](https://github.com/kubedb/pg-coordinator/commit/fb2e657) Update dependencies (#71) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.13.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.13.0) + +- [814d0891](https://github.com/kubedb/pgbouncer/commit/814d0891) Prepare for release v0.13.0 (#215) +- [1dc7c4fe](https://github.com/kubedb/pgbouncer/commit/1dc7c4fe) Update dependencies (#213) +- [6e26e8c1](https://github.com/kubedb/pgbouncer/commit/6e26e8c1) Use Go 1.18 (#212) +- [d3a209b7](https://github.com/kubedb/pgbouncer/commit/d3a209b7) Use Go 1.18 (#211) +- [ded44f1f](https://github.com/kubedb/pgbouncer/commit/ded44f1f) Cancel concurrent CI runs for same pr/commit (#209) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.26.0](https://github.com/kubedb/postgres/releases/tag/v0.26.0) + +- [5d9b463c](https://github.com/kubedb/postgres/commit/5d9b463c) Prepare for release v0.26.0 (#567) +- [093f2f3e](https://github.com/kubedb/postgres/commit/093f2f3e) Update dependencies (#565) +- [01f007b5](https://github.com/kubedb/postgres/commit/01f007b5) Use Go 1.18 (#564) +- [1e30d894](https://github.com/kubedb/postgres/commit/1e30d894) Cancel concurrent CI runs for same pr/commit (#563) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.13.0](https://github.com/kubedb/proxysql/releases/tag/v0.13.0) + +- [7ee0f89f](https://github.com/kubedb/proxysql/commit/7ee0f89f) Prepare for release v0.13.0 (#230) +- [e0182abc](https://github.com/kubedb/proxysql/commit/e0182abc) Update dependencies (#228) +- [55f9a0c6](https://github.com/kubedb/proxysql/commit/55f9a0c6) Use Go 1.18 (#227) +- [544da591](https://github.com/kubedb/proxysql/commit/544da591) Add sample yamls for development (#226) +- [afd09b88](https://github.com/kubedb/proxysql/commit/afd09b88) Cancel concurrent CI runs for same pr/commit (#225) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.19.0](https://github.com/kubedb/redis/releases/tag/v0.19.0) + +- [ad6eb9b4](https://github.com/kubedb/redis/commit/ad6eb9b4) Prepare for release v0.19.0 (#391) +- [08798538](https://github.com/kubedb/redis/commit/08798538) Update dependencies (#390) +- [960e7d38](https://github.com/kubedb/redis/commit/960e7d38) Use Go 1.18 (#389) +- [2fefd9e2](https://github.com/kubedb/redis/commit/2fefd9e2) Cancel concurrent CI runs for same pr/commit (#388) +- [8a8df15f](https://github.com/kubedb/redis/commit/8a8df15f) Make CreateConfigSecret public for ops-manager (#387) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.5.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.5.0) + +- [42df94b](https://github.com/kubedb/redis-coordinator/commit/42df94b) Prepare for release v0.5.0 (#29) +- [4f0464e](https://github.com/kubedb/redis-coordinator/commit/4f0464e) Update dependencies (#28) +- [5822a74](https://github.com/kubedb/redis-coordinator/commit/5822a74) Use Go 1.18 (#27) +- [fdd5845](https://github.com/kubedb/redis-coordinator/commit/fdd5845) make fmt (#26) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.13.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.13.0) + +- [9adf6283](https://github.com/kubedb/replication-mode-detector/commit/9adf6283) Prepare for release v0.13.0 (#191) +- [3c1cdda5](https://github.com/kubedb/replication-mode-detector/commit/3c1cdda5) Update dependencies (#189) +- 
[5a8297f3](https://github.com/kubedb/replication-mode-detector/commit/5a8297f3) Update dependencies (#188) +- [b361f5fa](https://github.com/kubedb/replication-mode-detector/commit/b361f5fa) Use Go 1.18 (#187) +- [f5eca52b](https://github.com/kubedb/replication-mode-detector/commit/f5eca52b) Use Go 1.18 (#186) +- [ce8409f5](https://github.com/kubedb/replication-mode-detector/commit/ce8409f5) Cancel concurrent CI runs for same pr/commit (#185) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.2.0](https://github.com/kubedb/schema-manager/releases/tag/v0.2.0) + +- [5d16981](https://github.com/kubedb/schema-manager/commit/5d16981) Prepare for release v0.2.0 (#27) +- [1b1a6b2](https://github.com/kubedb/schema-manager/commit/1b1a6b2) Use Go 1.18 (#26) +- [c8742d7](https://github.com/kubedb/schema-manager/commit/c8742d7) Use Go 1.18 (#25) +- [1f3aecb](https://github.com/kubedb/schema-manager/commit/1f3aecb) make fmt (#23) +- [4940414](https://github.com/kubedb/schema-manager/commit/4940414) Cancel concurrent CI runs for same pr/commit (#22) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.11.0](https://github.com/kubedb/tests/releases/tag/v0.11.0) + +- [700325f4](https://github.com/kubedb/tests/commit/700325f4) Update dependencies (#175) +- [e3489fd9](https://github.com/kubedb/tests/commit/e3489fd9) Use Go 1.18 (#173) +- [56ecf3b2](https://github.com/kubedb/tests/commit/56ecf3b2) Use Go 1.18 (#171) +- [a9008733](https://github.com/kubedb/tests/commit/a9008733) make fmt (#170) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.2.0](https://github.com/kubedb/ui-server/releases/tag/v0.2.0) + +- [2c30ccb](https://github.com/kubedb/ui-server/commit/2c30ccb) Prepare for release v0.2.0 (#33) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.2.0](https://github.com/kubedb/webhook-server/releases/tag/v0.2.0) + +- [b9ed181](https://github.com/kubedb/webhook-server/commit/b9ed181) Prepare for release v0.2.0 (#16) +- [26c615b](https://github.com/kubedb/webhook-server/commit/26c615b) Update dependencies (#15) +- [5166b0b](https://github.com/kubedb/webhook-server/commit/5166b0b) Use Go 1.18 (#14) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.05.24.md b/content/docs/v2024.1.31/CHANGELOG-v2022.05.24.md new file mode 100644 index 0000000000..0954d5f3e3 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.05.24.md @@ -0,0 +1,344 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.05.24 + name: Changelog-v2022.05.24 + parent: welcome + weight: 20220524 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.05.24/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.05.24/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.05.24 (2022-05-20) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.27.0](https://github.com/kubedb/apimachinery/releases/tag/v0.27.0) + +- [3634eb14](https://github.com/kubedb/apimachinery/commit/3634eb14) Add `HealthCheckPaused` condition and `Unknown` phase (#898) +- [a3a3b1df](https://github.com/kubedb/apimachinery/commit/a3a3b1df) Add Raft metrics port as constants (#896) +- 
[6f9afd91](https://github.com/kubedb/apimachinery/commit/6f9afd91) Add support for MySQL semi-sync cluster (#890) +- [bf17bf6d](https://github.com/kubedb/apimachinery/commit/bf17bf6d) Add constants for Kibana 8 (#894) +- [a8461374](https://github.com/kubedb/apimachinery/commit/a8461374) Add method and constants for proxysql (#893) +- [a57c9577](https://github.com/kubedb/apimachinery/commit/a57c9577) Add doubleOptIn funcs & shortnames for schema-manager (#889) +- [af6f51f3](https://github.com/kubedb/apimachinery/commit/af6f51f3) Add constants and helpers for ES Internal Users (#886) +- [74c4fc13](https://github.com/kubedb/apimachinery/commit/74c4fc13) Fix typo (#888) +- [023a7988](https://github.com/kubedb/apimachinery/commit/023a7988) Update ProxySQL types and helpers (#883) +- [29217d17](https://github.com/kubedb/apimachinery/commit/29217d17) Fix pgbouncer Version Spec +- [3b994342](https://github.com/kubedb/apimachinery/commit/3b994342) Add support for mariadbdatabase with webhook (#858) +- [4be4a876](https://github.com/kubedb/apimachinery/commit/4be4a876) Add spec for MongoDB arbiter support (#862) +- [36e97b5a](https://github.com/kubedb/apimachinery/commit/36e97b5a) Add TopologySpreadConstraints (#885) +- [27c7483d](https://github.com/kubedb/apimachinery/commit/27c7483d) Add SyncStatefulSetPodDisruptionBudget helper method (#884) +- [0e635b9f](https://github.com/kubedb/apimachinery/commit/0e635b9f) Make ClusterHealth inline in ES insight (#881) +- [761d8ca3](https://github.com/kubedb/apimachinery/commit/761d8ca3) fix: update Postgres shared buffer func (#880) +- [8579cef3](https://github.com/kubedb/apimachinery/commit/8579cef3) Add Support for Opensearch Dashboards (#878) +- [24eadd87](https://github.com/kubedb/apimachinery/commit/24eadd87) Use Go 1.18 (#879) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.12.0](https://github.com/kubedb/autoscaler/releases/tag/v0.12.0) + +- [7cb69fae](https://github.com/kubedb/autoscaler/commit/7cb69fae) Prepare for release v0.12.0 (#84) +- [0dd28106](https://github.com/kubedb/autoscaler/commit/0dd28106) Update dependencies (#83) +- [8fb60ad6](https://github.com/kubedb/autoscaler/commit/8fb60ad6) Update dependencies(nats client, mongo-driver) (#81) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.27.0](https://github.com/kubedb/cli/releases/tag/v0.27.0) + +- [01318a20](https://github.com/kubedb/cli/commit/01318a20) Prepare for release v0.27.0 (#664) +- [6d399a31](https://github.com/kubedb/cli/commit/6d399a31) Update dependencies (#663) +- [3e9a658f](https://github.com/kubedb/cli/commit/3e9a658f) Update dependencies(nats client, mongo-driver) (#662) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.3.0](https://github.com/kubedb/dashboard/releases/tag/v0.3.0) + +- [454bf6a](https://github.com/kubedb/dashboard/commit/454bf6a) Prepare for release v0.3.0 (#27) +- [872bbd9](https://github.com/kubedb/dashboard/commit/872bbd9) Update dependencies (#26) +- [6cafd62](https://github.com/kubedb/dashboard/commit/6cafd62) Add support for Kibana 8 (#25) +- [273d034](https://github.com/kubedb/dashboard/commit/273d034) Update dependencies(nats client, mongo-driver) (#24) +- [7d6c3ec](https://github.com/kubedb/dashboard/commit/7d6c3ec) Add support for Opensearch_Dashboards (#17) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.27.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.27.0) + +- 
[ba7dee10](https://github.com/kubedb/elasticsearch/commit/ba7dee10) Prepare for release v0.27.0 (#577) +- [cfdb4d21](https://github.com/kubedb/elasticsearch/commit/cfdb4d21) Update dependencies (#576) +- [d24dfadc](https://github.com/kubedb/elasticsearch/commit/d24dfadc) Add support for ElasticStack Built-In Users (#574) +- [cdb4d974](https://github.com/kubedb/elasticsearch/commit/cdb4d974) Update dependencies(nats client, mongo-driver) (#575) +- [865d0703](https://github.com/kubedb/elasticsearch/commit/865d0703) Add support for Elasticsearch 8 (#573) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.05.24](https://github.com/kubedb/installer/releases/tag/v2022.05.24) + + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.11.0](https://github.com/kubedb/mariadb/releases/tag/v0.11.0) + +- [b2fd680d](https://github.com/kubedb/mariadb/commit/b2fd680d) Prepare for release v0.11.0 (#147) +- [39ac8190](https://github.com/kubedb/mariadb/commit/39ac8190) Update dependencies (#146) +- [f081a5ee](https://github.com/kubedb/mariadb/commit/f081a5ee) Update MariaDB conditions on health check (#138) +- [385f270d](https://github.com/kubedb/mariadb/commit/385f270d) Update dependencies(nats client, mongo-driver) (#145) +- [6879d6a6](https://github.com/kubedb/mariadb/commit/6879d6a6) Cleanup PodDisruptionBudget when the replica count is one or less (#144) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.7.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.7.0) + +- [23da6cd](https://github.com/kubedb/mariadb-coordinator/commit/23da6cd) Prepare for release v0.7.0 (#43) +- [e1fca00](https://github.com/kubedb/mariadb-coordinator/commit/e1fca00) Update dependencies (#42) +- [20d90c6](https://github.com/kubedb/mariadb-coordinator/commit/20d90c6) Update dependencies(nats client, mongo-driver) (#41) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.20.0](https://github.com/kubedb/memcached/releases/tag/v0.20.0) + +- [439a9398](https://github.com/kubedb/memcached/commit/439a9398) Prepare for release v0.20.0 (#355) +- [73606c44](https://github.com/kubedb/memcached/commit/73606c44) Update dependencies (#354) +- [75cd9209](https://github.com/kubedb/memcached/commit/75cd9209) Update dependencies(nats client, mongo-driver) (#353) +- [2b996ad8](https://github.com/kubedb/memcached/commit/2b996ad8) Cleanup PodDisruptionBudget when the replica count is one or less (#352) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.20.0](https://github.com/kubedb/mongodb/releases/tag/v0.20.0) + +- [85063ec7](https://github.com/kubedb/mongodb/commit/85063ec7) Prepare for release v0.20.0 (#477) +- [ab3a33f7](https://github.com/kubedb/mongodb/commit/ab3a33f7) Update dependencies (#476) +- [275fbdc4](https://github.com/kubedb/mongodb/commit/275fbdc4) Fix shard database write check (#475) +- [643c958c](https://github.com/kubedb/mongodb/commit/643c958c) Use updated commit-hash (#474) +- [8ba58693](https://github.com/kubedb/mongodb/commit/8ba58693) Add arbiter support (#470) *(see the sketch after this section)* +- [a8ecbc33](https://github.com/kubedb/mongodb/commit/a8ecbc33) Update dependencies(nats client, mongo-driver) (#472) +- [3073bbec](https://github.com/kubedb/mongodb/commit/3073bbec) Cleanup PodDisruptionBudget when the replica count is one or less (#471) +- [e7c146cb](https://github.com/kubedb/mongodb/commit/e7c146cb) Refactor statefulset-related files (#449) + +
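+The arbiter support added in kubedb/mongodb#470 (API spec in apimachinery #862) lets a replica set with an even number of data-bearing members gain a tie-breaking vote. A minimal sketch, assuming the `kubedb.com/v1alpha2` group and a `spec.arbiter` stanza inferred from the commit messages rather than released docs:
+
+```yaml
+# Hypothetical illustration: spec.arbiter is inferred from commits #470/#862;
+# the other fields mirror common KubeDB MongoDB examples.
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-rs
+  namespace: demo
+spec:
+  version: "4.4.6"
+  replicaSet:
+    name: rs0
+  replicas: 2      # two data-bearing members
+  arbiter: {}      # request one vote-only arbiter pod
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+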
+ +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.20.0](https://github.com/kubedb/mysql/releases/tag/v0.20.0) + +- [988eab76](https://github.com/kubedb/mysql/commit/988eab76) Prepare for release v0.20.0 (#470) +- [47c7e612](https://github.com/kubedb/mysql/commit/47c7e612) Update dependencies (#469) +- [a972735f](https://github.com/kubedb/mysql/commit/a972735f) Pass `--set-gtid-purged=OFF` to app binding for stash (#468) +- [a4f2e6a5](https://github.com/kubedb/mysql/commit/a4f2e6a5) Add Raft Server ports for MySQL Semi-sync (#467) +- [b9a3c322](https://github.com/kubedb/mysql/commit/b9a3c322) Add Support for Semi-sync cluster (#464) *(see the sketch after this section)* +- [2d7a0080](https://github.com/kubedb/mysql/commit/2d7a0080) Update dependencies(nats client, mongo-driver) (#466) +- [684d553a](https://github.com/kubedb/mysql/commit/684d553a) Cleanup PodDisruptionBudget when the replica count is one or less (#462) +- [5caa331a](https://github.com/kubedb/mysql/commit/5caa331a) Patch existing Auth secret to db object (#463) + +
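+Semi-sync clustering (#464 above, with the API added in apimachinery #890) introduces a topology mode alongside group replication. A minimal sketch, assuming a `SemiSync` mode value and `spec.topology.semiSync` tuning knobs inferred from the commit messages rather than released docs:
+
+```yaml
+# Hypothetical illustration: topology.mode SemiSync and the semiSync knobs
+# are inferred from commits #464/#890; the other fields mirror common
+# KubeDB MySQL examples.
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-semisync
+  namespace: demo
+spec:
+  version: "8.0.27"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1   # source commits after one replica ACK
+      sourceTimeout: 23h             # fall back to async after this wait
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+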
(#257) +- [bcc010f8](https://github.com/kubedb/percona-xtradb/commit/bcc010f8) Update dependencies(nats client, mongo-driver) (#256) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.11.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.11.0) + +- [373a83e](https://github.com/kubedb/pg-coordinator/commit/373a83e) Prepare for release v0.11.0 (#80) +- [254c361](https://github.com/kubedb/pg-coordinator/commit/254c361) Update dependencies (#79) +- [7f6a6c0](https://github.com/kubedb/pg-coordinator/commit/7f6a6c0) Add Raft Metrics And graceful shutdown of Postgres (#74) +- [c1a5b53](https://github.com/kubedb/pg-coordinator/commit/c1a5b53) Update dependencies(nats client, mongo-driver) (#78) +- [b6da859](https://github.com/kubedb/pg-coordinator/commit/b6da859) Fix: Fast Shut-down Postgres server to avoid single-user mode shutdown failure (#73) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.14.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.14.0) + +- [8bb55234](https://github.com/kubedb/pgbouncer/commit/8bb55234) Prepare for release v0.14.0 (#221) +- [ca8efd9a](https://github.com/kubedb/pgbouncer/commit/ca8efd9a) Update dependencies (#220) +- [8122b2c7](https://github.com/kubedb/pgbouncer/commit/8122b2c7) Update dependencies(nats client, mongo-driver) (#218) +- [431839ee](https://github.com/kubedb/pgbouncer/commit/431839ee) Update exporter container to support TLS enabled PgBouncer (#217) +- [766ece71](https://github.com/kubedb/pgbouncer/commit/766ece71) Fix TLS and Config Related Issues, Add health Check (#210) +- [76ebe1ec](https://github.com/kubedb/pgbouncer/commit/76ebe1ec) Cleanup PodDisruptionBudget when the replica count is one or less (#216) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.27.0](https://github.com/kubedb/postgres/releases/tag/v0.27.0) + +- [bc3cf38e](https://github.com/kubedb/postgres/commit/bc3cf38e) Prepare for release v0.27.0 (#573) +- [14c87e8f](https://github.com/kubedb/postgres/commit/14c87e8f) Update dependencies (#572) +- [7cb31a1d](https://github.com/kubedb/postgres/commit/7cb31a1d) Add Raft Metrics exporter Port for Monitoring (#569) +- [3a71b165](https://github.com/kubedb/postgres/commit/3a71b165) Update dependencies(nats client, mongo-driver) (#571) +- [131dd7d9](https://github.com/kubedb/postgres/commit/131dd7d9) Cleanup PodDisruptionBudget when the replica count is one or less (#570) +- [44e929d8](https://github.com/kubedb/postgres/commit/44e929d8) Fix: Fast Shut-down Postgres server to avoid single-user mode shutdown failure (#568) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.27.0](https://github.com/kubedb/provisioner/releases/tag/v0.27.0) + +- [1a87a7e7](https://github.com/kubedb/provisioner/commit/1a87a7e7) Prepare for release v0.27.0 (#2) +- [53226f1d](https://github.com/kubedb/provisioner/commit/53226f1d) Rename to provisioner module (#1) +- [ae8196d3](https://github.com/kubedb/provisioner/commit/ae8196d3) Update dependencies(nats client, mongo-driver) (#465) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.14.0](https://github.com/kubedb/proxysql/releases/tag/v0.14.0) + +- [283f3bf3](https://github.com/kubedb/proxysql/commit/283f3bf3) Prepare for release v0.14.0 (#235) +- [05e4b5dc](https://github.com/kubedb/proxysql/commit/05e4b5dc) Update dependencies (#234) +- [81b98c09](https://github.com/kubedb/proxysql/commit/81b98c09) Fix phase and condition update for ProxySQL (#233) +- 
[c0561e90](https://github.com/kubedb/proxysql/commit/c0561e90) Add support for ProxySQL clustering and TLS (#231) +- [df6b4688](https://github.com/kubedb/proxysql/commit/df6b4688) Update dependencies(nats client, mongo-driver) (#232) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.20.0](https://github.com/kubedb/redis/releases/tag/v0.20.0) + +- [3dcfc3c7](https://github.com/kubedb/redis/commit/3dcfc3c7) Prepare for release v0.20.0 (#397) +- [ac65b0b3](https://github.com/kubedb/redis/commit/ac65b0b3) Update dependencies (#396) +- [177c0329](https://github.com/kubedb/redis/commit/177c0329) Update dependencies(nats client, mongo-driver) (#395) +- [6bf1db27](https://github.com/kubedb/redis/commit/6bf1db27) Redis Shard Cluster Dynamic Failover (#393) +- [4fa76436](https://github.com/kubedb/redis/commit/4fa76436) Refactor StatefulSet ENVs for Redis (#394) +- [b12bfef9](https://github.com/kubedb/redis/commit/b12bfef9) Cleanup PodDisruptionBudget when the replica count is one or less (#392) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.6.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.6.0) + +- [fb4f029](https://github.com/kubedb/redis-coordinator/commit/fb4f029) Prepare for release v0.6.0 (#33) +- [69cc834](https://github.com/kubedb/redis-coordinator/commit/69cc834) Update dependencies (#32) +- [9c1cbd9](https://github.com/kubedb/redis-coordinator/commit/9c1cbd9) Update dependencies(nats client, mongo-driver) (#31) +- [33baab6](https://github.com/kubedb/redis-coordinator/commit/33baab6) Update Env Variables (#30) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.14.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.14.0) + +- [fcb720f2](https://github.com/kubedb/replication-mode-detector/commit/fcb720f2) Prepare for release v0.14.0 (#194) +- [b59867e3](https://github.com/kubedb/replication-mode-detector/commit/b59867e3) Update dependencies (#193) +- [bc287981](https://github.com/kubedb/replication-mode-detector/commit/bc287981) Update dependencies(nats client, mongo-driver) (#192) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.3.0](https://github.com/kubedb/schema-manager/releases/tag/v0.3.0) + +- [e98aaec](https://github.com/kubedb/schema-manager/commit/e98aaec) Prepare for release v0.3.0 (#29) +- [99ca0f7](https://github.com/kubedb/schema-manager/commit/99ca0f7) Fix sharded-mongo restore issue; Use typed doubleOptIn funcs (#28) +- [2a23c38](https://github.com/kubedb/schema-manager/commit/2a23c38) Add support for MariaDB database schema manager (#24) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.12.0](https://github.com/kubedb/tests/releases/tag/v0.12.0) + +- [6501852d](https://github.com/kubedb/tests/commit/6501852d) Prepare for release v0.12.0 (#178) +- [68979c56](https://github.com/kubedb/tests/commit/68979c56) Update dependencies (#177) +- [affe5f32](https://github.com/kubedb/tests/commit/affe5f32) Update dependencies(nats client, mongo-driver) (#176) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.3.0](https://github.com/kubedb/ui-server/releases/tag/v0.3.0) + +- [4cb89db](https://github.com/kubedb/ui-server/commit/4cb89db) Prepare for release v0.3.0 (#34) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.3.0](https://github.com/kubedb/webhook-server/releases/tag/v0.3.0) + +- 
[5d69aa6](https://github.com/kubedb/webhook-server/commit/5d69aa6) Prepare for release v0.3.0 (#19) +- [ca55fb8](https://github.com/kubedb/webhook-server/commit/ca55fb8) Update dependencies (#18) +- [22b4ab7](https://github.com/kubedb/webhook-server/commit/22b4ab7) Update dependencies(nats client, mongo-driver) (#17) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.08.02-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2022.08.02-rc.0.md new file mode 100644 index 0000000000..dc5090fca8 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.08.02-rc.0.md @@ -0,0 +1,378 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.08.02-rc.0 + name: Changelog-v2022.08.02-rc.0 + parent: welcome + weight: 20220802 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.08.02-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.08.02-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.08.02-rc.0 (2022-08-02) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.28.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.28.0-rc.0) + + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.13.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.13.0-rc.0) + +- [896d0bb5](https://github.com/kubedb/autoscaler/commit/896d0bb5) Prepare for release v0.13.0-rc.0 (#95) +- [a904819e](https://github.com/kubedb/autoscaler/commit/a904819e) Update db-client-go (#94) +- [049b959a](https://github.com/kubedb/autoscaler/commit/049b959a) Acquire license from license-proxyserver if available (#93) +- [6f47ba7d](https://github.com/kubedb/autoscaler/commit/6f47ba7d) Use MemoryUsedPercentage as float (#92) +- [2f41e629](https://github.com/kubedb/autoscaler/commit/2f41e629) Fix mongodb inMemory calculation and changes for updated autoscaler CRD (#91) +- [a0d00ea2](https://github.com/kubedb/autoscaler/commit/a0d00ea2) Update mongodb inmemory recommendation logic (#90) +- [7fd346e6](https://github.com/kubedb/autoscaler/commit/7fd346e6) Change some in-memory recommendation logic (#89) +- [00b9087f](https://github.com/kubedb/autoscaler/commit/00b9087f) Convert to KubeBuilder style (#88) +- [9a3599ac](https://github.com/kubedb/autoscaler/commit/9a3599ac) Update dependencies +- [7260dc2f](https://github.com/kubedb/autoscaler/commit/7260dc2f) Add custom recommender for dbs (#85) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.28.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.28.0-rc.0) + +- [902c36b2](https://github.com/kubedb/cli/commit/902c36b2) Prepare for release v0.28.0-rc.0 (#671) +- [e0564ec0](https://github.com/kubedb/cli/commit/e0564ec0) Acquire license from license-proxyserver if available (#670) +- [da3169be](https://github.com/kubedb/cli/commit/da3169be) Update for release Stash@v2022.07.09 (#669) +- [38b1149c](https://github.com/kubedb/cli/commit/38b1149c) Update for release Stash@v2022.06.21 (#668) +- [09bb7b93](https://github.com/kubedb/cli/commit/09bb7b93) Update to k8s 1.24 toolchain (#666) +- [1642a399](https://github.com/kubedb/cli/commit/1642a399) Update to k8s 1.24 toolchain (#665) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### 
[v0.4.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.4.0-rc.0) + +- [26e7432](https://github.com/kubedb/dashboard/commit/26e7432) Prepare for release v0.4.0-rc.0 (#32) +- [60f3154](https://github.com/kubedb/dashboard/commit/60f3154) Acquire license from license-proxyserver if available (#31) +- [9b18e46](https://github.com/kubedb/dashboard/commit/9b18e46) Update to k8s 1.24 toolchain (#29) +- [5ce68d1](https://github.com/kubedb/dashboard/commit/5ce68d1) Update to k8s 1.24 toolchain (#28) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.28.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.28.0-rc.0) + +- [6991fc7a](https://github.com/kubedb/elasticsearch/commit/6991fc7a3) Prepare for release v0.28.0-rc.0 (#592) +- [d8df80d1](https://github.com/kubedb/elasticsearch/commit/d8df80d1c) Update db-client-go (#591) +- [177990ef](https://github.com/kubedb/elasticsearch/commit/177990efa) Make Changes for newKBClient() args removal (#590) +- [3c704bdc](https://github.com/kubedb/elasticsearch/commit/3c704bdc1) Acquire license from license-proxyserver if available (#589) +- [46e8fa20](https://github.com/kubedb/elasticsearch/commit/46e8fa201) Add support for volumes and volumeMounts (#588) +- [51536415](https://github.com/kubedb/elasticsearch/commit/515364158) Re-construct Elasticsearch health checker (#587) +- [cc1a8224](https://github.com/kubedb/elasticsearch/commit/cc1a8224a) SKIP_IMAGE_DIGEST for dev builds (#586) +- [c1c84b12](https://github.com/kubedb/elasticsearch/commit/c1c84b124) Use docker image with digest value (#579) +- [5594ad03](https://github.com/kubedb/elasticsearch/commit/5594ad035) Change credential sync log level to avoid operator log overloading (#584) +- [6c40c79e](https://github.com/kubedb/elasticsearch/commit/6c40c79ed) Revert es client version +- [e6621d98](https://github.com/kubedb/elasticsearch/commit/e6621d980) Update to k8s 1.24 toolchain (#580) +- [93fd95b6](https://github.com/kubedb/elasticsearch/commit/93fd95b65) Test against Kubernetes 1.24.0 (#578) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.08.02-rc.0](https://github.com/kubedb/installer/releases/tag/v2022.08.02-rc.0) + + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.12.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.12.0-rc.0) + +- [8ed9b5e8](https://github.com/kubedb/mariadb/commit/8ed9b5e8) Prepare for release v0.12.0-rc.0 (#161) +- [7eb4d546](https://github.com/kubedb/mariadb/commit/7eb4d546) Acquire license from license-proxyserver if available (#160) +- [16dc94dd](https://github.com/kubedb/mariadb/commit/16dc94dd) Add custom volume and volume mount support (#159) +- [d03b7ab3](https://github.com/kubedb/mariadb/commit/d03b7ab3) Update MariaDB Health check (#155) +- [ac7fc040](https://github.com/kubedb/mariadb/commit/ac7fc040) Add syncAndValidate for secrets | Not delete custom auth secrets (#153) +- [75a280b3](https://github.com/kubedb/mariadb/commit/75a280b3) SKIP_IMAGE_DIGEST for dev builds (#158) +- [09e31be1](https://github.com/kubedb/mariadb/commit/09e31be1) Fix MariaDB not ready condition after removing halt (#157) +- [29934625](https://github.com/kubedb/mariadb/commit/29934625) Add digest value on docker image (#154) +- [a73717c8](https://github.com/kubedb/mariadb/commit/a73717c8) Update to k8s 1.24 toolchain (#151) +- [ff6e83e5](https://github.com/kubedb/mariadb/commit/ff6e83e5) Update to k8s 1.24 toolchain (#150) +- 
[c1dba654](https://github.com/kubedb/mariadb/commit/c1dba654) Test against Kubernetes 1.24.0 (#148) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.8.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.8.0-rc.0) + +- [82bad04](https://github.com/kubedb/mariadb-coordinator/commit/82bad04) Prepare for release v0.8.0-rc.0 (#50) +- [487fdbb](https://github.com/kubedb/mariadb-coordinator/commit/487fdbb) Acquire license from license-proxyserver if available (#49) +- [b2565e4](https://github.com/kubedb/mariadb-coordinator/commit/b2565e4) Add primary component detector (#48) +- [84810c8](https://github.com/kubedb/mariadb-coordinator/commit/84810c8) Fix custom auth secret failure (#47) +- [7261a98](https://github.com/kubedb/mariadb-coordinator/commit/7261a98) Update to k8s 1.24 toolchain (#45) +- [3c23773](https://github.com/kubedb/mariadb-coordinator/commit/3c23773) Update to k8s 1.24 toolchain (#44) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.21.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.21.0-rc.0) + +- [6fcbc121](https://github.com/kubedb/memcached/commit/6fcbc121) Prepare for release v0.21.0-rc.0 (#360) +- [8d43bd1e](https://github.com/kubedb/memcached/commit/8d43bd1e) Acquire license from license-proxyserver if available (#359) +- [cd2fa52b](https://github.com/kubedb/memcached/commit/cd2fa52b) Fix validator webhook api group +- [71c254a7](https://github.com/kubedb/memcached/commit/71c254a7) Update to k8s 1.24 toolchain (#357) +- [0545e187](https://github.com/kubedb/memcached/commit/0545e187) Test against Kubernetes 1.24.0 (#356) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.21.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.21.0-rc.0) + +- [c0a023d2](https://github.com/kubedb/mongodb/commit/c0a023d2) Prepare for release v0.21.0-rc.0 (#496) +- [0b56bec3](https://github.com/kubedb/mongodb/commit/0b56bec3) Update db-client-go (#495) +- [7126f7f1](https://github.com/kubedb/mongodb/commit/7126f7f1) getExporterContainer only if monitoring enabled (#494) +- [cdd6adbc](https://github.com/kubedb/mongodb/commit/cdd6adbc) Acquire license from license-proxyserver if available (#493) +- [c598db8e](https://github.com/kubedb/mongodb/commit/c598db8e) Add `--collect-all` exporter container cmd args (#487) +- [dc2fce0d](https://github.com/kubedb/mongodb/commit/dc2fce0d) Add InMemory validations & some refactoring (#491) +- [991588ef](https://github.com/kubedb/mongodb/commit/991588ef) Add support for custom volumes (#492) +- [4fc305c0](https://github.com/kubedb/mongodb/commit/4fc305c0) Update Health Checker (#480) +- [3be64a6b](https://github.com/kubedb/mongodb/commit/3be64a6b) SKIP_IMAGE_DIGEST for dev builds (#490) +- [3fd70298](https://github.com/kubedb/mongodb/commit/3fd70298) Use docker images with digest value (#486) +- [c7325a29](https://github.com/kubedb/mongodb/commit/c7325a29) Fix connection leak when ping fails (#485) +- [4ff96a0e](https://github.com/kubedb/mongodb/commit/4ff96a0e) Use kubebuilder client for db-client-go (#484) +- [4094f54a](https://github.com/kubedb/mongodb/commit/4094f54a) Update to k8s 1.24 toolchain (#482) +- [2e56a4e9](https://github.com/kubedb/mongodb/commit/2e56a4e9) Update to k8s 1.24 toolchain (#481) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.21.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.21.0-rc.0) + +- [15f1a9dd](https://github.com/kubedb/mysql/commit/15f1a9dd) Prepare for release 
v0.21.0-rc.0 (#484) +- [43f733a3](https://github.com/kubedb/mysql/commit/43f733a3) Acquire license from license-proxyserver if available (#483) +- [0c88f473](https://github.com/kubedb/mysql/commit/0c88f473) Use GetCertSecret instead of MustCertSecretName (#482) +- [afb1c070](https://github.com/kubedb/mysql/commit/afb1c070) Add support for Custom volume and volume mounts (#481) +- [4f167ef8](https://github.com/kubedb/mysql/commit/4f167ef8) Update MySQL healthchecker (#480) +- [05d5179e](https://github.com/kubedb/mysql/commit/05d5179e) Update read replica Auth secret (#477) +- [d36e09de](https://github.com/kubedb/mysql/commit/d36e09de) upsert volumes with existing volumes. (#476) +- [e51c7d79](https://github.com/kubedb/mysql/commit/e51c7d79) Use docker image with digest value (#475) +- [574fc526](https://github.com/kubedb/mysql/commit/574fc526) SKIP_IMAGE_DIGEST for dev builds (#479) +- [10dd0b78](https://github.com/kubedb/mysql/commit/10dd0b78) Fix not ready condition after removing halt (#478) +- [3c514bae](https://github.com/kubedb/mysql/commit/3c514bae) Update to k8s 1.24 toolchain (#473) +- [6a09468a](https://github.com/kubedb/mysql/commit/6a09468a) Update to k8s 1.24 toolchain (#472) +- [04c925b5](https://github.com/kubedb/mysql/commit/04c925b5) Test against Kubernetes 1.24.0 (#471) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.6.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.6.0-rc.0) + +- [7c79e5f](https://github.com/kubedb/mysql-coordinator/commit/7c79e5f) Prepare for release v0.6.0-rc.0 (#42) +- [2eb313d](https://github.com/kubedb/mysql-coordinator/commit/2eb313d) Acquire license from license-proxyserver if available (#40) +- [6116c0e](https://github.com/kubedb/mysql-coordinator/commit/6116c0e) Read replica count from db object If sts not found (#39) +- [40d1007](https://github.com/kubedb/mysql-coordinator/commit/40d1007) fix custom auth secret issue (#38) +- [a72d3a9](https://github.com/kubedb/mysql-coordinator/commit/a72d3a9) Update to k8s 1.24 toolchain (#37) +- [593433a](https://github.com/kubedb/mysql-coordinator/commit/593433a) Update dependencies (#36) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.6.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.6.0-rc.0) + +- [445a2ff](https://github.com/kubedb/mysql-router-init/commit/445a2ff) Acquire license from license-proxyserver if available (#23) +- [28eb2d4](https://github.com/kubedb/mysql-router-init/commit/28eb2d4) Update to k8s 1.24 toolchain (#22) +- [40d3dd9](https://github.com/kubedb/mysql-router-init/commit/40d3dd9) Update to k8s 1.24 toolchain (#21) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.15.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.15.0-rc.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.15.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.15.0-rc.0) + +- [4970d99b](https://github.com/kubedb/percona-xtradb/commit/4970d99b) Prepare for release v0.15.0-rc.0 (#264) +- [d1896876](https://github.com/kubedb/percona-xtradb/commit/d1896876) Acquire license from license-proxyserver if available (#263) +- [8e65e97d](https://github.com/kubedb/percona-xtradb/commit/8e65e97d) Add custom volume and volume mount support (#262) +- [1866eed1](https://github.com/kubedb/percona-xtradb/commit/1866eed1) Add Percona XtraDB Cluster Support (#237) +- 
[501c5a18](https://github.com/kubedb/percona-xtradb/commit/501c5a18) Update to k8s 1.24 toolchain (#260) +- [e632ea56](https://github.com/kubedb/percona-xtradb/commit/e632ea56) Test against Kubernetes 1.24.0 (#259) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.12.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.12.0-rc.0) + +- [e066e950](https://github.com/kubedb/pg-coordinator/commit/e066e950) Prepare for release v0.12.0-rc.0 (#85) +- [e3256a32](https://github.com/kubedb/pg-coordinator/commit/e3256a32) Acquire license from license-proxyserver if available (#83) +- [909c225e](https://github.com/kubedb/pg-coordinator/commit/909c225e) Remove role scripts from the coordinator. (#82) +- [fdd2a4ad](https://github.com/kubedb/pg-coordinator/commit/fdd2a4ad) Update to k8s 1.24 toolchain (#81) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.15.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.15.0-rc.0) + +- [09815675](https://github.com/kubedb/pgbouncer/commit/09815675) Prepare for release v0.15.0-rc.0 (#230) +- [6e18d967](https://github.com/kubedb/pgbouncer/commit/6e18d967) Acquire license from license-proxyserver if available (#229) +- [4d042db4](https://github.com/kubedb/pgbouncer/commit/4d042db4) Update healthcheck (#228) +- [22cf136d](https://github.com/kubedb/pgbouncer/commit/22cf136d) Add digest value on docker image (#227) +- [3e9914c9](https://github.com/kubedb/pgbouncer/commit/3e9914c9) Update test for PgBouncer (#219) +- [92662071](https://github.com/kubedb/pgbouncer/commit/92662071) SKIP_IMAGE_DIGEST for dev builds (#226) +- [a62c708f](https://github.com/kubedb/pgbouncer/commit/a62c708f) Update to k8s 1.24 toolchain (#224) +- [061e53fe](https://github.com/kubedb/pgbouncer/commit/061e53fe) Update to k8s 1.24 toolchain (#223) +- [a89ce8fd](https://github.com/kubedb/pgbouncer/commit/a89ce8fd) Test against Kubernetes 1.24.0 (#222) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.28.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.28.0-rc.0) + +- [e2a4636b](https://github.com/kubedb/postgres/commit/e2a4636b) Prepare for release v0.28.0-rc.0 (#582) +- [8f861234](https://github.com/kubedb/postgres/commit/8f861234) Acquire license from license-proxyserver if available (#581) +- [52f6820d](https://github.com/kubedb/postgres/commit/52f6820d) Add Custom Volume and VolumeMount Support (#580) +- [34f8283f](https://github.com/kubedb/postgres/commit/34f8283f) Update Postgres health check (#578) +- [ad40ece7](https://github.com/kubedb/postgres/commit/ad40ece7) Use docker image with digest value (#579) +- [c51a4716](https://github.com/kubedb/postgres/commit/c51a4716) Update: remove sidecar from standalone. 
(#576) +- [514ef2bd](https://github.com/kubedb/postgres/commit/514ef2bd) SKIP_IMAGE_DIGEST for dev builds (#577) +- [2bc43818](https://github.com/kubedb/postgres/commit/2bc43818) Update to k8s 1.24 toolchain (#575) +- [8e2a02a3](https://github.com/kubedb/postgres/commit/8e2a02a3) Test against Kubernetes 1.24.0 (#574) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.28.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.28.0-rc.0) + + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.15.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.15.0-rc.0) + +- [89e4767d](https://github.com/kubedb/proxysql/commit/89e4767d) Prepare for release v0.15.0-rc.0 (#243) +- [a4c92b9b](https://github.com/kubedb/proxysql/commit/a4c92b9b) Acquire license from license-proxyserver if available (#242) +- [fb6f2301](https://github.com/kubedb/proxysql/commit/fb6f2301) Rewrite ProxySQL HealthChecker (#241) +- [a6c80651](https://github.com/kubedb/proxysql/commit/a6c80651) Add proxysql declarative configuration (#239) +- [1d9c415c](https://github.com/kubedb/proxysql/commit/1d9c415c) SKIP_IMAGE_DIGEST for dev builds (#240) +- [23a85c85](https://github.com/kubedb/proxysql/commit/23a85c85) Update to k8s 1.24 toolchain (#237) +- [a93f8f4e](https://github.com/kubedb/proxysql/commit/a93f8f4e) Test against Kubernetes 1.24.0 (#236) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.21.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.21.0-rc.0) + +- [a5ec2963](https://github.com/kubedb/redis/commit/a5ec2963) Prepare for release v0.21.0-rc.0 (#411) +- [738a1964](https://github.com/kubedb/redis/commit/738a1964) Acquire license from license-proxyserver if available (#410) +- [9336d51d](https://github.com/kubedb/redis/commit/9336d51d) Add Custom Volume Support (#409) +- [8235b201](https://github.com/kubedb/redis/commit/8235b201) Rework Redis and Redis Sentinel Health Checker (#403) +- [f5c45ae5](https://github.com/kubedb/redis/commit/f5c45ae5) Image Digest For Sentinel Images (#406) +- [8962659f](https://github.com/kubedb/redis/commit/8962659f) SKIP_IMAGE_DIGEST for dev builds (#405) +- [efa9e726](https://github.com/kubedb/redis/commit/efa9e726) Use image with digest value (#402) +- [08388f4a](https://github.com/kubedb/redis/commit/08388f4a) Update to k8s 1.24 toolchain (#400) +- [82cd6ba2](https://github.com/kubedb/redis/commit/82cd6ba2) Test against Kubernetes 1.24.0 (#398) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.7.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.7.0-rc.0) + +- [d9726c9](https://github.com/kubedb/redis-coordinator/commit/d9726c9) Prepare for release v0.7.0-rc.0 (#38) +- [f672fd6](https://github.com/kubedb/redis-coordinator/commit/f672fd6) Acquire license from license-proxyserver if available (#37) +- [5bbab2c](https://github.com/kubedb/redis-coordinator/commit/5bbab2c) Update to k8s 1.24 toolchain (#35) +- [df38bb5](https://github.com/kubedb/redis-coordinator/commit/df38bb5) Update to k8s 1.24 toolchain (#34) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.15.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.15.0-rc.0) + +- [e06b4771](https://github.com/kubedb/replication-mode-detector/commit/e06b4771) Prepare for release v0.15.0-rc.0 (#200) +- [e77ff57c](https://github.com/kubedb/replication-mode-detector/commit/e77ff57c) Update db-client-go (#199) +- 
[3008bd62](https://github.com/kubedb/replication-mode-detector/commit/3008bd62) Acquire license from license-proxyserver if available (#198) +- [134d10c7](https://github.com/kubedb/replication-mode-detector/commit/134d10c7) Use mongodb db client (#197) +- [0a9cf005](https://github.com/kubedb/replication-mode-detector/commit/0a9cf005) Update to k8s 1.24 toolchain (#195) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.4.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.4.0-rc.0) + +- [114d882a](https://github.com/kubedb/schema-manager/commit/114d882a) Prepare for release v0.4.0-rc.0 (#36) +- [bcb51407](https://github.com/kubedb/schema-manager/commit/bcb51407) Update db-client-go (#35) +- [e6ffd878](https://github.com/kubedb/schema-manager/commit/e6ffd878) Acquire license from license-proxyserver if available (#34) +- [5a6ae96d](https://github.com/kubedb/schema-manager/commit/5a6ae96d) Update kutil dependency (#33) +- [8e9a2732](https://github.com/kubedb/schema-manager/commit/8e9a2732) Update to use KubeVault v2022.06.16 (#32) +- [5f7e441d](https://github.com/kubedb/schema-manager/commit/5f7e441d) Update to k8s 1.24 toolchain (#31) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.13.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.13.0-rc.0) + +- [c9a1d705](https://github.com/kubedb/tests/commit/c9a1d705) Prepare for release v0.13.0-rc.0 (#183) +- [7307b213](https://github.com/kubedb/tests/commit/7307b213) Acquire license from license-proxyserver if available (#182) +- [84bdf708](https://github.com/kubedb/tests/commit/84bdf708) Update to k8s 1.24 toolchain (#180) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.4.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.4.0-rc.0) + +- [56b14fd6](https://github.com/kubedb/ui-server/commit/56b14fd6) Prepare for release v0.4.0-rc.0 (#40) +- [932c6cd7](https://github.com/kubedb/ui-server/commit/932c6cd7) Vendor Elasticsearch changes in db-client-go (#39) +- [1e8b3cd7](https://github.com/kubedb/ui-server/commit/1e8b3cd7) Acquire license from license-proxyserver if available (#37) +- [eef867b5](https://github.com/kubedb/ui-server/commit/eef867b5) Fix linter warning regarding selfLink (#36) +- [15556fc9](https://github.com/kubedb/ui-server/commit/15556fc9) Update to k8s 1.24 toolchain (#35) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.4.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.4.0-rc.0) + + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.08.04-rc.1.md b/content/docs/v2024.1.31/CHANGELOG-v2022.08.04-rc.1.md new file mode 100644 index 0000000000..1df24b2b96 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.08.04-rc.1.md @@ -0,0 +1,304 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.08.04-rc.1 + name: Changelog-v2022.08.04-rc.1 + parent: welcome + weight: 20220804 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.08.04-rc.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.08.04-rc.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.08.04-rc.1 (2022-08-04) + + +## 
[kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.28.0-rc.1](https://github.com/kubedb/apimachinery/releases/tag/v0.28.0-rc.1) + +- [1241b40d](https://github.com/kubedb/apimachinery/commit/1241b40d) Configure health checker default value (#954) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.13.0-rc.1](https://github.com/kubedb/autoscaler/releases/tag/v0.13.0-rc.1) + +- [8d0fadbb](https://github.com/kubedb/autoscaler/commit/8d0fadbb) Prepare for release v0.13.0-rc.1 (#98) +- [b907af64](https://github.com/kubedb/autoscaler/commit/b907af64) Update db-client-go (#97) +- [54169b06](https://github.com/kubedb/autoscaler/commit/54169b06) Update .kodiak.toml +- [50914655](https://github.com/kubedb/autoscaler/commit/50914655) Update health checker (#96) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.28.0-rc.1](https://github.com/kubedb/cli/releases/tag/v0.28.0-rc.1) + +- [1f0d46aa](https://github.com/kubedb/cli/commit/1f0d46aa) Prepare for release v0.28.0-rc.1 (#673) +- [0e65567a](https://github.com/kubedb/cli/commit/0e65567a) Update health checker (#672) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.4.0-rc.1](https://github.com/kubedb/dashboard/releases/tag/v0.4.0-rc.1) + +- [1e0ee6f](https://github.com/kubedb/dashboard/commit/1e0ee6f) Prepare for release v0.4.0-rc.1 (#35) +- [4844400](https://github.com/kubedb/dashboard/commit/4844400) Update health checker (#33) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.28.0-rc.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.28.0-rc.1) + +- [29ab6c81](https://github.com/kubedb/elasticsearch/commit/29ab6c81b) Prepare for release v0.28.0-rc.1 (#596) +- [bcaa3512](https://github.com/kubedb/elasticsearch/commit/bcaa3512c) Update db-client-go (#595) +- [5755abf3](https://github.com/kubedb/elasticsearch/commit/5755abf3b) Set default values during reconcile cycle (#594) +- [af727aab](https://github.com/kubedb/elasticsearch/commit/af727aab7) Update health checker (#593) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.08.04-rc.1](https://github.com/kubedb/installer/releases/tag/v2022.08.04-rc.1) + +- [16c0c0bd](https://github.com/kubedb/installer/commit/16c0c0bd) Prepare for release v2022.08.04-rc.1 (#524) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.12.0-rc.1](https://github.com/kubedb/mariadb/releases/tag/v0.12.0-rc.1) + +- [0cad6134](https://github.com/kubedb/mariadb/commit/0cad6134) Prepare for release v0.12.0-rc.1 (#163) +- [7cf4c255](https://github.com/kubedb/mariadb/commit/7cf4c255) Update health checker (#162) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.8.0-rc.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.8.0-rc.1) + +- [fe8a57e](https://github.com/kubedb/mariadb-coordinator/commit/fe8a57e) Prepare for release v0.8.0-rc.1 (#52) +- [9c3c47f](https://github.com/kubedb/mariadb-coordinator/commit/9c3c47f) Update health checker (#51) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.21.0-rc.1](https://github.com/kubedb/memcached/releases/tag/v0.21.0-rc.1) + +- [96b98521](https://github.com/kubedb/memcached/commit/96b98521) Prepare for release v0.21.0-rc.1 (#362) +- [98aeae01](https://github.com/kubedb/memcached/commit/98aeae01) Update health checker (#361) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### 
[v0.21.0-rc.1](https://github.com/kubedb/mongodb/releases/tag/v0.21.0-rc.1) + +- [0640deb7](https://github.com/kubedb/mongodb/commit/0640deb7) Prepare for release v0.21.0-rc.1 (#500) +- [ff02931d](https://github.com/kubedb/mongodb/commit/ff02931d) Update db-client-go (#499) +- [3da24ba2](https://github.com/kubedb/mongodb/commit/3da24ba2) SetDefaults when adding the finalizer (#498) +- [5652fdc8](https://github.com/kubedb/mongodb/commit/5652fdc8) Update health checker (#497) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.21.0-rc.1](https://github.com/kubedb/mysql/releases/tag/v0.21.0-rc.1) + +- [f0f44703](https://github.com/kubedb/mysql/commit/f0f44703) Prepare for release v0.21.0-rc.1 (#487) +- [77e0b015](https://github.com/kubedb/mysql/commit/77e0b015) refactor mysql health checker (#486) +- [8c008024](https://github.com/kubedb/mysql/commit/8c008024) Update health checker (#485) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.6.0-rc.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.6.0-rc.1) + +- [fa7ad1c](https://github.com/kubedb/mysql-coordinator/commit/fa7ad1c) Prepare for release v0.6.0-rc.1 (#46) +- [2c3615b](https://github.com/kubedb/mysql-coordinator/commit/2c3615b) update labels (#45) +- [38a4f88](https://github.com/kubedb/mysql-coordinator/commit/38a4f88) Update health checker (#43) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.6.0-rc.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.6.0-rc.1) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.15.0-rc.1](https://github.com/kubedb/ops-manager/releases/tag/v0.15.0-rc.1) + +- [23ca2d3b](https://github.com/kubedb/ops-manager/commit/23ca2d3b9) Prepare for release v0.15.0-rc.1 (#336) +- [2d03cc25](https://github.com/kubedb/ops-manager/commit/2d03cc257) Fix Upgrade | Volume Expansion ops request MySQL (#335) +- [9f4c0acb](https://github.com/kubedb/ops-manager/commit/9f4c0acb1) Update db-client-go (#334) +- [490d5fce](https://github.com/kubedb/ops-manager/commit/490d5fcec) Update health checker (#332) +- [0531e84b](https://github.com/kubedb/ops-manager/commit/0531e84b6) Fix multiple recommendation creation for same cause for empty phase (#333) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.15.0-rc.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.15.0-rc.1) + +- [c512779b](https://github.com/kubedb/percona-xtradb/commit/c512779b) Prepare for release v0.15.0-rc.1 (#266) +- [a767d382](https://github.com/kubedb/percona-xtradb/commit/a767d382) Update health checker (#265) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.1.0-rc.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.1.0-rc.1) + +- [290e281](https://github.com/kubedb/percona-xtradb-coordinator/commit/290e281) Prepare for release v0.1.0-rc.1 (#9) +- [c57449c](https://github.com/kubedb/percona-xtradb-coordinator/commit/c57449c) Update health checker (#8) +- [adad8b5](https://github.com/kubedb/percona-xtradb-coordinator/commit/adad8b5) Acquire license from license-proxyserver if available (#7) +- [14f17f0](https://github.com/kubedb/percona-xtradb-coordinator/commit/14f17f0) Add Percona XtraDB Coordinator (#3) +- [4434736](https://github.com/kubedb/percona-xtradb-coordinator/commit/4434736) Update to k8s 1.24 toolchain (#5) +- 
[e01945d](https://github.com/kubedb/percona-xtradb-coordinator/commit/e01945d) Update to k8s 1.24 toolchain (#4) +- [ad7dd9c](https://github.com/kubedb/percona-xtradb-coordinator/commit/ad7dd9c) Use Go 1.18 (#2) +- [5a140bb](https://github.com/kubedb/percona-xtradb-coordinator/commit/5a140bb) make fmt (#1) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.12.0-rc.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.12.0-rc.1) + +- [e555754d](https://github.com/kubedb/pg-coordinator/commit/e555754d) Prepare for release v0.12.0-rc.1 (#88) +- [28c83e07](https://github.com/kubedb/pg-coordinator/commit/28c83e07) Update health checker (#86) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.15.0-rc.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.15.0-rc.1) + +- [6b3f6715](https://github.com/kubedb/pgbouncer/commit/6b3f6715) Prepare for release v0.15.0-rc.1 (#232) +- [a88654ad](https://github.com/kubedb/pgbouncer/commit/a88654ad) Update health checker (#231) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.28.0-rc.1](https://github.com/kubedb/postgres/releases/tag/v0.28.0-rc.1) + +- [f2b875d4](https://github.com/kubedb/postgres/commit/f2b875d4) Prepare for release v0.28.0-rc.1 (#585) +- [a15214c9](https://github.com/kubedb/postgres/commit/a15214c9) Set default values during reconcile cycle (#584) +- [acf99b7f](https://github.com/kubedb/postgres/commit/acf99b7f) Update health checker (#583) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.28.0-rc.1](https://github.com/kubedb/provisioner/releases/tag/v0.28.0-rc.1) + +- [e855a3ff](https://github.com/kubedb/provisioner/commit/e855a3ff6) Prepare for release v0.28.0-rc.1 (#10) +- [309ada2c](https://github.com/kubedb/provisioner/commit/309ada2c9) Update db-client-go (#9) +- [e61a83fc](https://github.com/kubedb/provisioner/commit/e61a83fcd) Update health checker (#8) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.15.0-rc.1](https://github.com/kubedb/proxysql/releases/tag/v0.15.0-rc.1) + +- [de17ccb4](https://github.com/kubedb/proxysql/commit/de17ccb4) Prepare for release v0.15.0-rc.1 (#245) +- [bceb7ff5](https://github.com/kubedb/proxysql/commit/bceb7ff5) Update health checker (#244) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.21.0-rc.1](https://github.com/kubedb/redis/releases/tag/v0.21.0-rc.1) + +- [4be0e311](https://github.com/kubedb/redis/commit/4be0e311) Prepare for release v0.21.0-rc.1 (#415) +- [c819ea86](https://github.com/kubedb/redis/commit/c819ea86) Update Health Checker (#414) +- [41079e80](https://github.com/kubedb/redis/commit/41079e80) Update health checker (#413) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.7.0-rc.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.7.0-rc.1) + +- [535162a](https://github.com/kubedb/redis-coordinator/commit/535162a) Prepare for release v0.7.0-rc.1 (#40) +- [9dc5d3d](https://github.com/kubedb/redis-coordinator/commit/9dc5d3d) Update health checker (#39) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.15.0-rc.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.15.0-rc.1) + +- [50cb0ab6](https://github.com/kubedb/replication-mode-detector/commit/50cb0ab6) Prepare for release v0.15.0-rc.1 (#203) +- [e5c04c52](https://github.com/kubedb/replication-mode-detector/commit/e5c04c52) Update 
db-client-go (#202) +- [6409566e](https://github.com/kubedb/replication-mode-detector/commit/6409566e) Update health checker (#201) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.4.0-rc.1](https://github.com/kubedb/schema-manager/releases/tag/v0.4.0-rc.1) + +- [d5033ad5](https://github.com/kubedb/schema-manager/commit/d5033ad5) Prepare for release v0.4.0-rc.1 (#39) +- [0f7e0747](https://github.com/kubedb/schema-manager/commit/0f7e0747) Update db-client-go (#38) +- [9b44c0dc](https://github.com/kubedb/schema-manager/commit/9b44c0dc) Update health checker (#37) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.13.0-rc.1](https://github.com/kubedb/tests/releases/tag/v0.13.0-rc.1) + +- [a8571959](https://github.com/kubedb/tests/commit/a8571959) Prepare for release v0.13.0-rc.1 (#186) +- [77ce04e4](https://github.com/kubedb/tests/commit/77ce04e4) Update db-client-go (#185) +- [7924eac1](https://github.com/kubedb/tests/commit/7924eac1) Update health checker (#184) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.4.0-rc.1](https://github.com/kubedb/ui-server/releases/tag/v0.4.0-rc.1) + +- [64a798ef](https://github.com/kubedb/ui-server/commit/64a798ef) Prepare for release v0.4.0-rc.1 (#43) +- [a0475425](https://github.com/kubedb/ui-server/commit/a0475425) Update db-client-go (#42) +- [82a9bec5](https://github.com/kubedb/ui-server/commit/82a9bec5) Update health checker (#41) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.4.0-rc.1](https://github.com/kubedb/webhook-server/releases/tag/v0.4.0-rc.1) + +- [1714ad93](https://github.com/kubedb/webhook-server/commit/1714ad93) Prepare for release v0.4.0-rc.1 (#25) +- [30558972](https://github.com/kubedb/webhook-server/commit/30558972) Add Cluster Topology in PerconaXtraDB (#24) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.08.08.md b/content/docs/v2024.1.31/CHANGELOG-v2022.08.08.md new file mode 100644 index 0000000000..57ec48ebee --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.08.08.md @@ -0,0 +1,426 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.08.08 + name: Changelog-v2022.08.08 + parent: welcome + weight: 20220808 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.08.08/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.08.08/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.08.08 (2022-08-05) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.28.0](https://github.com/kubedb/apimachinery/releases/tag/v0.28.0) + + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.13.0](https://github.com/kubedb/autoscaler/releases/tag/v0.13.0) + +- [72d59743](https://github.com/kubedb/autoscaler/commit/72d59743) Prepare for release v0.13.0 (#99) +- [8d0fadbb](https://github.com/kubedb/autoscaler/commit/8d0fadbb) Prepare for release v0.13.0-rc.1 (#98) +- [b907af64](https://github.com/kubedb/autoscaler/commit/b907af64) Update db-client-go (#97) +- [54169b06](https://github.com/kubedb/autoscaler/commit/54169b06) Update .kodiak.toml +- [50914655](https://github.com/kubedb/autoscaler/commit/50914655) Update health checker 
(#96) +- [896d0bb5](https://github.com/kubedb/autoscaler/commit/896d0bb5) Prepare for release v0.13.0-rc.0 (#95) +- [a904819e](https://github.com/kubedb/autoscaler/commit/a904819e) Update db-client-go (#94) +- [049b959a](https://github.com/kubedb/autoscaler/commit/049b959a) Acquire license from license-proxyserver if available (#93) +- [6f47ba7d](https://github.com/kubedb/autoscaler/commit/6f47ba7d) Use MemoryUsedPercentage as float (#92) +- [2f41e629](https://github.com/kubedb/autoscaler/commit/2f41e629) Fix mongodb inMemory calculation and changes for updated autoscaler CRD (#91) +- [a0d00ea2](https://github.com/kubedb/autoscaler/commit/a0d00ea2) Update mongodb inmemory recommendation logic (#90) +- [7fd346e6](https://github.com/kubedb/autoscaler/commit/7fd346e6) Change some in-memory recommendation logic (#89) +- [00b9087f](https://github.com/kubedb/autoscaler/commit/00b9087f) Convert to KubeBuilder style (#88) +- [9a3599ac](https://github.com/kubedb/autoscaler/commit/9a3599ac) Update dependencies +- [7260dc2f](https://github.com/kubedb/autoscaler/commit/7260dc2f) Add custom recommender for dbs (#85) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.28.0](https://github.com/kubedb/cli/releases/tag/v0.28.0) + +- [99730379](https://github.com/kubedb/cli/commit/99730379) Prepare for release v0.28.0 (#674) +- [1f0d46aa](https://github.com/kubedb/cli/commit/1f0d46aa) Prepare for release v0.28.0-rc.1 (#673) +- [0e65567a](https://github.com/kubedb/cli/commit/0e65567a) Update health checker (#672) +- [902c36b2](https://github.com/kubedb/cli/commit/902c36b2) Prepare for release v0.28.0-rc.0 (#671) +- [e0564ec0](https://github.com/kubedb/cli/commit/e0564ec0) Acquire license from license-proxyserver if available (#670) +- [da3169be](https://github.com/kubedb/cli/commit/da3169be) Update for release Stash@v2022.07.09 (#669) +- [38b1149c](https://github.com/kubedb/cli/commit/38b1149c) Update for release Stash@v2022.06.21 (#668) +- [09bb7b93](https://github.com/kubedb/cli/commit/09bb7b93) Update to k8s 1.24 toolchain (#666) +- [1642a399](https://github.com/kubedb/cli/commit/1642a399) Update to k8s 1.24 toolchain (#665) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.4.0](https://github.com/kubedb/dashboard/releases/tag/v0.4.0) + +- [7a0a5fb](https://github.com/kubedb/dashboard/commit/7a0a5fb) Prepare for release v0.4.0 (#36) +- [1e0ee6f](https://github.com/kubedb/dashboard/commit/1e0ee6f) Prepare for release v0.4.0-rc.1 (#35) +- [4844400](https://github.com/kubedb/dashboard/commit/4844400) Update health checker (#33) +- [26e7432](https://github.com/kubedb/dashboard/commit/26e7432) Prepare for release v0.4.0-rc.0 (#32) +- [60f3154](https://github.com/kubedb/dashboard/commit/60f3154) Acquire license from license-proxyserver if available (#31) +- [9b18e46](https://github.com/kubedb/dashboard/commit/9b18e46) Update to k8s 1.24 toolchain (#29) +- [5ce68d1](https://github.com/kubedb/dashboard/commit/5ce68d1) Update to k8s 1.24 toolchain (#28) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.28.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.28.0) + +- [7b5e40d0](https://github.com/kubedb/elasticsearch/commit/7b5e40d02) Prepare for release v0.28.0 (#597) +- [29ab6c81](https://github.com/kubedb/elasticsearch/commit/29ab6c81b) Prepare for release v0.28.0-rc.1 (#596) +- [bcaa3512](https://github.com/kubedb/elasticsearch/commit/bcaa3512c) Update db-client-go (#595) +- 
[5755abf3](https://github.com/kubedb/elasticsearch/commit/5755abf3b) Set default values during reconcile cycle (#594) +- [af727aab](https://github.com/kubedb/elasticsearch/commit/af727aab7) Update health checker (#593) +- [6991fc7a](https://github.com/kubedb/elasticsearch/commit/6991fc7a3) Prepare for release v0.28.0-rc.0 (#592) +- [d8df80d1](https://github.com/kubedb/elasticsearch/commit/d8df80d1c) Update db-client-go (#591) +- [177990ef](https://github.com/kubedb/elasticsearch/commit/177990efa) Make Changes for newKBClient() args removal (#590) +- [3c704bdc](https://github.com/kubedb/elasticsearch/commit/3c704bdc1) Acquire license from license-proxyserver if available (#589) +- [46e8fa20](https://github.com/kubedb/elasticsearch/commit/46e8fa201) Add support for volumes and volumeMounts (#588) +- [51536415](https://github.com/kubedb/elasticsearch/commit/515364158) Re-construct Elasticsearch health checker (#587) +- [cc1a8224](https://github.com/kubedb/elasticsearch/commit/cc1a8224a) SKIP_IMAGE_DIGEST for dev builds (#586) +- [c1c84b12](https://github.com/kubedb/elasticsearch/commit/c1c84b124) Use docker image with digest value (#579) +- [5594ad03](https://github.com/kubedb/elasticsearch/commit/5594ad035) Change credential sync log level to avoid operator log overloading (#584) +- [6c40c79e](https://github.com/kubedb/elasticsearch/commit/6c40c79ed) Revert es client version +- [e6621d98](https://github.com/kubedb/elasticsearch/commit/e6621d980) Update to k8s 1.24 toolchain (#580) +- [93fd95b6](https://github.com/kubedb/elasticsearch/commit/93fd95b65) Test against Kubernetes 1.24.0 (#578) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.08.08](https://github.com/kubedb/installer/releases/tag/v2022.08.08) + + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.12.0](https://github.com/kubedb/mariadb/releases/tag/v0.12.0) + +- [72f57b6a](https://github.com/kubedb/mariadb/commit/72f57b6a) Prepare for release v0.12.0 (#165) +- [04108afb](https://github.com/kubedb/mariadb/commit/04108afb) Add custom service account (#164) +- [0cad6134](https://github.com/kubedb/mariadb/commit/0cad6134) Prepare for release v0.12.0-rc.1 (#163) +- [7cf4c255](https://github.com/kubedb/mariadb/commit/7cf4c255) Update health checker (#162) +- [8ed9b5e8](https://github.com/kubedb/mariadb/commit/8ed9b5e8) Prepare for release v0.12.0-rc.0 (#161) +- [7eb4d546](https://github.com/kubedb/mariadb/commit/7eb4d546) Acquire license from license-proxyserver if available (#160) +- [16dc94dd](https://github.com/kubedb/mariadb/commit/16dc94dd) Add custom volume and volume mount support (#159) +- [d03b7ab3](https://github.com/kubedb/mariadb/commit/d03b7ab3) Update MariaDB Health check (#155) +- [ac7fc040](https://github.com/kubedb/mariadb/commit/ac7fc040) Add syncAndValidate for secrets | Not delete custom auth secrets (#153) +- [75a280b3](https://github.com/kubedb/mariadb/commit/75a280b3) SKIP_IMAGE_DIGEST for dev builds (#158) +- [09e31be1](https://github.com/kubedb/mariadb/commit/09e31be1) Fix MariaDB not ready condition after removing halt (#157) +- [29934625](https://github.com/kubedb/mariadb/commit/29934625) Add digest value on docker image (#154) +- [a73717c8](https://github.com/kubedb/mariadb/commit/a73717c8) Update to k8s 1.24 toolchain (#151) +- [ff6e83e5](https://github.com/kubedb/mariadb/commit/ff6e83e5) Update to k8s 1.24 toolchain (#150) +- [c1dba654](https://github.com/kubedb/mariadb/commit/c1dba654) Test against Kubernetes 1.24.0 (#148) + + + +## 
[kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.21.0](https://github.com/kubedb/memcached/releases/tag/v0.21.0) + +- [0aad8ad7](https://github.com/kubedb/memcached/commit/0aad8ad7) Prepare for release v0.21.0 (#363) +- [96b98521](https://github.com/kubedb/memcached/commit/96b98521) Prepare for release v0.21.0-rc.1 (#362) +- [98aeae01](https://github.com/kubedb/memcached/commit/98aeae01) Update health checker (#361) +- [6fcbc121](https://github.com/kubedb/memcached/commit/6fcbc121) Prepare for release v0.21.0-rc.0 (#360) +- [8d43bd1e](https://github.com/kubedb/memcached/commit/8d43bd1e) Acquire license from license-proxyserver if available (#359) +- [cd2fa52b](https://github.com/kubedb/memcached/commit/cd2fa52b) Fix validator webhook api group +- [71c254a7](https://github.com/kubedb/memcached/commit/71c254a7) Update to k8s 1.24 toolchain (#357) +- [0545e187](https://github.com/kubedb/memcached/commit/0545e187) Test against Kubernetes 1.24.0 (#356) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.21.0](https://github.com/kubedb/mongodb/releases/tag/v0.21.0) + +- [cfcef698](https://github.com/kubedb/mongodb/commit/cfcef698) Prepare for release v0.21.0 (#501) +- [0640deb7](https://github.com/kubedb/mongodb/commit/0640deb7) Prepare for release v0.21.0-rc.1 (#500) +- [ff02931d](https://github.com/kubedb/mongodb/commit/ff02931d) Update db-client-go (#499) +- [3da24ba2](https://github.com/kubedb/mongodb/commit/3da24ba2) SetDefaults when adding the finalizer (#498) +- [5652fdc8](https://github.com/kubedb/mongodb/commit/5652fdc8) Update health checker (#497) +- [c0a023d2](https://github.com/kubedb/mongodb/commit/c0a023d2) Prepare for release v0.21.0-rc.0 (#496) +- [0b56bec3](https://github.com/kubedb/mongodb/commit/0b56bec3) Update db-client-go (#495) +- [7126f7f1](https://github.com/kubedb/mongodb/commit/7126f7f1) getExporterContainer only if monitoring enabled (#494) +- [cdd6adbc](https://github.com/kubedb/mongodb/commit/cdd6adbc) Acquire license from license-proxyserver if available (#493) +- [c598db8e](https://github.com/kubedb/mongodb/commit/c598db8e) Add `--collect-all` exporter container cmd args (#487) +- [dc2fce0d](https://github.com/kubedb/mongodb/commit/dc2fce0d) Add InMemory validations & some refactoring (#491) +- [991588ef](https://github.com/kubedb/mongodb/commit/991588ef) Add support for custom volumes (#492) +- [4fc305c0](https://github.com/kubedb/mongodb/commit/4fc305c0) Update Health Checker (#480) +- [3be64a6b](https://github.com/kubedb/mongodb/commit/3be64a6b) SKIP_IMAGE_DIGEST for dev builds (#490) +- [3fd70298](https://github.com/kubedb/mongodb/commit/3fd70298) Use docker images with digest value (#486) +- [c7325a29](https://github.com/kubedb/mongodb/commit/c7325a29) Fix connection leak when ping fails (#485) +- [4ff96a0e](https://github.com/kubedb/mongodb/commit/4ff96a0e) Use kubebuilder client for db-client-go (#484) +- [4094f54a](https://github.com/kubedb/mongodb/commit/4094f54a) Update to k8s 1.24 toolchain (#482) +- [2e56a4e9](https://github.com/kubedb/mongodb/commit/2e56a4e9) Update to k8s 1.24 toolchain (#481) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.21.0](https://github.com/kubedb/mysql/releases/tag/v0.21.0) + +- [229a7675](https://github.com/kubedb/mysql/commit/229a7675) Prepare for release v0.21.0 (#488) +- [f0f44703](https://github.com/kubedb/mysql/commit/f0f44703) Prepare for release v0.21.0-rc.1 (#487) +- [77e0b015](https://github.com/kubedb/mysql/commit/77e0b015) refactor mysql health 
checker (#486) +- [8c008024](https://github.com/kubedb/mysql/commit/8c008024) Update health checker (#485) +- [15f1a9dd](https://github.com/kubedb/mysql/commit/15f1a9dd) Prepare for release v0.21.0-rc.0 (#484) +- [43f733a3](https://github.com/kubedb/mysql/commit/43f733a3) Acquire license from license-proxyserver if available (#483) +- [0c88f473](https://github.com/kubedb/mysql/commit/0c88f473) Use GetCertSecret instead of MustCertSecretName (#482) +- [afb1c070](https://github.com/kubedb/mysql/commit/afb1c070) Add support for Custom volume and volume mounts (#481) +- [4f167ef8](https://github.com/kubedb/mysql/commit/4f167ef8) Update MySQL healthchecker (#480) +- [05d5179e](https://github.com/kubedb/mysql/commit/05d5179e) Update read replica Auth secret (#477) +- [d36e09de](https://github.com/kubedb/mysql/commit/d36e09de) upsert volumes with existing volumes. (#476) +- [e51c7d79](https://github.com/kubedb/mysql/commit/e51c7d79) Use docker image with digest value (#475) +- [574fc526](https://github.com/kubedb/mysql/commit/574fc526) SKIP_IMAGE_DIGEST for dev builds (#479) +- [10dd0b78](https://github.com/kubedb/mysql/commit/10dd0b78) Fix not ready condition after removing halt (#478) +- [3c514bae](https://github.com/kubedb/mysql/commit/3c514bae) Update to k8s 1.24 toolchain (#473) +- [6a09468a](https://github.com/kubedb/mysql/commit/6a09468a) Update to k8s 1.24 toolchain (#472) +- [04c925b5](https://github.com/kubedb/mysql/commit/04c925b5) Test against Kubernetes 1.24.0 (#471) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.6.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.6.0) + +- [445a2ff](https://github.com/kubedb/mysql-router-init/commit/445a2ff) Acquire license from license-proxyserver if available (#23) +- [28eb2d4](https://github.com/kubedb/mysql-router-init/commit/28eb2d4) Update to k8s 1.24 toolchain (#22) +- [40d3dd9](https://github.com/kubedb/mysql-router-init/commit/40d3dd9) Update to k8s 1.24 toolchain (#21) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.15.0](https://github.com/kubedb/ops-manager/releases/tag/v0.15.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.15.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.15.0) + +- [fd8fecd2](https://github.com/kubedb/percona-xtradb/commit/fd8fecd2) Prepare for release v0.15.0 (#268) +- [9139470e](https://github.com/kubedb/percona-xtradb/commit/9139470e) Add custom service account (#267) +- [c512779b](https://github.com/kubedb/percona-xtradb/commit/c512779b) Prepare for release v0.15.0-rc.1 (#266) +- [a767d382](https://github.com/kubedb/percona-xtradb/commit/a767d382) Update health checker (#265) +- [4970d99b](https://github.com/kubedb/percona-xtradb/commit/4970d99b) Prepare for release v0.15.0-rc.0 (#264) +- [d1896876](https://github.com/kubedb/percona-xtradb/commit/d1896876) Acquire license from license-proxyserver if available (#263) +- [8e65e97d](https://github.com/kubedb/percona-xtradb/commit/8e65e97d) Add custom volume and volume mount support (#262) +- [1866eed1](https://github.com/kubedb/percona-xtradb/commit/1866eed1) Add Percona XtraDB Cluster Support (#237) +- [501c5a18](https://github.com/kubedb/percona-xtradb/commit/501c5a18) Update to k8s 1.24 toolchain (#260) +- [e632ea56](https://github.com/kubedb/percona-xtradb/commit/e632ea56) Test against Kubernetes 1.24.0 (#259) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### 
[v0.12.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.12.0) + +- [394b7fff](https://github.com/kubedb/pg-coordinator/commit/394b7fff) Prepare for release v0.12.0 (#89) +- [e555754d](https://github.com/kubedb/pg-coordinator/commit/e555754d) Prepare for release v0.12.0-rc.1 (#88) +- [28c83e07](https://github.com/kubedb/pg-coordinator/commit/28c83e07) Update health checker (#86) +- [e066e950](https://github.com/kubedb/pg-coordinator/commit/e066e950) Prepare for release v0.12.0-rc.0 (#85) +- [e3256a32](https://github.com/kubedb/pg-coordinator/commit/e3256a32) Acquire license from license-proxyserver if available (#83) +- [909c225e](https://github.com/kubedb/pg-coordinator/commit/909c225e) Remove role scripts from the coordinator. (#82) +- [fdd2a4ad](https://github.com/kubedb/pg-coordinator/commit/fdd2a4ad) Update to k8s 1.24 toolchain (#81) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.15.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.15.0) + +- [722eccc2](https://github.com/kubedb/pgbouncer/commit/722eccc2) Prepare for release v0.15.0 (#233) +- [6b3f6715](https://github.com/kubedb/pgbouncer/commit/6b3f6715) Prepare for release v0.15.0-rc.1 (#232) +- [a88654ad](https://github.com/kubedb/pgbouncer/commit/a88654ad) Update health checker (#231) +- [09815675](https://github.com/kubedb/pgbouncer/commit/09815675) Prepare for release v0.15.0-rc.0 (#230) +- [6e18d967](https://github.com/kubedb/pgbouncer/commit/6e18d967) Acquire license from license-proxyserver if available (#229) +- [4d042db4](https://github.com/kubedb/pgbouncer/commit/4d042db4) Update healthcheck (#228) +- [22cf136d](https://github.com/kubedb/pgbouncer/commit/22cf136d) Add digest value on docker image (#227) +- [3e9914c9](https://github.com/kubedb/pgbouncer/commit/3e9914c9) Update test for PgBouncer (#219) +- [92662071](https://github.com/kubedb/pgbouncer/commit/92662071) SKIP_IMAGE_DIGEST for dev builds (#226) +- [a62c708f](https://github.com/kubedb/pgbouncer/commit/a62c708f) Update to k8s 1.24 toolchain (#224) +- [061e53fe](https://github.com/kubedb/pgbouncer/commit/061e53fe) Update to k8s 1.24 toolchain (#223) +- [a89ce8fd](https://github.com/kubedb/pgbouncer/commit/a89ce8fd) Test against Kubernetes 1.24.0 (#222) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.28.0](https://github.com/kubedb/postgres/releases/tag/v0.28.0) + +- [e61ca57b](https://github.com/kubedb/postgres/commit/e61ca57b) Prepare for release v0.28.0 (#586) +- [f2b875d4](https://github.com/kubedb/postgres/commit/f2b875d4) Prepare for release v0.28.0-rc.1 (#585) +- [a15214c9](https://github.com/kubedb/postgres/commit/a15214c9) Set default values during reconcile cycle (#584) +- [acf99b7f](https://github.com/kubedb/postgres/commit/acf99b7f) Update health checker (#583) +- [e2a4636b](https://github.com/kubedb/postgres/commit/e2a4636b) Prepare for release v0.28.0-rc.0 (#582) +- [8f861234](https://github.com/kubedb/postgres/commit/8f861234) Acquire license from license-proxyserver if available (#581) +- [52f6820d](https://github.com/kubedb/postgres/commit/52f6820d) Add Custom Volume and VolumeMount Support (#580) +- [34f8283f](https://github.com/kubedb/postgres/commit/34f8283f) Update Postgres health check (#578) +- [ad40ece7](https://github.com/kubedb/postgres/commit/ad40ece7) Use docker image with digest value (#579) +- [c51a4716](https://github.com/kubedb/postgres/commit/c51a4716) Update: remove sidecar from standalone. 
(#576) +- [514ef2bd](https://github.com/kubedb/postgres/commit/514ef2bd) SKIP_IMAGE_DIGEST for dev builds (#577) +- [2bc43818](https://github.com/kubedb/postgres/commit/2bc43818) Update to k8s 1.24 toolchain (#575) +- [8e2a02a3](https://github.com/kubedb/postgres/commit/8e2a02a3) Test against Kubernetes 1.24.0 (#574) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.28.0](https://github.com/kubedb/provisioner/releases/tag/v0.28.0) + + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.15.0](https://github.com/kubedb/proxysql/releases/tag/v0.15.0) + +- [3ad10c1f](https://github.com/kubedb/proxysql/commit/3ad10c1f) Prepare for release v0.15.0 (#246) +- [de17ccb4](https://github.com/kubedb/proxysql/commit/de17ccb4) Prepare for release v0.15.0-rc.1 (#245) +- [bceb7ff5](https://github.com/kubedb/proxysql/commit/bceb7ff5) Update health checker (#244) +- [89e4767d](https://github.com/kubedb/proxysql/commit/89e4767d) Prepare for release v0.15.0-rc.0 (#243) +- [a4c92b9b](https://github.com/kubedb/proxysql/commit/a4c92b9b) Acquire license from license-proxyserver if available (#242) +- [fb6f2301](https://github.com/kubedb/proxysql/commit/fb6f2301) Rewrite ProxySQL HealthChecker (#241) +- [a6c80651](https://github.com/kubedb/proxysql/commit/a6c80651) Add proxysql declarative configuration (#239) +- [1d9c415c](https://github.com/kubedb/proxysql/commit/1d9c415c) SKIP_IMAGE_DIGEST for dev builds (#240) +- [23a85c85](https://github.com/kubedb/proxysql/commit/23a85c85) Update to k8s 1.24 toolchain (#237) +- [a93f8f4e](https://github.com/kubedb/proxysql/commit/a93f8f4e) Test against Kubernetes 1.24.0 (#236) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.21.0](https://github.com/kubedb/redis/releases/tag/v0.21.0) + +- [fb199e1d](https://github.com/kubedb/redis/commit/fb199e1d) Prepare for release v0.21.0 (#417) +- [3a11cd64](https://github.com/kubedb/redis/commit/3a11cd64) Update PoolTimeout for Redis Shard Cluster to avoid Connection Pool Timeout (#416) +- [4be0e311](https://github.com/kubedb/redis/commit/4be0e311) Prepare for release v0.21.0-rc.1 (#415) +- [c819ea86](https://github.com/kubedb/redis/commit/c819ea86) Update Health Checker (#414) +- [41079e80](https://github.com/kubedb/redis/commit/41079e80) Update health checker (#413) +- [a5ec2963](https://github.com/kubedb/redis/commit/a5ec2963) Prepare for release v0.21.0-rc.0 (#411) +- [738a1964](https://github.com/kubedb/redis/commit/738a1964) Acquire license from license-proxyserver if available (#410) +- [9336d51d](https://github.com/kubedb/redis/commit/9336d51d) Add Custom Volume Support (#409) +- [8235b201](https://github.com/kubedb/redis/commit/8235b201) Rework Redis and Redis Sentinel Health Checker (#403) +- [f5c45ae5](https://github.com/kubedb/redis/commit/f5c45ae5) Image Digest For Sentinel Images (#406) +- [8962659f](https://github.com/kubedb/redis/commit/8962659f) SKIP_IMAGE_DIGEST for dev builds (#405) +- [efa9e726](https://github.com/kubedb/redis/commit/efa9e726) Use image with digest value (#402) +- [08388f4a](https://github.com/kubedb/redis/commit/08388f4a) Update to k8s 1.24 toolchain (#400) +- [82cd6ba2](https://github.com/kubedb/redis/commit/82cd6ba2) Test against Kubernetes 1.24.0 (#398) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.7.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.7.0) + +- [805f325](https://github.com/kubedb/redis-coordinator/commit/805f325) Prepare for release v0.7.0 (#41) +- 
[535162a](https://github.com/kubedb/redis-coordinator/commit/535162a) Prepare for release v0.7.0-rc.1 (#40) +- [9dc5d3d](https://github.com/kubedb/redis-coordinator/commit/9dc5d3d) Update health checker (#39) +- [d9726c9](https://github.com/kubedb/redis-coordinator/commit/d9726c9) Prepare for release v0.7.0-rc.0 (#38) +- [f672fd6](https://github.com/kubedb/redis-coordinator/commit/f672fd6) Acquire license from license-proxyserver if available (#37) +- [5bbab2c](https://github.com/kubedb/redis-coordinator/commit/5bbab2c) Update to k8s 1.24 toolchain (#35) +- [df38bb5](https://github.com/kubedb/redis-coordinator/commit/df38bb5) Update to k8s 1.24 toolchain (#34) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.15.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.15.0) + +- [27507324](https://github.com/kubedb/replication-mode-detector/commit/27507324) Prepare for release v0.15.0 (#204) +- [50cb0ab6](https://github.com/kubedb/replication-mode-detector/commit/50cb0ab6) Prepare for release v0.15.0-rc.1 (#203) +- [e5c04c52](https://github.com/kubedb/replication-mode-detector/commit/e5c04c52) Update db-client-go (#202) +- [6409566e](https://github.com/kubedb/replication-mode-detector/commit/6409566e) Update health checker (#201) +- [e06b4771](https://github.com/kubedb/replication-mode-detector/commit/e06b4771) Prepare for release v0.15.0-rc.0 (#200) +- [e77ff57c](https://github.com/kubedb/replication-mode-detector/commit/e77ff57c) Update db-client-go (#199) +- [3008bd62](https://github.com/kubedb/replication-mode-detector/commit/3008bd62) Acquire license from license-proxyserver if available (#198) +- [134d10c7](https://github.com/kubedb/replication-mode-detector/commit/134d10c7) Use mongodb db client (#197) +- [0a9cf005](https://github.com/kubedb/replication-mode-detector/commit/0a9cf005) Update to k8s 1.24 toolchain (#195) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.4.0](https://github.com/kubedb/schema-manager/releases/tag/v0.4.0) + +- [ccd8d69e](https://github.com/kubedb/schema-manager/commit/ccd8d69e) Prepare for release v0.4.0 (#40) +- [d5033ad5](https://github.com/kubedb/schema-manager/commit/d5033ad5) Prepare for release v0.4.0-rc.1 (#39) +- [0f7e0747](https://github.com/kubedb/schema-manager/commit/0f7e0747) Update db-client-go (#38) +- [9b44c0dc](https://github.com/kubedb/schema-manager/commit/9b44c0dc) Update health checker (#37) +- [114d882a](https://github.com/kubedb/schema-manager/commit/114d882a) Prepare for release v0.4.0-rc.0 (#36) +- [bcb51407](https://github.com/kubedb/schema-manager/commit/bcb51407) Update db-client-go (#35) +- [e6ffd878](https://github.com/kubedb/schema-manager/commit/e6ffd878) Acquire license from license-proxyserver if available (#34) +- [5a6ae96d](https://github.com/kubedb/schema-manager/commit/5a6ae96d) Update kutil dependency (#33) +- [8e9a2732](https://github.com/kubedb/schema-manager/commit/8e9a2732) Update to use KubeVault v2022.06.16 (#32) +- [5f7e441d](https://github.com/kubedb/schema-manager/commit/5f7e441d) Update to k8s 1.24 toolchain (#31) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.13.0](https://github.com/kubedb/tests/releases/tag/v0.13.0) + +- [9d36df53](https://github.com/kubedb/tests/commit/9d36df53) Prepare for release v0.13.0 (#187) +- [a8571959](https://github.com/kubedb/tests/commit/a8571959) Prepare for release v0.13.0-rc.1 (#186) +- 
[77ce04e4](https://github.com/kubedb/tests/commit/77ce04e4) Update db-client-go (#185) +- [7924eac1](https://github.com/kubedb/tests/commit/7924eac1) Update health checker (#184) +- [c9a1d705](https://github.com/kubedb/tests/commit/c9a1d705) Prepare for release v0.13.0-rc.0 (#183) +- [7307b213](https://github.com/kubedb/tests/commit/7307b213) Acquire license from license-proxyserver if available (#182) +- [84bdf708](https://github.com/kubedb/tests/commit/84bdf708) Update to k8s 1.24 toolchain (#180) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.4.0](https://github.com/kubedb/ui-server/releases/tag/v0.4.0) + +- [a7cd9b5e](https://github.com/kubedb/ui-server/commit/a7cd9b5e) Prepare for release v0.4.0 (#44) +- [079240cc](https://github.com/kubedb/ui-server/commit/079240cc) Fix MariaDBInsight resource name +- [64a798ef](https://github.com/kubedb/ui-server/commit/64a798ef) Prepare for release v0.4.0-rc.1 (#43) +- [a0475425](https://github.com/kubedb/ui-server/commit/a0475425) Update db-client-go (#42) +- [82a9bec5](https://github.com/kubedb/ui-server/commit/82a9bec5) Update health checker (#41) +- [56b14fd6](https://github.com/kubedb/ui-server/commit/56b14fd6) Prepare for release v0.4.0-rc.0 (#40) +- [932c6cd7](https://github.com/kubedb/ui-server/commit/932c6cd7) Vendor Elasticsearch changes in db-client-go (#39) +- [1e8b3cd7](https://github.com/kubedb/ui-server/commit/1e8b3cd7) Acquire license from license-proxyserver if available (#37) +- [eef867b5](https://github.com/kubedb/ui-server/commit/eef867b5) Fix linter warning regarding selfLink (#36) +- [15556fc9](https://github.com/kubedb/ui-server/commit/15556fc9) Update to k8s 1.24 toolchain (#35) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.4.0](https://github.com/kubedb/webhook-server/releases/tag/v0.4.0) + + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.10.12-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2022.10.12-rc.0.md new file mode 100644 index 0000000000..ea64114c14 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.10.12-rc.0.md @@ -0,0 +1,605 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.10.12-rc.0 + name: Changelog-v2022.10.12-rc.0 + parent: welcome + weight: 20221012 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.10.12-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.10.12-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.10.12-rc.0 (2022-10-12) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.29.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.29.0-rc.0) + +- [7d01a527](https://github.com/kubedb/apimachinery/commit/7d01a527) Add conditions for postgres logical replication. 
(#990) +- [197a2568](https://github.com/kubedb/apimachinery/commit/197a2568) Remove storage autoscaler from Sentinel spec (#991) +- [2dae1fa1](https://github.com/kubedb/apimachinery/commit/2dae1fa1) Add GetSystemUserSecret Helpers on PerconaXtraDB (#989) +- [35e1d5e5](https://github.com/kubedb/apimachinery/commit/35e1d5e5) Make OpsRequestType specific to databases (#988) +- [e7310243](https://github.com/kubedb/apimachinery/commit/e7310243) Add Redis Sentinel Ops Requests APIs (#958) +- [b937b3dc](https://github.com/kubedb/apimachinery/commit/b937b3dc) Update digest.go +- [1b1732a9](https://github.com/kubedb/apimachinery/commit/1b1732a9) Change ProxySQL backend to a local obj ref (#987) +- [31c66a34](https://github.com/kubedb/apimachinery/commit/31c66a34) Include Arbiter & hidden nodes in MongoAutoscaler (#979) +- [3c2f4a7a](https://github.com/kubedb/apimachinery/commit/3c2f4a7a) Add autoscaler types for Postgres (#969) +- [9f60ebbe](https://github.com/kubedb/apimachinery/commit/9f60ebbe) Add GetAuthSecretName() helper (#986) +- [b48d0118](https://github.com/kubedb/apimachinery/commit/b48d0118) Ignore TLS certificate validation when using private domains (#984) +- [11a09d52](https://github.com/kubedb/apimachinery/commit/11a09d52) Use stash.appscode.dev/apimachinery@v0.23.0 (#983) +- [cb611290](https://github.com/kubedb/apimachinery/commit/cb611290) Remove duplicate short name from redis sentinel (#982) +- [f5eabfc2](https://github.com/kubedb/apimachinery/commit/f5eabfc2) Fix typo 'SuccessfullyRestatedStatefulSet' (#980) +- [4f6d7eac](https://github.com/kubedb/apimachinery/commit/4f6d7eac) Test against Kubernetes 1.25.0 (#981) +- [c0388bc2](https://github.com/kubedb/apimachinery/commit/c0388bc2) Use authSecret.externallyManaged field (#978) +- [7f39736a](https://github.com/kubedb/apimachinery/commit/7f39736a) Remove default values from authSecret (#977) +- [2d9abdb4](https://github.com/kubedb/apimachinery/commit/2d9abdb4) Support different types of secrets and password rotation (#976) +- [f01cf5b9](https://github.com/kubedb/apimachinery/commit/f01cf5b9) Using opsRequestOpts for elastic, maria & percona (#970) +- [e26f6417](https://github.com/kubedb/apimachinery/commit/e26f6417) Fix typos of Postgres Logical Replication CRDs.
(#974) +- [d43f454e](https://github.com/kubedb/apimachinery/commit/d43f454e) Check for PDB version only once (#975) +- [fb5283cd](https://github.com/kubedb/apimachinery/commit/fb5283cd) Handle status conversion for PDB (#973) +- [7263b503](https://github.com/kubedb/apimachinery/commit/7263b503) Update kutil +- [5c643b97](https://github.com/kubedb/apimachinery/commit/5c643b97) Use Go 1.19 +- [a0b96812](https://github.com/kubedb/apimachinery/commit/a0b96812) Fix mergo dependency +- [b7b93597](https://github.com/kubedb/apimachinery/commit/b7b93597) Use k8s 1.25.1 libs (#971) +- [c1f407b0](https://github.com/kubedb/apimachinery/commit/c1f407b0) Add MySQLAutoscaler support (#968) +- [693f5243](https://github.com/kubedb/apimachinery/commit/693f5243) Add MongoDB HiddenNode support (#956) +- [0b3be441](https://github.com/kubedb/apimachinery/commit/0b3be441) Add Postgres Publisher & Subscriber CRDs (#967) +- [71947dec](https://github.com/kubedb/apimachinery/commit/71947dec) Update README.md +- [818f48fa](https://github.com/kubedb/apimachinery/commit/818f48fa) Add redis-sentinel autoscaler types (#965) +- [011938c4](https://github.com/kubedb/apimachinery/commit/011938c4) Add PerconaXtraDB OpsReq and Autoscaler APIs (#953) +- [b57e7099](https://github.com/kubedb/apimachinery/commit/b57e7099) Add RedisAutoscaler support (#963) +- [2ccea895](https://github.com/kubedb/apimachinery/commit/2ccea895) Remove `DisableScaleDown` field from autoscaler (#966) +- [02b47709](https://github.com/kubedb/apimachinery/commit/02b47709) Support PDB v1 or v1beta1 api based on k8s version (#964) +- [e2d0bb4f](https://github.com/kubedb/apimachinery/commit/e2d0bb4f) Stop using removed apis in Kubernetes 1.25 (#962) +- [722a1bc1](https://github.com/kubedb/apimachinery/commit/722a1bc1) Use health checker types from kmodules (#961) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.14.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.14.0-rc.0) + +- [99ffb7a4](https://github.com/kubedb/autoscaler/commit/99ffb7a4) Prepare for release v0.14.0-rc.0 (#118) +- [deec8a47](https://github.com/kubedb/autoscaler/commit/deec8a47) Update dependencies (#117) +- [c06eff58](https://github.com/kubedb/autoscaler/commit/c06eff58) Support mongo arbiter & hidden nodes (#115) +- [513b5fb4](https://github.com/kubedb/autoscaler/commit/513b5fb4) Add support for Postgres Autoscaler (#112) +- [87dd17fe](https://github.com/kubedb/autoscaler/commit/87dd17fe) Using opsRequestOpts on storageAutoscalers to satisfy cmp.Equal() (#116) +- [a8afe242](https://github.com/kubedb/autoscaler/commit/a8afe242) Test against Kubernetes 1.25.0 (#114) +- [65d3869c](https://github.com/kubedb/autoscaler/commit/65d3869c) Test against Kubernetes 1.25.0 (#113) +- [bc069f48](https://github.com/kubedb/autoscaler/commit/bc069f48) Add MySQL Autoscaler support (#106) +- [88c985a0](https://github.com/kubedb/autoscaler/commit/88c985a0) Check for PDB version only once (#110) +- [6f5f9ae2](https://github.com/kubedb/autoscaler/commit/6f5f9ae2) Handle status conversion for CronJob/VolumeSnapshot (#109) +- [46b925c0](https://github.com/kubedb/autoscaler/commit/46b925c0) Use Go 1.19 (#108) +- [674e3b7a](https://github.com/kubedb/autoscaler/commit/674e3b7a) Use k8s 1.25.1 libs (#107) +- [6a5d4274](https://github.com/kubedb/autoscaler/commit/6a5d4274) Improve internal API; using milliValue (#105) +- [757cdfed](https://github.com/kubedb/autoscaler/commit/757cdfed) Add support for RedisSentinel autoscaler (#104) +- 
[56b92c66](https://github.com/kubedb/autoscaler/commit/56b92c66) Update README.md +- [f3b9904f](https://github.com/kubedb/autoscaler/commit/f3b9904f) Add PerconaXtraDB Autoscaler Support (#103) +- [7ac495d9](https://github.com/kubedb/autoscaler/commit/7ac495d9) Implement redisAutoscaler feature (#102) +- [997180f5](https://github.com/kubedb/autoscaler/commit/997180f5) Stop using removed apis in Kubernetes 1.25 (#101) +- [490a6b69](https://github.com/kubedb/autoscaler/commit/490a6b69) Use health checker types from kmodules (#100) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.29.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.29.0-rc.0) + +- [8033e31b](https://github.com/kubedb/cli/commit/8033e31b) Prepare for release v0.29.0-rc.0 (#684) +- [b021f761](https://github.com/kubedb/cli/commit/b021f761) Update dependencies (#683) +- [792efd14](https://github.com/kubedb/cli/commit/792efd14) Support externally managed secrets (#681) +- [7ec2adbc](https://github.com/kubedb/cli/commit/7ec2adbc) Test against Kubernetes 1.25.0 (#682) +- [fc9b63c7](https://github.com/kubedb/cli/commit/fc9b63c7) Check for PDB version only once (#680) +- [81199060](https://github.com/kubedb/cli/commit/81199060) Handle status conversion for CronJob/VolumeSnapshot (#679) +- [17c6e94d](https://github.com/kubedb/cli/commit/17c6e94d) Use Go 1.19 (#678) +- [31c24f80](https://github.com/kubedb/cli/commit/31c24f80) Use k8s 1.25.1 libs (#677) +- [68e9ada6](https://github.com/kubedb/cli/commit/68e9ada6) Update README.md +- [4202bc84](https://github.com/kubedb/cli/commit/4202bc84) Stop using removed apis in Kubernetes 1.25 (#676) +- [eb922b19](https://github.com/kubedb/cli/commit/eb922b19) Use health checker types from kmodules (#675) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.5.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.5.0-rc.0) + +- [fd8f1bc](https://github.com/kubedb/dashboard/commit/fd8f1bc) Prepare for release v0.5.0-rc.0 (#46) +- [4b093a9](https://github.com/kubedb/dashboard/commit/4b093a9) Update dependencies (#45) +- [9804a55](https://github.com/kubedb/dashboard/commit/9804a55) Test against Kubernetes 1.25.0 (#44) +- [5f9caec](https://github.com/kubedb/dashboard/commit/5f9caec) Check for PDB version only once (#42) +- [91b256c](https://github.com/kubedb/dashboard/commit/91b256c) Handle status conversion for CronJob/VolumeSnapshot (#41) +- [11445c2](https://github.com/kubedb/dashboard/commit/11445c2) Use Go 1.19 (#40) +- [858bced](https://github.com/kubedb/dashboard/commit/858bced) Use k8s 1.25.1 libs (#39) +- [ebaaade](https://github.com/kubedb/dashboard/commit/ebaaade) Stop using removed apis in Kubernetes 1.25 (#38) +- [51d4f7f](https://github.com/kubedb/dashboard/commit/51d4f7f) Use health checker types from kmodules (#37) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.29.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.29.0-rc.0) + +- [1e715c1a](https://github.com/kubedb/elasticsearch/commit/1e715c1a9) Prepare for release v0.29.0-rc.0 (#611) +- [4ab0de97](https://github.com/kubedb/elasticsearch/commit/4ab0de973) Update dependencies (#610) +- [1803a407](https://github.com/kubedb/elasticsearch/commit/1803a4078) Add support for Externally Managed secret (#609) +- [c2fb96e2](https://github.com/kubedb/elasticsearch/commit/c2fb96e2b) Test against Kubernetes 1.25.0 (#608) +- [96bbc6a8](https://github.com/kubedb/elasticsearch/commit/96bbc6a85) Check for PDB version only once (#606) +- 
[38099062](https://github.com/kubedb/elasticsearch/commit/380990623) Handle status conversion for CronJob/VolumeSnapshot (#605) +- [6e86f853](https://github.com/kubedb/elasticsearch/commit/6e86f853a) Use Go 1.19 (#604) +- [838ab6ae](https://github.com/kubedb/elasticsearch/commit/838ab6aec) Use k8s 1.25.1 libs (#603) +- [ce6877b5](https://github.com/kubedb/elasticsearch/commit/ce6877b58) Update README.md +- [297c6004](https://github.com/kubedb/elasticsearch/commit/297c60040) Stop using removed apis in Kubernetes 1.25 (#602) +- [7f9ef6bf](https://github.com/kubedb/elasticsearch/commit/7f9ef6bf1) Use health checker types from kmodules (#601) +- [baf9b9c1](https://github.com/kubedb/elasticsearch/commit/baf9b9c1b) Fix ClientCreated counter increment issue in healthchecker (#600) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.10.12-rc.0](https://github.com/kubedb/installer/releases/tag/v2022.10.12-rc.0) + +- [0d5ff889](https://github.com/kubedb/installer/commit/0d5ff889) Prepare for release v2022.10.12-rc.0 (#553) +- [f0410def](https://github.com/kubedb/installer/commit/f0410def) Fix backend name for ProxySQL (#554) +- [545d7326](https://github.com/kubedb/installer/commit/545d7326) Add support for mysql 8.0.31 (#552) +- [0ecf6b7a](https://github.com/kubedb/installer/commit/0ecf6b7a) Update crds +- [3c80588f](https://github.com/kubedb/installer/commit/3c80588f) Add ProxySQL-2.3.2-debian/centos-v2 (#549) +- [edb50a92](https://github.com/kubedb/installer/commit/edb50a92) Add ProxySQL MetricsConfiguration (#545) +- [78961127](https://github.com/kubedb/installer/commit/78961127) Update Redis Init Container Image (#551) +- [e266fe95](https://github.com/kubedb/installer/commit/e266fe95) Update Percona XtraDB init container image (#550) +- [c2b9f93b](https://github.com/kubedb/installer/commit/c2b9f93b) Update mongodb init container image (#548) +- [cb4d226a](https://github.com/kubedb/installer/commit/cb4d226a) Add Redis Sentinel Ops Requests changes (#533) +- [f970eac3](https://github.com/kubedb/installer/commit/f970eac3) Fix missing docker images (#547) +- [d34e3363](https://github.com/kubedb/installer/commit/d34e3363) Add mutating webhook for postgresAutoscaler (#544) +- [bb0ae0de](https://github.com/kubedb/installer/commit/bb0ae0de) Fix valuePath for app_namespace key (#546) +- [862d034e](https://github.com/kubedb/installer/commit/862d034e) Add Subscriber apiservice (#543) +- [88d1225e](https://github.com/kubedb/installer/commit/88d1225e) Use k8s 1.25 client libs (#228) +- [46641e26](https://github.com/kubedb/installer/commit/46641e26) Add proxysql new version 2.4.4 (#539) +- [f498f5ae](https://github.com/kubedb/installer/commit/f498f5ae) Add Percona XtraDB 8.0.28 (#529) +- [24519580](https://github.com/kubedb/installer/commit/24519580) Add PerconaXtraDB Metrics (#532) +- [a1f8ac75](https://github.com/kubedb/installer/commit/a1f8ac75) Update crds (#541) +- [4b500533](https://github.com/kubedb/installer/commit/4b500533) Add Postgres Logical Replication rbac and validators (#534) +- [753f60c4](https://github.com/kubedb/installer/commit/753f60c4) Use k8s 1.25.2 +- [700dacb1](https://github.com/kubedb/installer/commit/700dacb1) Test against Kubernetes 1.25.0 (#540) +- [92069bc7](https://github.com/kubedb/installer/commit/92069bc7) Test against k8s 1.25.0 (#537) +- [b944c0b0](https://github.com/kubedb/installer/commit/b944c0b0) Don't create PSP object in k8s >= 1.25 (#536) +- [d35f8aec](https://github.com/kubedb/installer/commit/d35f8aec) Use Go 1.19 (#535) +- 
[59a10600](https://github.com/kubedb/installer/commit/59a10600) Add all db-types in autoscaler mutatingwebhookConfiguration (#531) +- [3010e3e4](https://github.com/kubedb/installer/commit/3010e3e4) Update README.md +- [2763ae75](https://github.com/kubedb/installer/commit/2763ae75) Add exclusion for health index in Elasticsearch (#530) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.13.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.13.0-rc.0) + +- [b13d62cf](https://github.com/kubedb/mariadb/commit/b13d62cf) Prepare for release v0.13.0-rc.0 (#179) +- [5a8b0877](https://github.com/kubedb/mariadb/commit/5a8b0877) Add TLS Secret on Appbinding (#178) +- [a7f976f6](https://github.com/kubedb/mariadb/commit/a7f976f6) Add AppRef on AppBinding and Add Exporter Config Secret (#177) +- [a3d17697](https://github.com/kubedb/mariadb/commit/a3d17697) Update dependencies (#176) +- [8c666da8](https://github.com/kubedb/mariadb/commit/8c666da8) Add Externally Manage Secret Support (#175) +- [b14391f2](https://github.com/kubedb/mariadb/commit/b14391f2) Test against Kubernetes 1.25.0 (#174) +- [a07bbf68](https://github.com/kubedb/mariadb/commit/a07bbf68) Check for PDB version only once (#172) +- [8a316b93](https://github.com/kubedb/mariadb/commit/8a316b93) Handle status conversion for CronJob/VolumeSnapshot (#171) +- [56b6cd33](https://github.com/kubedb/mariadb/commit/56b6cd33) Use Go 1.19 (#170) +- [c666db48](https://github.com/kubedb/mariadb/commit/c666db48) Use k8s 1.25.1 libs (#169) +- [665d7f2a](https://github.com/kubedb/mariadb/commit/665d7f2a) Fix health check issue (#166) +- [c089e057](https://github.com/kubedb/mariadb/commit/c089e057) Update README.md +- [c0efefeb](https://github.com/kubedb/mariadb/commit/c0efefeb) Stop using removed apis in Kubernetes 1.25 (#168) +- [e3ef008e](https://github.com/kubedb/mariadb/commit/e3ef008e) Use health checker types from kmodules (#167) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.9.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.9.0-rc.0) + +- [b16d49d](https://github.com/kubedb/mariadb-coordinator/commit/b16d49d) Prepare for release v0.9.0-rc.0 (#62) +- [a5e1a7b](https://github.com/kubedb/mariadb-coordinator/commit/a5e1a7b) Update dependencies (#61) +- [119956c](https://github.com/kubedb/mariadb-coordinator/commit/119956c) Test against Kubernetes 1.25.0 (#60) +- [4950880](https://github.com/kubedb/mariadb-coordinator/commit/4950880) Check for PDB version only once (#58) +- [8e89509](https://github.com/kubedb/mariadb-coordinator/commit/8e89509) Handle status conversion for CronJob/VolumeSnapshot (#57) +- [79dc72c](https://github.com/kubedb/mariadb-coordinator/commit/79dc72c) Use Go 1.19 (#56) +- [5a57951](https://github.com/kubedb/mariadb-coordinator/commit/5a57951) Use k8s 1.25.1 libs (#55) +- [101e71a](https://github.com/kubedb/mariadb-coordinator/commit/101e71a) Stop using removed apis in Kubernetes 1.25 (#54) +- [61c60ed](https://github.com/kubedb/mariadb-coordinator/commit/61c60ed) Use health checker types from kmodules (#53) +- [fe8a57e](https://github.com/kubedb/mariadb-coordinator/commit/fe8a57e) Prepare for release v0.8.0-rc.1 (#52) +- [9c3c47f](https://github.com/kubedb/mariadb-coordinator/commit/9c3c47f) Update health checker (#51) +- [82bad04](https://github.com/kubedb/mariadb-coordinator/commit/82bad04) Prepare for release v0.8.0-rc.0 (#50) +- [487fdbb](https://github.com/kubedb/mariadb-coordinator/commit/487fdbb) Acquire license from 
license-proxyserver if available (#49) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.22.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.22.0-rc.0) + +- [255abab3](https://github.com/kubedb/memcached/commit/255abab3) Prepare for release v0.22.0-rc.0 (#373) +- [2cbc373f](https://github.com/kubedb/memcached/commit/2cbc373f) Update dependencies (#372) +- [6995e546](https://github.com/kubedb/memcached/commit/6995e546) Test against Kubernetes 1.25.0 (#371) +- [2974948e](https://github.com/kubedb/memcached/commit/2974948e) Check for PDB version only once (#369) +- [f2662305](https://github.com/kubedb/memcached/commit/f2662305) Handle status conversion for CronJob/VolumeSnapshot (#368) +- [a79d8ed9](https://github.com/kubedb/memcached/commit/a79d8ed9) Use Go 1.19 (#367) +- [e2a89736](https://github.com/kubedb/memcached/commit/e2a89736) Use k8s 1.25.1 libs (#366) +- [15ba567f](https://github.com/kubedb/memcached/commit/15ba567f) Stop using removed apis in Kubernetes 1.25 (#365) +- [12204d85](https://github.com/kubedb/memcached/commit/12204d85) Use health checker types from kmodules (#364) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.22.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.22.0-rc.0) + +- [b9e03cc5](https://github.com/kubedb/mongodb/commit/b9e03cc5) Prepare for release v0.22.0-rc.0 (#517) +- [2f0c8b65](https://github.com/kubedb/mongodb/commit/2f0c8b65) Set TLSSecret name (#516) +- [ffb021ea](https://github.com/kubedb/mongodb/commit/ffb021ea) Configure AppRef in appbinding (#515) +- [2c9eb87b](https://github.com/kubedb/mongodb/commit/2c9eb87b) Add support for externally-managed authSecret (#514) +- [f4789ab7](https://github.com/kubedb/mongodb/commit/f4789ab7) Test against Kubernetes 1.25.0 (#513) +- [9ad4c219](https://github.com/kubedb/mongodb/commit/9ad4c219) Change operator name in event (#511) +- [dbb7ff10](https://github.com/kubedb/mongodb/commit/dbb7ff10) Check for PDB version only once (#510) +- [79d53b0a](https://github.com/kubedb/mongodb/commit/79d53b0a) Handle status conversion for CronJob/VolumeSnapshot (#509) +- [37521202](https://github.com/kubedb/mongodb/commit/37521202) Use Go 1.19 (#508) +- [d1a2d55a](https://github.com/kubedb/mongodb/commit/d1a2d55a) Use k8s 1.25.1 libs (#507) +- [43399906](https://github.com/kubedb/mongodb/commit/43399906) Add support for Hidden node (#503) +- [91acbffc](https://github.com/kubedb/mongodb/commit/91acbffc) Update README.md +- [b053290c](https://github.com/kubedb/mongodb/commit/b053290c) Stop using removed apis in Kubernetes 1.25 (#506) +- [79b99580](https://github.com/kubedb/mongodb/commit/79b99580) Use health checker types from kmodules (#505) +- [ff39883d](https://github.com/kubedb/mongodb/commit/ff39883d) Fix health check issue (#504) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.22.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.22.0-rc.0) + +- [8386ed4c](https://github.com/kubedb/mysql/commit/8386ed4c) Prepare for release v0.22.0-rc.0 (#504) +- [8d58bbd8](https://github.com/kubedb/mysql/commit/8d58bbd8) Add cluster role for watching mysqlversion in coordinator (#503) +- [53b207b3](https://github.com/kubedb/mysql/commit/53b207b3) Add TLS Secret Name in appbinding (#501) +- [541e9f5e](https://github.com/kubedb/mysql/commit/541e9f5e) Update dependencies (#502) +- [e51a494c](https://github.com/kubedb/mysql/commit/e51a494c) Fix innodb router issues (#500) +- [c4f78c1f](https://github.com/kubedb/mysql/commit/c4f78c1f) Wait for 
externally managed auth secret (#499) +- [90f337a2](https://github.com/kubedb/mysql/commit/90f337a2) Test against Kubernetes 1.25.0 (#498) +- [af6d6654](https://github.com/kubedb/mysql/commit/af6d6654) Check for PDB version only once (#496) +- [27611133](https://github.com/kubedb/mysql/commit/27611133) Handle status conversion for CronJob/VolumeSnapshot (#495) +- [a662b10d](https://github.com/kubedb/mysql/commit/a662b10d) Use Go 1.19 (#494) +- [07ce8211](https://github.com/kubedb/mysql/commit/07ce8211) Use k8s 1.25.1 libs (#493) +- [fac38c31](https://github.com/kubedb/mysql/commit/fac38c31) Update README.md +- [9676f388](https://github.com/kubedb/mysql/commit/9676f388) Stop using removed apis in Kubernetes 1.25 (#492) +- [db176142](https://github.com/kubedb/mysql/commit/db176142) Use health checker types from kmodules (#491) +- [3c9835b0](https://github.com/kubedb/mysql/commit/3c9835b0) Fix health check issue (#489) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.7.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.7.0-rc.0) + +- [b1d9ecf](https://github.com/kubedb/mysql-coordinator/commit/b1d9ecf) Prepare for release v0.7.0-rc.0 (#58) +- [88d01ef](https://github.com/kubedb/mysql-coordinator/commit/88d01ef) Update dependencies (#56) +- [cbb3504](https://github.com/kubedb/mysql-coordinator/commit/cbb3504) fix group_replication extra transactions joining issue (#49) +- [8939e89](https://github.com/kubedb/mysql-coordinator/commit/8939e89) Test against Kubernetes 1.25.0 (#55) +- [0ba243d](https://github.com/kubedb/mysql-coordinator/commit/0ba243d) Check for PDB version only once (#53) +- [dac7227](https://github.com/kubedb/mysql-coordinator/commit/dac7227) Handle status conversion for CronJob/VolumeSnapshot (#52) +- [100f268](https://github.com/kubedb/mysql-coordinator/commit/100f268) Use Go 1.19 (#51) +- [07fc1af](https://github.com/kubedb/mysql-coordinator/commit/07fc1af) Use k8s 1.25.1 libs (#50) +- [71fe729](https://github.com/kubedb/mysql-coordinator/commit/71fe729) Stop using removed apis in Kubernetes 1.25 (#48) +- [f968206](https://github.com/kubedb/mysql-coordinator/commit/f968206) Use health checker types from kmodules (#47) +- [fa7ad1c](https://github.com/kubedb/mysql-coordinator/commit/fa7ad1c) Prepare for release v0.6.0-rc.1 (#46) +- [2c3615b](https://github.com/kubedb/mysql-coordinator/commit/2c3615b) update labels (#45) +- [38a4f88](https://github.com/kubedb/mysql-coordinator/commit/38a4f88) Update health checker (#43) +- [7c79e5f](https://github.com/kubedb/mysql-coordinator/commit/7c79e5f) Prepare for release v0.6.0-rc.0 (#42) +- [2eb313d](https://github.com/kubedb/mysql-coordinator/commit/2eb313d) Acquire license from license-proxyserver if available (#40) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.7.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.7.0-rc.0) + +- [e5eba9e](https://github.com/kubedb/mysql-router-init/commit/e5eba9e) Test against Kubernetes 1.25.0 (#26) +- [f0bdfdd](https://github.com/kubedb/mysql-router-init/commit/f0bdfdd) Use Go 1.19 (#25) +- [5631a3c](https://github.com/kubedb/mysql-router-init/commit/5631a3c) Use k8s 1.25.1 libs (#24) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.16.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.16.0-rc.0) + +- [698d4f84](https://github.com/kubedb/ops-manager/commit/698d4f84) Prepare for release v0.16.0-rc.0 (#373) +-
[f85a6048](https://github.com/kubedb/ops-manager/commit/f85a6048) Handle private registry with self-signed certs (#372) +- [205b8e3c](https://github.com/kubedb/ops-manager/commit/205b8e3c) Fix replication user update password (#371) +- [6680a32c](https://github.com/kubedb/ops-manager/commit/6680a32c) Fix HS Ops Request (#370) +- [b96f4592](https://github.com/kubedb/ops-manager/commit/b96f4592) Add PostgreSQL Logical Replication (#353) +- [2ca9b5f8](https://github.com/kubedb/ops-manager/commit/2ca9b5f8) ProxySQL Ops-requests (#368) +- [c2d6b85e](https://github.com/kubedb/ops-manager/commit/c2d6b85e) Remove ensureExporterSecretForTLSConfig for MariaDB and PXC (#369) +- [06b69609](https://github.com/kubedb/ops-manager/commit/06b69609) Add PerconaXtraDB OpsReq (#367) +- [891a2288](https://github.com/kubedb/ops-manager/commit/891a2288) Make opsReqType specific to databases (#366) +- [82d960b0](https://github.com/kubedb/ops-manager/commit/82d960b0) MySQL ops request fix for Innodb (#365) +- [13401a96](https://github.com/kubedb/ops-manager/commit/13401a96) Add Redis Sentinel Ops Request (#328) +- [8ee68b62](https://github.com/kubedb/ops-manager/commit/8ee68b62) Modify reconfigureTLS to support arbiter & hidden enabled mongo (#364) +- [805f8bba](https://github.com/kubedb/ops-manager/commit/805f8bba) Test against Kubernetes 1.25.0 (#363) +- [787f7bea](https://github.com/kubedb/ops-manager/commit/787f7bea) Fix MariaDB Upgrade OpsReq Image name issue (#361) +- [e676ea51](https://github.com/kubedb/ops-manager/commit/e676ea51) Fix podnames & selectors for Mongo volumeExpansion (#358) +- [7a5e34b1](https://github.com/kubedb/ops-manager/commit/7a5e34b1) Check for PDB version only once (#357) +- [3fb148a5](https://github.com/kubedb/ops-manager/commit/3fb148a5) Handle status conversion for CronJob/VolumeSnapshot (#356) +- [9f058091](https://github.com/kubedb/ops-manager/commit/9f058091) Use Go 1.19 (#355) +- [25febfcb](https://github.com/kubedb/ops-manager/commit/25febfcb) Update .kodiak.toml +- [eb0f3792](https://github.com/kubedb/ops-manager/commit/eb0f3792) Use k8s 1.25.1 libs (#354) +- [d09da904](https://github.com/kubedb/ops-manager/commit/d09da904) Add opsRequests for mongo hidden-node (#347) +- [9114f329](https://github.com/kubedb/ops-manager/commit/9114f329) Rework Mongo verticalScaling; Fix arbiter & exporter-related issues (#346) +- [74f4831c](https://github.com/kubedb/ops-manager/commit/74f4831c) Update README.md +- [2277c28f](https://github.com/kubedb/ops-manager/commit/2277c28f) Skip Image Digest for Dev Builds (#350) +- [c7f3cf07](https://github.com/kubedb/ops-manager/commit/c7f3cf07) Stop using removed apis in Kubernetes 1.25 (#352) +- [0fbc4d57](https://github.com/kubedb/ops-manager/commit/0fbc4d57) Use health checker types from kmodules (#351) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.16.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.16.0-rc.0) + +- [030e063d](https://github.com/kubedb/percona-xtradb/commit/030e063d) Prepare for release v0.16.0-rc.0 (#282) +- [47345de1](https://github.com/kubedb/percona-xtradb/commit/47345de1) Add TLS Secret on AppBinding (#281) +- [0aa33548](https://github.com/kubedb/percona-xtradb/commit/0aa33548) Add AppRef on AppBinding and Add Exporter Config Secret (#280) +- [82685157](https://github.com/kubedb/percona-xtradb/commit/82685157) Merge pull request #269 from kubedb/add-px-ops +- [f7f1898e](https://github.com/kubedb/percona-xtradb/commit/f7f1898e) Add Externally Managed Secret Support on 
PerconaXtraDB +- [43dcc76d](https://github.com/kubedb/percona-xtradb/commit/43dcc76d) Test against Kubernetes 1.25.0 (#278) +- [bc5c97db](https://github.com/kubedb/percona-xtradb/commit/bc5c97db) Check for PDB version only once (#276) +- [13a57a32](https://github.com/kubedb/percona-xtradb/commit/13a57a32) Handle status conversion for CronJob/VolumeSnapshot (#275) +- [6013a92e](https://github.com/kubedb/percona-xtradb/commit/6013a92e) Use Go 1.19 (#274) +- [45c413b9](https://github.com/kubedb/percona-xtradb/commit/45c413b9) Use k8s 1.25.1 libs (#273) +- [fd7d238a](https://github.com/kubedb/percona-xtradb/commit/fd7d238a) Update README.md +- [13da58d6](https://github.com/kubedb/percona-xtradb/commit/13da58d6) Stop using removed apis in Kubernetes 1.25 (#272) +- [6941e6d6](https://github.com/kubedb/percona-xtradb/commit/6941e6d6) Use health checker types from kmodules (#271) +- [9f813287](https://github.com/kubedb/percona-xtradb/commit/9f813287) Fix health check issue (#270) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.2.0-rc.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.2.0-rc.0) + +- [cf6c54c](https://github.com/kubedb/percona-xtradb-coordinator/commit/cf6c54c) Prepare for release v0.2.0-rc.0 (#19) +- [a71e01d](https://github.com/kubedb/percona-xtradb-coordinator/commit/a71e01d) Update dependencies (#18) +- [0b51751](https://github.com/kubedb/percona-xtradb-coordinator/commit/0b51751) Test against Kubernetes 1.25.0 (#17) +- [1f2b1a5](https://github.com/kubedb/percona-xtradb-coordinator/commit/1f2b1a5) Check for PDB version only once (#15) +- [03125ba](https://github.com/kubedb/percona-xtradb-coordinator/commit/03125ba) Handle status conversion for CronJob/VolumeSnapshot (#14) +- [06a2634](https://github.com/kubedb/percona-xtradb-coordinator/commit/06a2634) Use Go 1.19 (#13) +- [1a8a90b](https://github.com/kubedb/percona-xtradb-coordinator/commit/1a8a90b) Use k8s 1.25.1 libs (#12) +- [f33c751](https://github.com/kubedb/percona-xtradb-coordinator/commit/f33c751) Stop using removed apis in Kubernetes 1.25 (#11) +- [91495bf](https://github.com/kubedb/percona-xtradb-coordinator/commit/91495bf) Use health checker types from kmodules (#10) +- [290e281](https://github.com/kubedb/percona-xtradb-coordinator/commit/290e281) Prepare for release v0.1.0-rc.1 (#9) +- [c57449c](https://github.com/kubedb/percona-xtradb-coordinator/commit/c57449c) Update health checker (#8) +- [adad8b5](https://github.com/kubedb/percona-xtradb-coordinator/commit/adad8b5) Acquire license from license-proxyserver if available (#7) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.13.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.13.0-rc.0) + +- [85fb61bb](https://github.com/kubedb/pg-coordinator/commit/85fb61bb) Prepare for release v0.13.0-rc.0 (#99) +- [58720b10](https://github.com/kubedb/pg-coordinator/commit/58720b10) Update dependencies (#98) +- [5a9dcc5f](https://github.com/kubedb/pg-coordinator/commit/5a9dcc5f) Test against Kubernetes 1.25.0 (#97) +- [eb45fd8e](https://github.com/kubedb/pg-coordinator/commit/eb45fd8e) Check for PDB version only once (#95) +- [a66884fb](https://github.com/kubedb/pg-coordinator/commit/a66884fb) Handle status conversion for CronJob/VolumeSnapshot (#94) +- [db150c63](https://github.com/kubedb/pg-coordinator/commit/db150c63) Use Go 1.19 (#93) +- [8bd4fcc5](https://github.com/kubedb/pg-coordinator/commit/8bd4fcc5) Use k8s 1.25.1 libs (#92) +- 
[4a510768](https://github.com/kubedb/pg-coordinator/commit/4a510768) Stop using removed apis in Kubernetes 1.25 (#91) +- [3b26263c](https://github.com/kubedb/pg-coordinator/commit/3b26263c) Use health checker types from kmodules (#90) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.16.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.16.0-rc.0) + +- [0d58567a](https://github.com/kubedb/pgbouncer/commit/0d58567a) Prepare for release v0.16.0-rc.0 (#245) +- [47329dfa](https://github.com/kubedb/pgbouncer/commit/47329dfa) Fix TLSSecret for appbinding. (#244) +- [3efec0cb](https://github.com/kubedb/pgbouncer/commit/3efec0cb) Update dependencies (#243) +- [8a1bd7b0](https://github.com/kubedb/pgbouncer/commit/8a1bd7b0) Fix health check issue (#234) +- [c20e87e5](https://github.com/kubedb/pgbouncer/commit/c20e87e5) Test against Kubernetes 1.25.0 (#242) +- [760fd8e3](https://github.com/kubedb/pgbouncer/commit/760fd8e3) Check for PDB version only once (#240) +- [8ba2692d](https://github.com/kubedb/pgbouncer/commit/8ba2692d) Handle status conversion for CronJob/VolumeSnapshot (#239) +- [ea1fc328](https://github.com/kubedb/pgbouncer/commit/ea1fc328) Use Go 1.19 (#238) +- [6a24f732](https://github.com/kubedb/pgbouncer/commit/6a24f732) Use k8s 1.25.1 libs (#237) +- [327242e1](https://github.com/kubedb/pgbouncer/commit/327242e1) Update README.md +- [c9754ecd](https://github.com/kubedb/pgbouncer/commit/c9754ecd) Stop using removed apis in Kubernetes 1.25 (#236) +- [bb7a3b6f](https://github.com/kubedb/pgbouncer/commit/bb7a3b6f) Use health checker types from kmodules (#235) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.29.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.29.0-rc.0) + +- [cd547a68](https://github.com/kubedb/postgres/commit/cd547a68) Prepare for release v0.29.0-rc.0 (#605) +- [9d98af14](https://github.com/kubedb/postgres/commit/9d98af14) Fix TlsSecret for AppBinding (#604) +- [7d73ce99](https://github.com/kubedb/postgres/commit/7d73ce99) Update dependencies (#603) +- [d8515a3f](https://github.com/kubedb/postgres/commit/d8515a3f) Configure appRef in AppBinding (#602) +- [69458e25](https://github.com/kubedb/postgres/commit/69458e25) Check auth secrets labels if key exists (#601) +- [3dd3563b](https://github.com/kubedb/postgres/commit/3dd3563b) Simplify ensureAuthSecret (#600) +- [67f3db64](https://github.com/kubedb/postgres/commit/67f3db64) Relax Postgres key detection for a secret (#599) +- [acdd2cda](https://github.com/kubedb/postgres/commit/acdd2cda) Add support for Externally Managed secret (#597) +- [5121a362](https://github.com/kubedb/postgres/commit/5121a362) Test against Kubernetes 1.25.0 (#598) +- [bfa46b08](https://github.com/kubedb/postgres/commit/bfa46b08) Check for PDB version only once (#594) +- [150fcf2c](https://github.com/kubedb/postgres/commit/150fcf2c) Handle status conversion for CronJob/VolumeSnapshot (#593) +- [86ff76e1](https://github.com/kubedb/postgres/commit/86ff76e1) Use Go 1.19 (#592) +- [7732e22b](https://github.com/kubedb/postgres/commit/7732e22b) Use k8s 1.25.1 libs (#591) +- [b4c7f426](https://github.com/kubedb/postgres/commit/b4c7f426) Update README.md +- [68b06e68](https://github.com/kubedb/postgres/commit/68b06e68) Stop using removed apis in Kubernetes 1.25 (#590) +- [51f600b9](https://github.com/kubedb/postgres/commit/51f600b9) Use health checker types from kmodules (#589) +- [2e45ad1b](https://github.com/kubedb/postgres/commit/2e45ad1b) Fix health check issue (#588) + + + +## 
[kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.29.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.29.0-rc.0) + +- [5497cc1b](https://github.com/kubedb/provisioner/commit/5497cc1be) Prepare for release v0.29.0-rc.0 (#21) +- [26b43352](https://github.com/kubedb/provisioner/commit/26b43352c) Test against Kubernetes 1.25.0 (#19) +- [597518ce](https://github.com/kubedb/provisioner/commit/597518cea) Check for PDB version only once (#17) +- [a55613f6](https://github.com/kubedb/provisioner/commit/a55613f6e) Handle status conversion for CronJob/VolumeSnapshot (#16) +- [5ef0c78e](https://github.com/kubedb/provisioner/commit/5ef0c78ee) Use Go 1.19 (#15) +- [40fe839c](https://github.com/kubedb/provisioner/commit/40fe839c8) Use k8s 1.25.1 libs (#14) +- [444e527c](https://github.com/kubedb/provisioner/commit/444e527ca) Update README.md +- [dc895331](https://github.com/kubedb/provisioner/commit/dc8953315) Stop using removed apis in Kubernetes 1.25 (#13) +- [2910a39e](https://github.com/kubedb/provisioner/commit/2910a39e2) Use health checker types from kmodules (#12) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.16.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.16.0-rc.0) + +- [3dc6618f](https://github.com/kubedb/proxysql/commit/3dc6618f) Prepare for release v0.16.0-rc.0 (#259) +- [ec249ccf](https://github.com/kubedb/proxysql/commit/ec249ccf) Add External-backend support and changes for Ops-requests (#258) +- [fe9d736a](https://github.com/kubedb/proxysql/commit/fe9d736a) Fix health check issue (#247) +- [42a3dedf](https://github.com/kubedb/proxysql/commit/42a3dedf) Test against Kubernetes 1.25.0 (#256) +- [4677b6ab](https://github.com/kubedb/proxysql/commit/4677b6ab) Check for PDB version only once (#254) +- [8f3e6e64](https://github.com/kubedb/proxysql/commit/8f3e6e64) Handle status conversion for CronJob/VolumeSnapshot (#253) +- [19b856f4](https://github.com/kubedb/proxysql/commit/19b856f4) Use Go 1.19 (#252) +- [f8dd8297](https://github.com/kubedb/proxysql/commit/f8dd8297) Use k8s 1.25.1 libs (#251) +- [3a21c93a](https://github.com/kubedb/proxysql/commit/3a21c93a) Stop using removed apis in Kubernetes 1.25 (#249) +- [cb0a1efd](https://github.com/kubedb/proxysql/commit/cb0a1efd) Use health checker types from kmodules (#248) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.22.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.22.0-rc.0) + +- [24a961a8](https://github.com/kubedb/redis/commit/24a961a8) Prepare for release v0.22.0-rc.0 (#432) +- [586d92c6](https://github.com/kubedb/redis/commit/586d92c6) Add Client Cert to Appbinding (#431) +- [9931e951](https://github.com/kubedb/redis/commit/9931e951) Update dependencies (#430) +- [5a27f772](https://github.com/kubedb/redis/commit/5a27f772) Add Redis Sentinel Ops Request Changes (#421) +- [81ad08ab](https://github.com/kubedb/redis/commit/81ad08ab) Add Support for Externally Managed Secret (#428) +- [b16212e4](https://github.com/kubedb/redis/commit/b16212e4) Test against Kubernetes 1.25.0 (#429) +- [05a1b814](https://github.com/kubedb/redis/commit/05a1b814) Check for PDB version only once (#427) +- [bd41d16d](https://github.com/kubedb/redis/commit/bd41d16d) Handle status conversion for CronJob/VolumeSnapshot (#426) +- [e1746638](https://github.com/kubedb/redis/commit/e1746638) Use Go 1.19 (#425) +- [b220f611](https://github.com/kubedb/redis/commit/b220f611) Use k8s 1.25.1 libs (#424) +- [538e2539](https://github.com/kubedb/redis/commit/538e2539) Update 
README.md +- [1513ca9a](https://github.com/kubedb/redis/commit/1513ca9a) Stop using removed apis in Kubernetes 1.25 (#423) +- [c29f0f6b](https://github.com/kubedb/redis/commit/c29f0f6b) Use health checker types from kmodules (#422) +- [bda4de79](https://github.com/kubedb/redis/commit/bda4de79) Fix health check issue (#420) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.8.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.8.0-rc.0) + +- [21d63ea](https://github.com/kubedb/redis-coordinator/commit/21d63ea) Prepare for release v0.8.0-rc.0 (#52) +- [d7bcff0](https://github.com/kubedb/redis-coordinator/commit/d7bcff0) Update dependencies (#51) +- [db31014](https://github.com/kubedb/redis-coordinator/commit/db31014) Add Redis Sentinel Ops Requests Changes (#48) +- [3bc6a63](https://github.com/kubedb/redis-coordinator/commit/3bc6a63) Test against Kubernetes 1.25.0 (#50) +- [b144d17](https://github.com/kubedb/redis-coordinator/commit/b144d17) Check for PDB version only once (#47) +- [803f76a](https://github.com/kubedb/redis-coordinator/commit/803f76a) Handle status conversion for CronJob/VolumeSnapshot (#46) +- [a7cd5af](https://github.com/kubedb/redis-coordinator/commit/a7cd5af) Use Go 1.19 (#45) +- [f066d36](https://github.com/kubedb/redis-coordinator/commit/f066d36) Use k8s 1.25.1 libs (#44) +- [db04c50](https://github.com/kubedb/redis-coordinator/commit/db04c50) Stop using removed apis in Kubernetes 1.25 (#43) +- [10f1fb5](https://github.com/kubedb/redis-coordinator/commit/10f1fb5) Use health checker types from kmodules (#42) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.16.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.16.0-rc.0) + +- [d051a8eb](https://github.com/kubedb/replication-mode-detector/commit/d051a8eb) Prepare for release v0.16.0-rc.0 (#214) +- [2d51c3f3](https://github.com/kubedb/replication-mode-detector/commit/2d51c3f3) Update dependencies (#213) +- [0a544cf9](https://github.com/kubedb/replication-mode-detector/commit/0a544cf9) Test against Kubernetes 1.25.0 (#212) +- [aa1635cf](https://github.com/kubedb/replication-mode-detector/commit/aa1635cf) Check for PDB version only once (#210) +- [6549acf6](https://github.com/kubedb/replication-mode-detector/commit/6549acf6) Handle status conversion for CronJob/VolumeSnapshot (#209) +- [fc7a68fd](https://github.com/kubedb/replication-mode-detector/commit/fc7a68fd) Use Go 1.19 (#208) +- [2f9a7435](https://github.com/kubedb/replication-mode-detector/commit/2f9a7435) Use k8s 1.25.1 libs (#207) +- [c831c08e](https://github.com/kubedb/replication-mode-detector/commit/c831c08e) Stop using removed apis in Kubernetes 1.25 (#206) +- [8c80e5b4](https://github.com/kubedb/replication-mode-detector/commit/8c80e5b4) Use health checker types from kmodules (#205) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.5.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.5.0-rc.0) + +- [56931e13](https://github.com/kubedb/schema-manager/commit/56931e13) Prepare for release v0.5.0-rc.0 (#51) +- [7a97cbbd](https://github.com/kubedb/schema-manager/commit/7a97cbbd) Add documentation for PostgreSql (#30) +- [786c9ebf](https://github.com/kubedb/schema-manager/commit/786c9ebf) Make packages according to db-types (#49) +- [b708e23e](https://github.com/kubedb/schema-manager/commit/b708e23e) Update dependencies (#50) +- 
[78c6b620](https://github.com/kubedb/schema-manager/commit/78c6b620) Test against Kubernetes 1.25.0 (#48) +- [a150a60c](https://github.com/kubedb/schema-manager/commit/a150a60c) Check for PDB version only once (#46) +- [627daf35](https://github.com/kubedb/schema-manager/commit/627daf35) Handle status conversion for CronJob/VolumeSnapshot (#45) +- [1663dd03](https://github.com/kubedb/schema-manager/commit/1663dd03) Use Go 1.19 (#44) +- [417b5ebf](https://github.com/kubedb/schema-manager/commit/417b5ebf) Use k8s 1.25.1 libs (#43) +- [19488002](https://github.com/kubedb/schema-manager/commit/19488002) Stop using removed apis in Kubernetes 1.25 (#42) +- [f1af7213](https://github.com/kubedb/schema-manager/commit/f1af7213) Use health checker types from kmodules (#41) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.14.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.14.0-rc.0) + +- [a2d4d3ac](https://github.com/kubedb/tests/commit/a2d4d3ac) Prepare for release v0.14.0-rc.0 (#200) +- [03a028e7](https://github.com/kubedb/tests/commit/03a028e7) Update dependencies (#197) +- [b34253e7](https://github.com/kubedb/tests/commit/b34253e7) Test against Kubernetes 1.25.0 (#196) +- [b2c48e72](https://github.com/kubedb/tests/commit/b2c48e72) Check for PDB version only once (#194) +- [bd0b7f66](https://github.com/kubedb/tests/commit/bd0b7f66) Handle status conversion for CronJob/VolumeSnapshot (#193) +- [8e5103d0](https://github.com/kubedb/tests/commit/8e5103d0) Use Go 1.19 (#192) +- [096cfbf6](https://github.com/kubedb/tests/commit/096cfbf6) Use k8s 1.25.1 libs (#191) +- [6c45ea94](https://github.com/kubedb/tests/commit/6c45ea94) Migrate to GinkgoV2 (#188) +- [f89ab1c1](https://github.com/kubedb/tests/commit/f89ab1c1) Stop using removed apis in Kubernetes 1.25 (#190) +- [17954e8b](https://github.com/kubedb/tests/commit/17954e8b) Use health checker types from kmodules (#189) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.5.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.5.0-rc.0) + +- [9dc8acb9](https://github.com/kubedb/ui-server/commit/9dc8acb9) Prepare for release v0.5.0-rc.0 (#54) +- [7ccfe3ed](https://github.com/kubedb/ui-server/commit/7ccfe3ed) Update dependencies (#53) +- [55f85699](https://github.com/kubedb/ui-server/commit/55f85699) Use Go 1.19 (#52) +- [19c39ab1](https://github.com/kubedb/ui-server/commit/19c39ab1) Check for PDB version only once (#50) +- [c1d7c41f](https://github.com/kubedb/ui-server/commit/c1d7c41f) Handle status conversion for CronJob/VolumeSnapshot (#49) +- [96100e5f](https://github.com/kubedb/ui-server/commit/96100e5f) Use Go 1.19 (#48) +- [99bc4723](https://github.com/kubedb/ui-server/commit/99bc4723) Use k8s 1.25.1 libs (#47) +- [2c0ba4c1](https://github.com/kubedb/ui-server/commit/2c0ba4c1) Stop using removed apis in Kubernetes 1.25 (#46) +- [fc35287c](https://github.com/kubedb/ui-server/commit/fc35287c) Use health checker types from kmodules (#45) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.5.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.5.0-rc.0) + +- [1de1fe03](https://github.com/kubedb/webhook-server/commit/1de1fe03) Prepare for release v0.5.0-rc.0 (#33) +- [8f65154d](https://github.com/kubedb/webhook-server/commit/8f65154d) Test against Kubernetes 1.25.0 (#31) +- [ed6ba664](https://github.com/kubedb/webhook-server/commit/ed6ba664) Check for PDB version only once (#29) +- [ab4e44d0](https://github.com/kubedb/webhook-server/commit/ab4e44d0) 
Handle status conversion for CronJob/VolumeSnapshot (#28) +- [aef864b7](https://github.com/kubedb/webhook-server/commit/aef864b7) Use Go 1.19 (#27) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.10.18.md b/content/docs/v2024.1.31/CHANGELOG-v2022.10.18.md new file mode 100644 index 0000000000..993f9bdd08 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.10.18.md @@ -0,0 +1,663 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.10.18 + name: Changelog-v2022.10.18 + parent: welcome + weight: 20221018 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.10.18/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.10.18/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.10.18 (2022-10-15) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.29.0](https://github.com/kubedb/apimachinery/releases/tag/v0.29.0) + +- [daeafa99](https://github.com/kubedb/apimachinery/commit/daeafa99) Update crds properly +- [fc357a90](https://github.com/kubedb/apimachinery/commit/fc357a90) Make exporter optional in ProxySQL catalog (#992) +- [7d01a527](https://github.com/kubedb/apimachinery/commit/7d01a527) Add conditions for postgres logical replication. (#990) +- [197a2568](https://github.com/kubedb/apimachinery/commit/197a2568) Remove storage autoscaler from Sentinel spec (#991) +- [2dae1fa1](https://github.com/kubedb/apimachinery/commit/2dae1fa1) Add GetSystemUserSecret Helpers on PerconaXtraDB (#989) +- [35e1d5e5](https://github.com/kubedb/apimachinery/commit/35e1d5e5) Make OpsRequestType specific to databases (#988) +- [e7310243](https://github.com/kubedb/apimachinery/commit/e7310243) Add Redis Sentinel Ops Requests APIs (#958) +- [b937b3dc](https://github.com/kubedb/apimachinery/commit/b937b3dc) Update digest.go +- [1b1732a9](https://github.com/kubedb/apimachinery/commit/1b1732a9) Change ProxySQL backend to a local obj ref (#987) +- [31c66a34](https://github.com/kubedb/apimachinery/commit/31c66a34) Include Arbiter & hidden nodes in MongoAutoscaler (#979) +- [3c2f4a7a](https://github.com/kubedb/apimachinery/commit/3c2f4a7a) Add autoscaler types for Postgres (#969) +- [9f60ebbe](https://github.com/kubedb/apimachinery/commit/9f60ebbe) Add GetAuthSecretName() helper (#986) +- [b48d0118](https://github.com/kubedb/apimachinery/commit/b48d0118) Ignore TLS certificate validation when using private domains (#984) +- [11a09d52](https://github.com/kubedb/apimachinery/commit/11a09d52) Use stash.appscode.dev/apimachinery@v0.23.0 (#983) +- [cb611290](https://github.com/kubedb/apimachinery/commit/cb611290) Remove duplicate short name from redis sentinel (#982) +- [f5eabfc2](https://github.com/kubedb/apimachinery/commit/f5eabfc2) Fix typo 'SuccessfullyRestatedStatefulSet' (#980) +- [4f6d7eac](https://github.com/kubedb/apimachinery/commit/4f6d7eac) Test against Kubernetes 1.25.0 (#981) +- [c0388bc2](https://github.com/kubedb/apimachinery/commit/c0388bc2) Use authSecret.externallyManaged field (#978) +- [7f39736a](https://github.com/kubedb/apimachinery/commit/7f39736a) Remove default values from authSecret (#977) +- [2d9abdb4](https://github.com/kubedb/apimachinery/commit/2d9abdb4) Support different types of secrets and password rotation (#976) +- 
[f01cf5b9](https://github.com/kubedb/apimachinery/commit/f01cf5b9) Using opsRequestOpts for elastic, maria & percona (#970) +- [e26f6417](https://github.com/kubedb/apimachinery/commit/e26f6417) Fix typos of Postgres Logical Replication CRDs. (#974) +- [d43f454e](https://github.com/kubedb/apimachinery/commit/d43f454e) Check for PDB version only once (#975) +- [fb5283cd](https://github.com/kubedb/apimachinery/commit/fb5283cd) Handle status conversion for PDB (#973) +- [7263b503](https://github.com/kubedb/apimachinery/commit/7263b503) Update kutil +- [5c643b97](https://github.com/kubedb/apimachinery/commit/5c643b97) Use Go 1.19 +- [a0b96812](https://github.com/kubedb/apimachinery/commit/a0b96812) Fix mergo dependency +- [b7b93597](https://github.com/kubedb/apimachinery/commit/b7b93597) Use k8s 1.25.1 libs (#971) +- [c1f407b0](https://github.com/kubedb/apimachinery/commit/c1f407b0) Add MySQLAutoscaler support (#968) +- [693f5243](https://github.com/kubedb/apimachinery/commit/693f5243) Add MongoDB HiddenNode support (#956) +- [0b3be441](https://github.com/kubedb/apimachinery/commit/0b3be441) Add Postgres Publisher & Subscriber CRDs (#967) +- [71947dec](https://github.com/kubedb/apimachinery/commit/71947dec) Update README.md +- [818f48fa](https://github.com/kubedb/apimachinery/commit/818f48fa) Add redis-sentinel autoscaler types (#965) +- [011938c4](https://github.com/kubedb/apimachinery/commit/011938c4) Add PerconaXtraDB OpsReq and Autoscaler APIs (#953) +- [b57e7099](https://github.com/kubedb/apimachinery/commit/b57e7099) Add RedisAutoscaler support (#963) +- [2ccea895](https://github.com/kubedb/apimachinery/commit/2ccea895) Remove `DisableScaleDown` field from autoscaler (#966) +- [02b47709](https://github.com/kubedb/apimachinery/commit/02b47709) Support PDB v1 or v1beta1 api based on k8s version (#964) +- [e2d0bb4f](https://github.com/kubedb/apimachinery/commit/e2d0bb4f) Stop using removed apis in Kubernetes 1.25 (#962) +- [722a1bc1](https://github.com/kubedb/apimachinery/commit/722a1bc1) Use health checker types from kmodules (#961) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.14.0](https://github.com/kubedb/autoscaler/releases/tag/v0.14.0) + +- [e798cfae](https://github.com/kubedb/autoscaler/commit/e798cfae) Prepare for release v0.14.0 (#120) +- [defb6306](https://github.com/kubedb/autoscaler/commit/defb6306) Use password-generator@v0.2.9 (#119) +- [99ffb7a4](https://github.com/kubedb/autoscaler/commit/99ffb7a4) Prepare for release v0.14.0-rc.0 (#118) +- [deec8a47](https://github.com/kubedb/autoscaler/commit/deec8a47) Update dependencies (#117) +- [c06eff58](https://github.com/kubedb/autoscaler/commit/c06eff58) Support mongo arbiter & hidden nodes (#115) +- [513b5fb4](https://github.com/kubedb/autoscaler/commit/513b5fb4) Add support for Postgres Autoscaler (#112) +- [87dd17fe](https://github.com/kubedb/autoscaler/commit/87dd17fe) Using opsRequestOpts on storageAutoscalers to satisfy cmp.Equal() (#116) +- [a8afe242](https://github.com/kubedb/autoscaler/commit/a8afe242) Test against Kubernetes 1.25.0 (#114) +- [65d3869c](https://github.com/kubedb/autoscaler/commit/65d3869c) Test against Kubernetes 1.25.0 (#113) +- [bc069f48](https://github.com/kubedb/autoscaler/commit/bc069f48) Add MySQL Autoscaler support (#106) +- [88c985a0](https://github.com/kubedb/autoscaler/commit/88c985a0) Check for PDB version only once (#110) +- [6f5f9ae2](https://github.com/kubedb/autoscaler/commit/6f5f9ae2) Handle status conversion for CronJob/VolumeSnapshot (#109) +- 
[46b925c0](https://github.com/kubedb/autoscaler/commit/46b925c0) Use Go 1.19 (#108) +- [674e3b7a](https://github.com/kubedb/autoscaler/commit/674e3b7a) Use k8s 1.25.1 libs (#107) +- [6a5d4274](https://github.com/kubedb/autoscaler/commit/6a5d4274) Improve internal API; using milliValue (#105) +- [757cdfed](https://github.com/kubedb/autoscaler/commit/757cdfed) Add support for RedisSentinel autoscaler (#104) +- [56b92c66](https://github.com/kubedb/autoscaler/commit/56b92c66) Update README.md +- [f3b9904f](https://github.com/kubedb/autoscaler/commit/f3b9904f) Add PerconaXtraDB Autoscaler Support (#103) +- [7ac495d9](https://github.com/kubedb/autoscaler/commit/7ac495d9) Implement redisAutoscaler feature (#102) +- [997180f5](https://github.com/kubedb/autoscaler/commit/997180f5) Stop using removed apis in Kubernetes 1.25 (#101) +- [490a6b69](https://github.com/kubedb/autoscaler/commit/490a6b69) Use health checker types from kmodules (#100) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.29.0](https://github.com/kubedb/cli/releases/tag/v0.29.0) + +- [64ed984d](https://github.com/kubedb/cli/commit/64ed984d) Prepare for release v0.29.0 (#686) +- [a3228690](https://github.com/kubedb/cli/commit/a3228690) Use password-generator@v0.2.9 (#685) +- [8033e31b](https://github.com/kubedb/cli/commit/8033e31b) Prepare for release v0.29.0-rc.0 (#684) +- [b021f761](https://github.com/kubedb/cli/commit/b021f761) Update dependencies (#683) +- [792efd14](https://github.com/kubedb/cli/commit/792efd14) Support externally managed secrets (#681) +- [7ec2adbc](https://github.com/kubedb/cli/commit/7ec2adbc) Test against Kubernetes 1.25.0 (#682) +- [fc9b63c7](https://github.com/kubedb/cli/commit/fc9b63c7) Check for PDB version only once (#680) +- [81199060](https://github.com/kubedb/cli/commit/81199060) Handle status conversion for CronJob/VolumeSnapshot (#679) +- [17c6e94d](https://github.com/kubedb/cli/commit/17c6e94d) Use Go 1.19 (#678) +- [31c24f80](https://github.com/kubedb/cli/commit/31c24f80) Use k8s 1.25.1 libs (#677) +- [68e9ada6](https://github.com/kubedb/cli/commit/68e9ada6) Update README.md +- [4202bc84](https://github.com/kubedb/cli/commit/4202bc84) Stop using removed apis in Kubernetes 1.25 (#676) +- [eb922b19](https://github.com/kubedb/cli/commit/eb922b19) Use health checker types from kmodules (#675) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.5.0](https://github.com/kubedb/dashboard/releases/tag/v0.5.0) + +- [903e551](https://github.com/kubedb/dashboard/commit/903e551) Prepare for release v0.5.0 (#48) +- [b06a4cf](https://github.com/kubedb/dashboard/commit/b06a4cf) Use password-generator@v0.2.9 (#47) +- [fd8f1bc](https://github.com/kubedb/dashboard/commit/fd8f1bc) Prepare for release v0.5.0-rc.0 (#46) +- [4b093a9](https://github.com/kubedb/dashboard/commit/4b093a9) Update dependencies (#45) +- [9804a55](https://github.com/kubedb/dashboard/commit/9804a55) Test against Kubernetes 1.25.0 (#44) +- [5f9caec](https://github.com/kubedb/dashboard/commit/5f9caec) Check for PDB version only once (#42) +- [91b256c](https://github.com/kubedb/dashboard/commit/91b256c) Handle status conversion for CronJob/VolumeSnapshot (#41) +- [11445c2](https://github.com/kubedb/dashboard/commit/11445c2) Use Go 1.19 (#40) +- [858bced](https://github.com/kubedb/dashboard/commit/858bced) Use k8s 1.25.1 libs (#39) +- [ebaaade](https://github.com/kubedb/dashboard/commit/ebaaade) Stop using removed apis in Kubernetes 1.25 (#38) +- 
[51d4f7f](https://github.com/kubedb/dashboard/commit/51d4f7f) Use health checker types from kmodules (#37) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.29.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.29.0) + +- [fe996350](https://github.com/kubedb/elasticsearch/commit/fe9963505) Prepare for release v0.29.0 (#613) +- [13a36665](https://github.com/kubedb/elasticsearch/commit/13a36665d) Use password-generator@v0.2.9 (#612) +- [1e715c1a](https://github.com/kubedb/elasticsearch/commit/1e715c1a9) Prepare for release v0.29.0-rc.0 (#611) +- [4ab0de97](https://github.com/kubedb/elasticsearch/commit/4ab0de973) Update dependencies (#610) +- [1803a407](https://github.com/kubedb/elasticsearch/commit/1803a4078) Add support for Externally Managed secret (#609) +- [c2fb96e2](https://github.com/kubedb/elasticsearch/commit/c2fb96e2b) Test against Kubernetes 1.25.0 (#608) +- [96bbc6a8](https://github.com/kubedb/elasticsearch/commit/96bbc6a85) Check for PDB version only once (#606) +- [38099062](https://github.com/kubedb/elasticsearch/commit/380990623) Handle status conversion for CronJob/VolumeSnapshot (#605) +- [6e86f853](https://github.com/kubedb/elasticsearch/commit/6e86f853a) Use Go 1.19 (#604) +- [838ab6ae](https://github.com/kubedb/elasticsearch/commit/838ab6aec) Use k8s 1.25.1 libs (#603) +- [ce6877b5](https://github.com/kubedb/elasticsearch/commit/ce6877b58) Update README.md +- [297c6004](https://github.com/kubedb/elasticsearch/commit/297c60040) Stop using removed apis in Kubernetes 1.25 (#602) +- [7f9ef6bf](https://github.com/kubedb/elasticsearch/commit/7f9ef6bf1) Use health checker types from kmodules (#601) +- [baf9b9c1](https://github.com/kubedb/elasticsearch/commit/baf9b9c1b) Fix ClientCreated counter increment issue in healthchecker (#600) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.10.18](https://github.com/kubedb/installer/releases/tag/v2022.10.18) + +- [333e9724](https://github.com/kubedb/installer/commit/333e9724) Prepare for release v2022.10.18 (#556) +- [e8a05842](https://github.com/kubedb/installer/commit/e8a05842) Update crds for kubedb/apimachinery@daeafa99 (#555) +- [1bb1d84b](https://github.com/kubedb/installer/commit/1bb1d84b) Update metricsconfig crd +- [46c2b8f2](https://github.com/kubedb/installer/commit/46c2b8f2) Add postgres crds to charts +- [0d5ff889](https://github.com/kubedb/installer/commit/0d5ff889) Prepare for release v2022.10.12-rc.0 (#553) +- [f0410def](https://github.com/kubedb/installer/commit/f0410def) Fix backend name for ProxySQL (#554) +- [545d7326](https://github.com/kubedb/installer/commit/545d7326) Add support for mysql 8.0.31 (#552) +- [0ecf6b7a](https://github.com/kubedb/installer/commit/0ecf6b7a) Update crds +- [3c80588f](https://github.com/kubedb/installer/commit/3c80588f) Add ProxySQL-2.3.2-debian/centos-v2 (#549) +- [edb50a92](https://github.com/kubedb/installer/commit/edb50a92) Add ProxySQL MetricsConfiguration (#545) +- [78961127](https://github.com/kubedb/installer/commit/78961127) Update Redis Init Container Image (#551) +- [e266fe95](https://github.com/kubedb/installer/commit/e266fe95) Update Percona XtraDB init container image (#550) +- [c2b9f93b](https://github.com/kubedb/installer/commit/c2b9f93b) Update mongodb init container image (#548) +- [cb4d226a](https://github.com/kubedb/installer/commit/cb4d226a) Add Redis Sentinel Ops Requests changes (#533) +- [f970eac3](https://github.com/kubedb/installer/commit/f970eac3) Fix missing docker images (#547) +- 
[d34e3363](https://github.com/kubedb/installer/commit/d34e3363) Add mutating webhook for postgresAutoscaler (#544) +- [bb0ae0de](https://github.com/kubedb/installer/commit/bb0ae0de) Fix valuePath for app_namespace key (#546) +- [862d034e](https://github.com/kubedb/installer/commit/862d034e) Add Subscriber apiservice (#543) +- [88d1225e](https://github.com/kubedb/installer/commit/88d1225e) Use k8s 1.25 client libs (#228) +- [46641e26](https://github.com/kubedb/installer/commit/46641e26) Add proxysql new version 2.4.4 (#539) +- [f498f5ae](https://github.com/kubedb/installer/commit/f498f5ae) Add Percona XtraDB 8.0.28 (#529) +- [24519580](https://github.com/kubedb/installer/commit/24519580) Add PerconaXtraDB Metrics (#532) +- [a1f8ac75](https://github.com/kubedb/installer/commit/a1f8ac75) Update crds (#541) +- [4b500533](https://github.com/kubedb/installer/commit/4b500533) Add Postgres Logical Replication rbac and validators (#534) +- [753f60c4](https://github.com/kubedb/installer/commit/753f60c4) Use k8s 1.25.2 +- [700dacb1](https://github.com/kubedb/installer/commit/700dacb1) Test against Kubernetes 1.25.0 (#540) +- [92069bc7](https://github.com/kubedb/installer/commit/92069bc7) Test against k8s 1.25.0 (#537) +- [b944c0b0](https://github.com/kubedb/installer/commit/b944c0b0) Don't create PSP object in k8s >= 1.25 (#536) +- [d35f8aec](https://github.com/kubedb/installer/commit/d35f8aec) Use Go 1.19 (#535) +- [59a10600](https://github.com/kubedb/installer/commit/59a10600) Add all db-types in autoscaler mutatingwebhookConfiguration (#531) +- [3010e3e4](https://github.com/kubedb/installer/commit/3010e3e4) Update README.md +- [2763ae75](https://github.com/kubedb/installer/commit/2763ae75) Add exclusion for health index in Elasticsearch (#530) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.13.0](https://github.com/kubedb/mariadb/releases/tag/v0.13.0) + +- [c9d28b74](https://github.com/kubedb/mariadb/commit/c9d28b74) Prepare for release v0.13.0 (#182) +- [132af6e6](https://github.com/kubedb/mariadb/commit/132af6e6) Use password-generator@v0.2.9 (#181) +- [ae53b273](https://github.com/kubedb/mariadb/commit/ae53b273) Not wait for Exporter Config Secret (#180) +- [b13d62cf](https://github.com/kubedb/mariadb/commit/b13d62cf) Prepare for release v0.13.0-rc.0 (#179) +- [5a8b0877](https://github.com/kubedb/mariadb/commit/5a8b0877) Add TLS Secret on Appbinding (#178) +- [a7f976f6](https://github.com/kubedb/mariadb/commit/a7f976f6) Add AppRef on AppBinding and Add Exporter Config Secret (#177) +- [a3d17697](https://github.com/kubedb/mariadb/commit/a3d17697) Update dependencies (#176) +- [8c666da8](https://github.com/kubedb/mariadb/commit/8c666da8) Add Externally Managed Secret Support (#175) +- [b14391f2](https://github.com/kubedb/mariadb/commit/b14391f2) Test against Kubernetes 1.25.0 (#174) +- [a07bbf68](https://github.com/kubedb/mariadb/commit/a07bbf68) Check for PDB version only once (#172) +- [8a316b93](https://github.com/kubedb/mariadb/commit/8a316b93) Handle status conversion for CronJob/VolumeSnapshot (#171) +- [56b6cd33](https://github.com/kubedb/mariadb/commit/56b6cd33) Use Go 1.19 (#170) +- [c666db48](https://github.com/kubedb/mariadb/commit/c666db48) Use k8s 1.25.1 libs (#169) +- [665d7f2a](https://github.com/kubedb/mariadb/commit/665d7f2a) Fix health check issue (#166) +- [c089e057](https://github.com/kubedb/mariadb/commit/c089e057) Update README.md +- [c0efefeb](https://github.com/kubedb/mariadb/commit/c0efefeb) Stop using removed apis in Kubernetes 1.25 (#168) +- 
[e3ef008e](https://github.com/kubedb/mariadb/commit/e3ef008e) Use health checker types from kmodules (#167) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.9.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.9.0) + +- [923c65e](https://github.com/kubedb/mariadb-coordinator/commit/923c65e) Prepare for release v0.9.0 (#63) +- [b16d49d](https://github.com/kubedb/mariadb-coordinator/commit/b16d49d) Prepare for release v0.9.0-rc.0 (#62) +- [a5e1a7b](https://github.com/kubedb/mariadb-coordinator/commit/a5e1a7b) Update dependencies (#61) +- [119956c](https://github.com/kubedb/mariadb-coordinator/commit/119956c) Test against Kubernetes 1.25.0 (#60) +- [4950880](https://github.com/kubedb/mariadb-coordinator/commit/4950880) Check for PDB version only once (#58) +- [8e89509](https://github.com/kubedb/mariadb-coordinator/commit/8e89509) Handle status conversion for CronJob/VolumeSnapshot (#57) +- [79dc72c](https://github.com/kubedb/mariadb-coordinator/commit/79dc72c) Use Go 1.19 (#56) +- [5a57951](https://github.com/kubedb/mariadb-coordinator/commit/5a57951) Use k8s 1.25.1 libs (#55) +- [101e71a](https://github.com/kubedb/mariadb-coordinator/commit/101e71a) Stop using removed apis in Kubernetes 1.25 (#54) +- [61c60ed](https://github.com/kubedb/mariadb-coordinator/commit/61c60ed) Use health checker types from kmodules (#53) +- [fe8a57e](https://github.com/kubedb/mariadb-coordinator/commit/fe8a57e) Prepare for release v0.8.0-rc.1 (#52) +- [9c3c47f](https://github.com/kubedb/mariadb-coordinator/commit/9c3c47f) Update health checker (#51) +- [82bad04](https://github.com/kubedb/mariadb-coordinator/commit/82bad04) Prepare for release v0.8.0-rc.0 (#50) +- [487fdbb](https://github.com/kubedb/mariadb-coordinator/commit/487fdbb) Acquire license from license-proxyserver if available (#49) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.22.0](https://github.com/kubedb/memcached/releases/tag/v0.22.0) + +- [05b13b2f](https://github.com/kubedb/memcached/commit/05b13b2f) Prepare for release v0.22.0 (#375) +- [9849bbb8](https://github.com/kubedb/memcached/commit/9849bbb8) Use password-generator@v0.2.9 (#374) +- [255abab3](https://github.com/kubedb/memcached/commit/255abab3) Prepare for release v0.22.0-rc.0 (#373) +- [2cbc373f](https://github.com/kubedb/memcached/commit/2cbc373f) Update dependencies (#372) +- [6995e546](https://github.com/kubedb/memcached/commit/6995e546) Test against Kubernetes 1.25.0 (#371) +- [2974948e](https://github.com/kubedb/memcached/commit/2974948e) Check for PDB version only once (#369) +- [f2662305](https://github.com/kubedb/memcached/commit/f2662305) Handle status conversion for CronJob/VolumeSnapshot (#368) +- [a79d8ed9](https://github.com/kubedb/memcached/commit/a79d8ed9) Use Go 1.19 (#367) +- [e2a89736](https://github.com/kubedb/memcached/commit/e2a89736) Use k8s 1.25.1 libs (#366) +- [15ba567f](https://github.com/kubedb/memcached/commit/15ba567f) Stop using removed apis in Kubernetes 1.25 (#365) +- [12204d85](https://github.com/kubedb/memcached/commit/12204d85) Use health checker types from kmodules (#364) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.22.0](https://github.com/kubedb/mongodb/releases/tag/v0.22.0) + +- [fe689daa](https://github.com/kubedb/mongodb/commit/fe689daa) Prepare for release v0.22.0 (#519) +- [e6ce6f0e](https://github.com/kubedb/mongodb/commit/e6ce6f0e) Use password-generator@v0.2.9 (#518) +- 
[b9e03cc5](https://github.com/kubedb/mongodb/commit/b9e03cc5) Prepare for release v0.22.0-rc.0 (#517) +- [2f0c8b65](https://github.com/kubedb/mongodb/commit/2f0c8b65) Set TLSSecret name (#516) +- [ffb021ea](https://github.com/kubedb/mongodb/commit/ffb021ea) Configure AppRef in appbinding (#515) +- [2c9eb87b](https://github.com/kubedb/mongodb/commit/2c9eb87b) Add support for externally-managed authSecret (#514) +- [f4789ab7](https://github.com/kubedb/mongodb/commit/f4789ab7) Test against Kubernetes 1.25.0 (#513) +- [9ad4c219](https://github.com/kubedb/mongodb/commit/9ad4c219) Change operator name in event (#511) +- [dbb7ff10](https://github.com/kubedb/mongodb/commit/dbb7ff10) Check for PDB version only once (#510) +- [79d53b0a](https://github.com/kubedb/mongodb/commit/79d53b0a) Handle status conversion for CronJob/VolumeSnapshot (#509) +- [37521202](https://github.com/kubedb/mongodb/commit/37521202) Use Go 1.19 (#508) +- [d1a2d55a](https://github.com/kubedb/mongodb/commit/d1a2d55a) Use k8s 1.25.1 libs (#507) +- [43399906](https://github.com/kubedb/mongodb/commit/43399906) Add support for Hidden node (#503) +- [91acbffc](https://github.com/kubedb/mongodb/commit/91acbffc) Update README.md +- [b053290c](https://github.com/kubedb/mongodb/commit/b053290c) Stop using removed apis in Kubernetes 1.25 (#506) +- [79b99580](https://github.com/kubedb/mongodb/commit/79b99580) Use health checker types from kmodules (#505) +- [ff39883d](https://github.com/kubedb/mongodb/commit/ff39883d) Fix health check issue (#504) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.22.0](https://github.com/kubedb/mysql/releases/tag/v0.22.0) + +- [2dc839c0](https://github.com/kubedb/mysql/commit/2dc839c0) Prepare for release v0.22.0 (#507) +- [1c55ac97](https://github.com/kubedb/mysql/commit/1c55ac97) Add return statement if client engine is not created (#506) +- [4d402bf3](https://github.com/kubedb/mysql/commit/4d402bf3) Use password-generator@v0.2.9 (#505) +- [8386ed4c](https://github.com/kubedb/mysql/commit/8386ed4c) Prepare for release v0.22.0-rc.0 (#504) +- [8d58bbd8](https://github.com/kubedb/mysql/commit/8d58bbd8) Add cluster role for watching mysqlversion in coordinator (#503) +- [53b207b3](https://github.com/kubedb/mysql/commit/53b207b3) Add TLS Secret Name in appbinding (#501) +- [541e9f5e](https://github.com/kubedb/mysql/commit/541e9f5e) Update dependencies (#502) +- [e51a494c](https://github.com/kubedb/mysql/commit/e51a494c) Fix innodb router issues (#500) +- [c4f78c1f](https://github.com/kubedb/mysql/commit/c4f78c1f) Wait for externally managed auth secret (#499) +- [90f337a2](https://github.com/kubedb/mysql/commit/90f337a2) Test against Kubernetes 1.25.0 (#498) +- [af6d6654](https://github.com/kubedb/mysql/commit/af6d6654) Check for PDB version only once (#496) +- [27611133](https://github.com/kubedb/mysql/commit/27611133) Handle status conversion for CronJob/VolumeSnapshot (#495) +- [a662b10d](https://github.com/kubedb/mysql/commit/a662b10d) Use Go 1.19 (#494) +- [07ce8211](https://github.com/kubedb/mysql/commit/07ce8211) Use k8s 1.25.1 libs (#493) +- [fac38c31](https://github.com/kubedb/mysql/commit/fac38c31) Update README.md +- [9676f388](https://github.com/kubedb/mysql/commit/9676f388) Stop using removed apis in Kubernetes 1.25 (#492) +- [db176142](https://github.com/kubedb/mysql/commit/db176142) Use health checker types from kmodules (#491) +- [3c9835b0](https://github.com/kubedb/mysql/commit/3c9835b0) Fix health check issue (#489) + + + +## 
[kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.7.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.7.0) + +- [650bb92](https://github.com/kubedb/mysql-coordinator/commit/650bb92) Prepare for release v0.7.0 (#59) +- [0930361](https://github.com/kubedb/mysql-coordinator/commit/0930361) Add version check for MySQL 5.0 (#57) +- [b1d9ecf](https://github.com/kubedb/mysql-coordinator/commit/b1d9ecf) Prepare for release v0.7.0-rc.0 (#58) +- [88d01ef](https://github.com/kubedb/mysql-coordinator/commit/88d01ef) Update dependencies (#56) +- [cbb3504](https://github.com/kubedb/mysql-coordinator/commit/cbb3504) fix group_replication extra transactions joining issue (#49) +- [8939e89](https://github.com/kubedb/mysql-coordinator/commit/8939e89) Test against Kubernetes 1.25.0 (#55) +- [0ba243d](https://github.com/kubedb/mysql-coordinator/commit/0ba243d) Check for PDB version only once (#53) +- [dac7227](https://github.com/kubedb/mysql-coordinator/commit/dac7227) Handle status conversion for CronJob/VolumeSnapshot (#52) +- [100f268](https://github.com/kubedb/mysql-coordinator/commit/100f268) Use Go 1.19 (#51) +- [07fc1af](https://github.com/kubedb/mysql-coordinator/commit/07fc1af) Use k8s 1.25.1 libs (#50) +- [71fe729](https://github.com/kubedb/mysql-coordinator/commit/71fe729) Stop using removed apis in Kubernetes 1.25 (#48) +- [f968206](https://github.com/kubedb/mysql-coordinator/commit/f968206) Use health checker types from kmodules (#47) +- [fa7ad1c](https://github.com/kubedb/mysql-coordinator/commit/fa7ad1c) Prepare for release v0.6.0-rc.1 (#46) +- [2c3615b](https://github.com/kubedb/mysql-coordinator/commit/2c3615b) update labels (#45) +- [38a4f88](https://github.com/kubedb/mysql-coordinator/commit/38a4f88) Update health checker (#43) +- [7c79e5f](https://github.com/kubedb/mysql-coordinator/commit/7c79e5f) Prepare for release v0.6.0-rc.0 (#42) +- [2eb313d](https://github.com/kubedb/mysql-coordinator/commit/2eb313d) Acquire license from license-proxyserver if available (#40) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.7.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.7.0) + +- [e5eba9e](https://github.com/kubedb/mysql-router-init/commit/e5eba9e) Test against Kubernetes 1.25.0 (#26) +- [f0bdfdd](https://github.com/kubedb/mysql-router-init/commit/f0bdfdd) Use Go 1.19 (#25) +- [5631a3c](https://github.com/kubedb/mysql-router-init/commit/5631a3c) Use k8s 1.25.1 libs (#24) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.16.0](https://github.com/kubedb/ops-manager/releases/tag/v0.16.0) + +- [121610ce](https://github.com/kubedb/ops-manager/commit/121610ce) Prepare for release v0.16.0 (#377) +- [b7f7b559](https://github.com/kubedb/ops-manager/commit/b7f7b559) Fix remove TLS and version upgrade for mysql (#375) +- [4a527621](https://github.com/kubedb/ops-manager/commit/4a527621) Fix exporter config secret cleanup for MariaDB and Percona XtraDB (#376) +- [2df29e1d](https://github.com/kubedb/ops-manager/commit/2df29e1d) Use password-generator@v0.2.9 (#374) +- [698d4f84](https://github.com/kubedb/ops-manager/commit/698d4f84) Prepare for release v0.16.0-rc.0 (#373) +- [f85a6048](https://github.com/kubedb/ops-manager/commit/f85a6048) Handle private registry with self-signed certs (#372) +- [205b8e3c](https://github.com/kubedb/ops-manager/commit/205b8e3c) Fix replication user update password (#371) +- [6680a32c](https://github.com/kubedb/ops-manager/commit/6680a32c) Fix HS 
Ops Request (#370) +- [b96f4592](https://github.com/kubedb/ops-manager/commit/b96f4592) Add PostgreSQL Logical Replication (#353) +- [2ca9b5f8](https://github.com/kubedb/ops-manager/commit/2ca9b5f8) ProxySQL Ops-requests (#368) +- [c2d6b85e](https://github.com/kubedb/ops-manager/commit/c2d6b85e) Remove ensureExporterSecretForTLSConfig for MariaDB and PXC (#369) +- [06b69609](https://github.com/kubedb/ops-manager/commit/06b69609) Add PerconaXtraDB OpsReq (#367) +- [891a2288](https://github.com/kubedb/ops-manager/commit/891a2288) Make opsReqType specific to databases (#366) +- [82d960b0](https://github.com/kubedb/ops-manager/commit/82d960b0) MySQL ops request fix for Innodb (#365) +- [13401a96](https://github.com/kubedb/ops-manager/commit/13401a96) Add Redis Sentinel Ops Request (#328) +- [8ee68b62](https://github.com/kubedb/ops-manager/commit/8ee68b62) Modify reconfigureTLS to support arbiter & hidden enabled mongo (#364) +- [805f8bba](https://github.com/kubedb/ops-manager/commit/805f8bba) Test against Kubernetes 1.25.0 (#363) +- [787f7bea](https://github.com/kubedb/ops-manager/commit/787f7bea) Fix MariaDB Upgrade OpsReq Image name issue (#361) +- [e676ea51](https://github.com/kubedb/ops-manager/commit/e676ea51) Fix podnames & selectors for Mongo volumeExpansion (#358) +- [7a5e34b1](https://github.com/kubedb/ops-manager/commit/7a5e34b1) Check for PDB version only once (#357) +- [3fb148a5](https://github.com/kubedb/ops-manager/commit/3fb148a5) Handle status conversion for CronJob/VolumeSnapshot (#356) +- [9f058091](https://github.com/kubedb/ops-manager/commit/9f058091) Use Go 1.19 (#355) +- [25febfcb](https://github.com/kubedb/ops-manager/commit/25febfcb) Update .kodiak.toml +- [eb0f3792](https://github.com/kubedb/ops-manager/commit/eb0f3792) Use k8s 1.25.1 libs (#354) +- [d09da904](https://github.com/kubedb/ops-manager/commit/d09da904) Add opsRequests for mongo hidden-node (#347) +- [9114f329](https://github.com/kubedb/ops-manager/commit/9114f329) Rework Mongo verticalScaling; Fix arbiter & exporter-related issues (#346) +- [74f4831c](https://github.com/kubedb/ops-manager/commit/74f4831c) Update README.md +- [2277c28f](https://github.com/kubedb/ops-manager/commit/2277c28f) Skip Image Digest for Dev Builds (#350) +- [c7f3cf07](https://github.com/kubedb/ops-manager/commit/c7f3cf07) Stop using removed apis in Kubernetes 1.25 (#352) +- [0fbc4d57](https://github.com/kubedb/ops-manager/commit/0fbc4d57) Use health checker types from kmodules (#351) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.16.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.16.0) + +- [78c63bf7](https://github.com/kubedb/percona-xtradb/commit/78c63bf7) Prepare for release v0.16.0 (#285) +- [d47bd015](https://github.com/kubedb/percona-xtradb/commit/d47bd015) Use password-generator@v0.2.9 (#284) +- [f3278c09](https://github.com/kubedb/percona-xtradb/commit/f3278c09) Not wait for Exporter Config Secret (#283) +- [030e063d](https://github.com/kubedb/percona-xtradb/commit/030e063d) Prepare for release v0.16.0-rc.0 (#282) +- [47345de1](https://github.com/kubedb/percona-xtradb/commit/47345de1) Add TLS Secret on AppBinding (#281) +- [0aa33548](https://github.com/kubedb/percona-xtradb/commit/0aa33548) Add AppRef on AppBinding and Add Exporter Config Secret (#280) +- [82685157](https://github.com/kubedb/percona-xtradb/commit/82685157) Merge pull request #269 from kubedb/add-px-ops +- [f7f1898e](https://github.com/kubedb/percona-xtradb/commit/f7f1898e) Add Externally Managed Secret 
Support on PerconaXtraDB +- [43dcc76d](https://github.com/kubedb/percona-xtradb/commit/43dcc76d) Test against Kubernetes 1.25.0 (#278) +- [bc5c97db](https://github.com/kubedb/percona-xtradb/commit/bc5c97db) Check for PDB version only once (#276) +- [13a57a32](https://github.com/kubedb/percona-xtradb/commit/13a57a32) Handle status conversion for CronJob/VolumeSnapshot (#275) +- [6013a92e](https://github.com/kubedb/percona-xtradb/commit/6013a92e) Use Go 1.19 (#274) +- [45c413b9](https://github.com/kubedb/percona-xtradb/commit/45c413b9) Use k8s 1.25.1 libs (#273) +- [fd7d238a](https://github.com/kubedb/percona-xtradb/commit/fd7d238a) Update README.md +- [13da58d6](https://github.com/kubedb/percona-xtradb/commit/13da58d6) Stop using removed apis in Kubernetes 1.25 (#272) +- [6941e6d6](https://github.com/kubedb/percona-xtradb/commit/6941e6d6) Use health checker types from kmodules (#271) +- [9f813287](https://github.com/kubedb/percona-xtradb/commit/9f813287) Fix health check issue (#270) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.2.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.2.0) + +- [9a28a21](https://github.com/kubedb/percona-xtradb-coordinator/commit/9a28a21) Prepare for release v0.2.0 (#20) +- [cf6c54c](https://github.com/kubedb/percona-xtradb-coordinator/commit/cf6c54c) Prepare for release v0.2.0-rc.0 (#19) +- [a71e01d](https://github.com/kubedb/percona-xtradb-coordinator/commit/a71e01d) Update dependencies (#18) +- [0b51751](https://github.com/kubedb/percona-xtradb-coordinator/commit/0b51751) Test against Kubernetes 1.25.0 (#17) +- [1f2b1a5](https://github.com/kubedb/percona-xtradb-coordinator/commit/1f2b1a5) Check for PDB version only once (#15) +- [03125ba](https://github.com/kubedb/percona-xtradb-coordinator/commit/03125ba) Handle status conversion for CronJob/VolumeSnapshot (#14) +- [06a2634](https://github.com/kubedb/percona-xtradb-coordinator/commit/06a2634) Use Go 1.19 (#13) +- [1a8a90b](https://github.com/kubedb/percona-xtradb-coordinator/commit/1a8a90b) Use k8s 1.25.1 libs (#12) +- [f33c751](https://github.com/kubedb/percona-xtradb-coordinator/commit/f33c751) Stop using removed apis in Kubernetes 1.25 (#11) +- [91495bf](https://github.com/kubedb/percona-xtradb-coordinator/commit/91495bf) Use health checker types from kmodules (#10) +- [290e281](https://github.com/kubedb/percona-xtradb-coordinator/commit/290e281) Prepare for release v0.1.0-rc.1 (#9) +- [c57449c](https://github.com/kubedb/percona-xtradb-coordinator/commit/c57449c) Update health checker (#8) +- [adad8b5](https://github.com/kubedb/percona-xtradb-coordinator/commit/adad8b5) Acquire license from license-proxyserver if available (#7) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.13.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.13.0) + +- [43cd452f](https://github.com/kubedb/pg-coordinator/commit/43cd452f) Prepare for release v0.13.0 (#100) +- [85fb61bb](https://github.com/kubedb/pg-coordinator/commit/85fb61bb) Prepare for release v0.13.0-rc.0 (#99) +- [58720b10](https://github.com/kubedb/pg-coordinator/commit/58720b10) Update dependencies (#98) +- [5a9dcc5f](https://github.com/kubedb/pg-coordinator/commit/5a9dcc5f) Test against Kubernetes 1.25.0 (#97) +- [eb45fd8e](https://github.com/kubedb/pg-coordinator/commit/eb45fd8e) Check for PDB version only once (#95) +- [a66884fb](https://github.com/kubedb/pg-coordinator/commit/a66884fb) Handle status conversion for 
CronJob/VolumeSnapshot (#94) +- [db150c63](https://github.com/kubedb/pg-coordinator/commit/db150c63) Use Go 1.19 (#93) +- [8bd4fcc5](https://github.com/kubedb/pg-coordinator/commit/8bd4fcc5) Use k8s 1.25.1 libs (#92) +- [4a510768](https://github.com/kubedb/pg-coordinator/commit/4a510768) Stop using removed apis in Kubernetes 1.25 (#91) +- [3b26263c](https://github.com/kubedb/pg-coordinator/commit/3b26263c) Use health checker types from kmodules (#90) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.16.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.16.0) + +- [4ba55b3e](https://github.com/kubedb/pgbouncer/commit/4ba55b3e) Prepare for release v0.16.0 (#247) +- [a4e978e9](https://github.com/kubedb/pgbouncer/commit/a4e978e9) Use password-generator@v0.2.9 (#246) +- [0d58567a](https://github.com/kubedb/pgbouncer/commit/0d58567a) Prepare for release v0.16.0-rc.0 (#245) +- [47329dfa](https://github.com/kubedb/pgbouncer/commit/47329dfa) Fix TLSSecret for appbinding. (#244) +- [3efec0cb](https://github.com/kubedb/pgbouncer/commit/3efec0cb) Update dependencies (#243) +- [8a1bd7b0](https://github.com/kubedb/pgbouncer/commit/8a1bd7b0) Fix health check issue (#234) +- [c20e87e5](https://github.com/kubedb/pgbouncer/commit/c20e87e5) Test against Kubernetes 1.25.0 (#242) +- [760fd8e3](https://github.com/kubedb/pgbouncer/commit/760fd8e3) Check for PDB version only once (#240) +- [8ba2692d](https://github.com/kubedb/pgbouncer/commit/8ba2692d) Handle status conversion for CronJob/VolumeSnapshot (#239) +- [ea1fc328](https://github.com/kubedb/pgbouncer/commit/ea1fc328) Use Go 1.19 (#238) +- [6a24f732](https://github.com/kubedb/pgbouncer/commit/6a24f732) Use k8s 1.25.1 libs (#237) +- [327242e1](https://github.com/kubedb/pgbouncer/commit/327242e1) Update README.md +- [c9754ecd](https://github.com/kubedb/pgbouncer/commit/c9754ecd) Stop using removed apis in Kubernetes 1.25 (#236) +- [bb7a3b6f](https://github.com/kubedb/pgbouncer/commit/bb7a3b6f) Use health checker types from kmodules (#235) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.29.0](https://github.com/kubedb/postgres/releases/tag/v0.29.0) + +- [08afda20](https://github.com/kubedb/postgres/commit/08afda20) Prepare for release v0.29.0 (#607) +- [ad39a6cf](https://github.com/kubedb/postgres/commit/ad39a6cf) Update password generator hash (#606) +- [cd547a68](https://github.com/kubedb/postgres/commit/cd547a68) Prepare for release v0.29.0-rc.0 (#605) +- [9d98af14](https://github.com/kubedb/postgres/commit/9d98af14) Fix TlsSecret for AppBinding (#604) +- [7d73ce99](https://github.com/kubedb/postgres/commit/7d73ce99) Update dependencies (#603) +- [d8515a3f](https://github.com/kubedb/postgres/commit/d8515a3f) Configure appRef in AppBinding (#602) +- [69458e25](https://github.com/kubedb/postgres/commit/69458e25) Check auth secrets labels if key exists (#601) +- [3dd3563b](https://github.com/kubedb/postgres/commit/3dd3563b) Simplify ensureAuthSecret (#600) +- [67f3db64](https://github.com/kubedb/postgres/commit/67f3db64) Relax Postgres key detection for a secret (#599) +- [acdd2cda](https://github.com/kubedb/postgres/commit/acdd2cda) Add support for Externally Managed secret (#597) +- [5121a362](https://github.com/kubedb/postgres/commit/5121a362) Test against Kubernetes 1.25.0 (#598) +- [bfa46b08](https://github.com/kubedb/postgres/commit/bfa46b08) Check for PDB version only once (#594) +- [150fcf2c](https://github.com/kubedb/postgres/commit/150fcf2c) Handle status conversion for 
CronJob/VolumeSnapshot (#593) +- [86ff76e1](https://github.com/kubedb/postgres/commit/86ff76e1) Use Go 1.19 (#592) +- [7732e22b](https://github.com/kubedb/postgres/commit/7732e22b) Use k8s 1.25.1 libs (#591) +- [b4c7f426](https://github.com/kubedb/postgres/commit/b4c7f426) Update README.md +- [68b06e68](https://github.com/kubedb/postgres/commit/68b06e68) Stop using removed apis in Kubernetes 1.25 (#590) +- [51f600b9](https://github.com/kubedb/postgres/commit/51f600b9) Use health checker types from kmodules (#589) +- [2e45ad1b](https://github.com/kubedb/postgres/commit/2e45ad1b) Fix health check issue (#588) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.29.0](https://github.com/kubedb/provisioner/releases/tag/v0.29.0) + +- [b499c076](https://github.com/kubedb/provisioner/commit/b499c0764) Prepare for release v0.29.0 (#23) +- [5f54d2b0](https://github.com/kubedb/provisioner/commit/5f54d2b03) Use password-generator@v0.2.9 (#22) +- [0fdb3106](https://github.com/kubedb/provisioner/commit/0fdb31068) Not wait for Exporter Config Secret mariadb/xtradb +- [5497cc1b](https://github.com/kubedb/provisioner/commit/5497cc1be) Prepare for release v0.29.0-rc.0 (#21) +- [26b43352](https://github.com/kubedb/provisioner/commit/26b43352c) Test against Kubernetes 1.25.0 (#19) +- [597518ce](https://github.com/kubedb/provisioner/commit/597518cea) Check for PDB version only once (#17) +- [a55613f6](https://github.com/kubedb/provisioner/commit/a55613f6e) Handle status conversion for CronJob/VolumeSnapshot (#16) +- [5ef0c78e](https://github.com/kubedb/provisioner/commit/5ef0c78ee) Use Go 1.19 (#15) +- [40fe839c](https://github.com/kubedb/provisioner/commit/40fe839c8) Use k8s 1.25.1 libs (#14) +- [444e527c](https://github.com/kubedb/provisioner/commit/444e527ca) Update README.md +- [dc895331](https://github.com/kubedb/provisioner/commit/dc8953315) Stop using removed apis in Kubernetes 1.25 (#13) +- [2910a39e](https://github.com/kubedb/provisioner/commit/2910a39e2) Use health checker types from kmodules (#12) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.16.0](https://github.com/kubedb/proxysql/releases/tag/v0.16.0) + +- [060e14f6](https://github.com/kubedb/proxysql/commit/060e14f6) Prepare for release v0.16.0 (#261) +- [7598aa0b](https://github.com/kubedb/proxysql/commit/7598aa0b) Use password-generator@v0.2.9 (#260) +- [3dc6618f](https://github.com/kubedb/proxysql/commit/3dc6618f) Prepare for release v0.16.0-rc.0 (#259) +- [ec249ccf](https://github.com/kubedb/proxysql/commit/ec249ccf) Add External-backend support and changes for Ops-requests (#258) +- [fe9d736a](https://github.com/kubedb/proxysql/commit/fe9d736a) Fix health check issue (#247) +- [42a3dedf](https://github.com/kubedb/proxysql/commit/42a3dedf) Test against Kubernetes 1.25.0 (#256) +- [4677b6ab](https://github.com/kubedb/proxysql/commit/4677b6ab) Check for PDB version only once (#254) +- [8f3e6e64](https://github.com/kubedb/proxysql/commit/8f3e6e64) Handle status conversion for CronJob/VolumeSnapshot (#253) +- [19b856f4](https://github.com/kubedb/proxysql/commit/19b856f4) Use Go 1.19 (#252) +- [f8dd8297](https://github.com/kubedb/proxysql/commit/f8dd8297) Use k8s 1.25.1 libs (#251) +- [3a21c93a](https://github.com/kubedb/proxysql/commit/3a21c93a) Stop using removed apis in Kubernetes 1.25 (#249) +- [cb0a1efd](https://github.com/kubedb/proxysql/commit/cb0a1efd) Use health checker types from kmodules (#248) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### 
[v0.22.0](https://github.com/kubedb/redis/releases/tag/v0.22.0) + +- [a949cd65](https://github.com/kubedb/redis/commit/a949cd65) Prepare for release v0.22.0 (#434) +- [b6d1e6dc](https://github.com/kubedb/redis/commit/b6d1e6dc) Use password-generator@v0.2.9 (#433) +- [24a961a8](https://github.com/kubedb/redis/commit/24a961a8) Prepare for release v0.22.0-rc.0 (#432) +- [586d92c6](https://github.com/kubedb/redis/commit/586d92c6) Add Client Cert to Appbinding (#431) +- [9931e951](https://github.com/kubedb/redis/commit/9931e951) Update dependencies (#430) +- [5a27f772](https://github.com/kubedb/redis/commit/5a27f772) Add Redis Sentinel Ops Request Changes (#421) +- [81ad08ab](https://github.com/kubedb/redis/commit/81ad08ab) Add Support for Externally Managed Secret (#428) +- [b16212e4](https://github.com/kubedb/redis/commit/b16212e4) Test against Kubernetes 1.25.0 (#429) +- [05a1b814](https://github.com/kubedb/redis/commit/05a1b814) Check for PDB version only once (#427) +- [bd41d16d](https://github.com/kubedb/redis/commit/bd41d16d) Handle status conversion for CronJob/VolumeSnapshot (#426) +- [e1746638](https://github.com/kubedb/redis/commit/e1746638) Use Go 1.19 (#425) +- [b220f611](https://github.com/kubedb/redis/commit/b220f611) Use k8s 1.25.1 libs (#424) +- [538e2539](https://github.com/kubedb/redis/commit/538e2539) Update README.md +- [1513ca9a](https://github.com/kubedb/redis/commit/1513ca9a) Stop using removed apis in Kubernetes 1.25 (#423) +- [c29f0f6b](https://github.com/kubedb/redis/commit/c29f0f6b) Use health checker types from kmodules (#422) +- [bda4de79](https://github.com/kubedb/redis/commit/bda4de79) Fix health check issue (#420) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.8.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.8.0) + +- [140202b](https://github.com/kubedb/redis-coordinator/commit/140202b) Prepare for release v0.8.0 (#53) +- [21d63ea](https://github.com/kubedb/redis-coordinator/commit/21d63ea) Prepare for release v0.8.0-rc.0 (#52) +- [d7bcff0](https://github.com/kubedb/redis-coordinator/commit/d7bcff0) Update dependencies (#51) +- [db31014](https://github.com/kubedb/redis-coordinator/commit/db31014) Add Redis Sentinel Ops Requests Changes (#48) +- [3bc6a63](https://github.com/kubedb/redis-coordinator/commit/3bc6a63) Test against Kubernetes 1.25.0 (#50) +- [b144d17](https://github.com/kubedb/redis-coordinator/commit/b144d17) Check for PDB version only once (#47) +- [803f76a](https://github.com/kubedb/redis-coordinator/commit/803f76a) Handle status conversion for CronJob/VolumeSnapshot (#46) +- [a7cd5af](https://github.com/kubedb/redis-coordinator/commit/a7cd5af) Use Go 1.19 (#45) +- [f066d36](https://github.com/kubedb/redis-coordinator/commit/f066d36) Use k8s 1.25.1 libs (#44) +- [db04c50](https://github.com/kubedb/redis-coordinator/commit/db04c50) Stop using removed apis in Kubernetes 1.25 (#43) +- [10f1fb5](https://github.com/kubedb/redis-coordinator/commit/10f1fb5) Use health checker types from kmodules (#42) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.16.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.16.0) + +- [866018be](https://github.com/kubedb/replication-mode-detector/commit/866018be) Prepare for release v0.16.0 (#215) +- [d051a8eb](https://github.com/kubedb/replication-mode-detector/commit/d051a8eb) Prepare for release v0.16.0-rc.0 (#214) +- 
[2d51c3f3](https://github.com/kubedb/replication-mode-detector/commit/2d51c3f3) Update dependencies (#213) +- [0a544cf9](https://github.com/kubedb/replication-mode-detector/commit/0a544cf9) Test against Kubernetes 1.25.0 (#212) +- [aa1635cf](https://github.com/kubedb/replication-mode-detector/commit/aa1635cf) Check for PDB version only once (#210) +- [6549acf6](https://github.com/kubedb/replication-mode-detector/commit/6549acf6) Handle status conversion for CronJob/VolumeSnapshot (#209) +- [fc7a68fd](https://github.com/kubedb/replication-mode-detector/commit/fc7a68fd) Use Go 1.19 (#208) +- [2f9a7435](https://github.com/kubedb/replication-mode-detector/commit/2f9a7435) Use k8s 1.25.1 libs (#207) +- [c831c08e](https://github.com/kubedb/replication-mode-detector/commit/c831c08e) Stop using removed apis in Kubernetes 1.25 (#206) +- [8c80e5b4](https://github.com/kubedb/replication-mode-detector/commit/8c80e5b4) Use health checker types from kmodules (#205) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.5.0](https://github.com/kubedb/schema-manager/releases/tag/v0.5.0) + +- [e6fffe23](https://github.com/kubedb/schema-manager/commit/e6fffe23) Prepare for release v0.5.0 (#52) +- [56931e13](https://github.com/kubedb/schema-manager/commit/56931e13) Prepare for release v0.5.0-rc.0 (#51) +- [7a97cbbd](https://github.com/kubedb/schema-manager/commit/7a97cbbd) Add documentation for PostgreSQL (#30) +- [786c9ebf](https://github.com/kubedb/schema-manager/commit/786c9ebf) Make packages according to db-types (#49) +- [b708e23e](https://github.com/kubedb/schema-manager/commit/b708e23e) Update dependencies (#50) +- [78c6b620](https://github.com/kubedb/schema-manager/commit/78c6b620) Test against Kubernetes 1.25.0 (#48) +- [a150a60c](https://github.com/kubedb/schema-manager/commit/a150a60c) Check for PDB version only once (#46) +- [627daf35](https://github.com/kubedb/schema-manager/commit/627daf35) Handle status conversion for CronJob/VolumeSnapshot (#45) +- [1663dd03](https://github.com/kubedb/schema-manager/commit/1663dd03) Use Go 1.19 (#44) +- [417b5ebf](https://github.com/kubedb/schema-manager/commit/417b5ebf) Use k8s 1.25.1 libs (#43) +- [19488002](https://github.com/kubedb/schema-manager/commit/19488002) Stop using removed apis in Kubernetes 1.25 (#42) +- [f1af7213](https://github.com/kubedb/schema-manager/commit/f1af7213) Use health checker types from kmodules (#41) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.14.0](https://github.com/kubedb/tests/releases/tag/v0.14.0) + +- [1737b25f](https://github.com/kubedb/tests/commit/1737b25f) Prepare for release v0.14.0 (#203) +- [f272557a](https://github.com/kubedb/tests/commit/f272557a) Add MongoDB arbiter-related tests (#172) +- [e7c55a30](https://github.com/kubedb/tests/commit/e7c55a30) Use password-generator@v0.2.9 (#201) +- [a2d4d3ac](https://github.com/kubedb/tests/commit/a2d4d3ac) Prepare for release v0.14.0-rc.0 (#200) +- [03a028e7](https://github.com/kubedb/tests/commit/03a028e7) Update dependencies (#197) +- [b34253e7](https://github.com/kubedb/tests/commit/b34253e7) Test against Kubernetes 1.25.0 (#196) +- [b2c48e72](https://github.com/kubedb/tests/commit/b2c48e72) Check for PDB version only once (#194) +- [bd0b7f66](https://github.com/kubedb/tests/commit/bd0b7f66) Handle status conversion for CronJob/VolumeSnapshot (#193) +- [8e5103d0](https://github.com/kubedb/tests/commit/8e5103d0) Use Go 1.19 (#192) +- [096cfbf6](https://github.com/kubedb/tests/commit/096cfbf6) Use k8s 1.25.1 libs 
(#191) +- [6c45ea94](https://github.com/kubedb/tests/commit/6c45ea94) Migrate to GinkgoV2 (#188) +- [f89ab1c1](https://github.com/kubedb/tests/commit/f89ab1c1) Stop using removed apis in Kubernetes 1.25 (#190) +- [17954e8b](https://github.com/kubedb/tests/commit/17954e8b) Use health checker types from kmodules (#189) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.5.0](https://github.com/kubedb/ui-server/releases/tag/v0.5.0) + +- [d205fcec](https://github.com/kubedb/ui-server/commit/d205fcec) Prepare for release v0.5.0 (#55) +- [9dc8acb9](https://github.com/kubedb/ui-server/commit/9dc8acb9) Prepare for release v0.5.0-rc.0 (#54) +- [7ccfe3ed](https://github.com/kubedb/ui-server/commit/7ccfe3ed) Update dependencies (#53) +- [55f85699](https://github.com/kubedb/ui-server/commit/55f85699) Use Go 1.19 (#52) +- [19c39ab1](https://github.com/kubedb/ui-server/commit/19c39ab1) Check for PDB version only once (#50) +- [c1d7c41f](https://github.com/kubedb/ui-server/commit/c1d7c41f) Handle status conversion for CronJob/VolumeSnapshot (#49) +- [96100e5f](https://github.com/kubedb/ui-server/commit/96100e5f) Use Go 1.19 (#48) +- [99bc4723](https://github.com/kubedb/ui-server/commit/99bc4723) Use k8s 1.25.1 libs (#47) +- [2c0ba4c1](https://github.com/kubedb/ui-server/commit/2c0ba4c1) Stop using removed apis in Kubernetes 1.25 (#46) +- [fc35287c](https://github.com/kubedb/ui-server/commit/fc35287c) Use health checker types from kmodules (#45) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.5.0](https://github.com/kubedb/webhook-server/releases/tag/v0.5.0) + +- [41c41749](https://github.com/kubedb/webhook-server/commit/41c41749) Prepare for release v0.5.0 (#37) +- [eaa32942](https://github.com/kubedb/webhook-server/commit/eaa32942) Use password-generator@v0.2.9 (#36) +- [c06f6c42](https://github.com/kubedb/webhook-server/commit/c06f6c42) Register the missing types to webhook (#35) +- [59cb7fa0](https://github.com/kubedb/webhook-server/commit/59cb7fa0) Register pg sub/sub validators (#34) +- [1de1fe03](https://github.com/kubedb/webhook-server/commit/1de1fe03) Prepare for release v0.5.0-rc.0 (#33) +- [8f65154d](https://github.com/kubedb/webhook-server/commit/8f65154d) Test against Kubernetes 1.25.0 (#31) +- [ed6ba664](https://github.com/kubedb/webhook-server/commit/ed6ba664) Check for PDB version only once (#29) +- [ab4e44d0](https://github.com/kubedb/webhook-server/commit/ab4e44d0) Handle status conversion for CronJob/VolumeSnapshot (#28) +- [aef864b7](https://github.com/kubedb/webhook-server/commit/aef864b7) Use Go 1.19 (#27) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.12.13-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2022.12.13-rc.0.md new file mode 100644 index 0000000000..567a4715b0 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.12.13-rc.0.md @@ -0,0 +1,387 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.12.13-rc.0 + name: Changelog-v2022.12.13-rc.0 + parent: welcome + weight: 20221213 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.12.13-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.12.13-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.12.13-rc.0 
(2022-12-12) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.30.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.30.0-rc.0) + +- [70bc1ca7](https://github.com/kubedb/apimachinery/commit/70bc1ca7) Fix build +- [c051e053](https://github.com/kubedb/apimachinery/commit/c051e053) Update deps (#1007) +- [2a1d4b0b](https://github.com/kubedb/apimachinery/commit/2a1d4b0b) Set PSP in KafkaVersion Spec to optional (#1005) +- [69bc9dec](https://github.com/kubedb/apimachinery/commit/69bc9dec) Add kafka api (#998) +- [b9528283](https://github.com/kubedb/apimachinery/commit/b9528283) Run GH actions on ubuntu-20.04 (#1004) +- [d498e8e9](https://github.com/kubedb/apimachinery/commit/d498e8e9) Add `TransferLeadershipInterval` and `TransferLeadershipTimeout` for Postgres (#1001) +- [b8f88e70](https://github.com/kubedb/apimachinery/commit/b8f88e70) Add sidekick api to kubebuilder client (#1000) +- [89a71807](https://github.com/kubedb/apimachinery/commit/89a71807) Change DatabaseRef to ProxyRef in ProxySQLAutoscaler (#997) +- [f570aabe](https://github.com/kubedb/apimachinery/commit/f570aabe) Add support for ProxySQL autoscaler (#996) +- [01c07593](https://github.com/kubedb/apimachinery/commit/01c07593) Add ProxySQL Vertical-Scaling spec (#995) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.15.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.15.0-rc.0) + +- [2e6d15fd](https://github.com/kubedb/autoscaler/commit/2e6d15fd) Prepare for release v0.15.0-rc.0 (#126) +- [a5bc7afd](https://github.com/kubedb/autoscaler/commit/a5bc7afd) Update deps (#125) +- [56ebf3fd](https://github.com/kubedb/autoscaler/commit/56ebf3fd) Run GH actions on ubuntu-20.04 (#124) +- [ef402f45](https://github.com/kubedb/autoscaler/commit/ef402f45) Add ProxySQL autoscaler support (#121) +- [36165599](https://github.com/kubedb/autoscaler/commit/36165599) Acquire license from proxyserver (#123) +- [f727dc6e](https://github.com/kubedb/autoscaler/commit/f727dc6e) Reduce logs; Fix RecommendationProvider's parameters for sharded mongo (#122) +- [835632d9](https://github.com/kubedb/autoscaler/commit/835632d9) Clean up go.mod + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.30.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.30.0-rc.0) + +- [1bf92e06](https://github.com/kubedb/cli/commit/1bf92e06) Prepare for release v0.30.0-rc.0 (#689) +- [76426575](https://github.com/kubedb/cli/commit/76426575) Update deps (#688) +- [2f35bac1](https://github.com/kubedb/cli/commit/2f35bac1) Run GH actions on ubuntu-20.04 (#687) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.6.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.6.0-rc.0) + +- [a7952c3](https://github.com/kubedb/dashboard/commit/a7952c3) Prepare for release v0.6.0-rc.0 (#52) +- [722df43](https://github.com/kubedb/dashboard/commit/722df43) Update deps (#51) +- [600877d](https://github.com/kubedb/dashboard/commit/600877d) Run GH actions on ubuntu-20.04 (#50) +- [cc2b95b](https://github.com/kubedb/dashboard/commit/cc2b95b) Acquire license from proxyserver (#49) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.30.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.30.0-rc.0) + +- [6b883d16](https://github.com/kubedb/elasticsearch/commit/6b883d16e) Prepare for release v0.30.0-rc.0 (#617) +- [40ab6ecf](https://github.com/kubedb/elasticsearch/commit/40ab6ecf5) Update deps (#616) +- 
[732ba4c2](https://github.com/kubedb/elasticsearch/commit/732ba4c2f) Run GH actions on ubuntu-20.04 (#615) +- [ba032204](https://github.com/kubedb/elasticsearch/commit/ba0322041) Fix PDB deletion issue (#614) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.12.13-rc.0](https://github.com/kubedb/installer/releases/tag/v2022.12.13-rc.0) + +- [9fca52a4](https://github.com/kubedb/installer/commit/9fca52a4) Prepare for release v2022.12.13-rc.0 (#574) +- [a1811331](https://github.com/kubedb/installer/commit/a1811331) Add support for elasticsearch 8.5.2 (#566) +- [7288df17](https://github.com/kubedb/installer/commit/7288df17) Update redis-init image (#573) +- [a9e2070d](https://github.com/kubedb/installer/commit/a9e2070d) Add kafka versions (#571) +- [9d3c3255](https://github.com/kubedb/installer/commit/9d3c3255) Update crds for kubedb/apimachinery@2a1d4b0b (#572) +- [0c3cfd8b](https://github.com/kubedb/installer/commit/0c3cfd8b) Update crds for kubedb/apimachinery@69bc9dec (#570) +- [d8cf2cfd](https://github.com/kubedb/installer/commit/d8cf2cfd) Update crds for kubedb/apimachinery@b9528283 (#569) +- [15601eeb](https://github.com/kubedb/installer/commit/15601eeb) Run GH actions on ubuntu-20.04 (#568) +- [833df418](https://github.com/kubedb/installer/commit/833df418) Add proxysql to kubedb grafana dashboard values and resources (#567) +- [bb368507](https://github.com/kubedb/installer/commit/bb368507) Add support for Postgres 15.1 12.13 13.9 14.6 (#563) +- [5c43e598](https://github.com/kubedb/installer/commit/5c43e598) Update Grafana dashboards (#564) +- [641023f5](https://github.com/kubedb/installer/commit/641023f5) Update crds for kubedb/apimachinery@89a71807 (#561) +- [be777e86](https://github.com/kubedb/installer/commit/be777e86) Update crds for kubedb/apimachinery@f570aabe (#560) +- [c0473ea7](https://github.com/kubedb/installer/commit/c0473ea7) Update crds for kubedb/apimachinery@01c07593 (#559) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.1.0-rc.0](https://github.com/kubedb/kafka/releases/tag/v0.1.0-rc.0) + +- [41f3a22](https://github.com/kubedb/kafka/commit/41f3a22) Prepare for release v0.1.0-rc.0 (#4) +- [6cb7882](https://github.com/kubedb/kafka/commit/6cb7882) Refactor SetupControllers +- [f4c8eb1](https://github.com/kubedb/kafka/commit/f4c8eb1) Update deps (#3) +- [61ab7f6](https://github.com/kubedb/kafka/commit/61ab7f6) Acquire license from proxyserver (#2) +- [11f6df2](https://github.com/kubedb/kafka/commit/11f6df2) Add Operator for Kafka (#1) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.14.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.14.0-rc.0) + +- [fbc128ad](https://github.com/kubedb/mariadb/commit/fbc128ad) Prepare for release v0.14.0-rc.0 (#188) +- [6048437a](https://github.com/kubedb/mariadb/commit/6048437a) Update deps (#187) +- [649bb98e](https://github.com/kubedb/mariadb/commit/649bb98e) Run GH actions on ubuntu-20.04 (#186) +- [b14ab86f](https://github.com/kubedb/mariadb/commit/b14ab86f) Update PDB Deletion (#185) +- [897068c5](https://github.com/kubedb/mariadb/commit/897068c5) Use constants from apimachinery (#184) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.10.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.10.0-rc.0) + +- [02c4399](https://github.com/kubedb/mariadb-coordinator/commit/02c4399) Prepare for release v0.10.0-rc.0 (#66) +- [bf28b66](https://github.com/kubedb/mariadb-coordinator/commit/bf28b66) 
Update deps (#65) +- [a00947d](https://github.com/kubedb/mariadb-coordinator/commit/a00947d) Run GH actions on ubuntu-20.04 (#64) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.23.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.23.0-rc.0) + +- [8f5172f6](https://github.com/kubedb/memcached/commit/8f5172f6) Prepare for release v0.23.0-rc.0 (#378) +- [cb73ec86](https://github.com/kubedb/memcached/commit/cb73ec86) Update deps (#377) +- [e8b780d6](https://github.com/kubedb/memcached/commit/e8b780d6) Run GH actions on ubuntu-20.04 (#376) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.23.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.23.0-rc.0) + +- [2602cc08](https://github.com/kubedb/mongodb/commit/2602cc08) Prepare for release v0.23.0-rc.0 (#524) +- [a53e0b6e](https://github.com/kubedb/mongodb/commit/a53e0b6e) Update deps (#523) +- [6f68602b](https://github.com/kubedb/mongodb/commit/6f68602b) Run GH actions on ubuntu-20.04 (#522) +- [d9448103](https://github.com/kubedb/mongodb/commit/d9448103) Fix PDB issues (#521) +- [6f9b3325](https://github.com/kubedb/mongodb/commit/6f9b3325) Copy missing fields from podTemplate & serviceTemplate (#520) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.23.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.23.0-rc.0) + +- [22382a39](https://github.com/kubedb/mysql/commit/22382a39) Prepare for release v0.23.0-rc.0 (#512) +- [8e7fb1a7](https://github.com/kubedb/mysql/commit/8e7fb1a7) Update deps (#511) +- [15f8ba0b](https://github.com/kubedb/mysql/commit/15f8ba0b) Run GH actions on ubuntu-20.04 (#510) +- [83335edb](https://github.com/kubedb/mysql/commit/83335edb) Update PDB Deletion (#509) +- [b5b8cadd](https://github.com/kubedb/mysql/commit/b5b8cadd) Use constants from apimachinery (#508) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.8.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.8.0-rc.0) + +- [cc3258d](https://github.com/kubedb/mysql-coordinator/commit/cc3258d) Prepare for release v0.8.0-rc.0 (#63) +- [25da659](https://github.com/kubedb/mysql-coordinator/commit/25da659) Update deps (#62) +- [c2cd415](https://github.com/kubedb/mysql-coordinator/commit/c2cd415) Run GH actions on ubuntu-20.04 (#61) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.8.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.8.0-rc.0) + +- [a8c367e](https://github.com/kubedb/mysql-router-init/commit/a8c367e) Update deps (#28) +- [e11c7ff](https://github.com/kubedb/mysql-router-init/commit/e11c7ff) Run GH actions on ubuntu-20.04 (#27) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.17.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.17.0-rc.0) + +- [13107ce9](https://github.com/kubedb/ops-manager/commit/13107ce9) Prepare for release v0.17.0-rc.0 (#393) +- [96f289a0](https://github.com/kubedb/ops-manager/commit/96f289a0) Update deps (#392) +- [ab83bb02](https://github.com/kubedb/ops-manager/commit/ab83bb02) Update Evict pod with kmodules api (#388) +- [028a4a29](https://github.com/kubedb/ops-manager/commit/028a4a29) Fix condition check for pvc update (#384) +- [f85db652](https://github.com/kubedb/ops-manager/commit/f85db652) Add TLS support for Kafka (#391) +- [93e1fcf4](https://github.com/kubedb/ops-manager/commit/93e1fcf4) Fix: compareTables() function for postgresql logical replication (#385) +- 
[d6225c57](https://github.com/kubedb/ops-manager/commit/d6225c57) Run GH actions on ubuntu-20.04 (#390) +- [eb9f8b0c](https://github.com/kubedb/ops-manager/commit/eb9f8b0c) Remove usage of `UpgradeVersion` constant (#389) +- [f682a359](https://github.com/kubedb/ops-manager/commit/f682a359) Skip Managing TLS if DB is paused for MariaDB, PXC and ProxySQL (#387) +- [1ba7dc05](https://github.com/kubedb/ops-manager/commit/1ba7dc05) Add ProxySQL Vertical Scaling Ops-Request (#381) +- [db89b9c9](https://github.com/kubedb/ops-manager/commit/db89b9c9) Adding `UpdateVersion` in mongo validator (#382) +- [7c373593](https://github.com/kubedb/ops-manager/commit/7c373593) Acquire license from proxyserver (#383) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.17.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.17.0-rc.0) + +- [f7ba9bfc](https://github.com/kubedb/percona-xtradb/commit/f7ba9bfc) Prepare for release v0.17.0-rc.0 (#290) +- [806df3d2](https://github.com/kubedb/percona-xtradb/commit/806df3d2) Update deps (#289) +- [a55bb0f2](https://github.com/kubedb/percona-xtradb/commit/a55bb0f2) Run GH actions on ubuntu-20.04 (#288) +- [37fab686](https://github.com/kubedb/percona-xtradb/commit/37fab686) Update PDB Deletion (#287) +- [55c35a72](https://github.com/kubedb/percona-xtradb/commit/55c35a72) Use constants from apimachinery (#286) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.3.0-rc.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.3.0-rc.0) + +- [7e53d31](https://github.com/kubedb/percona-xtradb-coordinator/commit/7e53d31) Prepare for release v0.3.0-rc.0 (#23) +- [bd5e0b3](https://github.com/kubedb/percona-xtradb-coordinator/commit/bd5e0b3) Update deps (#22) +- [b970f14](https://github.com/kubedb/percona-xtradb-coordinator/commit/b970f14) Run GH actions on ubuntu-20.04 (#21) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.14.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.14.0-rc.0) + +- [34cb5a6c](https://github.com/kubedb/pg-coordinator/commit/34cb5a6c) Prepare for release v0.14.0-rc.0 (#105) +- [7394e6b7](https://github.com/kubedb/pg-coordinator/commit/7394e6b7) Update deps (#104) +- [228b1ae2](https://github.com/kubedb/pg-coordinator/commit/228b1ae2) Merge pull request #102 from kubedb/leader-switch +- [11a3c127](https://github.com/kubedb/pg-coordinator/commit/11a3c127) Merge branch 'master' into leader-switch +- [f8d04c52](https://github.com/kubedb/pg-coordinator/commit/f8d04c52) Add PG Reset Wal for Single user mode failed #101 +- [8eaa5f11](https://github.com/kubedb/pg-coordinator/commit/8eaa5f11) retry eviction of pod and delete pod if fails +- [d2a23fa9](https://github.com/kubedb/pg-coordinator/commit/d2a23fa9) Update deps +- [febd8aab](https://github.com/kubedb/pg-coordinator/commit/febd8aab) Refined +- [5a2005cf](https://github.com/kubedb/pg-coordinator/commit/5a2005cf) Fix: Transfer Leadership issue fix with pod delete +- [7631cb84](https://github.com/kubedb/pg-coordinator/commit/7631cb84) Add PG Reset Wal for Single user mode failed +- [a951c00e](https://github.com/kubedb/pg-coordinator/commit/a951c00e) Run GH actions on ubuntu-20.04 (#103) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.17.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.17.0-rc.0) + +- [8d39e418](https://github.com/kubedb/pgbouncer/commit/8d39e418) Prepare for release v0.17.0-rc.0 
(#251) +- [991cbaec](https://github.com/kubedb/pgbouncer/commit/991cbaec) Update deps (#250) +- [8af0a2f0](https://github.com/kubedb/pgbouncer/commit/8af0a2f0) Run GH actions on ubuntu-20.04 (#248) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.30.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.30.0-rc.0) + +- [da9e88bb](https://github.com/kubedb/postgres/commit/da9e88bb) Prepare for release v0.30.0-rc.0 (#615) +- [f2e2da36](https://github.com/kubedb/postgres/commit/f2e2da36) Update deps (#614) +- [296bb241](https://github.com/kubedb/postgres/commit/296bb241) Run GH actions on ubuntu-20.04 (#613) +- [d67b529a](https://github.com/kubedb/postgres/commit/d67b529a) Add transferLeadership env for co-ordinator (#612) +- [fab00b44](https://github.com/kubedb/postgres/commit/fab00b44) Update PDB Deletion (#611) +- [c104c2b2](https://github.com/kubedb/postgres/commit/c104c2b2) Check for old auth secret label (#610) +- [932d6851](https://github.com/kubedb/postgres/commit/932d6851) Fix shared buffer for version 10 (#609) +- [60dba4ae](https://github.com/kubedb/postgres/commit/60dba4ae) Use constants from apimachinery (#608) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.30.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.30.0-rc.0) + +- [1104e9f6](https://github.com/kubedb/provisioner/commit/1104e9f68) Prepare for release v0.30.0-rc.0 (#28) +- [f37503db](https://github.com/kubedb/provisioner/commit/f37503dbb) Add kafka controller (#27) +- [c8618da0](https://github.com/kubedb/provisioner/commit/c8618da0b) Update deps (#26) +- [2db07a7d](https://github.com/kubedb/provisioner/commit/2db07a7dc) Run GH actions on ubuntu-20.04 (#25) +- [9949d569](https://github.com/kubedb/provisioner/commit/9949d5692) Acquire license from proxyserver (#24) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.17.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.17.0-rc.0) + +- [587d8b97](https://github.com/kubedb/proxysql/commit/587d8b97) Prepare for release v0.17.0-rc.0 (#267) +- [32b9cc71](https://github.com/kubedb/proxysql/commit/32b9cc71) Update deps (#266) +- [05e7a3a4](https://github.com/kubedb/proxysql/commit/05e7a3a4) Add MariaDB and Percona-XtraDB Backend (#264) +- [a1e7c91d](https://github.com/kubedb/proxysql/commit/a1e7c91d) Fix CI workflow for private deps +- [effb7617](https://github.com/kubedb/proxysql/commit/effb7617) Run GH actions on ubuntu-20.04 (#265) +- [38391814](https://github.com/kubedb/proxysql/commit/38391814) Use constants from apimachinery (#263) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.23.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.23.0-rc.0) + +- [175547fa](https://github.com/kubedb/redis/commit/175547fa) Prepare for release v0.23.0-rc.0 (#438) +- [265332d0](https://github.com/kubedb/redis/commit/265332d0) Update deps (#437) +- [f1a8f85f](https://github.com/kubedb/redis/commit/f1a8f85f) Run GH actions on ubuntu-20.04 (#436) +- [9263f404](https://github.com/kubedb/redis/commit/9263f404) Fix PDB deletion issue (#435) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.9.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.9.0-rc.0) + +- [61aefbb](https://github.com/kubedb/redis-coordinator/commit/61aefbb) Prepare for release v0.9.0-rc.0 (#56) +- [94a6eea](https://github.com/kubedb/redis-coordinator/commit/94a6eea) Update deps (#55) +- 
[4454cf1](https://github.com/kubedb/redis-coordinator/commit/4454cf1) Run GH actions on ubuntu-20.04 (#54) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.17.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.17.0-rc.0) + +- [865f05e0](https://github.com/kubedb/replication-mode-detector/commit/865f05e0) Prepare for release v0.17.0-rc.0 (#218) +- [8d0fa119](https://github.com/kubedb/replication-mode-detector/commit/8d0fa119) Update deps (#217) +- [e6a86096](https://github.com/kubedb/replication-mode-detector/commit/e6a86096) Run GH actions on ubuntu-20.04 (#216) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.6.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.6.0-rc.0) + +- [64bf4d7a](https://github.com/kubedb/schema-manager/commit/64bf4d7a) Prepare for release v0.6.0-rc.0 (#56) +- [c0bd9699](https://github.com/kubedb/schema-manager/commit/c0bd9699) Update deps (#55) +- [ab5098c9](https://github.com/kubedb/schema-manager/commit/ab5098c9) Run GH actions on ubuntu-20.04 (#54) +- [3a7c5fb9](https://github.com/kubedb/schema-manager/commit/3a7c5fb9) Acquire license from proxyserver (#53) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.15.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.15.0-rc.0) + +- [d212a7d2](https://github.com/kubedb/tests/commit/d212a7d2) Prepare for release v0.15.0-rc.0 (#208) +- [1c9c1627](https://github.com/kubedb/tests/commit/1c9c1627) Update deps (#207) +- [b3bfac83](https://github.com/kubedb/tests/commit/b3bfac83) Run GH actions on ubuntu-20.04 (#206) +- [986dd480](https://github.com/kubedb/tests/commit/986dd480) Add Redis Sentinel e2e Tests (#199) +- [5c2fc0b9](https://github.com/kubedb/tests/commit/5c2fc0b9) Update MongoDB Autoscaler tests (#204) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.6.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.6.0-rc.0) + +- [8e1be757](https://github.com/kubedb/ui-server/commit/8e1be757) Prepare for release v0.6.0-rc.0 (#59) +- [05f138aa](https://github.com/kubedb/ui-server/commit/05f138aa) Update deps (#58) +- [87c75073](https://github.com/kubedb/ui-server/commit/87c75073) Run GH actions on ubuntu-20.04 (#56) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.6.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.6.0-rc.0) + +- [2df0f44e](https://github.com/kubedb/webhook-server/commit/2df0f44e) Prepare for release v0.6.0-rc.0 (#41) +- [f1ea74a2](https://github.com/kubedb/webhook-server/commit/f1ea74a2) Add kafka webhooks (#39) +- [b15ff051](https://github.com/kubedb/webhook-server/commit/b15ff051) Update deps (#40) +- [6246a9cf](https://github.com/kubedb/webhook-server/commit/6246a9cf) Run GH actions on ubuntu-20.04 (#38) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.12.24-rc.1.md b/content/docs/v2024.1.31/CHANGELOG-v2022.12.24-rc.1.md new file mode 100644 index 0000000000..3ef4477f1d --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.12.24-rc.1.md @@ -0,0 +1,294 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.12.24-rc.1 + name: Changelog-v2022.12.24-rc.1 + parent: welcome + weight: 20221224 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.12.24-rc.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.12.24-rc.1/ 
+info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.12.24-rc.1 (2022-12-24) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.30.0-rc.1](https://github.com/kubedb/apimachinery/releases/tag/v0.30.0-rc.1) + +- [7f6ffca9](https://github.com/kubedb/apimachinery/commit/7f6ffca9) Revise PgBouncer api (#1002) +- [e539a58e](https://github.com/kubedb/apimachinery/commit/e539a58e) Update Redis Root Username (#1010) +- [12cda1e0](https://github.com/kubedb/apimachinery/commit/12cda1e0) Remove docker utils (#1008) +- [125f3bb7](https://github.com/kubedb/apimachinery/commit/125f3bb7) Add API for ProxySQL UI-Server (#1003) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.15.0-rc.1](https://github.com/kubedb/autoscaler/releases/tag/v0.15.0-rc.1) + +- [c941fab2](https://github.com/kubedb/autoscaler/commit/c941fab2) Prepare for release v0.15.0-rc.1 (#127) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.30.0-rc.1](https://github.com/kubedb/cli/releases/tag/v0.30.0-rc.1) + +- [f91038aa](https://github.com/kubedb/cli/commit/f91038aa) Prepare for release v0.30.0-rc.1 (#690) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.6.0-rc.1](https://github.com/kubedb/dashboard/releases/tag/v0.6.0-rc.1) + +- [0c9d9a4](https://github.com/kubedb/dashboard/commit/0c9d9a4) Prepare for release v0.6.0-rc.1 (#53) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.30.0-rc.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.30.0-rc.1) + +- [0812edfe](https://github.com/kubedb/elasticsearch/commit/0812edfee) Prepare for release v0.30.0-rc.1 (#619) +- [2bd59db3](https://github.com/kubedb/elasticsearch/commit/2bd59db3b) Use go-containerregistry for image digest (#618) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.12.24-rc.1](https://github.com/kubedb/installer/releases/tag/v2022.12.24-rc.1) + +- [d80dae40](https://github.com/kubedb/installer/commit/d80dae40) Prepare for release v2022.12.24-rc.1 (#580) +- [c6588fe5](https://github.com/kubedb/installer/commit/c6588fe5) Add MariaDB Version 10.10.2 (#579) +- [3045cc42](https://github.com/kubedb/installer/commit/3045cc42) Update crds for kubedb/apimachinery@7f6ffca9 (#578) +- [950b0ae5](https://github.com/kubedb/installer/commit/950b0ae5) Add support for PgBouncer 1.18.0 (#577) +- [401de79f](https://github.com/kubedb/installer/commit/401de79f) Add Redis Version 6.2.8 and 7.0.6 (#576) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.1.0-rc.1](https://github.com/kubedb/kafka/releases/tag/v0.1.0-rc.1) + +- [649dbf5](https://github.com/kubedb/kafka/commit/649dbf5) Prepare for release v0.1.0-rc.1 (#8) +- [ac4dc3d](https://github.com/kubedb/kafka/commit/ac4dc3d) Use go-containerregistry for image digest (#7) +- [8d8b5bc](https://github.com/kubedb/kafka/commit/8d8b5bc) Use kauth.NoServiceAccount when no sa is specified +- [16ee315](https://github.com/kubedb/kafka/commit/16ee315) Fix Image digest detection (#6) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.14.0-rc.1](https://github.com/kubedb/mariadb/releases/tag/v0.14.0-rc.1) + +- [50d9424e](https://github.com/kubedb/mariadb/commit/50d9424e) Prepare for release v0.14.0-rc.1 (#190) +- 
[ca141bfa](https://github.com/kubedb/mariadb/commit/ca141bfa) Use go-containerregistry for image digest (#189) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.10.0-rc.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.10.0-rc.1) + +- [378ac91](https://github.com/kubedb/mariadb-coordinator/commit/378ac91) Prepare for release v0.10.0-rc.1 (#67) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.23.0-rc.1](https://github.com/kubedb/memcached/releases/tag/v0.23.0-rc.1) + +- [0bdafbd7](https://github.com/kubedb/memcached/commit/0bdafbd7) Prepare for release v0.23.0-rc.1 (#379) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.23.0-rc.1](https://github.com/kubedb/mongodb/releases/tag/v0.23.0-rc.1) + +- [d94c3301](https://github.com/kubedb/mongodb/commit/d94c3301) Prepare for release v0.23.0-rc.1 (#526) +- [7ee6de66](https://github.com/kubedb/mongodb/commit/7ee6de66) Use go-containerregistry for image digest (#525) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.23.0-rc.1](https://github.com/kubedb/mysql/releases/tag/v0.23.0-rc.1) + +- [b2fcc9fa](https://github.com/kubedb/mysql/commit/b2fcc9fa) Prepare for release v0.23.0-rc.1 (#514) +- [814b64b8](https://github.com/kubedb/mysql/commit/814b64b8) Use go-containerregistry for image digest (#513) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.8.0-rc.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.8.0-rc.1) + +- [24c35fc](https://github.com/kubedb/mysql-coordinator/commit/24c35fc) Prepare for release v0.8.0-rc.1 (#65) +- [e0bebc6](https://github.com/kubedb/mysql-coordinator/commit/e0bebc6) remove appending signal cluster_status_ok (#64) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.8.0-rc.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.8.0-rc.1) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.17.0-rc.1](https://github.com/kubedb/ops-manager/releases/tag/v0.17.0-rc.1) + +- [eab904d7](https://github.com/kubedb/ops-manager/commit/eab904d7) Prepare for release v0.17.0-rc.1 (#399) +- [7258d256](https://github.com/kubedb/ops-manager/commit/7258d256) Use kmodules image library for parsing image (#398) +- [b47b86e7](https://github.com/kubedb/ops-manager/commit/b47b86e7) Add adminUserName as CommonName for PgBouncer (#394) +- [1f3799b7](https://github.com/kubedb/ops-manager/commit/1f3799b7) Update deps +- [def279a1](https://github.com/kubedb/ops-manager/commit/def279a1) Use go-containerregistry for image digest (#397) +- [90bbf6f3](https://github.com/kubedb/ops-manager/commit/90bbf6f3) Fix evict api usage for k8s < 1.22 (#396) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.17.0-rc.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.17.0-rc.1) + +- [e374cf7e](https://github.com/kubedb/percona-xtradb/commit/e374cf7e) Prepare for release v0.17.0-rc.1 (#292) +- [d6a2ffa6](https://github.com/kubedb/percona-xtradb/commit/d6a2ffa6) Use go-containerregistry for image digest (#291) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.3.0-rc.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.3.0-rc.1) + +- [d6df29d](https://github.com/kubedb/percona-xtradb-coordinator/commit/d6df29d) Prepare for release v0.3.0-rc.1 (#24) + + + +## 
[kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.14.0-rc.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.14.0-rc.1) + +- [8e83f433](https://github.com/kubedb/pg-coordinator/commit/8e83f433) Prepare for release v0.14.0-rc.1 (#106) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.17.0-rc.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.17.0-rc.1) + +- [89675d58](https://github.com/kubedb/pgbouncer/commit/89675d58) Prepare for release v0.17.0-rc.1 (#253) +- [e84285e2](https://github.com/kubedb/pgbouncer/commit/e84285e2) Add authSecret & configSecret (#249) +- [a7064c4f](https://github.com/kubedb/pgbouncer/commit/a7064c4f) Use go-containerregistry for image digest (#252) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.30.0-rc.1](https://github.com/kubedb/postgres/releases/tag/v0.30.0-rc.1) + +- [1769f0ba](https://github.com/kubedb/postgres/commit/1769f0ba) Prepare for release v0.30.0-rc.1 (#617) +- [3bd63349](https://github.com/kubedb/postgres/commit/3bd63349) Revert to k8s 1.25 client libraries +- [42f3f740](https://github.com/kubedb/postgres/commit/42f3f740) Use go-containerregistry for image digest (#616) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.30.0-rc.1](https://github.com/kubedb/provisioner/releases/tag/v0.30.0-rc.1) + +- [57e5c33a](https://github.com/kubedb/provisioner/commit/57e5c33a2) Prepare for release v0.30.0-rc.1 (#30) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.17.0-rc.1](https://github.com/kubedb/proxysql/releases/tag/v0.17.0-rc.1) + +- [df3d1df1](https://github.com/kubedb/proxysql/commit/df3d1df1) Prepare for release v0.17.0-rc.1 (#272) +- [bb0df62a](https://github.com/kubedb/proxysql/commit/bb0df62a) Fix Monitoring Port Issue (#271) +- [68ad2f54](https://github.com/kubedb/proxysql/commit/68ad2f54) Fix Validator Issue (#270) +- [350e74af](https://github.com/kubedb/proxysql/commit/350e74af) Use go-containerregistry for image digest (#268) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.23.0-rc.1](https://github.com/kubedb/redis/releases/tag/v0.23.0-rc.1) + +- [532ed03f](https://github.com/kubedb/redis/commit/532ed03f) Prepare for release v0.23.0-rc.1 (#441) +- [a231c6f2](https://github.com/kubedb/redis/commit/a231c6f2) Update Redis Root UserName (#440) +- [902f036b](https://github.com/kubedb/redis/commit/902f036b) Use go-containerregistry for image digest (#439) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.9.0-rc.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.9.0-rc.1) + +- [384700f](https://github.com/kubedb/redis-coordinator/commit/384700f) Prepare for release v0.9.0-rc.1 (#57) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.17.0-rc.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.17.0-rc.1) + +- [30f4ff3f](https://github.com/kubedb/replication-mode-detector/commit/30f4ff3f) Prepare for release v0.17.0-rc.1 (#219) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.6.0-rc.1](https://github.com/kubedb/schema-manager/releases/tag/v0.6.0-rc.1) + +- [5734ca5e](https://github.com/kubedb/schema-manager/commit/5734ca5e) Prepare for release v0.6.0-rc.1 (#57) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.15.0-rc.1](https://github.com/kubedb/tests/releases/tag/v0.15.0-rc.1) + +- 
[436f15a7](https://github.com/kubedb/tests/commit/436f15a7) Prepare for release v0.15.0-rc.1 (#209) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.6.0-rc.1](https://github.com/kubedb/ui-server/releases/tag/v0.6.0-rc.1) + +- [8254bf93](https://github.com/kubedb/ui-server/commit/8254bf93) Update deps +- [27c1daf5](https://github.com/kubedb/ui-server/commit/27c1daf5) Proxysql UI server (#57) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.6.0-rc.1](https://github.com/kubedb/webhook-server/releases/tag/v0.6.0-rc.1) + +- [b9eabfda](https://github.com/kubedb/webhook-server/commit/b9eabfda) Prepare for release v0.6.0-rc.1 (#42) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2022.12.28.md b/content/docs/v2024.1.31/CHANGELOG-v2022.12.28.md new file mode 100644 index 0000000000..c433a49879 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2022.12.28.md @@ -0,0 +1,505 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2022.12.28 + name: Changelog-v2022.12.28 + parent: welcome + weight: 20221228 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2022.12.28/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2022.12.28/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2022.12.28 (2022-12-28) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.30.0](https://github.com/kubedb/apimachinery/releases/tag/v0.30.0) + +- [c221f8b7](https://github.com/kubedb/apimachinery/commit/c221f8b7) Update deps +- [7f6ffca9](https://github.com/kubedb/apimachinery/commit/7f6ffca9) Revise PgBouncer api (#1002) +- [e539a58e](https://github.com/kubedb/apimachinery/commit/e539a58e) Update Redis Root Username (#1010) +- [12cda1e0](https://github.com/kubedb/apimachinery/commit/12cda1e0) Remove docker utils (#1008) +- [125f3bb7](https://github.com/kubedb/apimachinery/commit/125f3bb7) Add API for ProxySQL UI-Server (#1003) +- [70bc1ca7](https://github.com/kubedb/apimachinery/commit/70bc1ca7) Fix build +- [c051e053](https://github.com/kubedb/apimachinery/commit/c051e053) Update deps (#1007) +- [2a1d4b0b](https://github.com/kubedb/apimachinery/commit/2a1d4b0b) Set PSP in KafkaVersion Spec to optional (#1005) +- [69bc9dec](https://github.com/kubedb/apimachinery/commit/69bc9dec) Add kafka api (#998) +- [b9528283](https://github.com/kubedb/apimachinery/commit/b9528283) Run GH actions on ubuntu-20.04 (#1004) +- [d498e8e9](https://github.com/kubedb/apimachinery/commit/d498e8e9) Add ```TransferLeadershipInterval``` and ```TransferLeadershipTimeout``` for Postgres (#1001) +- [b8f88e70](https://github.com/kubedb/apimachinery/commit/b8f88e70) Add sidekick api to kubebuilder client (#1000) +- [89a71807](https://github.com/kubedb/apimachinery/commit/89a71807) Change DatabaseRef to ProxyRef in ProxySQLAutoscaler (#997) +- [f570aabe](https://github.com/kubedb/apimachinery/commit/f570aabe) Add support for ProxySQL autoscaler (#996) +- [01c07593](https://github.com/kubedb/apimachinery/commit/01c07593) Add ProxySQL Vertical-Scaling spec (#995) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.15.0](https://github.com/kubedb/autoscaler/releases/tag/v0.15.0) + +- 
[7d47cbb0](https://github.com/kubedb/autoscaler/commit/7d47cbb0) Prepare for release v0.15.0 (#129) +- [36fccc81](https://github.com/kubedb/autoscaler/commit/36fccc81) Update dependencies (#128) +- [c941fab2](https://github.com/kubedb/autoscaler/commit/c941fab2) Prepare for release v0.15.0-rc.1 (#127) +- [2e6d15fd](https://github.com/kubedb/autoscaler/commit/2e6d15fd) Prepare for release v0.15.0-rc.0 (#126) +- [a5bc7afd](https://github.com/kubedb/autoscaler/commit/a5bc7afd) Update deps (#125) +- [56ebf3fd](https://github.com/kubedb/autoscaler/commit/56ebf3fd) Run GH actions on ubuntu-20.04 (#124) +- [ef402f45](https://github.com/kubedb/autoscaler/commit/ef402f45) Add ProxySQL autoscaler support (#121) +- [36165599](https://github.com/kubedb/autoscaler/commit/36165599) Acquire license from proxyserver (#123) +- [f727dc6e](https://github.com/kubedb/autoscaler/commit/f727dc6e) Reduce logs; Fix RecommendationProvider's parameters for sharded mongo (#122) +- [835632d9](https://github.com/kubedb/autoscaler/commit/835632d9) Clean up go.mod + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.30.0](https://github.com/kubedb/cli/releases/tag/v0.30.0) + +- [7e75f1b6](https://github.com/kubedb/cli/commit/7e75f1b6) Prepare for release v0.30.0 (#694) +- [35c01568](https://github.com/kubedb/cli/commit/35c01568) Update dependencies (#693) +- [a93323ae](https://github.com/kubedb/cli/commit/a93323ae) Update dependencies (#691) +- [f91038aa](https://github.com/kubedb/cli/commit/f91038aa) Prepare for release v0.30.0-rc.1 (#690) +- [1bf92e06](https://github.com/kubedb/cli/commit/1bf92e06) Prepare for release v0.30.0-rc.0 (#689) +- [76426575](https://github.com/kubedb/cli/commit/76426575) Update deps (#688) +- [2f35bac1](https://github.com/kubedb/cli/commit/2f35bac1) Run GH actions on ubuntu-20.04 (#687) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.6.0](https://github.com/kubedb/dashboard/releases/tag/v0.6.0) + +- [293364a](https://github.com/kubedb/dashboard/commit/293364a) Prepare for release v0.6.0 (#55) +- [5406fb1](https://github.com/kubedb/dashboard/commit/5406fb1) Update dependencies (#54) +- [0c9d9a4](https://github.com/kubedb/dashboard/commit/0c9d9a4) Prepare for release v0.6.0-rc.1 (#53) +- [a7952c3](https://github.com/kubedb/dashboard/commit/a7952c3) Prepare for release v0.6.0-rc.0 (#52) +- [722df43](https://github.com/kubedb/dashboard/commit/722df43) Update deps (#51) +- [600877d](https://github.com/kubedb/dashboard/commit/600877d) Run GH actions on ubuntu-20.04 (#50) +- [cc2b95b](https://github.com/kubedb/dashboard/commit/cc2b95b) Acquire license from proxyserver (#49) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.30.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.30.0) + +- [1fa2c90f](https://github.com/kubedb/elasticsearch/commit/1fa2c90fe) Prepare for release v0.30.0 (#621) +- [f6a947d9](https://github.com/kubedb/elasticsearch/commit/f6a947d99) Update dependencies (#620) +- [0812edfe](https://github.com/kubedb/elasticsearch/commit/0812edfee) Prepare for release v0.30.0-rc.1 (#619) +- [2bd59db3](https://github.com/kubedb/elasticsearch/commit/2bd59db3b) Use go-containerregistry for image digest (#618) +- [6b883d16](https://github.com/kubedb/elasticsearch/commit/6b883d16e) Prepare for release v0.30.0-rc.0 (#617) +- [40ab6ecf](https://github.com/kubedb/elasticsearch/commit/40ab6ecf5) Update deps (#616) +- [732ba4c2](https://github.com/kubedb/elasticsearch/commit/732ba4c2f) Run GH actions on ubuntu-20.04 
(#615) +- [ba032204](https://github.com/kubedb/elasticsearch/commit/ba0322041) Fix PDB deletion issue (#614) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2022.12.28](https://github.com/kubedb/installer/releases/tag/v2022.12.28) + +- [2f670a2d](https://github.com/kubedb/installer/commit/2f670a2d) Prepare for release v2022.12.28 (#581) +- [d80dae40](https://github.com/kubedb/installer/commit/d80dae40) Prepare for release v2022.12.24-rc.1 (#580) +- [c6588fe5](https://github.com/kubedb/installer/commit/c6588fe5) Add MariaDB Version 10.10.2 (#579) +- [3045cc42](https://github.com/kubedb/installer/commit/3045cc42) Update crds for kubedb/apimachinery@7f6ffca9 (#578) +- [950b0ae5](https://github.com/kubedb/installer/commit/950b0ae5) Add support for PgBouncer 1.18.0 (#577) +- [401de79f](https://github.com/kubedb/installer/commit/401de79f) Add Redis Version 6.2.8 and 7.0.6 (#576) +- [9fca52a4](https://github.com/kubedb/installer/commit/9fca52a4) Prepare for release v2022.12.13-rc.0 (#574) +- [a1811331](https://github.com/kubedb/installer/commit/a1811331) Add support for elasticsearch 8.5.2 (#566) +- [7288df17](https://github.com/kubedb/installer/commit/7288df17) Update redis-init image (#573) +- [a9e2070d](https://github.com/kubedb/installer/commit/a9e2070d) Add kafka versions (#571) +- [9d3c3255](https://github.com/kubedb/installer/commit/9d3c3255) Update crds for kubedb/apimachinery@2a1d4b0b (#572) +- [0c3cfd8b](https://github.com/kubedb/installer/commit/0c3cfd8b) Update crds for kubedb/apimachinery@69bc9dec (#570) +- [d8cf2cfd](https://github.com/kubedb/installer/commit/d8cf2cfd) Update crds for kubedb/apimachinery@b9528283 (#569) +- [15601eeb](https://github.com/kubedb/installer/commit/15601eeb) Run GH actions on ubuntu-20.04 (#568) +- [833df418](https://github.com/kubedb/installer/commit/833df418) Add proxysql to kubedb grafana dashboard values and resources (#567) +- [bb368507](https://github.com/kubedb/installer/commit/bb368507) Add support for Postgres 15.1 12.13 13.9 14.6 (#563) +- [5c43e598](https://github.com/kubedb/installer/commit/5c43e598) Update Grafana dashboards (#564) +- [641023f5](https://github.com/kubedb/installer/commit/641023f5) Update crds for kubedb/apimachinery@89a71807 (#561) +- [be777e86](https://github.com/kubedb/installer/commit/be777e86) Update crds for kubedb/apimachinery@f570aabe (#560) +- [c0473ea7](https://github.com/kubedb/installer/commit/c0473ea7) Update crds for kubedb/apimachinery@01c07593 (#559) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.1.0](https://github.com/kubedb/kafka/releases/tag/v0.1.0) + +- [2f65320](https://github.com/kubedb/kafka/commit/2f65320) Prepare for release v0.1.0 (#9) +- [649dbf5](https://github.com/kubedb/kafka/commit/649dbf5) Prepare for release v0.1.0-rc.1 (#8) +- [ac4dc3d](https://github.com/kubedb/kafka/commit/ac4dc3d) Use go-containerregistry for image digest (#7) +- [8d8b5bc](https://github.com/kubedb/kafka/commit/8d8b5bc) Use kauth.NoServiceAccount when no sa is specified +- [16ee315](https://github.com/kubedb/kafka/commit/16ee315) Fix Image digest detection (#6) +- [41f3a22](https://github.com/kubedb/kafka/commit/41f3a22) Prepare for release v0.1.0-rc.0 (#4) +- [6cb7882](https://github.com/kubedb/kafka/commit/6cb7882) Refactor SetupControllers +- [f4c8eb1](https://github.com/kubedb/kafka/commit/f4c8eb1) Update deps (#3) +- [61ab7f6](https://github.com/kubedb/kafka/commit/61ab7f6) Acquire license from proxyserver (#2) +- 
[11f6df2](https://github.com/kubedb/kafka/commit/11f6df2) Add Operator for Kafka (#1) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.14.0](https://github.com/kubedb/mariadb/releases/tag/v0.14.0) + +- [01de8eb5](https://github.com/kubedb/mariadb/commit/01de8eb5) Prepare for release v0.14.0 (#192) +- [dc5d9d9e](https://github.com/kubedb/mariadb/commit/dc5d9d9e) Update dependencies (#191) +- [50d9424e](https://github.com/kubedb/mariadb/commit/50d9424e) Prepare for release v0.14.0-rc.1 (#190) +- [ca141bfa](https://github.com/kubedb/mariadb/commit/ca141bfa) Use go-containerregistry for image digest (#189) +- [fbc128ad](https://github.com/kubedb/mariadb/commit/fbc128ad) Prepare for release v0.14.0-rc.0 (#188) +- [6048437a](https://github.com/kubedb/mariadb/commit/6048437a) Update deps (#187) +- [649bb98e](https://github.com/kubedb/mariadb/commit/649bb98e) Run GH actions on ubuntu-20.04 (#186) +- [b14ab86f](https://github.com/kubedb/mariadb/commit/b14ab86f) Update PDB Deletion (#185) +- [897068c5](https://github.com/kubedb/mariadb/commit/897068c5) Use constants from apimachinery (#184) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.10.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.10.0) + +- [9be8c90](https://github.com/kubedb/mariadb-coordinator/commit/9be8c90) Prepare for release v0.10.0 (#69) +- [225e2bd](https://github.com/kubedb/mariadb-coordinator/commit/225e2bd) Update dependencies (#68) +- [378ac91](https://github.com/kubedb/mariadb-coordinator/commit/378ac91) Prepare for release v0.10.0-rc.1 (#67) +- [02c4399](https://github.com/kubedb/mariadb-coordinator/commit/02c4399) Prepare for release v0.10.0-rc.0 (#66) +- [bf28b66](https://github.com/kubedb/mariadb-coordinator/commit/bf28b66) Update deps (#65) +- [a00947d](https://github.com/kubedb/mariadb-coordinator/commit/a00947d) Run GH actions on ubuntu-20.04 (#64) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.23.0](https://github.com/kubedb/memcached/releases/tag/v0.23.0) + +- [8c7ccc82](https://github.com/kubedb/memcached/commit/8c7ccc82) Prepare for release v0.23.0 (#381) +- [21414fca](https://github.com/kubedb/memcached/commit/21414fca) Update dependencies (#380) +- [0bdafbd7](https://github.com/kubedb/memcached/commit/0bdafbd7) Prepare for release v0.23.0-rc.1 (#379) +- [8f5172f6](https://github.com/kubedb/memcached/commit/8f5172f6) Prepare for release v0.23.0-rc.0 (#378) +- [cb73ec86](https://github.com/kubedb/memcached/commit/cb73ec86) Update deps (#377) +- [e8b780d6](https://github.com/kubedb/memcached/commit/e8b780d6) Run GH actions on ubuntu-20.04 (#376) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.23.0](https://github.com/kubedb/mongodb/releases/tag/v0.23.0) + +- [0dbf4b62](https://github.com/kubedb/mongodb/commit/0dbf4b62) Prepare for release v0.23.0 (#528) +- [addede82](https://github.com/kubedb/mongodb/commit/addede82) Update dependencies (#527) +- [d94c3301](https://github.com/kubedb/mongodb/commit/d94c3301) Prepare for release v0.23.0-rc.1 (#526) +- [7ee6de66](https://github.com/kubedb/mongodb/commit/7ee6de66) Use go-containerregistry for image digest (#525) +- [2602cc08](https://github.com/kubedb/mongodb/commit/2602cc08) Prepare for release v0.23.0-rc.0 (#524) +- [a53e0b6e](https://github.com/kubedb/mongodb/commit/a53e0b6e) Update deps (#523) +- [6f68602b](https://github.com/kubedb/mongodb/commit/6f68602b) Run GH actions on ubuntu-20.04 (#522) +- 
[d9448103](https://github.com/kubedb/mongodb/commit/d9448103) Fix PDB issues (#521) +- [6f9b3325](https://github.com/kubedb/mongodb/commit/6f9b3325) Copy missing fields from podTemplate & serviceTemplate (#520) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.23.0](https://github.com/kubedb/mysql/releases/tag/v0.23.0) + +- [3469cc59](https://github.com/kubedb/mysql/commit/3469cc59) Prepare for release v0.23.0 (#516) +- [f4b205a6](https://github.com/kubedb/mysql/commit/f4b205a6) Update dependencies (#515) +- [b2fcc9fa](https://github.com/kubedb/mysql/commit/b2fcc9fa) Prepare for release v0.23.0-rc.1 (#514) +- [814b64b8](https://github.com/kubedb/mysql/commit/814b64b8) Use go-containerregistry for image digest (#513) +- [22382a39](https://github.com/kubedb/mysql/commit/22382a39) Prepare for release v0.23.0-rc.0 (#512) +- [8e7fb1a7](https://github.com/kubedb/mysql/commit/8e7fb1a7) Update deps (#511) +- [15f8ba0b](https://github.com/kubedb/mysql/commit/15f8ba0b) Run GH actions on ubuntu-20.04 (#510) +- [83335edb](https://github.com/kubedb/mysql/commit/83335edb) Update PDB Deletion (#509) +- [b5b8cadd](https://github.com/kubedb/mysql/commit/b5b8cadd) Use constants from apimachinery (#508) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.8.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.8.0) + +- [7a24704](https://github.com/kubedb/mysql-coordinator/commit/7a24704) Prepare for release v0.8.0 (#67) +- [c4411ec](https://github.com/kubedb/mysql-coordinator/commit/c4411ec) Update dependencies (#66) +- [24c35fc](https://github.com/kubedb/mysql-coordinator/commit/24c35fc) Prepare for release v0.8.0-rc.1 (#65) +- [e0bebc6](https://github.com/kubedb/mysql-coordinator/commit/e0bebc6) remove appending signal cluster_status_ok (#64) +- [cc3258d](https://github.com/kubedb/mysql-coordinator/commit/cc3258d) Prepare for release v0.8.0-rc.0 (#63) +- [25da659](https://github.com/kubedb/mysql-coordinator/commit/25da659) Update deps (#62) +- [c2cd415](https://github.com/kubedb/mysql-coordinator/commit/c2cd415) Run GH actions on ubuntu-20.04 (#61) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.8.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.8.0) + +- [6698ada](https://github.com/kubedb/mysql-router-init/commit/6698ada) Update dependencies (#29) +- [a8c367e](https://github.com/kubedb/mysql-router-init/commit/a8c367e) Update deps (#28) +- [e11c7ff](https://github.com/kubedb/mysql-router-init/commit/e11c7ff) Run GH actions on ubuntu-20.04 (#27) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.17.0](https://github.com/kubedb/ops-manager/releases/tag/v0.17.0) + +- [0bdf5b45](https://github.com/kubedb/ops-manager/commit/0bdf5b45) Prepare for release v0.17.0 (#402) +- [9928c74a](https://github.com/kubedb/ops-manager/commit/9928c74a) Fix NPE using license-proxyserver (#401) +- [84c522b0](https://github.com/kubedb/ops-manager/commit/84c522b0) Update dependencies (#400) +- [eab904d7](https://github.com/kubedb/ops-manager/commit/eab904d7) Prepare for release v0.17.0-rc.1 (#399) +- [7258d256](https://github.com/kubedb/ops-manager/commit/7258d256) Use kmodules image library for parsing image (#398) +- [b47b86e7](https://github.com/kubedb/ops-manager/commit/b47b86e7) Add adminUserName as CommonName for PgBouncer (#394) +- [1f3799b7](https://github.com/kubedb/ops-manager/commit/1f3799b7) Update deps +- 
[def279a1](https://github.com/kubedb/ops-manager/commit/def279a1) Use go-containerregistry for image digest (#397) +- [90bbf6f3](https://github.com/kubedb/ops-manager/commit/90bbf6f3) Fix evict api usage for k8s < 1.22 (#396) +- [13107ce9](https://github.com/kubedb/ops-manager/commit/13107ce9) Prepare for release v0.17.0-rc.0 (#393) +- [96f289a0](https://github.com/kubedb/ops-manager/commit/96f289a0) Update deps (#392) +- [ab83bb02](https://github.com/kubedb/ops-manager/commit/ab83bb02) Update Evict pod with kmodules api (#388) +- [028a4a29](https://github.com/kubedb/ops-manager/commit/028a4a29) Fix condition check for pvc update (#384) +- [f85db652](https://github.com/kubedb/ops-manager/commit/f85db652) Add TLS support for Kafka (#391) +- [93e1fcf4](https://github.com/kubedb/ops-manager/commit/93e1fcf4) Fix: compareTables() function for postgresql logical replication (#385) +- [d6225c57](https://github.com/kubedb/ops-manager/commit/d6225c57) Run GH actions on ubuntu-20.04 (#390) +- [eb9f8b0c](https://github.com/kubedb/ops-manager/commit/eb9f8b0c) Remove usage of `UpgradeVersion` constant (#389) +- [f682a359](https://github.com/kubedb/ops-manager/commit/f682a359) Skip Managing TLS if DB is paused for MariaDB, PXC and ProxySQL (#387) +- [1ba7dc05](https://github.com/kubedb/ops-manager/commit/1ba7dc05) Add ProxySQL Vertical Scaling Ops-Request (#381) +- [db89b9c9](https://github.com/kubedb/ops-manager/commit/db89b9c9) Adding `UpdateVersion` in mongo validator (#382) +- [7c373593](https://github.com/kubedb/ops-manager/commit/7c373593) Acquire license from proxyserver (#383) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.17.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.17.0) + +- [bfca3ca2](https://github.com/kubedb/percona-xtradb/commit/bfca3ca2) Prepare for release v0.17.0 (#294) +- [d2303e54](https://github.com/kubedb/percona-xtradb/commit/d2303e54) Update dependencies (#293) +- [e374cf7e](https://github.com/kubedb/percona-xtradb/commit/e374cf7e) Prepare for release v0.17.0-rc.1 (#292) +- [d6a2ffa6](https://github.com/kubedb/percona-xtradb/commit/d6a2ffa6) Use go-containerregistry for image digest (#291) +- [f7ba9bfc](https://github.com/kubedb/percona-xtradb/commit/f7ba9bfc) Prepare for release v0.17.0-rc.0 (#290) +- [806df3d2](https://github.com/kubedb/percona-xtradb/commit/806df3d2) Update deps (#289) +- [a55bb0f2](https://github.com/kubedb/percona-xtradb/commit/a55bb0f2) Run GH actions on ubuntu-20.04 (#288) +- [37fab686](https://github.com/kubedb/percona-xtradb/commit/37fab686) Update PDB Deletion (#287) +- [55c35a72](https://github.com/kubedb/percona-xtradb/commit/55c35a72) Use constants from apimachinery (#286) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.3.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.3.0) + +- [a99bd6d](https://github.com/kubedb/percona-xtradb-coordinator/commit/a99bd6d) Prepare for release v0.3.0 (#26) +- [2540e8b](https://github.com/kubedb/percona-xtradb-coordinator/commit/2540e8b) Update dependencies (#25) +- [d6df29d](https://github.com/kubedb/percona-xtradb-coordinator/commit/d6df29d) Prepare for release v0.3.0-rc.1 (#24) +- [7e53d31](https://github.com/kubedb/percona-xtradb-coordinator/commit/7e53d31) Prepare for release v0.3.0-rc.0 (#23) +- [bd5e0b3](https://github.com/kubedb/percona-xtradb-coordinator/commit/bd5e0b3) Update deps (#22) +- [b970f14](https://github.com/kubedb/percona-xtradb-coordinator/commit/b970f14) 
Run GH actions on ubuntu-20.04 (#21) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.14.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.14.0) + +- [6c0945d4](https://github.com/kubedb/pg-coordinator/commit/6c0945d4) Prepare for release v0.14.0 (#108) +- [7413dd09](https://github.com/kubedb/pg-coordinator/commit/7413dd09) Update dependencies (#107) +- [8e83f433](https://github.com/kubedb/pg-coordinator/commit/8e83f433) Prepare for release v0.14.0-rc.1 (#106) +- [34cb5a6c](https://github.com/kubedb/pg-coordinator/commit/34cb5a6c) Prepare for release v0.14.0-rc.0 (#105) +- [7394e6b7](https://github.com/kubedb/pg-coordinator/commit/7394e6b7) Update deps (#104) +- [228b1ae2](https://github.com/kubedb/pg-coordinator/commit/228b1ae2) Merge pull request #102 from kubedb/leader-switch +- [11a3c127](https://github.com/kubedb/pg-coordinator/commit/11a3c127) Merge branch 'master' into leader-switch +- [f8d04c52](https://github.com/kubedb/pg-coordinator/commit/f8d04c52) Add PG Reset Wal for Single user mode failed #101 +- [8eaa5f11](https://github.com/kubedb/pg-coordinator/commit/8eaa5f11) retry eviction of pod and delete pod if fails +- [d2a23fa9](https://github.com/kubedb/pg-coordinator/commit/d2a23fa9) Update deps +- [febd8aab](https://github.com/kubedb/pg-coordinator/commit/febd8aab) Refined +- [5a2005cf](https://github.com/kubedb/pg-coordinator/commit/5a2005cf) Fix: Transfer Leadership issue fix with pod delete +- [7631cb84](https://github.com/kubedb/pg-coordinator/commit/7631cb84) Add PG Reset Wal for Single user mode failed +- [a951c00e](https://github.com/kubedb/pg-coordinator/commit/a951c00e) Run GH actions on ubuntu-20.04 (#103) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.17.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.17.0) + +- [3d30d3cc](https://github.com/kubedb/pgbouncer/commit/3d30d3cc) Prepare for release v0.17.0 (#255) +- [cc73d8a6](https://github.com/kubedb/pgbouncer/commit/cc73d8a6) Update dependencies (#254) +- [89675d58](https://github.com/kubedb/pgbouncer/commit/89675d58) Prepare for release v0.17.0-rc.1 (#253) +- [e84285e2](https://github.com/kubedb/pgbouncer/commit/e84285e2) Add authSecret & configSecret (#249) +- [a7064c4f](https://github.com/kubedb/pgbouncer/commit/a7064c4f) Use go-containerregistry for image digest (#252) +- [8d39e418](https://github.com/kubedb/pgbouncer/commit/8d39e418) Prepare for release v0.17.0-rc.0 (#251) +- [991cbaec](https://github.com/kubedb/pgbouncer/commit/991cbaec) Update deps (#250) +- [8af0a2f0](https://github.com/kubedb/pgbouncer/commit/8af0a2f0) Run GH actions on ubuntu-20.04 (#248) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.30.0](https://github.com/kubedb/postgres/releases/tag/v0.30.0) + +- [99cfddaa](https://github.com/kubedb/postgres/commit/99cfddaa) Prepare for release v0.30.0 (#619) +- [1b577c2d](https://github.com/kubedb/postgres/commit/1b577c2d) Update dependencies (#618) +- [1769f0ba](https://github.com/kubedb/postgres/commit/1769f0ba) Prepare for release v0.30.0-rc.1 (#617) +- [3bd63349](https://github.com/kubedb/postgres/commit/3bd63349) Revert to k8s 1.25 client libraries +- [42f3f740](https://github.com/kubedb/postgres/commit/42f3f740) Use go-containerregistry for image digest (#616) +- [da9e88bb](https://github.com/kubedb/postgres/commit/da9e88bb) Prepare for release v0.30.0-rc.0 (#615) +- [f2e2da36](https://github.com/kubedb/postgres/commit/f2e2da36) Update deps (#614) +- 
[296bb241](https://github.com/kubedb/postgres/commit/296bb241) Run GH actions on ubuntu-20.04 (#613) +- [d67b529a](https://github.com/kubedb/postgres/commit/d67b529a) Add transferLeadership env for co-ordinator (#612) +- [fab00b44](https://github.com/kubedb/postgres/commit/fab00b44) Update PDB Deletion (#611) +- [c104c2b2](https://github.com/kubedb/postgres/commit/c104c2b2) Check for old auth secret label (#610) +- [932d6851](https://github.com/kubedb/postgres/commit/932d6851) Fix shared buffer for version 10 (#609) +- [60dba4ae](https://github.com/kubedb/postgres/commit/60dba4ae) Use constants from apimachinery (#608) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.30.0](https://github.com/kubedb/provisioner/releases/tag/v0.30.0) + +- [56a8dd1f](https://github.com/kubedb/provisioner/commit/56a8dd1f3) Prepare for release v0.30.0 (#33) +- [09feede3](https://github.com/kubedb/provisioner/commit/09feede38) Update dependencies (#31) +- [61101ab2](https://github.com/kubedb/provisioner/commit/61101ab23) Fix NPE using license-proxyserver (#32) +- [9bd614ae](https://github.com/kubedb/provisioner/commit/9bd614ae4) Update deps +- [57e5c33a](https://github.com/kubedb/provisioner/commit/57e5c33a2) Prepare for release v0.30.0-rc.1 (#30) +- [bacaba2d](https://github.com/kubedb/provisioner/commit/bacaba2dc) Detect image digest correctly for kafka (#29) +- [1104e9f6](https://github.com/kubedb/provisioner/commit/1104e9f68) Prepare for release v0.30.0-rc.0 (#28) +- [f37503db](https://github.com/kubedb/provisioner/commit/f37503dbb) Add kafka controller (#27) +- [c8618da0](https://github.com/kubedb/provisioner/commit/c8618da0b) Update deps (#26) +- [2db07a7d](https://github.com/kubedb/provisioner/commit/2db07a7dc) Run GH actions on ubuntu-20.04 (#25) +- [9949d569](https://github.com/kubedb/provisioner/commit/9949d5692) Acquire license from proxyserver (#24) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.17.0](https://github.com/kubedb/proxysql/releases/tag/v0.17.0) + +- [362c4dde](https://github.com/kubedb/proxysql/commit/362c4dde) Prepare for release v0.17.0 (#274) +- [5d8270e3](https://github.com/kubedb/proxysql/commit/5d8270e3) Update dependencies (#273) +- [df3d1df1](https://github.com/kubedb/proxysql/commit/df3d1df1) Prepare for release v0.17.0-rc.1 (#272) +- [bb0df62a](https://github.com/kubedb/proxysql/commit/bb0df62a) Fix Monitoring Port Issue (#271) +- [68ad2f54](https://github.com/kubedb/proxysql/commit/68ad2f54) Fix Validator Issue (#270) +- [350e74af](https://github.com/kubedb/proxysql/commit/350e74af) Use go-containerregistry for image digest (#268) +- [587d8b97](https://github.com/kubedb/proxysql/commit/587d8b97) Prepare for release v0.17.0-rc.0 (#267) +- [32b9cc71](https://github.com/kubedb/proxysql/commit/32b9cc71) Update deps (#266) +- [05e7a3a4](https://github.com/kubedb/proxysql/commit/05e7a3a4) Add MariaDB and Percona-XtraDB Backend (#264) +- [a1e7c91d](https://github.com/kubedb/proxysql/commit/a1e7c91d) Fix CI workflow for private deps +- [effb7617](https://github.com/kubedb/proxysql/commit/effb7617) Run GH actions on ubuntu-20.04 (#265) +- [38391814](https://github.com/kubedb/proxysql/commit/38391814) Use constants from apimachinery (#263) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.23.0](https://github.com/kubedb/redis/releases/tag/v0.23.0) + +- [11e1bc5e](https://github.com/kubedb/redis/commit/11e1bc5e) Prepare for release v0.23.0 (#443) +- [0a7dc9f9](https://github.com/kubedb/redis/commit/0a7dc9f9) 
Update dependencies (#442) +- [532ed03f](https://github.com/kubedb/redis/commit/532ed03f) Prepare for release v0.23.0-rc.1 (#441) +- [a231c6f2](https://github.com/kubedb/redis/commit/a231c6f2) Update Redis Root UserName (#440) +- [902f036b](https://github.com/kubedb/redis/commit/902f036b) Use go-containerregistry for image digest (#439) +- [175547fa](https://github.com/kubedb/redis/commit/175547fa) Prepare for release v0.23.0-rc.0 (#438) +- [265332d0](https://github.com/kubedb/redis/commit/265332d0) Update deps (#437) +- [f1a8f85f](https://github.com/kubedb/redis/commit/f1a8f85f) Run GH actions on ubuntu-20.04 (#436) +- [9263f404](https://github.com/kubedb/redis/commit/9263f404) Fix PDB deletion issue (#435) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.9.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.9.0) + +- [ae53e1d](https://github.com/kubedb/redis-coordinator/commit/ae53e1d) Prepare for release v0.9.0 (#59) +- [f56d2c0](https://github.com/kubedb/redis-coordinator/commit/f56d2c0) Update dependencies (#58) +- [384700f](https://github.com/kubedb/redis-coordinator/commit/384700f) Prepare for release v0.9.0-rc.1 (#57) +- [61aefbb](https://github.com/kubedb/redis-coordinator/commit/61aefbb) Prepare for release v0.9.0-rc.0 (#56) +- [94a6eea](https://github.com/kubedb/redis-coordinator/commit/94a6eea) Update deps (#55) +- [4454cf1](https://github.com/kubedb/redis-coordinator/commit/4454cf1) Run GH actions on ubuntu-20.04 (#54) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.17.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.17.0) + +- [74aff1fa](https://github.com/kubedb/replication-mode-detector/commit/74aff1fa) Prepare for release v0.17.0 (#221) +- [fce0441e](https://github.com/kubedb/replication-mode-detector/commit/fce0441e) Update dependencies (#220) +- [30f4ff3f](https://github.com/kubedb/replication-mode-detector/commit/30f4ff3f) Prepare for release v0.17.0-rc.1 (#219) +- [865f05e0](https://github.com/kubedb/replication-mode-detector/commit/865f05e0) Prepare for release v0.17.0-rc.0 (#218) +- [8d0fa119](https://github.com/kubedb/replication-mode-detector/commit/8d0fa119) Update deps (#217) +- [e6a86096](https://github.com/kubedb/replication-mode-detector/commit/e6a86096) Run GH actions on ubuntu-20.04 (#216) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.6.0](https://github.com/kubedb/schema-manager/releases/tag/v0.6.0) + +- [05cd8b1d](https://github.com/kubedb/schema-manager/commit/05cd8b1d) Prepare for release v0.6.0 (#59) +- [6c8edada](https://github.com/kubedb/schema-manager/commit/6c8edada) Update dependencies (#58) +- [5734ca5e](https://github.com/kubedb/schema-manager/commit/5734ca5e) Prepare for release v0.6.0-rc.1 (#57) +- [64bf4d7a](https://github.com/kubedb/schema-manager/commit/64bf4d7a) Prepare for release v0.6.0-rc.0 (#56) +- [c0bd9699](https://github.com/kubedb/schema-manager/commit/c0bd9699) Update deps (#55) +- [ab5098c9](https://github.com/kubedb/schema-manager/commit/ab5098c9) Run GH actions on ubuntu-20.04 (#54) +- [3a7c5fb9](https://github.com/kubedb/schema-manager/commit/3a7c5fb9) Acquire license from proxyserver (#53) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.15.0](https://github.com/kubedb/tests/releases/tag/v0.15.0) + +- [c23bbf69](https://github.com/kubedb/tests/commit/c23bbf69) Prepare for release v0.15.0 (#212) +- 
[b0f7c6d7](https://github.com/kubedb/tests/commit/b0f7c6d7) Update dependencies (#210) +- [436f15a7](https://github.com/kubedb/tests/commit/436f15a7) Prepare for release v0.15.0-rc.1 (#209) +- [d212a7d2](https://github.com/kubedb/tests/commit/d212a7d2) Prepare for release v0.15.0-rc.0 (#208) +- [1c9c1627](https://github.com/kubedb/tests/commit/1c9c1627) Update deps (#207) +- [b3bfac83](https://github.com/kubedb/tests/commit/b3bfac83) Run GH actions on ubuntu-20.04 (#206) +- [986dd480](https://github.com/kubedb/tests/commit/986dd480) Add Redis Sentinel e2e Tests (#199) +- [5c2fc0b9](https://github.com/kubedb/tests/commit/5c2fc0b9) Update MongoDB Autoscaler tests (#204) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.6.0](https://github.com/kubedb/ui-server/releases/tag/v0.6.0) + +- [796f1231](https://github.com/kubedb/ui-server/commit/796f1231) Prepare for release v0.6.0 (#61) +- [0c325de6](https://github.com/kubedb/ui-server/commit/0c325de6) Update dependencies (#60) +- [8254bf93](https://github.com/kubedb/ui-server/commit/8254bf93) Update deps +- [27c1daf5](https://github.com/kubedb/ui-server/commit/27c1daf5) Proxysql UI server (#57) +- [8e1be757](https://github.com/kubedb/ui-server/commit/8e1be757) Prepare for release v0.6.0-rc.0 (#59) +- [05f138aa](https://github.com/kubedb/ui-server/commit/05f138aa) Update deps (#58) +- [87c75073](https://github.com/kubedb/ui-server/commit/87c75073) Run GH actions on ubuntu-20.04 (#56) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.6.0](https://github.com/kubedb/webhook-server/releases/tag/v0.6.0) + +- [b99dfb10](https://github.com/kubedb/webhook-server/commit/b99dfb10) Prepare for release v0.6.0 (#44) +- [cea5b91c](https://github.com/kubedb/webhook-server/commit/cea5b91c) Update dependencies (#43) +- [b9eabfda](https://github.com/kubedb/webhook-server/commit/b9eabfda) Prepare for release v0.6.0-rc.1 (#42) +- [2df0f44e](https://github.com/kubedb/webhook-server/commit/2df0f44e) Prepare for release v0.6.0-rc.0 (#41) +- [f1ea74a2](https://github.com/kubedb/webhook-server/commit/f1ea74a2) Add kafka webhooks (#39) +- [b15ff051](https://github.com/kubedb/webhook-server/commit/b15ff051) Update deps (#40) +- [6246a9cf](https://github.com/kubedb/webhook-server/commit/6246a9cf) Run GH actions on ubuntu-20.04 (#38) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.01.17.md b/content/docs/v2024.1.31/CHANGELOG-v2023.01.17.md new file mode 100644 index 0000000000..c8aad53850 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.01.17.md @@ -0,0 +1,301 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.01.17 + name: Changelog-v2023.01.17 + parent: welcome + weight: 20230117 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.01.17/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.01.17/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.01.17 (2023-01-17) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.30.1](https://github.com/kubedb/apimachinery/releases/tag/v0.30.1) + +- [d93fabe5](https://github.com/kubedb/apimachinery/commit/d93fabe5) Update deps + + + +## 
[kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.15.1](https://github.com/kubedb/autoscaler/releases/tag/v0.15.1) + +- [4a31539f](https://github.com/kubedb/autoscaler/commit/4a31539f) Prepare for release v0.15.1 (#131) +- [c6f2ba46](https://github.com/kubedb/autoscaler/commit/c6f2ba46) Set registryFQDN to use docker for local development (#130) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.30.1](https://github.com/kubedb/cli/releases/tag/v0.30.1) + +- [264d4323](https://github.com/kubedb/cli/commit/264d4323) Prepare for release v0.30.1 (#695) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.6.1](https://github.com/kubedb/dashboard/releases/tag/v0.6.1) + +- [3d59229](https://github.com/kubedb/dashboard/commit/3d59229) Prepare for release v0.6.1 (#58) +- [da17eac](https://github.com/kubedb/dashboard/commit/da17eac) Set registryFQDN to use docker for local development (#57) +- [7b8d07c](https://github.com/kubedb/dashboard/commit/7b8d07c) Disable usage collection from ES (#56) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.30.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.30.1) + +- [35c62fa5](https://github.com/kubedb/elasticsearch/commit/35c62fa5b) Prepare for release v0.30.1 (#624) +- [207904e7](https://github.com/kubedb/elasticsearch/commit/207904e72) Add --insecure-registries flag (#623) +- [72b402a6](https://github.com/kubedb/elasticsearch/commit/72b402a66) Set registryFQDN to use docker for local development (#622) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.01.17](https://github.com/kubedb/installer/releases/tag/v2023.01.17) + +- [2414928a](https://github.com/kubedb/installer/commit/2414928a) Prepare for release v2023.01.17 (#584) +- [a070531c](https://github.com/kubedb/installer/commit/a070531c) Add --insecure-registries flag (#583) +- [321c3729](https://github.com/kubedb/installer/commit/321c3729) Fix Grafana dashboard crd url + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.1.1](https://github.com/kubedb/kafka/releases/tag/v0.1.1) + +- [6b49e97](https://github.com/kubedb/kafka/commit/6b49e97) Prepare for release v0.1.1 (#12) +- [a3098fa](https://github.com/kubedb/kafka/commit/a3098fa) Add --insecure-registries flag (#11) +- [edea46b](https://github.com/kubedb/kafka/commit/edea46b) Set registryFQDN to use docker for local development (#10) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.14.1](https://github.com/kubedb/mariadb/releases/tag/v0.14.1) + +- [f8c90127](https://github.com/kubedb/mariadb/commit/f8c90127) Prepare for release v0.14.1 (#196) +- [87d3c9c9](https://github.com/kubedb/mariadb/commit/87d3c9c9) Add --insecure-registries flag (#195) +- [3d32a203](https://github.com/kubedb/mariadb/commit/3d32a203) Set registryFQDN to use docker for local development (#194) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.10.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.10.1) + +- [62eec8e](https://github.com/kubedb/mariadb-coordinator/commit/62eec8e) Prepare for release v0.10.1 (#70) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.23.1](https://github.com/kubedb/memcached/releases/tag/v0.23.1) + +- [96587959](https://github.com/kubedb/memcached/commit/96587959) Prepare for release v0.23.1 (#382) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### 
[v0.23.1](https://github.com/kubedb/mongodb/releases/tag/v0.23.1) + +- [e165db2e](https://github.com/kubedb/mongodb/commit/e165db2e) Prepare for release v0.23.1 (#531) +- [d24ed694](https://github.com/kubedb/mongodb/commit/d24ed694) Add --insecure-registries flag (#530) +- [b15c3833](https://github.com/kubedb/mongodb/commit/b15c3833) Set registryFQDN to use docker for local development (#529) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.23.1](https://github.com/kubedb/mysql/releases/tag/v0.23.1) + +- [2700e88d](https://github.com/kubedb/mysql/commit/2700e88d) Prepare for release v0.23.1 (#519) +- [ce7127b1](https://github.com/kubedb/mysql/commit/ce7127b1) Add --insecure-registries flag (#518) +- [944883c9](https://github.com/kubedb/mysql/commit/944883c9) Set registryFQDN to use docker for local development (#517) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.8.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.8.1) + +- [4a182b7](https://github.com/kubedb/mysql-coordinator/commit/4a182b7) Prepare for release v0.8.1 (#68) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.8.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.8.1) + +- [960c85b](https://github.com/kubedb/mysql-router-init/commit/960c85b) Remove slack link + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.17.1](https://github.com/kubedb/ops-manager/releases/tag/v0.17.1) + +- [57e4e442](https://github.com/kubedb/ops-manager/commit/57e4e442) Prepare for release v0.17.1 (#407) +- [b8be3433](https://github.com/kubedb/ops-manager/commit/b8be3433) Add --insecure-registries flag (#406) +- [63ff77f7](https://github.com/kubedb/ops-manager/commit/63ff77f7) Set registryFQDN to use docker for local development (#405) +- [94110406](https://github.com/kubedb/ops-manager/commit/94110406) Refactor Redis Ops Requests (#404) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.17.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.17.1) + +- [7b0b63f4](https://github.com/kubedb/percona-xtradb/commit/7b0b63f4) Prepare for release v0.17.1 (#297) +- [84aeef58](https://github.com/kubedb/percona-xtradb/commit/84aeef58) Add --insecure-registries flag (#296) +- [15386fb7](https://github.com/kubedb/percona-xtradb/commit/15386fb7) Set registryFQDN to use docker for local development (#295) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.3.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.3.1) + +- [d5d1f97](https://github.com/kubedb/percona-xtradb-coordinator/commit/d5d1f97) Prepare for release v0.3.1 (#27) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.14.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.14.1) + +- [084ba8e0](https://github.com/kubedb/pg-coordinator/commit/084ba8e0) Prepare for release v0.14.1 (#109) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.17.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.17.1) + +- [6ea07cb6](https://github.com/kubedb/pgbouncer/commit/6ea07cb6) Prepare for release v0.17.1 (#258) +- [5d8608e4](https://github.com/kubedb/pgbouncer/commit/5d8608e4) Add --insecure-registries flag (#257) +- [11c42ab3](https://github.com/kubedb/pgbouncer/commit/11c42ab3) Set registryFQDN to use docker for local development (#256) + + + +## 
[kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.30.1](https://github.com/kubedb/postgres/releases/tag/v0.30.1) + +- [d3ded3bd](https://github.com/kubedb/postgres/commit/d3ded3bd) Prepare for release v0.30.1 (#623) +- [21b81e75](https://github.com/kubedb/postgres/commit/21b81e75) Add --insecure-registries flag (#622) +- [050c4e61](https://github.com/kubedb/postgres/commit/050c4e61) Set registryFQDN to use docker for local development (#621) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.30.1](https://github.com/kubedb/provisioner/releases/tag/v0.30.1) + +- [02cd0453](https://github.com/kubedb/provisioner/commit/02cd0453b) Prepare for release v0.30.1 (#35) +- [24d6aa18](https://github.com/kubedb/provisioner/commit/24d6aa18e) Add --insecure-registries flag (#34) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.17.1](https://github.com/kubedb/proxysql/releases/tag/v0.17.1) + +- [67500f61](https://github.com/kubedb/proxysql/commit/67500f61) Prepare for release v0.17.1 (#277) +- [5088aa2e](https://github.com/kubedb/proxysql/commit/5088aa2e) Add --insecure-registries flag (#276) +- [83c14966](https://github.com/kubedb/proxysql/commit/83c14966) Set registryFQDN to use docker for local development (#275) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.23.1](https://github.com/kubedb/redis/releases/tag/v0.23.1) + +- [d4b63271](https://github.com/kubedb/redis/commit/d4b63271) Prepare for release v0.23.1 (#447) +- [2a829766](https://github.com/kubedb/redis/commit/2a829766) Add --insecure-registries flag (#446) +- [73260d12](https://github.com/kubedb/redis/commit/73260d12) Set registryFQDN to use docker for local development (#445) +- [9cabe22e](https://github.com/kubedb/redis/commit/9cabe22e) Add redis client-go (#444) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.9.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.9.1) + +- [a66421c](https://github.com/kubedb/redis-coordinator/commit/a66421c) Prepare for release v0.9.1 (#60) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.17.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.17.1) + +- [94eeb8d5](https://github.com/kubedb/replication-mode-detector/commit/94eeb8d5) Prepare for release v0.17.1 (#222) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.6.1](https://github.com/kubedb/schema-manager/releases/tag/v0.6.1) + +- [94bf0859](https://github.com/kubedb/schema-manager/commit/94bf0859) Prepare for release v0.6.1 (#62) +- [195a9199](https://github.com/kubedb/schema-manager/commit/195a9199) Update KubeVault crds (#61) +- [eadf5f6e](https://github.com/kubedb/schema-manager/commit/eadf5f6e) Set registryFQDN to use docker for local development (#60) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.15.1](https://github.com/kubedb/tests/releases/tag/v0.15.1) + +- [ad9578bf](https://github.com/kubedb/tests/commit/ad9578bf) Prepare for release v0.15.1 (#214) +- [ead800d4](https://github.com/kubedb/tests/commit/ead800d4) Add Percona XtraDB Tests (Provisioner, OpsManager, Autoscaler) (#181) +- [0af9c9b7](https://github.com/kubedb/tests/commit/0af9c9b7) Add MariaDB OpsReq and Autoscaler Tests (#160) +- [7e18e981](https://github.com/kubedb/tests/commit/7e18e981) Update Redis Sentinel Tests (#211) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### 
[v0.6.1](https://github.com/kubedb/ui-server/releases/tag/v0.6.1) + +- [6cfc4469](https://github.com/kubedb/ui-server/commit/6cfc4469) Prepare for release v0.6.1 (#64) +- [d3c21e61](https://github.com/kubedb/ui-server/commit/d3c21e61) Remove slack link + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.6.1](https://github.com/kubedb/webhook-server/releases/tag/v0.6.1) + +- [7b3eff57](https://github.com/kubedb/webhook-server/commit/7b3eff57) Prepare for release v0.6.1 (#47) +- [c2fffb87](https://github.com/kubedb/webhook-server/commit/c2fffb87) Set registryFQDN to use docker for local development (#45) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.01.31.md b/content/docs/v2024.1.31/CHANGELOG-v2023.01.31.md new file mode 100644 index 0000000000..1cdd3d85ba --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.01.31.md @@ -0,0 +1,310 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.01.31 + name: Changelog-v2023.01.31 + parent: welcome + weight: 20230131 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.01.31/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.01.31/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.01.31 (2023-01-30) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.31.0](https://github.com/kubedb/apimachinery/releases/tag/v0.31.0) + +- [f2cc1db7](https://github.com/kubedb/apimachinery/commit/f2cc1db7) Add PgBouncer apis for UI Server (#1011) +- [424a2960](https://github.com/kubedb/apimachinery/commit/424a2960) Fix postgres `transferLeadershipTimeout` (#1013) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.16.0](https://github.com/kubedb/autoscaler/releases/tag/v0.16.0) + +- [1ffa653b](https://github.com/kubedb/autoscaler/commit/1ffa653b) Update sidekick dependency (#135) +- [e9b84b31](https://github.com/kubedb/autoscaler/commit/e9b84b31) Update sidekick dependency (#133) +- [02affdc3](https://github.com/kubedb/autoscaler/commit/02affdc3) Read imge pull secret from operator flags (#132) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.31.0](https://github.com/kubedb/cli/releases/tag/v0.31.0) + +- [a821a2ce](https://github.com/kubedb/cli/commit/a821a2ce) Update sidekick dependency (#697) +- [6fdf6747](https://github.com/kubedb/cli/commit/6fdf6747) Read imge pull secret from operator flags (#696) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.7.0](https://github.com/kubedb/dashboard/releases/tag/v0.7.0) + +- [c40c888](https://github.com/kubedb/dashboard/commit/c40c888) Prepare for release v0.7.0 (#62) +- [205133c](https://github.com/kubedb/dashboard/commit/205133c) Update sidekick dependency (#61) +- [f6fe542](https://github.com/kubedb/dashboard/commit/f6fe542) Update sidekick dependency (#60) +- [1ec3b80](https://github.com/kubedb/dashboard/commit/1ec3b80) Read imge pull secret from operator flags (#59) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.31.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.31.0) + +- [b91d6a98](https://github.com/kubedb/elasticsearch/commit/b91d6a98e) Update sidekick dependency (#626) +- 
[5c36b694](https://github.com/kubedb/elasticsearch/commit/5c36b6944) Read imge pull secret from operator flags (#625) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.01.31](https://github.com/kubedb/installer/releases/tag/v2023.01.31) + +- [6e84d9a3](https://github.com/kubedb/installer/commit/6e84d9a3) Prepare for release v2023.01.31 (#590) +- [97109145](https://github.com/kubedb/installer/commit/97109145) Add Sidekick on Cluster Role (POC) (#575) +- [29c471ed](https://github.com/kubedb/installer/commit/29c471ed) Update Dashboard init container images (#589) +- [935a7e75](https://github.com/kubedb/installer/commit/935a7e75) Read image pull secret from operator flags (#588) +- [e58f09ae](https://github.com/kubedb/installer/commit/e58f09ae) Add `pod/eviction` rbac for provisioner (#586) +- [692d5929](https://github.com/kubedb/installer/commit/692d5929) Update crds for kubedb/apimachinery@424a2960 (#587) +- [55c4da09](https://github.com/kubedb/installer/commit/55c4da09) Fix grafana dashboard variable Name +- [9e9b92ee](https://github.com/kubedb/installer/commit/9e9b92ee) Update supervisor crd +- [63645074](https://github.com/kubedb/installer/commit/63645074) Make service monitor required in global values + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.2.0](https://github.com/kubedb/kafka/releases/tag/v0.2.0) + +- [fa669ea](https://github.com/kubedb/kafka/commit/fa669ea) Prepare for release v0.2.0 (#14) +- [f50c714](https://github.com/kubedb/kafka/commit/f50c714) Read imge pull secret from operator flags (#13) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.15.0](https://github.com/kubedb/mariadb/releases/tag/v0.15.0) + +- [6d769a0a](https://github.com/kubedb/mariadb/commit/6d769a0a) Prepare for release v0.15.0 (#198) +- [56fbf1e4](https://github.com/kubedb/mariadb/commit/56fbf1e4) Read imge pull secret from operator flags (#197) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.11.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.11.0) + +- [78aaeab](https://github.com/kubedb/mariadb-coordinator/commit/78aaeab) Update sidekick dependency (#72) +- [5d7a579](https://github.com/kubedb/mariadb-coordinator/commit/5d7a579) Read imge pull secret from operator flags (#71) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.24.0](https://github.com/kubedb/memcached/releases/tag/v0.24.0) + +- [67c3dad8](https://github.com/kubedb/memcached/commit/67c3dad8) Update sidekick dependency (#384) +- [147b0ae0](https://github.com/kubedb/memcached/commit/147b0ae0) Read imge pull secret from operator flags (#383) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.24.0](https://github.com/kubedb/mongodb/releases/tag/v0.24.0) + +- [58a098f0](https://github.com/kubedb/mongodb/commit/58a098f0) Update sidekick dependency (#533) +- [2b4876a0](https://github.com/kubedb/mongodb/commit/2b4876a0) Read imge pull secret from operator flags (#532) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.24.0](https://github.com/kubedb/mysql/releases/tag/v0.24.0) + +- [2ce7a185](https://github.com/kubedb/mysql/commit/2ce7a185) Update sidekick dependency (#521) +- [9e9133db](https://github.com/kubedb/mysql/commit/9e9133db) Read imge pull secret from operator flags (#520) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.9.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.9.0) + 
+- [727ce38](https://github.com/kubedb/mysql-coordinator/commit/727ce38) Update sidekick dependency (#70) +- [4d55c9d](https://github.com/kubedb/mysql-coordinator/commit/4d55c9d) Read imge pull secret from operator flags (#69) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.9.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.9.0) + +- [63ef29f](https://github.com/kubedb/mysql-router-init/commit/63ef29f) Update sidekick dependency (#30) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.18.0](https://github.com/kubedb/ops-manager/releases/tag/v0.18.0) + +- [049b0428](https://github.com/kubedb/ops-manager/commit/049b0428) Prepare for release v0.18.0 (#413) +- [9dc43bfa](https://github.com/kubedb/ops-manager/commit/9dc43bfa) Update sidekick dependency (#412) +- [010a65c7](https://github.com/kubedb/ops-manager/commit/010a65c7) Read image pull secret from operator flags (#411) +- [0b3fcf4a](https://github.com/kubedb/ops-manager/commit/0b3fcf4a) Read imge pull secret from operator flags (#410) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.18.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.18.0) + +- [6da94951](https://github.com/kubedb/percona-xtradb/commit/6da94951) Update sidekick dependency (#299) +- [7f36f57a](https://github.com/kubedb/percona-xtradb/commit/7f36f57a) Read imge pull secret from operator flags (#298) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.4.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.4.0) + +- [562d7df](https://github.com/kubedb/percona-xtradb-coordinator/commit/562d7df) Update sidekick dependency (#29) +- [0e823ff](https://github.com/kubedb/percona-xtradb-coordinator/commit/0e823ff) Read imge pull secret from operator flags (#28) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.15.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.15.0) + +- [8fdc0143](https://github.com/kubedb/pg-coordinator/commit/8fdc0143) Update sidekick dependency (#112) +- [b686ba76](https://github.com/kubedb/pg-coordinator/commit/b686ba76) Read imge pull secret from operator flags (#111) +- [49cd4a67](https://github.com/kubedb/pg-coordinator/commit/49cd4a67) Fix `waiting for the target to be leader` issue (#110) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.18.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.18.0) + +- [45753006](https://github.com/kubedb/pgbouncer/commit/45753006) Prepare for release v0.18.0 (#261) +- [b56e096a](https://github.com/kubedb/pgbouncer/commit/b56e096a) Update sidekick dependency (#260) +- [aa59fc2c](https://github.com/kubedb/pgbouncer/commit/aa59fc2c) Read imge pull secret from operator flags (#259) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.31.0](https://github.com/kubedb/postgres/releases/tag/v0.31.0) + +- [710e4282](https://github.com/kubedb/postgres/commit/710e4282) Update sidekick dependency (#627) +- [30ffd035](https://github.com/kubedb/postgres/commit/30ffd035) Read imge pull secret from operator flags (#626) +- [f9dc7194](https://github.com/kubedb/postgres/commit/f9dc7194) Fixed scaling up and down for standalone with `halted=true` and Add eviction RBAC (#624) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.31.0](https://github.com/kubedb/provisioner/releases/tag/v0.31.0) + +- 
[8d8fe8f5](https://github.com/kubedb/provisioner/commit/8d8fe8f5b) Prepare for release v0.31.0 (#38) +- [ccd6c545](https://github.com/kubedb/provisioner/commit/ccd6c545f) Update sidekick dependency (#37) +- [e0214013](https://github.com/kubedb/provisioner/commit/e0214013b) Read imge pull secret from operator flags (#36) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.18.0](https://github.com/kubedb/proxysql/releases/tag/v0.18.0) + +- [1c38abc8](https://github.com/kubedb/proxysql/commit/1c38abc8) Prepare for release v0.18.0 (#280) +- [c19af3c5](https://github.com/kubedb/proxysql/commit/c19af3c5) Update sidekick dependency (#279) +- [0478dff1](https://github.com/kubedb/proxysql/commit/0478dff1) Read imge pull secret from operator flags (#278) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.24.0](https://github.com/kubedb/redis/releases/tag/v0.24.0) + +- [20a616fa](https://github.com/kubedb/redis/commit/20a616fa) Update sidekick dependency (#449) +- [eebeb0b9](https://github.com/kubedb/redis/commit/eebeb0b9) Read imge pull secret from operator flags (#448) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.10.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.10.0) + +- [d8f8d72](https://github.com/kubedb/redis-coordinator/commit/d8f8d72) Update sidekick dependency (#62) +- [55e1996](https://github.com/kubedb/redis-coordinator/commit/55e1996) Read imge pull secret from operator flags (#61) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.18.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.18.0) + +- [08bd3050](https://github.com/kubedb/replication-mode-detector/commit/08bd3050) Update sidekick dependency (#224) +- [b65a3b7f](https://github.com/kubedb/replication-mode-detector/commit/b65a3b7f) Read imge pull secret from operator flags (#223) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.7.0](https://github.com/kubedb/schema-manager/releases/tag/v0.7.0) + +- [6ce17e7e](https://github.com/kubedb/schema-manager/commit/6ce17e7e) Update sidekick dependency (#64) +- [9b75ed17](https://github.com/kubedb/schema-manager/commit/9b75ed17) Read imge pull secret from operator flags (#63) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.16.0](https://github.com/kubedb/tests/releases/tag/v0.16.0) + +- [73dc61fa](https://github.com/kubedb/tests/commit/73dc61fa) Update sidekick dependency (#216) +- [53b4a2e2](https://github.com/kubedb/tests/commit/53b4a2e2) Read imge pull secret from operator flags (#215) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.7.0](https://github.com/kubedb/ui-server/releases/tag/v0.7.0) + +- [3fd9a734](https://github.com/kubedb/ui-server/commit/3fd9a734) Update sidekick dependency (#66) +- [f5a40c64](https://github.com/kubedb/ui-server/commit/f5a40c64) Add PgBouncer Support (#62) +- [fbcb7537](https://github.com/kubedb/ui-server/commit/fbcb7537) Read imge pull secret from operator flags (#65) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.7.0](https://github.com/kubedb/webhook-server/releases/tag/v0.7.0) + +- [a01a3928](https://github.com/kubedb/webhook-server/commit/a01a3928) Prepare for release v0.7.0 (#50) +- [5677176e](https://github.com/kubedb/webhook-server/commit/5677176e) Update sidekick dependency (#49) + + + + diff --git 
a/content/docs/v2024.1.31/CHANGELOG-v2023.02.28.md b/content/docs/v2024.1.31/CHANGELOG-v2023.02.28.md new file mode 100644 index 0000000000..15462da031 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.02.28.md @@ -0,0 +1,286 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.02.28 + name: Changelog-v2023.02.28 + parent: welcome + weight: 20230228 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.02.28/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.02.28/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.02.28 (2023-02-28) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.32.0](https://github.com/kubedb/apimachinery/releases/tag/v0.32.0) + +- [40b15db3](https://github.com/kubedb/apimachinery/commit/40b15db3) Add WsrepSSTMethod Field in MariaDB API (#1012) +- [1b9e2bac](https://github.com/kubedb/apimachinery/commit/1b9e2bac) Update `setDefaults()` for pgbouncer (#1022) +- [7bf2fbe1](https://github.com/kubedb/apimachinery/commit/7bf2fbe1) Add separate Security Config Directory constant for Opensearch V2 (#1021) +- [693b7795](https://github.com/kubedb/apimachinery/commit/693b7795) Update TLS Defaulting for ProxySQL & PgBouncer (#1020) +- [48fae91c](https://github.com/kubedb/apimachinery/commit/48fae91c) Fix `GetPersistentSecrets()` function (#1018) +- [b334a5eb](https://github.com/kubedb/apimachinery/commit/b334a5eb) Add postgres streaming and standby mode in horizontal scaling (#1017) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.17.0](https://github.com/kubedb/autoscaler/releases/tag/v0.17.0) + +- [4d8bd3b0](https://github.com/kubedb/autoscaler/commit/4d8bd3b0) Prepare for release v0.17.0 (#136) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.32.0](https://github.com/kubedb/cli/releases/tag/v0.32.0) + +- [8477bd09](https://github.com/kubedb/cli/commit/8477bd09) Prepare for release v0.32.0 (#698) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.8.0](https://github.com/kubedb/dashboard/releases/tag/v0.8.0) + +- [b0ac65a](https://github.com/kubedb/dashboard/commit/b0ac65a) Prepare for release v0.8.0 (#64) +- [987d7ef](https://github.com/kubedb/dashboard/commit/987d7ef) Add support for opensearch-dashboards 2.x (#63) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.32.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.32.0) + +- [24948165](https://github.com/kubedb/elasticsearch/commit/249481659) Prepare for release v0.32.0 (#629) +- [7b6f30ed](https://github.com/kubedb/elasticsearch/commit/7b6f30edf) Use separate securityConfig Volume mount path for Opensearch V2 (#627) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.02.28](https://github.com/kubedb/installer/releases/tag/v2023.02.28) + +- [d7d1197d](https://github.com/kubedb/installer/commit/d7d1197d) Prepare for release v2023.02.28 (#601) +- [73115439](https://github.com/kubedb/installer/commit/73115439) Update MariaDB initContainer Image with 0.5.0 (#600) +- [fa9aab3c](https://github.com/kubedb/installer/commit/fa9aab3c) Add `SecurityContext` & remove `initContainer` from `pgbouncerVersion.spec` (#596) +- 
[9959c608](https://github.com/kubedb/installer/commit/9959c608) Add Support for OpenSearch v2.0.1 & v2.5.0 (#599) +- [a9ca09c0](https://github.com/kubedb/installer/commit/a9ca09c0) Update postgres init conatiner and rbac for pvc (#592) +- [2f3da948](https://github.com/kubedb/installer/commit/2f3da948) Update crds for kubedb/apimachinery@b334a5eb (#591) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.3.0](https://github.com/kubedb/kafka/releases/tag/v0.3.0) + +- [be9595d](https://github.com/kubedb/kafka/commit/be9595d) Prepare for release v0.3.0 (#15) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.16.0](https://github.com/kubedb/mariadb/releases/tag/v0.16.0) + +- [d092f3a3](https://github.com/kubedb/mariadb/commit/d092f3a3) Prepare for release v0.16.0 (#200) +- [60b6d846](https://github.com/kubedb/mariadb/commit/60b6d846) Add Dynamic `wsrep_sst_method` Selection Code (#193) +- [0ba15d93](https://github.com/kubedb/mariadb/commit/0ba15d93) Update sidekick dependency (#199) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.12.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.12.0) + +- [ead9061](https://github.com/kubedb/mariadb-coordinator/commit/ead9061) Prepare for release v0.12.0 (#73) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.25.0](https://github.com/kubedb/memcached/releases/tag/v0.25.0) + +- [a6449fc0](https://github.com/kubedb/memcached/commit/a6449fc0) Prepare for release v0.25.0 (#385) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.25.0](https://github.com/kubedb/mongodb/releases/tag/v0.25.0) + +- [abfa58ea](https://github.com/kubedb/mongodb/commit/abfa58ea) Prepare for release v0.25.0 (#535) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.25.0](https://github.com/kubedb/mysql/releases/tag/v0.25.0) + +- [168c7346](https://github.com/kubedb/mysql/commit/168c7346) Prepare for release v0.25.0 (#522) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.10.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.10.0) + +- [570c5f4](https://github.com/kubedb/mysql-coordinator/commit/570c5f4) Prepare for release v0.10.0 (#71) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.10.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.10.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.19.0](https://github.com/kubedb/ops-manager/releases/tag/v0.19.0) + +- [2ae340f2](https://github.com/kubedb/ops-manager/commit/2ae340f2) Prepare for release v0.19.0 (#419) +- [b91aa423](https://github.com/kubedb/ops-manager/commit/b91aa423) Fix ProxySQL reconfigure tls issues (#418) +- [3d320c64](https://github.com/kubedb/ops-manager/commit/3d320c64) Add PEM encoded output in certificate based on cert-manager feature-gate (#417) +- [817a828e](https://github.com/kubedb/ops-manager/commit/817a828e) Support Acme protocol issued certs for PgBouncer & ProxySQL (#415) +- [fe3ef59a](https://github.com/kubedb/ops-manager/commit/fe3ef59a) Add support for stand Alone to HA postgres (#409) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.19.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.19.0) + +- [97b98029](https://github.com/kubedb/percona-xtradb/commit/97b98029) Prepare for release v0.19.0 (#300) + + + +## 
[kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.5.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.5.0) + +- [f33b2dc](https://github.com/kubedb/percona-xtradb-coordinator/commit/f33b2dc) Prepare for release v0.5.0 (#30) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.16.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.16.0) + +- [b85f50dc](https://github.com/kubedb/pg-coordinator/commit/b85f50dc) Prepare for release v0.16.0 (#113) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.19.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.19.0) + +- [107f81dd](https://github.com/kubedb/pgbouncer/commit/107f81dd) Prepare for release v0.19.0 (#266) +- [7afeb055](https://github.com/kubedb/pgbouncer/commit/7afeb055) Acme TLS support (#262) +- [4abc8090](https://github.com/kubedb/pgbouncer/commit/4abc8090) Fix ownerReference for auth secret (#263) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.32.0](https://github.com/kubedb/postgres/releases/tag/v0.32.0) + +- [2daf213e](https://github.com/kubedb/postgres/commit/2daf213e) Prepare for release v0.32.0 (#629) +- [5eecc8b8](https://github.com/kubedb/postgres/commit/5eecc8b8) Refactor Reconciler to address Standalone to High Availability (#625) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.32.0](https://github.com/kubedb/provisioner/releases/tag/v0.32.0) + +- [096872ab](https://github.com/kubedb/provisioner/commit/096872ab7) Prepare for release v0.32.0 (#39) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.19.0](https://github.com/kubedb/proxysql/releases/tag/v0.19.0) + +- [d739df42](https://github.com/kubedb/proxysql/commit/d739df42) Prepare for release v0.19.0 (#283) +- [6392951a](https://github.com/kubedb/proxysql/commit/6392951a) Support Acme Protocol Issued Certs (eg LE) (#282) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.25.0](https://github.com/kubedb/redis/releases/tag/v0.25.0) + +- [f0c6c4ef](https://github.com/kubedb/redis/commit/f0c6c4ef) Prepare for release v0.25.0 (#450) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.11.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.11.0) + +- [a7605af](https://github.com/kubedb/redis-coordinator/commit/a7605af) Prepare for release v0.11.0 (#63) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.19.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.19.0) + +- [450a3942](https://github.com/kubedb/replication-mode-detector/commit/450a3942) Prepare for release v0.19.0 (#225) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.8.0](https://github.com/kubedb/schema-manager/releases/tag/v0.8.0) + +- [011e1f8c](https://github.com/kubedb/schema-manager/commit/011e1f8c) Prepare for release v0.8.0 (#65) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.17.0](https://github.com/kubedb/tests/releases/tag/v0.17.0) + +- [b6e52b82](https://github.com/kubedb/tests/commit/b6e52b82) Prepare for release v0.17.0 (#217) +- [6ccd68ef](https://github.com/kubedb/tests/commit/6ccd68ef) Add MySQL tests (#198) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.8.0](https://github.com/kubedb/ui-server/releases/tag/v0.8.0) + +- 
[77f1095e](https://github.com/kubedb/ui-server/commit/77f1095e) Prepare for release v0.8.0 (#68) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.8.0](https://github.com/kubedb/webhook-server/releases/tag/v0.8.0) + +- [72058d49](https://github.com/kubedb/webhook-server/commit/72058d49) Prepare for release v0.8.0 (#52) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.04.10.md b/content/docs/v2024.1.31/CHANGELOG-v2023.04.10.md new file mode 100644 index 0000000000..751faa742b --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.04.10.md @@ -0,0 +1,479 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.04.10 + name: Changelog-v2023.04.10 + parent: welcome + weight: 20230410 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.04.10/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.04.10/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.04.10 (2023-04-07) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.33.0](https://github.com/kubedb/apimachinery/releases/tag/v0.33.0) + +- [16319573](https://github.com/kubedb/apimachinery/commit/16319573) Cleanup ci files +- [8a9b762e](https://github.com/kubedb/apimachinery/commit/8a9b762e) Rename UpgradeConstraints to UpdateConstraints in catalogs (#1035) +- [1f9d8cb4](https://github.com/kubedb/apimachinery/commit/1f9d8cb4) Add support for mongo version 6 (#1034) +- [c787eb94](https://github.com/kubedb/apimachinery/commit/c787eb94) Add Kafka monitor API (#1014) +- [3f1adae7](https://github.com/kubedb/apimachinery/commit/3f1adae7) Use enum generator for ops types (#1031) +- [d08e21e3](https://github.com/kubedb/apimachinery/commit/d08e21e3) Use ghcr.io for appscode/golang-dev (#1032) +- [b51ef1ea](https://github.com/kubedb/apimachinery/commit/b51ef1ea) Change return type of GetRequestType() func (#1030) +- [2e1cc0ab](https://github.com/kubedb/apimachinery/commit/2e1cc0ab) Use UpdateVersion instead of Upgrade in ops-manager (#1028) +- [b02d8800](https://github.com/kubedb/apimachinery/commit/b02d8800) Update for release Stash@v2023.03.13 (#1029) +- [03af3f01](https://github.com/kubedb/apimachinery/commit/03af3f01) Update workflows (Go 1.20, k8s 1.26) (#1027) +- [a5bd3816](https://github.com/kubedb/apimachinery/commit/a5bd3816) Refect Monitoring Agent StatAccessor API Update (#1024) +- [e867759c](https://github.com/kubedb/apimachinery/commit/e867759c) Update wrokflows (Go 1.20, k8s 1.26) (#1026) +- [854c4fa4](https://github.com/kubedb/apimachinery/commit/854c4fa4) Test against Kubernetes 1.26.0 (#1025) +- [1a3cbc58](https://github.com/kubedb/apimachinery/commit/1a3cbc58) Update for release Stash@v2023.02.28 (#1023) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.18.0](https://github.com/kubedb/autoscaler/releases/tag/v0.18.0) + +- [311d6970](https://github.com/kubedb/autoscaler/commit/311d6970) Prepare for release v0.18.0 (#141) +- [ef53b5a7](https://github.com/kubedb/autoscaler/commit/ef53b5a7) Use ghcr.io +- [7ce66405](https://github.com/kubedb/autoscaler/commit/7ce66405) Use Homebrew in CI +- [a4880d45](https://github.com/kubedb/autoscaler/commit/a4880d45) Stop publishing to docker hub +- 
[a5e4e870](https://github.com/kubedb/autoscaler/commit/a5e4e870) Update package label in Docker files +- [61be362d](https://github.com/kubedb/autoscaler/commit/61be362d) Dynamically select runner type +- [572d61ab](https://github.com/kubedb/autoscaler/commit/572d61ab) Use ghcr.io for appscode/golang-dev (#140) +- [1bf5f5ef](https://github.com/kubedb/autoscaler/commit/1bf5f5ef) Update workflows (Go 1.20, k8s 1.26) (#139) +- [901be09a](https://github.com/kubedb/autoscaler/commit/901be09a) Update wrokflows (Go 1.20, k8s 1.26) (#138) +- [12f24074](https://github.com/kubedb/autoscaler/commit/12f24074) Test against Kubernetes 1.26.0 (#137) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.33.0](https://github.com/kubedb/cli/releases/tag/v0.33.0) + +- [4849d48b](https://github.com/kubedb/cli/commit/4849d48b) Prepare for release v0.33.0 (#704) +- [7f680b1e](https://github.com/kubedb/cli/commit/7f680b1e) Cleanup CI +- [2c607eef](https://github.com/kubedb/cli/commit/2c607eef) Use ghcr.io for appscode/golang-dev (#703) +- [4867dac1](https://github.com/kubedb/cli/commit/4867dac1) Update workflows (Go 1.20, k8s 1.26) (#702) +- [3ed34cca](https://github.com/kubedb/cli/commit/3ed34cca) Update wrokflows (Go 1.20, k8s 1.26) (#701) +- [26fa3901](https://github.com/kubedb/cli/commit/26fa3901) Test against Kubernetes 1.26.0 (#700) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.9.0](https://github.com/kubedb/dashboard/releases/tag/v0.9.0) + +- [a796df9](https://github.com/kubedb/dashboard/commit/a796df9) Prepare for release v0.9.0 (#69) +- [396b3c9](https://github.com/kubedb/dashboard/commit/396b3c9) Use ghcr.io +- [0014ad3](https://github.com/kubedb/dashboard/commit/0014ad3) Stop publishing to docker hub +- [6f5ca8e](https://github.com/kubedb/dashboard/commit/6f5ca8e) Dynamically select runner type +- [55c6d4b](https://github.com/kubedb/dashboard/commit/55c6d4b) Use ghcr.io for appscode/golang-dev (#68) +- [d6c42ca](https://github.com/kubedb/dashboard/commit/d6c42ca) Update workflows (Go 1.20, k8s 1.26) (#67) +- [1367b94](https://github.com/kubedb/dashboard/commit/1367b94) Update wrokflows (Go 1.20, k8s 1.26) (#66) +- [8085731](https://github.com/kubedb/dashboard/commit/8085731) Test against Kubernetes 1.26.0 (#65) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.33.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.33.0) + +- [fc4188e9](https://github.com/kubedb/elasticsearch/commit/fc4188e94) Prepare for release v0.33.0 (#635) +- [3bfc204d](https://github.com/kubedb/elasticsearch/commit/3bfc204de) Update e2e workflow +- [62b43e46](https://github.com/kubedb/elasticsearch/commit/62b43e46c) Use ghcr.io +- [3fee0f94](https://github.com/kubedb/elasticsearch/commit/3fee0f947) Update workflows +- [d8dad30e](https://github.com/kubedb/elasticsearch/commit/d8dad30e9) Dynamically select runner type +- [845dd87c](https://github.com/kubedb/elasticsearch/commit/845dd87cb) Use ghcr.io for appscode/golang-dev (#633) +- [579d778d](https://github.com/kubedb/elasticsearch/commit/579d778de) Update workflows (Go 1.20, k8s 1.26) (#632) +- [c0aa077f](https://github.com/kubedb/elasticsearch/commit/c0aa077f9) Update wrokflows (Go 1.20, k8s 1.26) (#631) +- [9269ed75](https://github.com/kubedb/elasticsearch/commit/9269ed75c) Test against Kubernetes 1.26.0 (#630) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.04.10](https://github.com/kubedb/installer/releases/tag/v2023.04.10) + + + + +## 
[kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.4.0](https://github.com/kubedb/kafka/releases/tag/v0.4.0) + +- [6d43376](https://github.com/kubedb/kafka/commit/6d43376) Prepare for release v0.4.0 (#22) +- [9d9beea](https://github.com/kubedb/kafka/commit/9d9beea) Update e2e workflow +- [1fc0185](https://github.com/kubedb/kafka/commit/1fc0185) Cleanup Makefile +- [c4b3450](https://github.com/kubedb/kafka/commit/c4b3450) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [50c408e](https://github.com/kubedb/kafka/commit/50c408e) Remove Kafka advertised.listeners config from controller node (#21) +- [2e80fc8](https://github.com/kubedb/kafka/commit/2e80fc8) Add support for monitoring (#19) +- [58894bb](https://github.com/kubedb/kafka/commit/58894bb) Use ghcr.io for appscode/golang-dev (#20) +- [c7b1158](https://github.com/kubedb/kafka/commit/c7b1158) Dynamically select runner type +- [d5f02d5](https://github.com/kubedb/kafka/commit/d5f02d5) Update workflows (Go 1.20, k8s 1.26) (#18) +- [842361d](https://github.com/kubedb/kafka/commit/842361d) Test against Kubernetes 1.26.0 (#16) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.17.0](https://github.com/kubedb/mariadb/releases/tag/v0.17.0) + +- [d0ab53a5](https://github.com/kubedb/mariadb/commit/d0ab53a5) Prepare for release v0.17.0 (#205) +- [a7cb3789](https://github.com/kubedb/mariadb/commit/a7cb3789) Update e2e workflow +- [03182a15](https://github.com/kubedb/mariadb/commit/03182a15) Use ghcr.io +- [48a0ae24](https://github.com/kubedb/mariadb/commit/48a0ae24) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [b5fc163d](https://github.com/kubedb/mariadb/commit/b5fc163d) Use ghcr.io for appscode/golang-dev (#204) +- [ffd17645](https://github.com/kubedb/mariadb/commit/ffd17645) Dynamically select runner type +- [9e18fbf6](https://github.com/kubedb/mariadb/commit/9e18fbf6) Update workflows (Go 1.20, k8s 1.26) (#203) +- [02c9169d](https://github.com/kubedb/mariadb/commit/02c9169d) Update wrokflows (Go 1.20, k8s 1.26) (#202) +- [ccddab5f](https://github.com/kubedb/mariadb/commit/ccddab5f) Test against Kubernetes 1.26.0 (#201) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.13.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.13.0) + +- [838c879](https://github.com/kubedb/mariadb-coordinator/commit/838c879) Prepare for release v0.13.0 (#78) +- [1242437](https://github.com/kubedb/mariadb-coordinator/commit/1242437) Update CI +- [561fd55](https://github.com/kubedb/mariadb-coordinator/commit/561fd55) Use ghcr.io for appscode/golang-dev (#77) +- [0f67cb9](https://github.com/kubedb/mariadb-coordinator/commit/0f67cb9) DYnamically select runner type +- [adbb2d3](https://github.com/kubedb/mariadb-coordinator/commit/adbb2d3) Update workflows (Go 1.20, k8s 1.26) (#76) +- [e27c8f6](https://github.com/kubedb/mariadb-coordinator/commit/e27c8f6) Update wrokflows (Go 1.20, k8s 1.26) (#75) +- [633dc41](https://github.com/kubedb/mariadb-coordinator/commit/633dc41) Test against Kubernetes 1.26.0 (#74) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.26.0](https://github.com/kubedb/memcached/releases/tag/v0.26.0) + +- [f7975e7b](https://github.com/kubedb/memcached/commit/f7975e7b) Prepare for release v0.26.0 (#391) +- [81e92a08](https://github.com/kubedb/memcached/commit/81e92a08) Update e2e workflow +- 
[dc8ccbf4](https://github.com/kubedb/memcached/commit/dc8ccbf4) Cleanup CI +- [318b3f14](https://github.com/kubedb/memcached/commit/318b3f14) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [2ca568aa](https://github.com/kubedb/memcached/commit/2ca568aa) Use ghcr.io for appscode/golang-dev (#390) +- [d85f38d2](https://github.com/kubedb/memcached/commit/d85f38d2) Dynamically select runner type +- [962f3daa](https://github.com/kubedb/memcached/commit/962f3daa) Update workflows (Go 1.20, k8s 1.26) (#389) +- [c1eb2df8](https://github.com/kubedb/memcached/commit/c1eb2df8) Update wrokflows (Go 1.20, k8s 1.26) (#388) +- [48b784c8](https://github.com/kubedb/memcached/commit/48b784c8) Test against Kubernetes 1.26.0 (#387) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.26.0](https://github.com/kubedb/mongodb/releases/tag/v0.26.0) + +- [9e52ecf2](https://github.com/kubedb/mongodb/commit/9e52ecf2) Prepare for release v0.26.0 (#545) +- [987a14ba](https://github.com/kubedb/mongodb/commit/987a14ba) Replace Mongos bootstrap container with postStart hook (#542) +- [8f2437cf](https://github.com/kubedb/mongodb/commit/8f2437cf) Use --timeout=24h for e2e tests +- [41c3a6e9](https://github.com/kubedb/mongodb/commit/41c3a6e9) Rename ref flag to rest for e2e workflows +- [e7e3203b](https://github.com/kubedb/mongodb/commit/e7e3203b) Customize installer ref for e2e workflows +- [d57e6f31](https://github.com/kubedb/mongodb/commit/d57e6f31) Issue license key for kubedb +- [bcb712e7](https://github.com/kubedb/mongodb/commit/bcb712e7) Cleanup CI +- [3bd0d6cc](https://github.com/kubedb/mongodb/commit/3bd0d6cc) Stop publishing to docker hub +- [1c5327b5](https://github.com/kubedb/mongodb/commit/1c5327b5) Update db versions for e2e tests +- [7a8e9b25](https://github.com/kubedb/mongodb/commit/7a8e9b25) Speed up e2e tests +- [471e3259](https://github.com/kubedb/mongodb/commit/471e3259) Use brew to install tools +- [71574901](https://github.com/kubedb/mongodb/commit/71574901) Use fircracker vms for e2e tests +- [220b1b14](https://github.com/kubedb/mongodb/commit/220b1b14) Update e2e workflow +- [5bf24d7b](https://github.com/kubedb/mongodb/commit/5bf24d7b) Use ghcr.io for appscode/golang-dev (#541) +- [553dc5b5](https://github.com/kubedb/mongodb/commit/553dc5b5) Dynamically select runner type +- [0e94ca5a](https://github.com/kubedb/mongodb/commit/0e94ca5a) Update workflows (Go 1.20, k8s 1.26) (#539) +- [c8858e12](https://github.com/kubedb/mongodb/commit/c8858e12) Update wrokflows (Go 1.20, k8s 1.26) (#538) +- [b9e36634](https://github.com/kubedb/mongodb/commit/b9e36634) Test against Kubernetes 1.26.0 (#537) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.26.0](https://github.com/kubedb/mysql/releases/tag/v0.26.0) + +- [f4897ade](https://github.com/kubedb/mysql/commit/f4897ade) Prepare for release v0.26.0 (#530) +- [942f3675](https://github.com/kubedb/mysql/commit/942f3675) Add MySQL-5 MaxLen Check (#528) +- [d3dcd00e](https://github.com/kubedb/mysql/commit/d3dcd00e) Update e2e workflows +- [ec1ee2a3](https://github.com/kubedb/mysql/commit/ec1ee2a3) Cleanup CI +- [9330ee56](https://github.com/kubedb/mysql/commit/9330ee56) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [4382bb9a](https://github.com/kubedb/mysql/commit/4382bb9a) Use ghcr.io for appscode/golang-dev (#527) +- [dbeadab3](https://github.com/kubedb/mysql/commit/dbeadab3) Dynamically select runner type +- 
[a693489b](https://github.com/kubedb/mysql/commit/a693489b) Update workflows (Go 1.20, k8s 1.26) (#526) +- [67c8a8f0](https://github.com/kubedb/mysql/commit/67c8a8f0) Update wrokflows (Go 1.20, k8s 1.26) (#525) +- [4bd25e73](https://github.com/kubedb/mysql/commit/4bd25e73) Test against Kubernetes 1.26.0 (#524) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.11.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.11.0) + +- [16c77c8](https://github.com/kubedb/mysql-coordinator/commit/16c77c8) Prepare for release v0.11.0 (#76) +- [50ad81b](https://github.com/kubedb/mysql-coordinator/commit/50ad81b) Cleanup CI +- [b920384](https://github.com/kubedb/mysql-coordinator/commit/b920384) Use ghcr.io for appscode/golang-dev (#75) +- [f1de5ed](https://github.com/kubedb/mysql-coordinator/commit/f1de5ed) Dynamically select runner type +- [dc31944](https://github.com/kubedb/mysql-coordinator/commit/dc31944) Update workflows (Go 1.20, k8s 1.26) (#74) +- [244154d](https://github.com/kubedb/mysql-coordinator/commit/244154d) Update workflows (Go 1.20, k8s 1.26) (#73) +- [505ecfa](https://github.com/kubedb/mysql-coordinator/commit/505ecfa) Test against Kubernetes 1.26.0 (#72) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.11.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.11.0) + +- [710fc9f](https://github.com/kubedb/mysql-router-init/commit/710fc9f) Cleanup CI +- [2fe6586](https://github.com/kubedb/mysql-router-init/commit/2fe6586) Use ghcr.io for appscode/golang-dev (#34) +- [989bb29](https://github.com/kubedb/mysql-router-init/commit/989bb29) Dynamically select runner type +- [2d00c02](https://github.com/kubedb/mysql-router-init/commit/2d00c02) Update workflows (Go 1.20, k8s 1.26) (#33) +- [1a70e0c](https://github.com/kubedb/mysql-router-init/commit/1a70e0c) Update wrokflows (Go 1.20, k8s 1.26) (#32) +- [a68b30e](https://github.com/kubedb/mysql-router-init/commit/a68b30e) Test against Kubernetes 1.26.0 (#31) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.20.0](https://github.com/kubedb/ops-manager/releases/tag/v0.20.0) + +- [e1e3a251](https://github.com/kubedb/ops-manager/commit/e1e3a251) Prepare for release v0.20.0 (#434) +- [972654d8](https://github.com/kubedb/ops-manager/commit/972654d8) Rename UpgradeConstraints to UpdateConstrints in catalogs (#432) +- [f2a545b2](https://github.com/kubedb/ops-manager/commit/f2a545b2) Fix mongodb Upgrade (#431) +- [ce8027f0](https://github.com/kubedb/ops-manager/commit/ce8027f0) Cleanup CI +- [64bef08d](https://github.com/kubedb/ops-manager/commit/64bef08d) Add cve report to version upgrade Recommendation (#395) +- [34a8838e](https://github.com/kubedb/ops-manager/commit/34a8838e) Use ghcr.io for appscode/golang-dev (#430) +- [fcd0704d](https://github.com/kubedb/ops-manager/commit/fcd0704d) Use Redis and Sentinel Client from db-client-go (#429) +- [55c6e16f](https://github.com/kubedb/ops-manager/commit/55c6e16f) Use UpdateVersion instead of Upgrade (#427) +- [51554040](https://github.com/kubedb/ops-manager/commit/51554040) Auto detect runs-on label (#428) +- [a41c70fb](https://github.com/kubedb/ops-manager/commit/a41c70fb) Use self-hosted runners +- [bb60637d](https://github.com/kubedb/ops-manager/commit/bb60637d) Add Postgres UpdateVersion support (#425) +- [ac890091](https://github.com/kubedb/ops-manager/commit/ac890091) Update workflows (Go 1.20, k8s 1.26) (#424) + + + +## 
[kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.20.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.20.0) + +- [0783793f](https://github.com/kubedb/percona-xtradb/commit/0783793f) Prepare for release v0.20.0 (#307) +- [281c5eac](https://github.com/kubedb/percona-xtradb/commit/281c5eac) Update e2e workflow +- [d16efe09](https://github.com/kubedb/percona-xtradb/commit/d16efe09) Cleanup CI +- [f533248c](https://github.com/kubedb/percona-xtradb/commit/f533248c) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [c6e08088](https://github.com/kubedb/percona-xtradb/commit/c6e08088) Use ghcr.io for appscode/golang-dev (#305) +- [227526f0](https://github.com/kubedb/percona-xtradb/commit/227526f0) Dynamically select runner type +- [cd6321b2](https://github.com/kubedb/percona-xtradb/commit/cd6321b2) Update workflows (Go 1.20, k8s 1.26) (#304) +- [9840d83a](https://github.com/kubedb/percona-xtradb/commit/9840d83a) Update wrokflows (Go 1.20, k8s 1.26) (#303) +- [33c6116b](https://github.com/kubedb/percona-xtradb/commit/33c6116b) Test against Kubernetes 1.26.0 (#302) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.6.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.6.0) + +- [cc06348](https://github.com/kubedb/percona-xtradb-coordinator/commit/cc06348) Prepare for release v0.6.0 (#35) +- [566cdac](https://github.com/kubedb/percona-xtradb-coordinator/commit/566cdac) Cleanup CI +- [82bab80](https://github.com/kubedb/percona-xtradb-coordinator/commit/82bab80) Use ghcr.io for appscode/golang-dev (#34) +- [2e9c1f5](https://github.com/kubedb/percona-xtradb-coordinator/commit/2e9c1f5) Dynamically select runner type +- [b21a83d](https://github.com/kubedb/percona-xtradb-coordinator/commit/b21a83d) Update workflows (Go 1.20, k8s 1.26) (#33) +- [a55967f](https://github.com/kubedb/percona-xtradb-coordinator/commit/a55967f) Update wrokflows (Go 1.20, k8s 1.26) (#32) +- [a8eaa03](https://github.com/kubedb/percona-xtradb-coordinator/commit/a8eaa03) Test against Kubernetes 1.26.0 (#31) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.17.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.17.0) + +- [c958b2b6](https://github.com/kubedb/pg-coordinator/commit/c958b2b6) Prepare for release v0.17.0 (#118) +- [8faaf376](https://github.com/kubedb/pg-coordinator/commit/8faaf376) Cleanup CI +- [cbc12702](https://github.com/kubedb/pg-coordinator/commit/cbc12702) Use ghcr.io for appscode/golang-dev (#117) +- [cbfef2aa](https://github.com/kubedb/pg-coordinator/commit/cbfef2aa) Dynamically select runner type +- [57f2ad58](https://github.com/kubedb/pg-coordinator/commit/57f2ad58) Update workflows (Go 1.20, k8s 1.26) (#116) +- [fb81176a](https://github.com/kubedb/pg-coordinator/commit/fb81176a) Test against Kubernetes 1.26.0 (#114) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.20.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.20.0) + +- [f3f89b3e](https://github.com/kubedb/pgbouncer/commit/f3f89b3e) Prepare for release v0.20.0 (#273) +- [7b1391e1](https://github.com/kubedb/pgbouncer/commit/7b1391e1) Update e2e workflows +- [3ae31397](https://github.com/kubedb/pgbouncer/commit/3ae31397) Cleanup CI +- [e8fe48b3](https://github.com/kubedb/pgbouncer/commit/e8fe48b3) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- 
[153effe0](https://github.com/kubedb/pgbouncer/commit/153effe0) Use ghcr.io for appscode/golang-dev (#271) +- [d141e211](https://github.com/kubedb/pgbouncer/commit/d141e211) Dynamically select runner type + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.33.0](https://github.com/kubedb/postgres/releases/tag/v0.33.0) + +- [c95f2b59](https://github.com/kubedb/postgres/commit/c95f2b59d) Prepare for release v0.33.0 (#636) +- [8d53d60a](https://github.com/kubedb/postgres/commit/8d53d60a2) Update e2e workflows +- [fa31d6e3](https://github.com/kubedb/postgres/commit/fa31d6e33) Cleanup CI +- [40cf94c4](https://github.com/kubedb/postgres/commit/40cf94c4f) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [78b13fe0](https://github.com/kubedb/postgres/commit/78b13fe00) Use ghcr.io for appscode/golang-dev (#634) +- [c5ca8a99](https://github.com/kubedb/postgres/commit/c5ca8a99d) Dynamically select runner type +- [6005fce1](https://github.com/kubedb/postgres/commit/6005fce14) Update workflows (Go 1.20, k8s 1.26) (#633) +- [26751826](https://github.com/kubedb/postgres/commit/267518268) Update wrokflows (Go 1.20, k8s 1.26) (#632) +- [aad6863b](https://github.com/kubedb/postgres/commit/aad6863b7) Test against Kubernetes 1.26.0 (#631) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.33.0](https://github.com/kubedb/provisioner/releases/tag/v0.33.0) + +- [9e5ad46a](https://github.com/kubedb/provisioner/commit/9e5ad46a1) Prepare for release v0.33.0 (#44) +- [e1a68944](https://github.com/kubedb/provisioner/commit/e1a689443) Update e2e workflow +- [0c177001](https://github.com/kubedb/provisioner/commit/0c1770015) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [f30f3b60](https://github.com/kubedb/provisioner/commit/f30f3b60c) Use ghcr.io for appscode/golang-dev (#43) +- [1ef0787b](https://github.com/kubedb/provisioner/commit/1ef0787b8) Dynamically select runner type + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.20.0](https://github.com/kubedb/proxysql/releases/tag/v0.20.0) + +- [b49d3af7](https://github.com/kubedb/proxysql/commit/b49d3af7) Prepare for release v0.20.0 (#288) +- [61044810](https://github.com/kubedb/proxysql/commit/61044810) Update e2e workflow +- [32ddb10e](https://github.com/kubedb/proxysql/commit/32ddb10e) Cleanup CI +- [2f16bec6](https://github.com/kubedb/proxysql/commit/2f16bec6) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [0724ef23](https://github.com/kubedb/proxysql/commit/0724ef23) Use ghcr.io for appscode/golang-dev (#287) +- [73541f5c](https://github.com/kubedb/proxysql/commit/73541f5c) Dynamically select runner type +- [b28209ab](https://github.com/kubedb/proxysql/commit/b28209ab) Update workflows (Go 1.20, k8s 1.26) (#286) +- [7f682fe8](https://github.com/kubedb/proxysql/commit/7f682fe8) Update wrokflows (Go 1.20, k8s 1.26) (#285) +- [7daac3a4](https://github.com/kubedb/proxysql/commit/7daac3a4) Test against Kubernetes 1.26.0 (#284) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.26.0](https://github.com/kubedb/redis/releases/tag/v0.26.0) + +- [a52ff8e8](https://github.com/kubedb/redis/commit/a52ff8e8) Prepare for release v0.26.0 (#458) +- [f6b74025](https://github.com/kubedb/redis/commit/f6b74025) Update e2e workflow +- [7799c78b](https://github.com/kubedb/redis/commit/7799c78b) Cleanup CI +- 
[a5abe6d2](https://github.com/kubedb/redis/commit/a5abe6d2) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [b4a13c26](https://github.com/kubedb/redis/commit/b4a13c26) Use ghcr.io for appscode/golang-dev (#456) +- [a6df02d8](https://github.com/kubedb/redis/commit/a6df02d8) Dynamically select runner type +- [11d13a42](https://github.com/kubedb/redis/commit/11d13a42) Update workflows (Go 1.20, k8s 1.26) (#455) +- [2366e2db](https://github.com/kubedb/redis/commit/2366e2db) Update workflows (Go 1.20, k8s 1.26) (#454) +- [4ffdad8b](https://github.com/kubedb/redis/commit/4ffdad8b) Test against Kubernetes 1.26.0 (#453) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.12.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.12.0) + +- [d04d7c3](https://github.com/kubedb/redis-coordinator/commit/d04d7c3) Prepare for release v0.12.0 (#68) +- [a28d24b](https://github.com/kubedb/redis-coordinator/commit/a28d24b) Cleanup CI +- [79ee2a2](https://github.com/kubedb/redis-coordinator/commit/79ee2a2) Use ghcr.io for appscode/golang-dev (#67) +- [45e832e](https://github.com/kubedb/redis-coordinator/commit/45e832e) Dynamically select runner type +- [9d96b05](https://github.com/kubedb/redis-coordinator/commit/9d96b05) Update workflows (Go 1.20, k8s 1.26) (#66) +- [983ef3b](https://github.com/kubedb/redis-coordinator/commit/983ef3b) Update workflows (Go 1.20, k8s 1.26) (#65) +- [e12472f](https://github.com/kubedb/redis-coordinator/commit/e12472f) Test against Kubernetes 1.26.0 (#64) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.20.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.20.0) + +- [181606ec](https://github.com/kubedb/replication-mode-detector/commit/181606ec) Prepare for release v0.20.0 (#230) +- [90e75258](https://github.com/kubedb/replication-mode-detector/commit/90e75258) Cleanup CI +- [9b1ffb20](https://github.com/kubedb/replication-mode-detector/commit/9b1ffb20) Use ghcr.io for appscode/golang-dev (#229) +- [83e4656c](https://github.com/kubedb/replication-mode-detector/commit/83e4656c) Dynamically select runner type +- [160bd418](https://github.com/kubedb/replication-mode-detector/commit/160bd418) Update workflows (Go 1.20, k8s 1.26) (#228) +- [306b14ac](https://github.com/kubedb/replication-mode-detector/commit/306b14ac) Test against Kubernetes 1.26.0 (#227) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.9.0](https://github.com/kubedb/schema-manager/releases/tag/v0.9.0) + +- [e0f28fd4](https://github.com/kubedb/schema-manager/commit/e0f28fd4) Prepare for release v0.9.0 (#70) +- [d633359a](https://github.com/kubedb/schema-manager/commit/d633359a) Cleanup CI +- [5386ed59](https://github.com/kubedb/schema-manager/commit/5386ed59) Use ghcr.io for appscode/golang-dev (#69) +- [8e517e3d](https://github.com/kubedb/schema-manager/commit/8e517e3d) Dynamically select runner type +- [12abe459](https://github.com/kubedb/schema-manager/commit/12abe459) Update workflows (Go 1.20, k8s 1.26) (#68) +- [6b7412b4](https://github.com/kubedb/schema-manager/commit/6b7412b4) Update workflows (Go 1.20, k8s 1.26) (#67) +- [b1734e39](https://github.com/kubedb/schema-manager/commit/b1734e39) Test against Kubernetes 1.26.0 (#66) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.18.0](https://github.com/kubedb/tests/releases/tag/v0.18.0) + +-
[655e8669](https://github.com/kubedb/tests/commit/655e8669) Prepare for release v0.18.0 (#225) +- [27b5548f](https://github.com/kubedb/tests/commit/27b5548f) Use UpdateVersion instead of Upgrade (#224) +- [3a57f668](https://github.com/kubedb/tests/commit/3a57f668) Fix mongo e2e test (#223) +- [76a8abdd](https://github.com/kubedb/tests/commit/76a8abdd) Update deps +- [0bb20a34](https://github.com/kubedb/tests/commit/0bb20a34) Replace deprecated CurrentGinkgoTestDescription() with CurrentSpecReport() +- [4adb3f61](https://github.com/kubedb/tests/commit/4adb3f61) Cleanup CI +- [c3b1a205](https://github.com/kubedb/tests/commit/c3b1a205) Add MongoDB Hidden-node (#205) +- [1d7f62bb](https://github.com/kubedb/tests/commit/1d7f62bb) Use ghcr.io for appscode/golang-dev (#222) +- [bf3df4dc](https://github.com/kubedb/tests/commit/bf3df4dc) Dynamically select runner type +- [1a5a1d04](https://github.com/kubedb/tests/commit/1a5a1d04) Update workflows (Go 1.20, k8s 1.26) (#221) +- [016687ad](https://github.com/kubedb/tests/commit/016687ad) Update workflows (Go 1.20, k8s 1.26) (#220) +- [3155a749](https://github.com/kubedb/tests/commit/3155a749) Test against Kubernetes 1.26.0 (#219) +- [a58933cd](https://github.com/kubedb/tests/commit/a58933cd) Fix typo in MySQL tests (#218) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.9.0](https://github.com/kubedb/ui-server/releases/tag/v0.9.0) + +- [49a09f28](https://github.com/kubedb/ui-server/commit/49a09f28) Prepare for release v0.9.0 (#73) +- [eaba6a4d](https://github.com/kubedb/ui-server/commit/eaba6a4d) Cleanup CI +- [779d75a5](https://github.com/kubedb/ui-server/commit/779d75a5) Use ghcr.io for appscode/golang-dev (#72) +- [9a94bf21](https://github.com/kubedb/ui-server/commit/9a94bf21) Dynamically select runner type +- [3c053ff1](https://github.com/kubedb/ui-server/commit/3c053ff1) Update workflows (Go 1.20, k8s 1.26) (#71) +- [7adf8e99](https://github.com/kubedb/ui-server/commit/7adf8e99) Update workflows (Go 1.20, k8s 1.26) (#70) +- [97704312](https://github.com/kubedb/ui-server/commit/97704312) Test against Kubernetes 1.26.0 (#69) +- [8fa7286a](https://github.com/kubedb/ui-server/commit/8fa7286a) Add context to Redis Client Builder (#67) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.9.0](https://github.com/kubedb/webhook-server/releases/tag/v0.9.0) + +- [a50a54aa](https://github.com/kubedb/webhook-server/commit/a50a54aa) Prepare for release v0.9.0 (#57) +- [34807440](https://github.com/kubedb/webhook-server/commit/34807440) Update e2e workflow +- [38c71e46](https://github.com/kubedb/webhook-server/commit/38c71e46) Update workflows - Stop publishing to docker hub - Enable e2e tests - Use homebrew to install tools +- [1b32b482](https://github.com/kubedb/webhook-server/commit/1b32b482) Use ghcr.io for appscode/golang-dev (#56) +- [6a15e2e4](https://github.com/kubedb/webhook-server/commit/6a15e2e4) Dynamically select runner type +- [4d08d51b](https://github.com/kubedb/webhook-server/commit/4d08d51b) Update workflows (Go 1.20, k8s 1.26) (#55) +- [3ea6558a](https://github.com/kubedb/webhook-server/commit/3ea6558a) Update workflows (Go 1.20, k8s 1.26) (#54) +- [b0edce4a](https://github.com/kubedb/webhook-server/commit/b0edce4a) Test against Kubernetes 1.26.0 (#53) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.06.13-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2023.06.13-rc.0.md new file mode 100644 index 0000000000..b6ad382ca3 --- /dev/null +++
b/content/docs/v2024.1.31/CHANGELOG-v2023.06.13-rc.0.md @@ -0,0 +1,371 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.06.13-rc.0 + name: Changelog-v2023.06.13-rc.0 + parent: welcome + weight: 20230613 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.06.13-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.06.13-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.06.13-rc.0 (2023-06-13) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.34.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.34.0-rc.0) + +- [19397df9](https://github.com/kubedb/apimachinery/commit/19397df9) Auto detect mode for ProxySQL (#1043) +- [892c0d78](https://github.com/kubedb/apimachinery/commit/892c0d78) Move the schema-doubleOptIn helpers to be used as common utility (#1044) +- [ef5eaff4](https://github.com/kubedb/apimachinery/commit/ef5eaff4) Update license verifier (#1042) +- [70ebcf52](https://github.com/kubedb/apimachinery/commit/70ebcf52) Add `enableServiceLinks` to PodSpec (#1039) +- [a92b36c3](https://github.com/kubedb/apimachinery/commit/a92b36c3) Test against K8s 1.27.1 (#1038) +- [81c3f83c](https://github.com/kubedb/apimachinery/commit/81c3f83c) Test against K8s 1.27.0 (#1037) +- [bb1d4ff3](https://github.com/kubedb/apimachinery/commit/bb1d4ff3) Fix linter + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.19.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.19.0-rc.0) + +- [d0116a87](https://github.com/kubedb/autoscaler/commit/d0116a87) Prepare for release v0.19.0-rc.0 (#146) +- [7b5d01a3](https://github.com/kubedb/autoscaler/commit/7b5d01a3) Update license verifier (#145) +- [1987e74c](https://github.com/kubedb/autoscaler/commit/1987e74c) Update license verifier (#144) +- [5d3a8fc6](https://github.com/kubedb/autoscaler/commit/5d3a8fc6) Add enableServiceLinks to PodSpec (#143) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.34.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.34.0-rc.0) + +- [166b7b23](https://github.com/kubedb/cli/commit/166b7b23) Prepare for release v0.34.0-rc.0 (#707) +- [2efbea85](https://github.com/kubedb/cli/commit/2efbea85) Update license verifier (#706) +- [b0d86409](https://github.com/kubedb/cli/commit/b0d86409) Add enableServiceLinks to PodSpec (#705) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.10.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.10.0-rc.0) + +- [2dd3f01](https://github.com/kubedb/dashboard/commit/2dd3f01) Prepare for release v0.10.0-rc.0 (#74) +- [7f4dce3](https://github.com/kubedb/dashboard/commit/7f4dce3) Re-Configure pod template fields (#72) +- [d6780b2](https://github.com/kubedb/dashboard/commit/d6780b2) Update license verifier (#73) +- [4bc57b5](https://github.com/kubedb/dashboard/commit/4bc57b5) Add enableServiceLinks to PodSpec (#71) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.34.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.34.0-rc.0) + +- [805c7274](https://github.com/kubedb/elasticsearch/commit/805c72747) Prepare for release v0.34.0-rc.0 (#643) +- 
[dfeec431](https://github.com/kubedb/elasticsearch/commit/dfeec4312) Use cached client (#639) +- [605d3acc](https://github.com/kubedb/elasticsearch/commit/605d3acce) Re-Configure pod template fields (#637) +- [1d060eff](https://github.com/kubedb/elasticsearch/commit/1d060eff8) Update docker/distribution (#642) +- [7dcc3a7a](https://github.com/kubedb/elasticsearch/commit/7dcc3a7af) Update license verifier (#641) +- [fd59f56c](https://github.com/kubedb/elasticsearch/commit/fd59f56c6) Update license verifier (#640) +- [5a8b386f](https://github.com/kubedb/elasticsearch/commit/5a8b386f9) Add enableServiceLinks to PodSpec (#636) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.06.13-rc.0](https://github.com/kubedb/installer/releases/tag/v2023.06.13-rc.0) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.5.0-rc.0](https://github.com/kubedb/kafka/releases/tag/v0.5.0-rc.0) + +- [d523a2e](https://github.com/kubedb/kafka/commit/d523a2e) Prepare for release v0.5.0-rc.0 (#29) +- [640213a](https://github.com/kubedb/kafka/commit/640213a) Re-Configure pod template fields (#24) +- [a772b4c](https://github.com/kubedb/kafka/commit/a772b4c) Update docker/distribution (#27) +- [b643c9b](https://github.com/kubedb/kafka/commit/b643c9b) Update license verifier (#26) +- [f75f877](https://github.com/kubedb/kafka/commit/f75f877) Update license verifier (#25) +- [4985a5b](https://github.com/kubedb/kafka/commit/4985a5b) Add enableServiceLinks to PodSpec (#23) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.18.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.18.0-rc.0) + +- [710e8863](https://github.com/kubedb/mariadb/commit/710e8863) Prepare for release v0.18.0-rc.0 (#214) +- [013cbda6](https://github.com/kubedb/mariadb/commit/013cbda6) Used cached client (#209) +- [17475079](https://github.com/kubedb/mariadb/commit/17475079) Configure podtemplate fields (#213) +- [059ba1e2](https://github.com/kubedb/mariadb/commit/059ba1e2) Update docker/distribution (#212) +- [5cb60c15](https://github.com/kubedb/mariadb/commit/5cb60c15) Update license verifier (#211) +- [b04b55ea](https://github.com/kubedb/mariadb/commit/b04b55ea) Update license verifier (#210) +- [4953dd26](https://github.com/kubedb/mariadb/commit/4953dd26) Add enableServiceLinks to PodSpec (#208) +- [f953bfb4](https://github.com/kubedb/mariadb/commit/f953bfb4) Test against K8s 1.27.0 (#207) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.14.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.14.0-rc.0) + +- [1ad0eab6](https://github.com/kubedb/mariadb-coordinator/commit/1ad0eab6) Prepare for release v0.14.0-rc.0 (#84) +- [6e2fcb39](https://github.com/kubedb/mariadb-coordinator/commit/6e2fcb39) Read pass from env (#81) +- [0b3ae6b8](https://github.com/kubedb/mariadb-coordinator/commit/0b3ae6b8) Update license verifier (#83) +- [ca4b5b1d](https://github.com/kubedb/mariadb-coordinator/commit/ca4b5b1d) Update license verifier (#82) +- [5bc15939](https://github.com/kubedb/mariadb-coordinator/commit/5bc15939) Add enableServiceLinks to PodSpec (#80) +- [e3a4ed91](https://github.com/kubedb/mariadb-coordinator/commit/e3a4ed91) Test against K8s 1.27.0 (#79) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.27.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.27.0-rc.0) + +- [8584ad5d](https://github.com/kubedb/memcached/commit/8584ad5d) Prepare for release v0.27.0-rc.0 (#395) +- 
[d91bd11f](https://github.com/kubedb/memcached/commit/d91bd11f) Update license verifier (#394) +- [44765061](https://github.com/kubedb/memcached/commit/44765061) Update license verifier (#393) +- [d4be3ee0](https://github.com/kubedb/memcached/commit/d4be3ee0) Add enableServiceLinks to PodSpec (#392) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.27.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.27.0-rc.0) + +- [39f1936f](https://github.com/kubedb/mongodb/commit/39f1936f) Prepare for release v0.27.0-rc.0 (#553) +- [5f12f078](https://github.com/kubedb/mongodb/commit/5f12f078) Use cached client (#550) +- [2130dd52](https://github.com/kubedb/mongodb/commit/2130dd52) Update docker/distribution (#552) +- [0c3ee218](https://github.com/kubedb/mongodb/commit/0c3ee218) Update license verifier (#551) +- [70367dcd](https://github.com/kubedb/mongodb/commit/70367dcd) Configure pod template fields (#548) +- [61a4b51a](https://github.com/kubedb/mongodb/commit/61a4b51a) Add enableServiceLinks to PodSpec (#546) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.27.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.27.0-rc.0) + +- [6b41e393](https://github.com/kubedb/mysql/commit/6b41e393) Prepare for release v0.27.0-rc.0 (#538) +- [9b3a2cf6](https://github.com/kubedb/mysql/commit/9b3a2cf6) Used cached client (#535) +- [fefe9865](https://github.com/kubedb/mysql/commit/fefe9865) Configure pod template fields (#534) +- [d30d0aa9](https://github.com/kubedb/mysql/commit/d30d0aa9) Update docker/distribution (#537) +- [da13390d](https://github.com/kubedb/mysql/commit/da13390d) Update license verifier (#536) +- [bb928789](https://github.com/kubedb/mysql/commit/bb928789) Add enableServiceLinks to PodSpec (#532) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.12.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.12.0-rc.0) + +- [0a4ecdf](https://github.com/kubedb/mysql-coordinator/commit/0a4ecdf) Prepare for release v0.12.0-rc.0 (#80) +- [8a7f31c](https://github.com/kubedb/mysql-coordinator/commit/8a7f31c) Read password from env (#79) +- [d1d8106](https://github.com/kubedb/mysql-coordinator/commit/d1d8106) Add enableServiceLinks to PodSpec (#78) +- [3e67a4f](https://github.com/kubedb/mysql-coordinator/commit/3e67a4f) Test against K8s 1.27.0 (#77) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.12.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.12.0-rc.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.21.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.21.0-rc.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.21.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.21.0-rc.0) + +- [cb1c8481](https://github.com/kubedb/percona-xtradb/commit/cb1c8481) Prepare for release v0.21.0-rc.0 (#315) +- [da3fbc76](https://github.com/kubedb/percona-xtradb/commit/da3fbc76) Used cached client (#311) +- [39144eb2](https://github.com/kubedb/percona-xtradb/commit/39144eb2) Configure podtemplate fields (#314) +- [d85284ea](https://github.com/kubedb/percona-xtradb/commit/d85284ea) Update docker/distribution (#313) +- [b3a5c6f3](https://github.com/kubedb/percona-xtradb/commit/b3a5c6f3) Update license verifier (#312) +- [445c4787](https://github.com/kubedb/percona-xtradb/commit/445c4787) Add enableServiceLinks to PodSpec (#309) +-
[c2c1451e](https://github.com/kubedb/percona-xtradb/commit/c2c1451e) Test against K8s 1.27.0 (#308) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.7.0-rc.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.7.0-rc.0) + +- [cfd5919](https://github.com/kubedb/percona-xtradb-coordinator/commit/cfd5919) Prepare for release v0.7.0-rc.0 (#41) +- [fa25aaa](https://github.com/kubedb/percona-xtradb-coordinator/commit/fa25aaa) Read pass from env (#38) +- [3322225](https://github.com/kubedb/percona-xtradb-coordinator/commit/3322225) Update license verifier (#40) +- [f44d3ba](https://github.com/kubedb/percona-xtradb-coordinator/commit/f44d3ba) Update license verifier (#39) +- [bbbabf7](https://github.com/kubedb/percona-xtradb-coordinator/commit/bbbabf7) Add enableServiceLinks to PodSpec (#37) +- [052fb88](https://github.com/kubedb/percona-xtradb-coordinator/commit/052fb88) Test against K8s 1.27.0 (#36) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.18.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.18.0-rc.0) + +- [dbc0ab09](https://github.com/kubedb/pg-coordinator/commit/dbc0ab09) Prepare for release v0.18.0-rc.0 (#123) +- [8a17dbe1](https://github.com/kubedb/pg-coordinator/commit/8a17dbe1) Update license verifier (#122) +- [a86af91b](https://github.com/kubedb/pg-coordinator/commit/a86af91b) Update license verifier (#121) +- [1a538695](https://github.com/kubedb/pg-coordinator/commit/1a538695) Add enableServiceLinks to PodSpec (#120) +- [f680dc7b](https://github.com/kubedb/pg-coordinator/commit/f680dc7b) Test against K8s 1.27.0 (#119) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.21.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.21.0-rc.0) + +- [510a0100](https://github.com/kubedb/pgbouncer/commit/510a0100) Prepare for release v0.21.0-rc.0 (#280) +- [1b5a8f2b](https://github.com/kubedb/pgbouncer/commit/1b5a8f2b) Update docker/distribution (#279) +- [7da9582c](https://github.com/kubedb/pgbouncer/commit/7da9582c) Update license verifier (#278) +- [b056ed70](https://github.com/kubedb/pgbouncer/commit/b056ed70) Update license verifier (#277) +- [e4211b77](https://github.com/kubedb/pgbouncer/commit/e4211b77) Add enableServiceLinks to PodSpec (#275) +- [32d37f30](https://github.com/kubedb/pgbouncer/commit/32d37f30) Test against K8s 1.27.0 (#274) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.34.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.34.0-rc.0) + +- [03615529](https://github.com/kubedb/postgres/commit/036155296) Prepare for release v0.34.0-rc.0 (#645) +- [5b178fe3](https://github.com/kubedb/postgres/commit/5b178fe3c) Used cached client (#641) +- [5d4f949c](https://github.com/kubedb/postgres/commit/5d4f949cd) Update docker/distribution (#644) +- [0bd2d452](https://github.com/kubedb/postgres/commit/0bd2d452b) Configure pod template fields (#642) +- [7e18f135](https://github.com/kubedb/postgres/commit/7e18f1359) Update license verifier (#643) +- [09607de3](https://github.com/kubedb/postgres/commit/09607de3e) Add enableServiceLinks to PodSpec (#639) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.34.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.34.0-rc.0) + +- [5828a53a](https://github.com/kubedb/provisioner/commit/5828a53aa) Prepare for release v0.34.0-rc.0 (#49) +- [ddbfc75b](https://github.com/kubedb/provisioner/commit/ddbfc75bd) 
Update docker/distribution (#48) +- [1f2d4d5a](https://github.com/kubedb/provisioner/commit/1f2d4d5a6) Update license verifier (#47) +- [b2b71d00](https://github.com/kubedb/provisioner/commit/b2b71d00c) Add enableServiceLinks to PodSpec (#46) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.21.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.21.0-rc.0) + +- [55a2b71f](https://github.com/kubedb/proxysql/commit/55a2b71f) Prepare for release v0.21.0-rc.0 (#297) +- [3b813223](https://github.com/kubedb/proxysql/commit/3b813223) Used cached client (#291) +- [e657808c](https://github.com/kubedb/proxysql/commit/e657808c) Fix ProxySQL Backend Mode Issue (#296) +- [c8540a1a](https://github.com/kubedb/proxysql/commit/c8540a1a) Configure pod template fields (#295) +- [8586dcdf](https://github.com/kubedb/proxysql/commit/8586dcdf) Update docker/distribution (#294) +- [80122c5d](https://github.com/kubedb/proxysql/commit/80122c5d) Update license verifier (#292) +- [1de70c13](https://github.com/kubedb/proxysql/commit/1de70c13) Add enableServiceLinks to PodSpec (#290) +- [32525708](https://github.com/kubedb/proxysql/commit/32525708) Test against K8s 1.27.0 (#289) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.27.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.27.0-rc.0) + +- [8411d7d9](https://github.com/kubedb/redis/commit/8411d7d9) Prepare for release v0.27.0-rc.0 (#467) +- [f1f3e2c9](https://github.com/kubedb/redis/commit/f1f3e2c9) Use cached client (#463) +- [0884c60c](https://github.com/kubedb/redis/commit/0884c60c) Update docker/distribution (#466) +- [385691d6](https://github.com/kubedb/redis/commit/385691d6) Update license verifier (#465) +- [37fa806f](https://github.com/kubedb/redis/commit/37fa806f) Configure pod template fields (#464) +- [2993d8b7](https://github.com/kubedb/redis/commit/2993d8b7) Add enableServiceLinks to PodSpec (#461) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.13.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.13.0-rc.0) + +- [8b0ecaa](https://github.com/kubedb/redis-coordinator/commit/8b0ecaa) Prepare for release v0.13.0-rc.0 (#73) +- [9d42b96](https://github.com/kubedb/redis-coordinator/commit/9d42b96) Update license verifier (#72) +- [31a4f43](https://github.com/kubedb/redis-coordinator/commit/31a4f43) Update license verifier (#71) +- [61bc0cf](https://github.com/kubedb/redis-coordinator/commit/61bc0cf) Add enableServiceLinks to PodSpec (#70) +- [ca42ade](https://github.com/kubedb/redis-coordinator/commit/ca42ade) Test against K8s 1.27.0 (#69) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.21.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.21.0-rc.0) + +- [22364949](https://github.com/kubedb/replication-mode-detector/commit/22364949) Prepare for release v0.21.0-rc.0 (#235) +- [13f4af9f](https://github.com/kubedb/replication-mode-detector/commit/13f4af9f) Update license verifier (#234) +- [49187c88](https://github.com/kubedb/replication-mode-detector/commit/49187c88) Update license verifier (#233) +- [cc8e62a7](https://github.com/kubedb/replication-mode-detector/commit/cc8e62a7) Test against K8s 1.27.0 (#231) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.10.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.10.0-rc.0) + +- [3bf16dac](https://github.com/kubedb/schema-manager/commit/3bf16dac) Prepare
for release v0.10.0-rc.0 (#75) +- [a80da62a](https://github.com/kubedb/schema-manager/commit/a80da62a) Update license verifier (#74) +- [0761e223](https://github.com/kubedb/schema-manager/commit/0761e223) Update license verifier (#73) +- [83fa3191](https://github.com/kubedb/schema-manager/commit/83fa3191) Add enableServiceLinks to PodSpec (#72) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.19.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.19.0-rc.0) + +- [066c8cf9](https://github.com/kubedb/tests/commit/066c8cf9) Prepare for release v0.19.0-rc.0 (#227) +- [e49d6470](https://github.com/kubedb/tests/commit/e49d6470) Update license verifier (#226) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.10.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.10.0-rc.0) + +- [f3cee800](https://github.com/kubedb/ui-server/commit/f3cee800) Prepare for release v0.10.0-rc.0 (#80) +- [86d37355](https://github.com/kubedb/ui-server/commit/86d37355) Update docker/distribution (#79) +- [5f060b7e](https://github.com/kubedb/ui-server/commit/5f060b7e) Update license verifier (#77) +- [6cf6d5ab](https://github.com/kubedb/ui-server/commit/6cf6d5ab) Close mongo client connection with defer (#76) +- [312e5210](https://github.com/kubedb/ui-server/commit/312e5210) Add enableServiceLinks to PodSpec (#75) +- [662808cb](https://github.com/kubedb/ui-server/commit/662808cb) Test against K8s 1.27.0 (#74) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.10.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.10.0-rc.0) + +- [6673b381](https://github.com/kubedb/webhook-server/commit/6673b381) Prepare for release v0.10.0-rc.0 (#60) +- [c502edd3](https://github.com/kubedb/webhook-server/commit/c502edd3) Add enableServiceLinks to PodSpec (#59) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.06.19.md b/content/docs/v2024.1.31/CHANGELOG-v2023.06.19.md new file mode 100644 index 0000000000..c13901127e --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.06.19.md @@ -0,0 +1,401 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.06.19 + name: Changelog-v2023.06.19 + parent: welcome + weight: 20230619 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.06.19/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.06.19/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.06.19 (2023-06-17) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.34.0](https://github.com/kubedb/apimachinery/releases/tag/v0.34.0) + +- [acd43ce5](https://github.com/kubedb/apimachinery/commit/acd43ce5) Update go.mod +- [19397df9](https://github.com/kubedb/apimachinery/commit/19397df9) Auto detect mode for ProxySQL (#1043) +- [892c0d78](https://github.com/kubedb/apimachinery/commit/892c0d78) Move the schema-doubleOptIn helpers to be used as common utility (#1044) +- [ef5eaff4](https://github.com/kubedb/apimachinery/commit/ef5eaff4) Update license verifier (#1042) +- [70ebcf52](https://github.com/kubedb/apimachinery/commit/70ebcf52) Add `enableServiceLinks` to PodSpec (#1039) +- [a92b36c3](https://github.com/kubedb/apimachinery/commit/a92b36c3) Test against K8s 1.27.1 
(#1038) +- [81c3f83c](https://github.com/kubedb/apimachinery/commit/81c3f83c) Test against K8s 1.27.0 (#1037) +- [bb1d4ff3](https://github.com/kubedb/apimachinery/commit/bb1d4ff3) Fix linter + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.19.0](https://github.com/kubedb/autoscaler/releases/tag/v0.19.0) + +- [d734810c](https://github.com/kubedb/autoscaler/commit/d734810c) Prepare for release v0.19.0 (#147) +- [d0116a87](https://github.com/kubedb/autoscaler/commit/d0116a87) Prepare for release v0.19.0-rc.0 (#146) +- [7b5d01a3](https://github.com/kubedb/autoscaler/commit/7b5d01a3) Update license verifier (#145) +- [1987e74c](https://github.com/kubedb/autoscaler/commit/1987e74c) Update license verifier (#144) +- [5d3a8fc6](https://github.com/kubedb/autoscaler/commit/5d3a8fc6) Add enableServiceLinks to PodSpec (#143) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.34.0](https://github.com/kubedb/cli/releases/tag/v0.34.0) + +- [981fb863](https://github.com/kubedb/cli/commit/981fb863) Prepare for release v0.34.0 (#708) +- [166b7b23](https://github.com/kubedb/cli/commit/166b7b23) Prepare for release v0.34.0-rc.0 (#707) +- [2efbea85](https://github.com/kubedb/cli/commit/2efbea85) Update license verifier (#706) +- [b0d86409](https://github.com/kubedb/cli/commit/b0d86409) Add enableServiceLinks to PodSpec (#705) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.10.0](https://github.com/kubedb/dashboard/releases/tag/v0.10.0) + +- [f7e4160](https://github.com/kubedb/dashboard/commit/f7e4160) Prepare for release v0.10.0 (#75) +- [2dd3f01](https://github.com/kubedb/dashboard/commit/2dd3f01) Prepare for release v0.10.0-rc.0 (#74) +- [7f4dce3](https://github.com/kubedb/dashboard/commit/7f4dce3) Re-Configure pod template fields (#72) +- [d6780b2](https://github.com/kubedb/dashboard/commit/d6780b2) Update license verifier (#73) +- [4bc57b5](https://github.com/kubedb/dashboard/commit/4bc57b5) Add enableServiceLinks to PodSpec (#71) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.34.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.34.0) + +- [0d237dff](https://github.com/kubedb/elasticsearch/commit/0d237dffa) Prepare for release v0.34.0 (#644) +- [805c7274](https://github.com/kubedb/elasticsearch/commit/805c72747) Prepare for release v0.34.0-rc.0 (#643) +- [dfeec431](https://github.com/kubedb/elasticsearch/commit/dfeec4312) Use cached client (#639) +- [605d3acc](https://github.com/kubedb/elasticsearch/commit/605d3acce) Re-Configure pod template fields (#637) +- [1d060eff](https://github.com/kubedb/elasticsearch/commit/1d060eff8) Update docker/distribution (#642) +- [7dcc3a7a](https://github.com/kubedb/elasticsearch/commit/7dcc3a7af) Update license verifier (#641) +- [fd59f56c](https://github.com/kubedb/elasticsearch/commit/fd59f56c6) Update license verifier (#640) +- [5a8b386f](https://github.com/kubedb/elasticsearch/commit/5a8b386f9) Add enableServiceLinks to PodSpec (#636) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.06.19](https://github.com/kubedb/installer/releases/tag/v2023.06.19) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.5.0](https://github.com/kubedb/kafka/releases/tag/v0.5.0) + +- [8617db3](https://github.com/kubedb/kafka/commit/8617db3) Prepare for release v0.5.0 (#30) +- [d523a2e](https://github.com/kubedb/kafka/commit/d523a2e) Prepare for release v0.5.0-rc.0 (#29) +- 
[640213a](https://github.com/kubedb/kafka/commit/640213a) Re-Configure pod template fields (#24) +- [a772b4c](https://github.com/kubedb/kafka/commit/a772b4c) Update docker/distribution (#27) +- [b643c9b](https://github.com/kubedb/kafka/commit/b643c9b) Update license verifier (#26) +- [f75f877](https://github.com/kubedb/kafka/commit/f75f877) Update license verifier (#25) +- [4985a5b](https://github.com/kubedb/kafka/commit/4985a5b) Add enableServiceLinks to PodSpec (#23) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.18.0](https://github.com/kubedb/mariadb/releases/tag/v0.18.0) + +- [159572c5](https://github.com/kubedb/mariadb/commit/159572c5) Prepare for release v0.18.0 (#215) +- [710e8863](https://github.com/kubedb/mariadb/commit/710e8863) Prepare for release v0.18.0-rc.0 (#214) +- [013cbda6](https://github.com/kubedb/mariadb/commit/013cbda6) Used cached client (#209) +- [17475079](https://github.com/kubedb/mariadb/commit/17475079) Configure podtemplate fields (#213) +- [059ba1e2](https://github.com/kubedb/mariadb/commit/059ba1e2) Update docker/distribution (#212) +- [5cb60c15](https://github.com/kubedb/mariadb/commit/5cb60c15) Update license verifier (#211) +- [b04b55ea](https://github.com/kubedb/mariadb/commit/b04b55ea) Update license verifier (#210) +- [4953dd26](https://github.com/kubedb/mariadb/commit/4953dd26) Add enableServiceLinks to PodSpec (#208) +- [f953bfb4](https://github.com/kubedb/mariadb/commit/f953bfb4) Test against K8s 1.27.0 (#207) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.14.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.14.0) + +- [303a817e](https://github.com/kubedb/mariadb-coordinator/commit/303a817e) Prepare for release v0.14.0 (#85) +- [1ad0eab6](https://github.com/kubedb/mariadb-coordinator/commit/1ad0eab6) Prepare for release v0.14.0-rc.0 (#84) +- [6e2fcb39](https://github.com/kubedb/mariadb-coordinator/commit/6e2fcb39) Read pass from env (#81) +- [0b3ae6b8](https://github.com/kubedb/mariadb-coordinator/commit/0b3ae6b8) Update license verifier (#83) +- [ca4b5b1d](https://github.com/kubedb/mariadb-coordinator/commit/ca4b5b1d) Update license verifier (#82) +- [5bc15939](https://github.com/kubedb/mariadb-coordinator/commit/5bc15939) Add enableServiceLinks to PodSpec (#80) +- [e3a4ed91](https://github.com/kubedb/mariadb-coordinator/commit/e3a4ed91) Test against K8s 1.27.0 (#79) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.27.0](https://github.com/kubedb/memcached/releases/tag/v0.27.0) + +- [5630eab4](https://github.com/kubedb/memcached/commit/5630eab4) Prepare for release v0.27.0 (#396) +- [8584ad5d](https://github.com/kubedb/memcached/commit/8584ad5d) Prepare for release v0.27.0-rc.0 (#395) +- [d91bd11f](https://github.com/kubedb/memcached/commit/d91bd11f) Update license verifier (#394) +- [44765061](https://github.com/kubedb/memcached/commit/44765061) Update license verifier (#393) +- [d4be3ee0](https://github.com/kubedb/memcached/commit/d4be3ee0) Add enableServiceLinks to PodSpec (#392) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.27.0](https://github.com/kubedb/mongodb/releases/tag/v0.27.0) + +- [352ac1f7](https://github.com/kubedb/mongodb/commit/352ac1f7) Prepare for release v0.27.0 (#554) +- [39f1936f](https://github.com/kubedb/mongodb/commit/39f1936f) Prepare for release v0.27.0-rc.0 (#553) +- [5f12f078](https://github.com/kubedb/mongodb/commit/5f12f078) Use cached client (#550) +- 
[2130dd52](https://github.com/kubedb/mongodb/commit/2130dd52) Update docker/distribution (#552) +- [0c3ee218](https://github.com/kubedb/mongodb/commit/0c3ee218) Update license verifier (#551) +- [70367dcd](https://github.com/kubedb/mongodb/commit/70367dcd) Configure pod template fields (#548) +- [61a4b51a](https://github.com/kubedb/mongodb/commit/61a4b51a) Add enableServiceLinks to PodSpec (#546) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.27.0](https://github.com/kubedb/mysql/releases/tag/v0.27.0) + +- [80b1c828](https://github.com/kubedb/mysql/commit/80b1c828) Prepare for release v0.27.0 (#539) +- [6b41e393](https://github.com/kubedb/mysql/commit/6b41e393) Prepare for release v0.27.0-rc.0 (#538) +- [9b3a2cf6](https://github.com/kubedb/mysql/commit/9b3a2cf6) Used cached client (#535) +- [fefe9865](https://github.com/kubedb/mysql/commit/fefe9865) Configure pod template fields (#534) +- [d30d0aa9](https://github.com/kubedb/mysql/commit/d30d0aa9) Update docker/distribution (#537) +- [da13390d](https://github.com/kubedb/mysql/commit/da13390d) Update license verifier (#536) +- [bb928789](https://github.com/kubedb/mysql/commit/bb928789) Add enableServiceLinks to PodSpec (#532) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.12.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.12.0) + +- [f2e53e8](https://github.com/kubedb/mysql-coordinator/commit/f2e53e8) Prepare for release v0.12.0 (#81) +- [0a4ecdf](https://github.com/kubedb/mysql-coordinator/commit/0a4ecdf) Prepare for release v0.12.0-rc.0 (#80) +- [8a7f31c](https://github.com/kubedb/mysql-coordinator/commit/8a7f31c) Read password from env (#79) +- [d1d8106](https://github.com/kubedb/mysql-coordinator/commit/d1d8106) Add enableServiceLinks to PodSpec (#78) +- [3e67a4f](https://github.com/kubedb/mysql-coordinator/commit/3e67a4f) Test against K8s 1.27.0 (#77) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.12.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.12.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.21.0](https://github.com/kubedb/ops-manager/releases/tag/v0.21.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.21.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.21.0) + +- [8aa1d3b8](https://github.com/kubedb/percona-xtradb/commit/8aa1d3b8) Prepare for release v0.21.0 (#316) +- [cb1c8481](https://github.com/kubedb/percona-xtradb/commit/cb1c8481) Prepare for release v0.21.0-rc.0 (#315) +- [da3fbc76](https://github.com/kubedb/percona-xtradb/commit/da3fbc76) Used cached client (#311) +- [39144eb2](https://github.com/kubedb/percona-xtradb/commit/39144eb2) Configure podtemplate fields (#314) +- [d85284ea](https://github.com/kubedb/percona-xtradb/commit/d85284ea) Update docker/distribution (#313) +- [b3a5c6f3](https://github.com/kubedb/percona-xtradb/commit/b3a5c6f3) Update license verifier (#312) +- [445c4787](https://github.com/kubedb/percona-xtradb/commit/445c4787) Add enableServiceLinks to PodSpec (#309) +- [c2c1451e](https://github.com/kubedb/percona-xtradb/commit/c2c1451e) Test against K8s 1.27.0 (#308) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.7.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.7.0) + +- [b2356c4](https://github.com/kubedb/percona-xtradb-coordinator/commit/b2356c4) Prepare for release v0.7.0 (#42) +-
[cfd5919](https://github.com/kubedb/percona-xtradb-coordinator/commit/cfd5919) Prepare for release v0.7.0-rc.0 (#41) +- [fa25aaa](https://github.com/kubedb/percona-xtradb-coordinator/commit/fa25aaa) Read pass from env (#38) +- [3322225](https://github.com/kubedb/percona-xtradb-coordinator/commit/3322225) Update license verifier (#40) +- [f44d3ba](https://github.com/kubedb/percona-xtradb-coordinator/commit/f44d3ba) Update license verifier (#39) +- [bbbabf7](https://github.com/kubedb/percona-xtradb-coordinator/commit/bbbabf7) Add enableServiceLinks to PodSpec (#37) +- [052fb88](https://github.com/kubedb/percona-xtradb-coordinator/commit/052fb88) Test against K8s 1.27.0 (#36) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.18.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.18.0) + +- [033dc8aa](https://github.com/kubedb/pg-coordinator/commit/033dc8aa) Prepare for release v0.18.0 (#124) +- [dbc0ab09](https://github.com/kubedb/pg-coordinator/commit/dbc0ab09) Prepare for release v0.18.0-rc.0 (#123) +- [8a17dbe1](https://github.com/kubedb/pg-coordinator/commit/8a17dbe1) Update license verifier (#122) +- [a86af91b](https://github.com/kubedb/pg-coordinator/commit/a86af91b) Update license verifier (#121) +- [1a538695](https://github.com/kubedb/pg-coordinator/commit/1a538695) Add enableServiceLinks to PodSpec (#120) +- [f680dc7b](https://github.com/kubedb/pg-coordinator/commit/f680dc7b) Test against K8s 1.27.0 (#119) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.21.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.21.0) + +- [03b74418](https://github.com/kubedb/pgbouncer/commit/03b74418) Prepare for release v0.21.0 (#281) +- [510a0100](https://github.com/kubedb/pgbouncer/commit/510a0100) Prepare for release v0.21.0-rc.0 (#280) +- [1b5a8f2b](https://github.com/kubedb/pgbouncer/commit/1b5a8f2b) Update docker/distribution (#279) +- [7da9582c](https://github.com/kubedb/pgbouncer/commit/7da9582c) Update license verifier (#278) +- [b056ed70](https://github.com/kubedb/pgbouncer/commit/b056ed70) Update license verifier (#277) +- [e4211b77](https://github.com/kubedb/pgbouncer/commit/e4211b77) Add enableServiceLinks to PodSpec (#275) +- [32d37f30](https://github.com/kubedb/pgbouncer/commit/32d37f30) Test against K8s 1.27.0 (#274) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.34.0](https://github.com/kubedb/postgres/releases/tag/v0.34.0) + +- [8509c5cb](https://github.com/kubedb/postgres/commit/8509c5cbf) Prepare for release v0.34.0 (#646) +- [03615529](https://github.com/kubedb/postgres/commit/036155296) Prepare for release v0.34.0-rc.0 (#645) +- [5b178fe3](https://github.com/kubedb/postgres/commit/5b178fe3c) Used cached client (#641) +- [5d4f949c](https://github.com/kubedb/postgres/commit/5d4f949cd) Update docker/distribution (#644) +- [0bd2d452](https://github.com/kubedb/postgres/commit/0bd2d452b) Configure pod template fields (#642) +- [7e18f135](https://github.com/kubedb/postgres/commit/7e18f1359) Update license verifier (#643) +- [09607de3](https://github.com/kubedb/postgres/commit/09607de3e) Add enableServiceLinks to PodSpec (#639) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.34.0](https://github.com/kubedb/provisioner/releases/tag/v0.34.0) + +- [da0659c9](https://github.com/kubedb/provisioner/commit/da0659c9a) Prepare for release v0.34.0 (#50) +- [78bd6891](https://github.com/kubedb/provisioner/commit/78bd6891e) Fix ProxySQL +- 
[5828a53a](https://github.com/kubedb/provisioner/commit/5828a53aa) Prepare for release v0.34.0-rc.0 (#49) +- [ddbfc75b](https://github.com/kubedb/provisioner/commit/ddbfc75bd) Update docker/distribution (#48) +- [1f2d4d5a](https://github.com/kubedb/provisioner/commit/1f2d4d5a6) Update license verifier (#47) +- [b2b71d00](https://github.com/kubedb/provisioner/commit/b2b71d00c) Add enableServiceLinks to PodSpec (#46) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.21.0](https://github.com/kubedb/proxysql/releases/tag/v0.21.0) + +- [32cfd8ed](https://github.com/kubedb/proxysql/commit/32cfd8ed) Prepare for release v0.21.0 (#300) +- [807ea0fa](https://github.com/kubedb/proxysql/commit/807ea0fa) Update go.mod +- [d225233f](https://github.com/kubedb/proxysql/commit/d225233f) Fix SQL Query Builder (#299) +- [b6a2633e](https://github.com/kubedb/proxysql/commit/b6a2633e) Update client (#298) +- [55a2b71f](https://github.com/kubedb/proxysql/commit/55a2b71f) Prepare for release v0.21.0-rc.0 (#297) +- [3b813223](https://github.com/kubedb/proxysql/commit/3b813223) Used cached client (#291) +- [e657808c](https://github.com/kubedb/proxysql/commit/e657808c) Fix ProxySQL Backend Mode Issue (#296) +- [c8540a1a](https://github.com/kubedb/proxysql/commit/c8540a1a) Configure pod template fields (#295) +- [8586dcdf](https://github.com/kubedb/proxysql/commit/8586dcdf) Update docker/distribution (#294) +- [80122c5d](https://github.com/kubedb/proxysql/commit/80122c5d) Update license verifier (#292) +- [1de70c13](https://github.com/kubedb/proxysql/commit/1de70c13) Add enableServiceLinks to PodSpec (#290) +- [32525708](https://github.com/kubedb/proxysql/commit/32525708) Test against K8s 1.27.0 (#289) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.27.0](https://github.com/kubedb/redis/releases/tag/v0.27.0) + +- [a689d6da](https://github.com/kubedb/redis/commit/a689d6da) Prepare for release v0.27.0 (#468) +- [8411d7d9](https://github.com/kubedb/redis/commit/8411d7d9) Prepare for release v0.27.0-rc.0 (#467) +- [f1f3e2c9](https://github.com/kubedb/redis/commit/f1f3e2c9) Use cached client (#463) +- [0884c60c](https://github.com/kubedb/redis/commit/0884c60c) Update docker/distribution (#466) +- [385691d6](https://github.com/kubedb/redis/commit/385691d6) Update license verifier (#465) +- [37fa806f](https://github.com/kubedb/redis/commit/37fa806f) Configure pod template fields (#464) +- [2993d8b7](https://github.com/kubedb/redis/commit/2993d8b7) Add enableServiceLinks to PodSpec (#461) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.13.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.13.0) + +- [a6efda5](https://github.com/kubedb/redis-coordinator/commit/a6efda5) Prepare for release v0.13.0 (#74) +- [8b0ecaa](https://github.com/kubedb/redis-coordinator/commit/8b0ecaa) Prepare for release v0.13.0-rc.0 (#73) +- [9d42b96](https://github.com/kubedb/redis-coordinator/commit/9d42b96) Update license verifier (#72) +- [31a4f43](https://github.com/kubedb/redis-coordinator/commit/31a4f43) Update license verifier (#71) +- [61bc0cf](https://github.com/kubedb/redis-coordinator/commit/61bc0cf) Add enableServiceLinks to PodSpec (#70) +- [ca42ade](https://github.com/kubedb/redis-coordinator/commit/ca42ade) Test against K8s 1.27.0 (#69) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.21.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.21.0) + +-
[95360f91](https://github.com/kubedb/replication-mode-detector/commit/95360f91) Prepare for release v0.21.0 (#236) +- [22364949](https://github.com/kubedb/replication-mode-detector/commit/22364949) Prepare for release v0.21.0-rc.0 (#235) +- [13f4af9f](https://github.com/kubedb/replication-mode-detector/commit/13f4af9f) Update license verifier (#234) +- [49187c88](https://github.com/kubedb/replication-mode-detector/commit/49187c88) Update license verifier (#233) +- [cc8e62a7](https://github.com/kubedb/replication-mode-detector/commit/cc8e62a7) Test against K8s 1.27.0 (#231) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.10.0](https://github.com/kubedb/schema-manager/releases/tag/v0.10.0) + +- [ecf33195](https://github.com/kubedb/schema-manager/commit/ecf33195) Prepare for release v0.10.0 (#76) +- [3bf16dac](https://github.com/kubedb/schema-manager/commit/3bf16dac) Prepare for release v0.10.0-rc.0 (#75) +- [a80da62a](https://github.com/kubedb/schema-manager/commit/a80da62a) Update license verifier (#74) +- [0761e223](https://github.com/kubedb/schema-manager/commit/0761e223) Update license verifier (#73) +- [83fa3191](https://github.com/kubedb/schema-manager/commit/83fa3191) Add enableServiceLinks to PodSpec (#72) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.19.0](https://github.com/kubedb/tests/releases/tag/v0.19.0) + +- [118b2299](https://github.com/kubedb/tests/commit/118b2299) Prepare for release v0.19.0 (#228) +- [066c8cf9](https://github.com/kubedb/tests/commit/066c8cf9) Prepare for release v0.19.0-rc.0 (#227) +- [e49d6470](https://github.com/kubedb/tests/commit/e49d6470) Update license verifier (#226) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.10.0](https://github.com/kubedb/ui-server/releases/tag/v0.10.0) + +- [b18d2830](https://github.com/kubedb/ui-server/commit/b18d2830) Prepare for release v0.10.0 (#81) +- [f3cee800](https://github.com/kubedb/ui-server/commit/f3cee800) Prepare for release v0.10.0-rc.0 (#80) +- [86d37355](https://github.com/kubedb/ui-server/commit/86d37355) Update docker/distribution (#79) +- [5f060b7e](https://github.com/kubedb/ui-server/commit/5f060b7e) Update license verifier (#77) +- [6cf6d5ab](https://github.com/kubedb/ui-server/commit/6cf6d5ab) Close mongo client connection with defer (#76) +- [312e5210](https://github.com/kubedb/ui-server/commit/312e5210) Add enableServiceLinks to PodSpec (#75) +- [662808cb](https://github.com/kubedb/ui-server/commit/662808cb) Test against K8s 1.27.0 (#74) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.10.0](https://github.com/kubedb/webhook-server/releases/tag/v0.10.0) + +- [b683739b](https://github.com/kubedb/webhook-server/commit/b683739b) Prepare for release v0.10.0 (#61) +- [6673b381](https://github.com/kubedb/webhook-server/commit/6673b381) Prepare for release v0.10.0-rc.0 (#60) +- [c502edd3](https://github.com/kubedb/webhook-server/commit/c502edd3) Add enableServiceLinks to PodSpec (#59) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.08.18.md b/content/docs/v2024.1.31/CHANGELOG-v2023.08.18.md new file mode 100644 index 0000000000..5dc76b60f5 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.08.18.md @@ -0,0 +1,468 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.08.18 + name: Changelog-v2023.08.18 + parent: welcome + weight: 20230818 +product_name: kubedb +menu_name: docs_v2024.1.31 
+section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.08.18/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.08.18/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.08.18 (2023-08-21) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.35.0](https://github.com/kubedb/apimachinery/releases/tag/v0.35.0) + +- [8e2aab0c](https://github.com/kubedb/apimachinery/commit/8e2aab0c) Add apis for git sync (#1055) +- [eec599f0](https://github.com/kubedb/apimachinery/commit/eec599f0) Add default MaxUnavailable spec for ES (#1053) +- [72a039cd](https://github.com/kubedb/apimachinery/commit/72a039cd) Updated kafka validateVersions webhook for newly added versions (#1054) +- [f86084cd](https://github.com/kubedb/apimachinery/commit/f86084cd) Add Logical Replication Replica Identity conditions (#1050) +- [4ce21bc5](https://github.com/kubedb/apimachinery/commit/4ce21bc5) Add cruise control API (#1045) +- [eda8efdf](https://github.com/kubedb/apimachinery/commit/eda8efdf) Make the conditions uniform across database opsRequests (#1052) +- [eb1b7f21](https://github.com/kubedb/apimachinery/commit/eb1b7f21) Change schema-manager constants type (#1051) +- [a763fb6b](https://github.com/kubedb/apimachinery/commit/a763fb6b) Use updated kmapi Conditions (#1049) +- [ebc00ae2](https://github.com/kubedb/apimachinery/commit/ebc00ae2) Add AsOwner() utility for dbs (#1046) +- [224dd567](https://github.com/kubedb/apimachinery/commit/224dd567) Add Custom Configuration spec for Kafka (#1041) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.20.0](https://github.com/kubedb/autoscaler/releases/tag/v0.20.0) + +- [02970fe1](https://github.com/kubedb/autoscaler/commit/02970fe1) Prepare for release v0.20.0 (#152) +- [bdd60f13](https://github.com/kubedb/autoscaler/commit/bdd60f13) Update dependencies (#151) +- [9fbd8bf6](https://github.com/kubedb/autoscaler/commit/9fbd8bf6) Update dependencies (#150) +- [8bc4f455](https://github.com/kubedb/autoscaler/commit/8bc4f455) Use new kmapi Condition (#149) +- [6ccd8cfc](https://github.com/kubedb/autoscaler/commit/6ccd8cfc) Update Makefile +- [23a3a0b1](https://github.com/kubedb/autoscaler/commit/23a3a0b1) Use restricted pod security label (#148) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.35.0](https://github.com/kubedb/cli/releases/tag/v0.35.0) + +- [bbe4b2ef](https://github.com/kubedb/cli/commit/bbe4b2ef) Prepare for release v0.35.0 (#718) +- [be1a1198](https://github.com/kubedb/cli/commit/be1a1198) Update dependencies (#717) +- [6adaa37f](https://github.com/kubedb/cli/commit/6adaa37f) Add MongoDB data cli (#716) +- [95ef1341](https://github.com/kubedb/cli/commit/95ef1341) Add Elasticsearch CMD to insert, verify and drop data (#714) +- [196a75ca](https://github.com/kubedb/cli/commit/196a75ca) Added Postgres Data insert verify drop through Kubedb CLI (#712) +- [9953efb7](https://github.com/kubedb/cli/commit/9953efb7) Add Insert Verify Drop for MariaDB in KubeDB CLI (#715) +- [41139e49](https://github.com/kubedb/cli/commit/41139e49) Add Insert Verify Drop for MySQL in KubeDB CLI (#713) +- [cf49e9aa](https://github.com/kubedb/cli/commit/cf49e9aa) Add Redis CMD for data insert (#709) +- [3a14bd72](https://github.com/kubedb/cli/commit/3a14bd72) Use svcName in exec instead of static primary (#711) +- 
[af0c5734](https://github.com/kubedb/cli/commit/af0c5734) Update dependencies (#710) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.11.0](https://github.com/kubedb/dashboard/releases/tag/v0.11.0) + +- [40e20d8](https://github.com/kubedb/dashboard/commit/40e20d8) Prepare for release v0.11.0 (#81) +- [9b390b6](https://github.com/kubedb/dashboard/commit/9b390b6) Update dependencies (#80) +- [2db4453](https://github.com/kubedb/dashboard/commit/2db4453) Update dependencies (#79) +- [49332b4](https://github.com/kubedb/dashboard/commit/49332b4) Update client-go for GET and PATCH call issue fix (#77) +- [711eba9](https://github.com/kubedb/dashboard/commit/711eba9) Use new kmapi Condition (#78) +- [4d8f1d9](https://github.com/kubedb/dashboard/commit/4d8f1d9) Update Makefile +- [cf67a4d](https://github.com/kubedb/dashboard/commit/cf67a4d) Use restricted pod security label (#76) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.35.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.35.0) + +- [bc8df643](https://github.com/kubedb/elasticsearch/commit/bc8df6435) Prepare for release v0.35.0 (#662) +- [e70d2d10](https://github.com/kubedb/elasticsearch/commit/e70d2d108) Update dependencies (#661) +- [8775e15d](https://github.com/kubedb/elasticsearch/commit/8775e15d0) Confirm the db has been paused before ops continue (#660) +- [b717a900](https://github.com/kubedb/elasticsearch/commit/b717a900c) Update nightly +- [096538e8](https://github.com/kubedb/elasticsearch/commit/096538e8d) Update dependencies (#659) +- [9b5de295](https://github.com/kubedb/elasticsearch/commit/9b5de295f) update nightly test profile to provisioner (#658) +- [c25227e3](https://github.com/kubedb/elasticsearch/commit/c25227e39) Add opensearch-2.5.0 in nightly tests +- [36705273](https://github.com/kubedb/elasticsearch/commit/367052731) Fix Disable Security failing Builtin User cred synchronization Issue (#654) +- [7102b622](https://github.com/kubedb/elasticsearch/commit/7102b622d) Add inputs to nightly workflow +- [93eda557](https://github.com/kubedb/elasticsearch/commit/93eda557e) Fix GET and PATCH call issue (#648) +- [af7e4c23](https://github.com/kubedb/elasticsearch/commit/af7e4c237) Fix nightly (#651) +- [0e1de49b](https://github.com/kubedb/elasticsearch/commit/0e1de49b5) Use KIND v0.20.0 (#652) +- [ffc5d7f6](https://github.com/kubedb/elasticsearch/commit/ffc5d7f6d) Use master branch with nightly.yml +- [ec391b1e](https://github.com/kubedb/elasticsearch/commit/ec391b1e2) Update nightly.yml +- [179aa150](https://github.com/kubedb/elasticsearch/commit/179aa1507) Update nightly test matrix +- [b6a094db](https://github.com/kubedb/elasticsearch/commit/b6a094db2) Run e2e tests nightly (#650) +- [e09e1e70](https://github.com/kubedb/elasticsearch/commit/e09e1e700) Use new kmapi Condition (#649) +- [ce0cac63](https://github.com/kubedb/elasticsearch/commit/ce0cac63a) Update Makefile +- [f0b570d4](https://github.com/kubedb/elasticsearch/commit/f0b570d4d) Use restricted pod security label (#647) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.08.18](https://github.com/kubedb/installer/releases/tag/v2023.08.18) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.6.0](https://github.com/kubedb/kafka/releases/tag/v0.6.0) + +- [1b83b3b](https://github.com/kubedb/kafka/commit/1b83b3b) Prepare for release v0.6.0 (#35) +- [4eb30cd](https://github.com/kubedb/kafka/commit/4eb30cd) Add Support for Cruise Control (#33) +- 
[1414470](https://github.com/kubedb/kafka/commit/1414470) Add custom configuration (#28) +- [5a5537b](https://github.com/kubedb/kafka/commit/5a5537b) Run nightly tests against master +- [d973665](https://github.com/kubedb/kafka/commit/d973665) Update nightly.yml +- [1cbdccd](https://github.com/kubedb/kafka/commit/1cbdccd) Run e2e tests nightly (#34) +- [987235c](https://github.com/kubedb/kafka/commit/987235c) Use new kmapi Condition (#32) +- [dbbbc7f](https://github.com/kubedb/kafka/commit/dbbbc7f) Update Makefile +- [f8adcfe](https://github.com/kubedb/kafka/commit/f8adcfe) Use restricted pod security label (#31) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.19.0](https://github.com/kubedb/mariadb/releases/tag/v0.19.0) + +- [f98e4730](https://github.com/kubedb/mariadb/commit/f98e4730) Prepare for release v0.19.0 (#228) +- [5d47fbb2](https://github.com/kubedb/mariadb/commit/5d47fbb2) Update dependencies (#227) +- [40a21a5e](https://github.com/kubedb/mariadb/commit/40a21a5e) Confirm the db has been paused before ops continue (#226) +- [97924029](https://github.com/kubedb/mariadb/commit/97924029) Update dependencies (#225) +- [edc232c8](https://github.com/kubedb/mariadb/commit/edc232c8) update nightly test profile to provisioner (#224) +- [0087bcfa](https://github.com/kubedb/mariadb/commit/0087bcfa) Add inputs fields to manual trigger ci file (#222) +- [ca265d0f](https://github.com/kubedb/mariadb/commit/ca265d0f) reduce get/patch api calls (#218) +- [ec7a0a79](https://github.com/kubedb/mariadb/commit/ec7a0a79) fix nightly test workflow (#221) +- [e145c47d](https://github.com/kubedb/mariadb/commit/e145c47d) Use KIND v0.20.0 (#220) +- [bc1cb72d](https://github.com/kubedb/mariadb/commit/bc1cb72d) Run nightly tests against master +- [c8f6dab2](https://github.com/kubedb/mariadb/commit/c8f6dab2) Update nightly.yml +- [f577fa72](https://github.com/kubedb/mariadb/commit/f577fa72) Run e2e tests nightly (#219) +- [256ae22a](https://github.com/kubedb/mariadb/commit/256ae22a) Use new kmapi Condition (#217) +- [37bbf08e](https://github.com/kubedb/mariadb/commit/37bbf08e) Use restricted pod security label (#216) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.15.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.15.0) + +- [0d67bc6d](https://github.com/kubedb/mariadb-coordinator/commit/0d67bc6d) Prepare for release v0.15.0 (#89) +- [49c68129](https://github.com/kubedb/mariadb-coordinator/commit/49c68129) Update dependencies (#88) +- [e9c737c5](https://github.com/kubedb/mariadb-coordinator/commit/e9c737c5) Update dependencies (#87) +- [77b5b854](https://github.com/kubedb/mariadb-coordinator/commit/77b5b854) Reduce get/patch api calls (#86) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.28.0](https://github.com/kubedb/memcached/releases/tag/v0.28.0) + +- [fd40e37e](https://github.com/kubedb/memcached/commit/fd40e37e) Prepare for release v0.28.0 (#402) +- [f759a6d6](https://github.com/kubedb/memcached/commit/f759a6d6) Update dependencies (#401) +- [4a82561c](https://github.com/kubedb/memcached/commit/4a82561c) Update dependencies (#400) +- [29b39605](https://github.com/kubedb/memcached/commit/29b39605) Fix e2e and nightly workflows +- [1c77d33f](https://github.com/kubedb/memcached/commit/1c77d33f) Use KIND v0.20.0 (#399) +- [bfc480a7](https://github.com/kubedb/memcached/commit/bfc480a7) Run nightly tests against master +- [7acbe89e](https://github.com/kubedb/memcached/commit/7acbe89e) Update 
nightly.yml +- [08adb133](https://github.com/kubedb/memcached/commit/08adb133) Run e2e tests nightly (#398) +- [3d10ada3](https://github.com/kubedb/memcached/commit/3d10ada3) Use new kmapi Condition (#397) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.28.0](https://github.com/kubedb/mongodb/releases/tag/v0.28.0) + +- [df494f03](https://github.com/kubedb/mongodb/commit/df494f03) Prepare for release v0.28.0 (#568) +- [d8705c76](https://github.com/kubedb/mongodb/commit/d8705c76) Update dependencies (#567) +- [b84464c3](https://github.com/kubedb/mongodb/commit/b84464c3) Confirm the db has been paused before ops continue (#566) +- [b0e8a237](https://github.com/kubedb/mongodb/commit/b0e8a237) Update dependencies (#565) +- [ecd154fb](https://github.com/kubedb/mongodb/commit/ecd154fb) add test input (#564) +- [96fac12b](https://github.com/kubedb/mongodb/commit/96fac12b) Reduce get/patch api calls (#557) +- [568f3e28](https://github.com/kubedb/mongodb/commit/568f3e28) Fix stash installation (#563) +- [d905767f](https://github.com/kubedb/mongodb/commit/d905767f) Run only general profile tests +- [9692d296](https://github.com/kubedb/mongodb/commit/9692d296) Use KIND v0.20.0 (#562) +- [30fe37a7](https://github.com/kubedb/mongodb/commit/30fe37a7) Use --bind_ip to fix 3.4.* CrashLoopBackOff issue (#559) +- [f658d023](https://github.com/kubedb/mongodb/commit/f658d023) Run nightly.yml against master branch +- [af990bc2](https://github.com/kubedb/mongodb/commit/af990bc2) Run nightly tests against master branch +- [83abcb97](https://github.com/kubedb/mongodb/commit/83abcb97) Run e2e test nightly (#560) +- [35bb3970](https://github.com/kubedb/mongodb/commit/35bb3970) Use new kmapi Condition (#558) +- [6c5e8551](https://github.com/kubedb/mongodb/commit/6c5e8551) Update Makefile +- [02269ae8](https://github.com/kubedb/mongodb/commit/02269ae8) Use restricted pod security level (#556) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.28.0](https://github.com/kubedb/mysql/releases/tag/v0.28.0) + + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.13.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.13.0) + +- [11ad765](https://github.com/kubedb/mysql-coordinator/commit/11ad765) Prepare for release v0.13.0 (#85) +- [12b4608](https://github.com/kubedb/mysql-coordinator/commit/12b4608) Update dependencies (#84) +- [9cd6e03](https://github.com/kubedb/mysql-coordinator/commit/9cd6e03) Update dependencies (#83) +- [4587ab8](https://github.com/kubedb/mysql-coordinator/commit/4587ab8) reduce k8s api calls for get and patch (#82) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.13.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.13.0) + +- [2a59ae1](https://github.com/kubedb/mysql-router-init/commit/2a59ae1) Update dependencies (#36) +- [a4f4318](https://github.com/kubedb/mysql-router-init/commit/a4f4318) Update dependencies (#35) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.22.0](https://github.com/kubedb/ops-manager/releases/tag/v0.22.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.22.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.22.0) + +- [6d522b6c](https://github.com/kubedb/percona-xtradb/commit/6d522b6c) Prepare for release v0.22.0 (#327) +- [9ba97882](https://github.com/kubedb/percona-xtradb/commit/9ba97882) Update dependencies (#326) +- 
[408be6e9](https://github.com/kubedb/percona-xtradb/commit/408be6e9) Confirm the db has been paused before ops continue (#325) +- [b314569f](https://github.com/kubedb/percona-xtradb/commit/b314569f) Update dependencies (#324) +- [ff7e5e09](https://github.com/kubedb/percona-xtradb/commit/ff7e5e09) Update nightly.yml +- [f1ddeb07](https://github.com/kubedb/percona-xtradb/commit/f1ddeb07) reduce get/patch api calls (#320) +- [b3d3564c](https://github.com/kubedb/percona-xtradb/commit/b3d3564c) Create nightly.yml +- [29f6ab80](https://github.com/kubedb/percona-xtradb/commit/29f6ab80) Remove nightly workflow +- [6c47d97f](https://github.com/kubedb/percona-xtradb/commit/6c47d97f) Merge pull request #323 from kubedb/fix-nightly +- [c8d2e630](https://github.com/kubedb/percona-xtradb/commit/c8d2e630) Fix nightly +- [c2854017](https://github.com/kubedb/percona-xtradb/commit/c2854017) Use KIND v0.20.0 (#322) +- [ff4d7c11](https://github.com/kubedb/percona-xtradb/commit/ff4d7c11) Run nightly tests against master +- [0328b6ad](https://github.com/kubedb/percona-xtradb/commit/0328b6ad) Update nightly.yml +- [eb533938](https://github.com/kubedb/percona-xtradb/commit/eb533938) Run e2e tests nightly (#321) +- [6be27644](https://github.com/kubedb/percona-xtradb/commit/6be27644) Use new kmapi Condition (#319) +- [33571e97](https://github.com/kubedb/percona-xtradb/commit/33571e97) Use restricted pod security label (#318) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.8.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.8.0) + +- [2bc9a17](https://github.com/kubedb/percona-xtradb-coordinator/commit/2bc9a17) Prepare for release v0.8.0 (#46) +- [b886ff2](https://github.com/kubedb/percona-xtradb-coordinator/commit/b886ff2) Update dependencies (#45) +- [9d5feb9](https://github.com/kubedb/percona-xtradb-coordinator/commit/9d5feb9) Update dependencies (#44) +- [2c8983d](https://github.com/kubedb/percona-xtradb-coordinator/commit/2c8983d) reduce get/patch api calls (#43) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.19.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.19.0) + +- [a8ee999d](https://github.com/kubedb/pg-coordinator/commit/a8ee999d) Prepare for release v0.19.0 (#129) +- [1434fdfc](https://github.com/kubedb/pg-coordinator/commit/1434fdfc) Update dependencies (#128) +- [36ceccc8](https://github.com/kubedb/pg-coordinator/commit/36ceccc8) Use cached client (#127) +- [190a4880](https://github.com/kubedb/pg-coordinator/commit/190a4880) Update dependencies (#126) +- [8aad969e](https://github.com/kubedb/pg-coordinator/commit/8aad969e) fix failover and standby sync issue (#125) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.22.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.22.0) + +- [fe943791](https://github.com/kubedb/pgbouncer/commit/fe943791) Prepare for release v0.22.0 (#289) +- [489949d7](https://github.com/kubedb/pgbouncer/commit/489949d7) Update dependencies (#288) +- [ea7e4b3c](https://github.com/kubedb/pgbouncer/commit/ea7e4b3c) Update dependencies (#287) +- [8ddf699c](https://github.com/kubedb/pgbouncer/commit/8ddf699c) Fix: get and patch call issue (#285) +- [81fa0fb3](https://github.com/kubedb/pgbouncer/commit/81fa0fb3) Use KIND v0.20.0 (#286) +- [6bc9e12b](https://github.com/kubedb/pgbouncer/commit/6bc9e12b) Run nightly tests against master +- [655e1d06](https://github.com/kubedb/pgbouncer/commit/655e1d06) Update nightly.yml +- 
[2d1bc4e5](https://github.com/kubedb/pgbouncer/commit/2d1bc4e5) Update nightly.yml +- [94419822](https://github.com/kubedb/pgbouncer/commit/94419822) Run e2e tests nightly +- [c41aa109](https://github.com/kubedb/pgbouncer/commit/c41aa109) Use new kmapi Condition (#284) +- [e62cbde9](https://github.com/kubedb/pgbouncer/commit/e62cbde9) Update Makefile +- [6734feba](https://github.com/kubedb/pgbouncer/commit/6734feba) Use restricted pod security label (#283) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.35.0](https://github.com/kubedb/postgres/releases/tag/v0.35.0) + +- [8e62ebef](https://github.com/kubedb/postgres/commit/8e62ebef5) Prepare for release v0.35.0 (#662) +- [1b23e335](https://github.com/kubedb/postgres/commit/1b23e3352) Update dependencies (#661) +- [92a455d7](https://github.com/kubedb/postgres/commit/92a455d74) Confirm the db has been paused before ops continue (#660) +- [e642b565](https://github.com/kubedb/postgres/commit/e642b5655) add pod watch permission (#659) +- [192be10e](https://github.com/kubedb/postgres/commit/192be10e5) fix client (#658) +- [b2bff6fe](https://github.com/kubedb/postgres/commit/b2bff6fe2) close client engine (#656) +- [df65982e](https://github.com/kubedb/postgres/commit/df65982ea) Update dependencies (#657) +- [63185866](https://github.com/kubedb/postgres/commit/631858669) Check all the replicas are connected to the primary (#654) +- [c0c7689b](https://github.com/kubedb/postgres/commit/c0c7689ba) fix get and patch call issue (#649) +- [cc5c1468](https://github.com/kubedb/postgres/commit/cc5c1468e) Merge pull request #653 from kubedb/fix-nightly +- [5e4a7a19](https://github.com/kubedb/postgres/commit/5e4a7a196) Fixed nightly yaml. +- [6bf8ea0b](https://github.com/kubedb/postgres/commit/6bf8ea0be) Use KIND v0.20.0 (#652) +- [5c3dc9c8](https://github.com/kubedb/postgres/commit/5c3dc9c8e) Run nightly tests against master +- [bcec1cfb](https://github.com/kubedb/postgres/commit/bcec1cfbc) Update nightly.yml +- [dc4ae6ad](https://github.com/kubedb/postgres/commit/dc4ae6ad2) Run e2e tests nightly (#651) +- [b958109a](https://github.com/kubedb/postgres/commit/b958109aa) Use new kmapi Condition (#650) +- [ca9f77af](https://github.com/kubedb/postgres/commit/ca9f77af0) Update Makefile +- [b36a06f2](https://github.com/kubedb/postgres/commit/b36a06f2d) Use restricted pod security label (#648) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.35.0](https://github.com/kubedb/provisioner/releases/tag/v0.35.0) + +- [47f0fc82](https://github.com/kubedb/provisioner/commit/47f0fc823) Prepare for release v0.35.0 (#54) +- [8b716999](https://github.com/kubedb/provisioner/commit/8b716999e) Update dependencies (#53) +- [81dd67a3](https://github.com/kubedb/provisioner/commit/81dd67a31) Update dependencies (#52) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.22.0](https://github.com/kubedb/proxysql/releases/tag/v0.22.0) + +- [8b568349](https://github.com/kubedb/proxysql/commit/8b568349) Prepare for release v0.22.0 (#310) +- [fdfd9943](https://github.com/kubedb/proxysql/commit/fdfd9943) Update dependencies (#309) +- [9dc1c3fc](https://github.com/kubedb/proxysql/commit/9dc1c3fc) Confirm the db has been paused before ops continue (#308) +- [e600efb7](https://github.com/kubedb/proxysql/commit/e600efb7) Update dependencies (#307) +- [63e65342](https://github.com/kubedb/proxysql/commit/63e65342) Add inputs fields to manual trigger ci file (#306) +- 
[800c10ae](https://github.com/kubedb/proxysql/commit/800c10ae) Update nightly.yml +- [b1816f1a](https://github.com/kubedb/proxysql/commit/b1816f1a) reduce get/patch api calls (#303) +- [21d71bc5](https://github.com/kubedb/proxysql/commit/21d71bc5) Merge pull request #305 from kubedb/fix_nightly +- [91a019e1](https://github.com/kubedb/proxysql/commit/91a019e1) Nightly fix +- [09492c1d](https://github.com/kubedb/proxysql/commit/09492c1d) Run nightly tests against master +- [37547f7b](https://github.com/kubedb/proxysql/commit/37547f7b) Update nightly.yml +- [3e31ed14](https://github.com/kubedb/proxysql/commit/3e31ed14) Update nightly.yml +- [491ad083](https://github.com/kubedb/proxysql/commit/491ad083) Run e2e tests nightly (#304) +- [b6151aeb](https://github.com/kubedb/proxysql/commit/b6151aeb) Use new kmapi Condition (#302) +- [7d5756b1](https://github.com/kubedb/proxysql/commit/7d5756b1) Use restricted pod security label (#301) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.28.0](https://github.com/kubedb/redis/releases/tag/v0.28.0) + +- [ea9ebdf2](https://github.com/kubedb/redis/commit/ea9ebdf2) Prepare for release v0.28.0 (#484) +- [63b32c43](https://github.com/kubedb/redis/commit/63b32c43) Update dependencies (#483) +- [7a2df42c](https://github.com/kubedb/redis/commit/7a2df42c) Confirm the db has been paused before ops continue (#482) +- [ca81ca3f](https://github.com/kubedb/redis/commit/ca81ca3f) Update dependencies (#481) +- [b13517b1](https://github.com/kubedb/redis/commit/b13517b1) update nightly test profile to provisioner (#480) +- [c00dab7c](https://github.com/kubedb/redis/commit/c00dab7c) Add inputs to nightly workflow (#479) +- [06ca0ad2](https://github.com/kubedb/redis/commit/06ca0ad2) Fix nightly (#477) +- [33ee7af4](https://github.com/kubedb/redis/commit/33ee7af4) Fix Redis nightly test workflow +- [5647852a](https://github.com/kubedb/redis/commit/5647852a) Use KIND v0.20.0 (#476) +- [5ef88f14](https://github.com/kubedb/redis/commit/5ef88f14) Run nightly tests against master +- [083a9124](https://github.com/kubedb/redis/commit/083a9124) Update nightly.yml +- [aa9b75ae](https://github.com/kubedb/redis/commit/aa9b75ae) Run e2e tests nightly (#473) +- [b4f312f4](https://github.com/kubedb/redis/commit/b4f312f4) Reduce get/patch api calls (#471) +- [ebd30b79](https://github.com/kubedb/redis/commit/ebd30b79) Use new kmapi Condition (#472) +- [5d191bf8](https://github.com/kubedb/redis/commit/5d191bf8) Update Makefile +- [aaf4a815](https://github.com/kubedb/redis/commit/aaf4a815) Use restricted pod security label (#470) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.14.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.14.0) + +- [136eb79](https://github.com/kubedb/redis-coordinator/commit/136eb79) Prepare for release v0.14.0 (#77) +- [cc749e6](https://github.com/kubedb/redis-coordinator/commit/cc749e6) Update dependencies (#76) +- [fbc75dd](https://github.com/kubedb/redis-coordinator/commit/fbc75dd) Update dependencies (#75) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.22.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.22.0) + +- [11106cf6](https://github.com/kubedb/replication-mode-detector/commit/11106cf6) Prepare for release v0.22.0 (#241) +- [5a7a2a75](https://github.com/kubedb/replication-mode-detector/commit/5a7a2a75) Update dependencies (#240) +- 
[914e86dc](https://github.com/kubedb/replication-mode-detector/commit/914e86dc) Update dependencies (#239) +- [f30374d2](https://github.com/kubedb/replication-mode-detector/commit/f30374d2) Use new kmapi Condition (#238) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.11.0](https://github.com/kubedb/schema-manager/releases/tag/v0.11.0) + +- [65459bc6](https://github.com/kubedb/schema-manager/commit/65459bc6) Prepare for release v0.11.0 (#81) +- [30dd907b](https://github.com/kubedb/schema-manager/commit/30dd907b) Update dependencies (#80) +- [472e7496](https://github.com/kubedb/schema-manager/commit/472e7496) Update dependencies (#79) +- [1c4a60a8](https://github.com/kubedb/schema-manager/commit/1c4a60a8) Use new kmapi Condition (#78) +- [bec3c7b8](https://github.com/kubedb/schema-manager/commit/bec3c7b8) Update Makefile +- [3df964b1](https://github.com/kubedb/schema-manager/commit/3df964b1) Use restricted pod security label (#77) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.20.0](https://github.com/kubedb/tests/releases/tag/v0.20.0) + +- [ea935103](https://github.com/kubedb/tests/commit/ea935103) Prepare for release v0.20.0 (#242) +- [bc927923](https://github.com/kubedb/tests/commit/bc927923) Update dependencies (#241) +- [319bf4a2](https://github.com/kubedb/tests/commit/319bf4a2) Fix mg termination_policy & env_variables (#237) +- [5424bdbe](https://github.com/kubedb/tests/commit/5424bdbe) update vertical scaling constant (#239) +- [40229b3d](https://github.com/kubedb/tests/commit/40229b3d) Update dependencies (#238) +- [68deeafb](https://github.com/kubedb/tests/commit/68deeafb) Exclude volume expansion (#235) +- [7a364367](https://github.com/kubedb/tests/commit/7a364367) Fix test for ES & OS with disabled security (#232) +- [313151de](https://github.com/kubedb/tests/commit/313151de) fix mariadb test (#234) +- [c5d9911e](https://github.com/kubedb/tests/commit/c5d9911e) Update tests by test profile (#231) +- [b2a5f384](https://github.com/kubedb/tests/commit/b2a5f384) Fix general tests (#230) +- [a13b9095](https://github.com/kubedb/tests/commit/a13b9095) Use new kmapi Condition (#229) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.11.0](https://github.com/kubedb/ui-server/releases/tag/v0.11.0) + +- [7ccfc49c](https://github.com/kubedb/ui-server/commit/7ccfc49c) Prepare for release v0.11.0 (#89) +- [04edecf9](https://github.com/kubedb/ui-server/commit/04edecf9) Update dependencies (#88) +- [8d1f7b4b](https://github.com/kubedb/ui-server/commit/8d1f7b4b) Update dependencies (#87) +- [a0fd42d8](https://github.com/kubedb/ui-server/commit/a0fd42d8) Use new kmapi Condition (#86) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.11.0](https://github.com/kubedb/webhook-server/releases/tag/v0.11.0) + +- [26e96671](https://github.com/kubedb/webhook-server/commit/26e96671) Prepare for release (#66) +- [d446d877](https://github.com/kubedb/webhook-server/commit/d446d877) Update dependencies (#65) +- [278f450b](https://github.com/kubedb/webhook-server/commit/278f450b) Use KIND v0.20.0 (#64) +- [6ba4191e](https://github.com/kubedb/webhook-server/commit/6ba4191e) Use new kmapi Condition (#63) +- [5cc21a08](https://github.com/kubedb/webhook-server/commit/5cc21a08) Update Makefile +- [0edd0610](https://github.com/kubedb/webhook-server/commit/0edd0610) Use restricted pod security label (#62) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.11.2.md 
b/content/docs/v2024.1.31/CHANGELOG-v2023.11.2.md new file mode 100644 index 0000000000..319dfcb552 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.11.2.md @@ -0,0 +1,270 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.11.2 + name: Changelog-v2023.11.2 + parent: welcome + weight: 20231102 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.11.2/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.11.2/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.11.2 (2023-11-02) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.37.0](https://github.com/kubedb/apimachinery/releases/tag/v0.37.0) + +- [feb7d046](https://github.com/kubedb/apimachinery/commit/feb7d046) Update deps +- [b69c766e](https://github.com/kubedb/apimachinery/commit/b69c766e) Bring back GetRequestType() (#1064) +- [f684560c](https://github.com/kubedb/apimachinery/commit/f684560c) Remove spec.upgrade field and Upgrade ops type (#1063) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.22.0](https://github.com/kubedb/autoscaler/releases/tag/v0.22.0) + +- [4d6e524e](https://github.com/kubedb/autoscaler/commit/4d6e524e) Prepare for release v0.22.0 (#157) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.37.0](https://github.com/kubedb/cli/releases/tag/v0.37.0) + +- [78ff0505](https://github.com/kubedb/cli/commit/78ff0505) Prepare for release v0.37.0 (#729) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.13.0](https://github.com/kubedb/dashboard/releases/tag/v0.13.0) + +- [746cdbf2](https://github.com/kubedb/dashboard/commit/746cdbf2) Prepare for release v0.13.0 (#83) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.37.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.37.0) + +- [9f3dcdd2](https://github.com/kubedb/elasticsearch/commit/9f3dcdd2b) Prepare for release v0.37.0 (#675) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.11.2](https://github.com/kubedb/installer/releases/tag/v2023.11.2) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.8.0](https://github.com/kubedb/kafka/releases/tag/v0.8.0) + +- [1101f2c](https://github.com/kubedb/kafka/commit/1101f2c) Prepare for release v0.8.0 (#41) +- [4e3c27e](https://github.com/kubedb/kafka/commit/4e3c27e) Disable stash installation + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.21.0](https://github.com/kubedb/mariadb/releases/tag/v0.21.0) + +- [7f583121](https://github.com/kubedb/mariadb/commit/7f583121) Prepare for release v0.21.0 (#231) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.17.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.17.0) + +- [fc27813c](https://github.com/kubedb/mariadb-coordinator/commit/fc27813c) Prepare for release v0.17.0 (#91) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.30.0](https://github.com/kubedb/memcached/releases/tag/v0.30.0) + +- [91294d5d](https://github.com/kubedb/memcached/commit/91294d5d) Prepare for release v0.30.0 (#404) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) 
+ +### [v0.30.0](https://github.com/kubedb/mongodb/releases/tag/v0.30.0) + +- [b4783ade](https://github.com/kubedb/mongodb/commit/b4783ade) Prepare for release v0.30.0 (#574) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.30.0](https://github.com/kubedb/mysql/releases/tag/v0.30.0) + +- [c48c4374](https://github.com/kubedb/mysql/commit/c48c4374) Prepare for release v0.30.0 (#569) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.15.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.15.0) + +- [8a49d1f](https://github.com/kubedb/mysql-coordinator/commit/8a49d1f) Prepare for release v0.15.0 (#88) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.15.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.15.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.24.0](https://github.com/kubedb/ops-manager/releases/tag/v0.24.0) + +- [ecee4464](https://github.com/kubedb/ops-manager/commit/ecee4464) Prepare for release v0.24.0 (#484) +- [eecc2c17](https://github.com/kubedb/ops-manager/commit/eecc2c17) Remove spec.upgrade field and Upgrade ops type (#482) +- [54caa66d](https://github.com/kubedb/ops-manager/commit/54caa66d) Reprovision opsReq will progress even if DB is in Provisioning state (#480) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.24.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.24.0) + +- [83f333a6](https://github.com/kubedb/percona-xtradb/commit/83f333a6) Prepare for release v0.24.0 (#330) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.10.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.10.0) + +- [d92be74](https://github.com/kubedb/percona-xtradb-coordinator/commit/d92be74) Prepare for release v0.10.0 (#48) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.21.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.21.0) + +- [bc69455b](https://github.com/kubedb/pg-coordinator/commit/bc69455b) Prepare for release v0.21.0 (#135) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.24.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.24.0) + +- [2973741d](https://github.com/kubedb/pgbouncer/commit/2973741d) Prepare for release v0.24.0 (#295) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.37.0](https://github.com/kubedb/postgres/releases/tag/v0.37.0) + +- [9bf1793e](https://github.com/kubedb/postgres/commit/9bf1793ea) Prepare for release v0.37.0 (#676) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.37.0](https://github.com/kubedb/provisioner/releases/tag/v0.37.0) + +- [aece653d](https://github.com/kubedb/provisioner/commit/aece653d2) Prepare for release v0.37.0 (#58) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.24.0](https://github.com/kubedb/proxysql/releases/tag/v0.24.0) + +- [ebcaa7ac](https://github.com/kubedb/proxysql/commit/ebcaa7ac) Prepare for release v0.24.0 (#312) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.30.0](https://github.com/kubedb/redis/releases/tag/v0.30.0) + +- [abe2953b](https://github.com/kubedb/redis/commit/abe2953b) Prepare for release v0.30.0 (#493) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### 
[v0.16.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.16.0) + +- [5159461a](https://github.com/kubedb/redis-coordinator/commit/5159461a) Prepare for release v0.16.0 (#79) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.24.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.24.0) + +- [16cba321](https://github.com/kubedb/replication-mode-detector/commit/16cba321) Prepare for release v0.24.0 (#243) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.13.0](https://github.com/kubedb/schema-manager/releases/tag/v0.13.0) + +- [072d4c70](https://github.com/kubedb/schema-manager/commit/072d4c70) Prepare for release v0.13.0 (#84) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.22.0](https://github.com/kubedb/tests/releases/tag/v0.22.0) + +- [7f48fadf](https://github.com/kubedb/tests/commit/7f48fadf) Prepare for release v0.22.0 (#265) +- [94fcf34b](https://github.com/kubedb/tests/commit/94fcf34b) Minimize mongo test areas; Increase timeout (#264) +- [0808fc26](https://github.com/kubedb/tests/commit/0808fc26) Add Redis Autoscaler tests (#255) +- [bbf8b2f3](https://github.com/kubedb/tests/commit/bbf8b2f3) Update mysql remote replica (#263) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.13.0](https://github.com/kubedb/ui-server/releases/tag/v0.13.0) + +- [82adcd74](https://github.com/kubedb/ui-server/commit/82adcd74) Prepare for release v0.13.0 (#93) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.13.0](https://github.com/kubedb/webhook-server/releases/tag/v0.13.0) + +- [9ffa4b85](https://github.com/kubedb/webhook-server/commit/9ffa4b85) Prepare for release v0.13.0 (#69) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.11.29-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2023.11.29-rc.0.md new file mode 100644 index 0000000000..6d05de8da9 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.11.29-rc.0.md @@ -0,0 +1,531 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.11.29-rc.0 + name: Changelog-v2023.11.29-rc.0 + parent: welcome + weight: 20231129 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.11.29-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.11.29-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.11.29-rc.0 (2023-11-30) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.38.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.38.0-rc.0) + +- [e070a3ae](https://github.com/kubedb/apimachinery/commit/e070a3ae) Do not default the seccompProfile (#1079) +- [29c96031](https://github.com/kubedb/apimachinery/commit/29c96031) Set Default Security Context for MariaDB (#1077) +- [fc35d376](https://github.com/kubedb/apimachinery/commit/fc35d376) Set default SecurityContext for mysql (#1070) +- [ee71aca0](https://github.com/kubedb/apimachinery/commit/ee71aca0) Update dependencies +- [93b5ba51](https://github.com/kubedb/apimachinery/commit/93b5ba51) add encryptSecret to postgresArchiver (#1078) +- [2b06b6e5](https://github.com/kubedb/apimachinery/commit/2b06b6e5) Add mongodb & 
postgres archiver (#1016) +- [47793c9a](https://github.com/kubedb/apimachinery/commit/47793c9a) Set default SecurityContext for Elasticsearch. (#1072) +- [90567b46](https://github.com/kubedb/apimachinery/commit/90567b46) Set default SecurityContext for Kafka (#1068) +- [449a4e00](https://github.com/kubedb/apimachinery/commit/449a4e00) Remove redundant helper functions for Kafka and Update constants (#1074) +- [b28463f4](https://github.com/kubedb/apimachinery/commit/b28463f4) Set fsGroup to 999 to avoid mountedPath's files permission issue in different storageClass (#1075) +- [8e497b92](https://github.com/kubedb/apimachinery/commit/8e497b92) Set Default Security Context for Redis (#1073) +- [88ab93c7](https://github.com/kubedb/apimachinery/commit/88ab93c7) Set default SecurityContext for mongodb (#1067) +- [e7ac5d2e](https://github.com/kubedb/apimachinery/commit/e7ac5d2e) Set default for security Context for postgres (#1069) +- [f5de4a28](https://github.com/kubedb/apimachinery/commit/f5de4a28) Add support for init with git-sync; Add const (#1065) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.23.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.23.0-rc.0) + +- [a406fbda](https://github.com/kubedb/autoscaler/commit/a406fbda) Prepare for release v0.23.0-rc.0 (#158) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.38.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.38.0-rc.0) + +- [3a4dcc47](https://github.com/kubedb/cli/commit/3a4dcc47) Prepare for release v0.38.0-rc.0 (#737) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.14.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.14.0-rc.0) + +- [c2982e93](https://github.com/kubedb/dashboard/commit/c2982e93) Prepare for release v0.14.0-rc.0 (#85) +- [9a9e6cd9](https://github.com/kubedb/dashboard/commit/9a9e6cd9) Add container security context for elasticsearch dashboard. 
(#84) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.38.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.38.0-rc.0) + +- [6b2943f1](https://github.com/kubedb/elasticsearch/commit/6b2943f19) Prepare for release v0.38.0-rc.0 (#678) +- [7f1a37e1](https://github.com/kubedb/elasticsearch/commit/7f1a37e1a) Add prepare cluster installer before test runner (#677) +- [1d49f16d](https://github.com/kubedb/elasticsearch/commit/1d49f16d2) Remove `init-sysctl` container and add default containerSecurityContext (#676) +- [4bb15e48](https://github.com/kubedb/elasticsearch/commit/4bb15e48b) Update daily-opensearch workflow to provision v1.3.13 + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.1.0-rc.0) + +- [eb95c84](https://github.com/kubedb/elasticsearch-restic-plugin/commit/eb95c84) Prepare for release v0.1.0-rc.0 (#8) +- [fe82e1b](https://github.com/kubedb/elasticsearch-restic-plugin/commit/fe82e1b) Update component name (#7) +- [c155643](https://github.com/kubedb/elasticsearch-restic-plugin/commit/c155643) Update snapshot time (#6) +- [7093d5a](https://github.com/kubedb/elasticsearch-restic-plugin/commit/7093d5a) Move to kubedb org +- [a3a079e](https://github.com/kubedb/elasticsearch-restic-plugin/commit/a3a079e) Update deps (#5) +- [7a0fd38](https://github.com/kubedb/elasticsearch-restic-plugin/commit/7a0fd38) Refactor (#4) +- [b262635](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b262635) Add support for backup and restore (#1) +- [50bde7e](https://github.com/kubedb/elasticsearch-restic-plugin/commit/50bde7e) Fix build +- [b9686b7](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b9686b7) Prepare for release v0.1.0-rc.0 (#3) +- [ba0c0ed](https://github.com/kubedb/elasticsearch-restic-plugin/commit/ba0c0ed) Fix binary name +- [b0aa991](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b0aa991) Use firecracker runner +- [a621400](https://github.com/kubedb/elasticsearch-restic-plugin/commit/a621400) Use Go 1.21 and restic 0.16.0 +- [f08e4e8](https://github.com/kubedb/elasticsearch-restic-plugin/commit/f08e4e8) Use github runner to push docker image + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.11.29-rc.0](https://github.com/kubedb/installer/releases/tag/v2023.11.29-rc.0) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.9.0-rc.0](https://github.com/kubedb/kafka/releases/tag/v0.9.0-rc.0) + +- [0770fff](https://github.com/kubedb/kafka/commit/0770fff) Prepare for release v0.9.0-rc.0 (#48) +- [ee3dcf5](https://github.com/kubedb/kafka/commit/ee3dcf5) Add condition for ssl.properties file (#47) +- [4bd632b](https://github.com/kubedb/kafka/commit/4bd632b) Reconfigure kafka for updated config properties (#45) +- [cc9795b](https://github.com/kubedb/kafka/commit/cc9795b) Upsert Init Containers with Kafka podtemplate.spec and update default test-profile (#43) +- [76e743c](https://github.com/kubedb/kafka/commit/76e743c) Update daily e2e tests yml (#42) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.1.0-rc.0) + +- [bef777c](https://github.com/kubedb/kubedb-manifest-plugin/commit/bef777c) Prepare for release v0.1.0-rc.0 (#28) +- [46ad967](https://github.com/kubedb/kubedb-manifest-plugin/commit/46ad967) 
Remove redundancy (#27) +- [4eaf765](https://github.com/kubedb/kubedb-manifest-plugin/commit/4eaf765) Update snapshot time (#26) +- [e8ace42](https://github.com/kubedb/kubedb-manifest-plugin/commit/e8ace42) Fix plugin binary name +- [d4e3c34](https://github.com/kubedb/kubedb-manifest-plugin/commit/d4e3c34) Move to kubedb org +- [15770b2](https://github.com/kubedb/kubedb-manifest-plugin/commit/15770b2) Update deps (#25) +- [f50a3af](https://github.com/kubedb/kubedb-manifest-plugin/commit/f50a3af) Fix directory cleanup (#24) +- [d41eba7](https://github.com/kubedb/kubedb-manifest-plugin/commit/d41eba7) Refactor +- [0e154e7](https://github.com/kubedb/kubedb-manifest-plugin/commit/0e154e7) Fix release workflow +- [35c6b95](https://github.com/kubedb/kubedb-manifest-plugin/commit/35c6b95) Prepare for release v0.2.0-rc.0 (#22) +- [da97d9a](https://github.com/kubedb/kubedb-manifest-plugin/commit/da97d9a) Use gh runner token to publish image +- [592c51f](https://github.com/kubedb/kubedb-manifest-plugin/commit/592c51f) Use firecracker runner +- [008042d](https://github.com/kubedb/kubedb-manifest-plugin/commit/008042d) Use Go 1.21 +- [985bcab](https://github.com/kubedb/kubedb-manifest-plugin/commit/985bcab) Set snapshot time after snapshot completed (#21) +- [6a8c682](https://github.com/kubedb/kubedb-manifest-plugin/commit/6a8c682) Refactor code (#20) +- [bcb944d](https://github.com/kubedb/kubedb-manifest-plugin/commit/bcb944d) Remove manifest option flags (#19) +- [5a47722](https://github.com/kubedb/kubedb-manifest-plugin/commit/5a47722) Fix secret restore issue (#18) +- [3ced8b7](https://github.com/kubedb/kubedb-manifest-plugin/commit/3ced8b7) Update `kmodules.xyz/client-go` version to `v0.25.27` (#17) +- [2ee1314](https://github.com/kubedb/kubedb-manifest-plugin/commit/2ee1314) Update Readme (#16) +- [42d0e52](https://github.com/kubedb/kubedb-manifest-plugin/commit/42d0e52) Set initial component status prior to backup and restore (#15) +- [31a64d6](https://github.com/kubedb/kubedb-manifest-plugin/commit/31a64d6) Remove redundant flags (#14) +- [a804ba8](https://github.com/kubedb/kubedb-manifest-plugin/commit/a804ba8) Pass Snapshot name for restore +- [99ca49f](https://github.com/kubedb/kubedb-manifest-plugin/commit/99ca49f) Set snapshot time, integrity and size (#12) +- [384bbb6](https://github.com/kubedb/kubedb-manifest-plugin/commit/384bbb6) Set backup error in component status + Refactor codebase (#11) +- [513eef5](https://github.com/kubedb/kubedb-manifest-plugin/commit/513eef5) Update for snapshot and restoresession API changes (#10) +- [4fb8f52](https://github.com/kubedb/kubedb-manifest-plugin/commit/4fb8f52) Add options for issuerref (#9) +- [2931d9e](https://github.com/kubedb/kubedb-manifest-plugin/commit/2931d9e) Update restic modules (#7) +- [3422ddf](https://github.com/kubedb/kubedb-manifest-plugin/commit/3422ddf) Fix bugs + Sync with updated snapshot api (#6) +- [b1a69b5](https://github.com/kubedb/kubedb-manifest-plugin/commit/b1a69b5) Prepare for release v0.1.0 (#5) +- [5344e9f](https://github.com/kubedb/kubedb-manifest-plugin/commit/5344e9f) Update modules (#4) +- [14b2797](https://github.com/kubedb/kubedb-manifest-plugin/commit/14b2797) Add CI badge +- [969eeda](https://github.com/kubedb/kubedb-manifest-plugin/commit/969eeda) Organize code structure (#3) +- [9fc3cbe](https://github.com/kubedb/kubedb-manifest-plugin/commit/9fc3cbe) Postgres manifest (#2) +- [8e2a56f](https://github.com/kubedb/kubedb-manifest-plugin/commit/8e2a56f) Merge pull request #1 from kubestash/mongodb-manifest 
+- [e80c1d0](https://github.com/kubedb/kubedb-manifest-plugin/commit/e80c1d0) update flag names. +- [80d3908](https://github.com/kubedb/kubedb-manifest-plugin/commit/80d3908) Add options for changing name in the restored files. +- [e7da42d](https://github.com/kubedb/kubedb-manifest-plugin/commit/e7da42d) Fix error. +- [70a0267](https://github.com/kubedb/kubedb-manifest-plugin/commit/70a0267) Sync with updated snapshot api +- [9d747d8](https://github.com/kubedb/kubedb-manifest-plugin/commit/9d747d8) Merge branch 'mongodb-manifest' of github.com:stashed/kubedb-manifest into mongodb-manifest +- [90e00e3](https://github.com/kubedb/kubedb-manifest-plugin/commit/90e00e3) Fix bugs. +- [9c3fc1e](https://github.com/kubedb/kubedb-manifest-plugin/commit/9c3fc1e) Sync with updated snapshot api +- [c321013](https://github.com/kubedb/kubedb-manifest-plugin/commit/c321013) update component path. +- [7f4bd17](https://github.com/kubedb/kubedb-manifest-plugin/commit/7f4bd17) Refactor. +- [2b61ff0](https://github.com/kubedb/kubedb-manifest-plugin/commit/2b61ff0) Specify component directory +- [6264cdf](https://github.com/kubedb/kubedb-manifest-plugin/commit/6264cdf) Support restoring particular mongo component. +- [0008570](https://github.com/kubedb/kubedb-manifest-plugin/commit/0008570) Fix restore component phase updating. +- [8bd4c95](https://github.com/kubedb/kubedb-manifest-plugin/commit/8bd4c95) Fix restore manifests. +- [7eda9f9](https://github.com/kubedb/kubedb-manifest-plugin/commit/7eda9f9) Update Snapshot phase calculation. +- [a2b52d2](https://github.com/kubedb/kubedb-manifest-plugin/commit/a2b52d2) Add core to runtime scheme. +- [9bd6bd5](https://github.com/kubedb/kubedb-manifest-plugin/commit/9bd6bd5) Fix bugs. +- [9e08774](https://github.com/kubedb/kubedb-manifest-plugin/commit/9e08774) Fix build +- [01225c6](https://github.com/kubedb/kubedb-manifest-plugin/commit/01225c6) Update module path +- [45d0e45](https://github.com/kubedb/kubedb-manifest-plugin/commit/45d0e45) updated flags. +- [fb0282f](https://github.com/kubedb/kubedb-manifest-plugin/commit/fb0282f) update docker file. +- [ad4c004](https://github.com/kubedb/kubedb-manifest-plugin/commit/ad4c004) refactor. +- [8f71d3a](https://github.com/kubedb/kubedb-manifest-plugin/commit/8f71d3a) Fix build +- [115ef23](https://github.com/kubedb/kubedb-manifest-plugin/commit/115ef23) update makefile. +- [a274690](https://github.com/kubedb/kubedb-manifest-plugin/commit/a274690) update backup and restore. +- [cff449f](https://github.com/kubedb/kubedb-manifest-plugin/commit/cff449f) Use yaml pkg from k8s.io. +- [dcbb399](https://github.com/kubedb/kubedb-manifest-plugin/commit/dcbb399) Use restic package from KubeStash. +- [596a498](https://github.com/kubedb/kubedb-manifest-plugin/commit/596a498) fix restore implementation. +- [6ebc19b](https://github.com/kubedb/kubedb-manifest-plugin/commit/6ebc19b) Implement restore. +- [3e8a869](https://github.com/kubedb/kubedb-manifest-plugin/commit/3e8a869) Start implementing restore. +- [e841113](https://github.com/kubedb/kubedb-manifest-plugin/commit/e841113) Add backup methods for mongodb. +- [b5961f7](https://github.com/kubedb/kubedb-manifest-plugin/commit/b5961f7) Continue implementing backup. +- [d943f6a](https://github.com/kubedb/kubedb-manifest-plugin/commit/d943f6a) Implement manifest backup for MongoDB. +- [e644c67](https://github.com/kubedb/kubedb-manifest-plugin/commit/e644c67) Implement kubedb-manifest plugin to MongoDB manifests. 
+ + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.22.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.22.0-rc.0) + +- [e360fd82](https://github.com/kubedb/mariadb/commit/e360fd82) Prepare for release v0.22.0-rc.0 (#233) +- [3956f18c](https://github.com/kubedb/mariadb/commit/3956f18c) Set Default Security Context for MariaDB (#232) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-rc.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-rc.0) + +- [65fd6bf](https://github.com/kubedb/mariadb-archiver/commit/65fd6bf) Prepare for release v0.1.0-rc.0 (#3) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.18.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.18.0-rc.0) + +- [bf515bfa](https://github.com/kubedb/mariadb-coordinator/commit/bf515bfa) Prepare for release v0.18.0-rc.0 (#92) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.31.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.31.0-rc.0) + +- [e44be0a6](https://github.com/kubedb/memcached/commit/e44be0a6) Prepare for release v0.31.0-rc.0 (#405) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.31.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.31.0-rc.0) + +- [c368ec94](https://github.com/kubedb/mongodb/commit/c368ec94) Prepare for release v0.31.0-rc.0 (#581) +- [020d5599](https://github.com/kubedb/mongodb/commit/020d5599) Set manifest component in restoreSession (#579) +- [95103a47](https://github.com/kubedb/mongodb/commit/95103a47) Implement mongodb archiver (#534) +- [fb01b593](https://github.com/kubedb/mongodb/commit/fb01b593) Update apimachinery deps for fsgroup defaulting (#578) +- [22a5bb29](https://github.com/kubedb/mongodb/commit/22a5bb29) Make changes to run containers as non-root user (#576) +- [8667f411](https://github.com/kubedb/mongodb/commit/8667f411) Rearrange the daily CI (#577) +- [7024a3ca](https://github.com/kubedb/mongodb/commit/7024a3ca) Add support for initialization with git-sync (#575) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.0) + + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.1.0-rc.0) + +- [745f5cb](https://github.com/kubedb/mongodb-restic-plugin/commit/745f5cb) Prepare for release v0.1.0-rc.0 (#13) +- [2c381ee](https://github.com/kubedb/mongodb-restic-plugin/commit/2c381ee) Rename `max-Concurrency` flag name to `max-concurrency` (#12) +- [769bb27](https://github.com/kubedb/mongodb-restic-plugin/commit/769bb27) Set DB version from env if empty (#11) +- [7f51333](https://github.com/kubedb/mongodb-restic-plugin/commit/7f51333) Update snapshot time (#10) +- [e5972d1](https://github.com/kubedb/mongodb-restic-plugin/commit/e5972d1) Move to kubedb org +- [004ef7e](https://github.com/kubedb/mongodb-restic-plugin/commit/004ef7e) Update deps (#9) +- [e54bc9b](https://github.com/kubedb/mongodb-restic-plugin/commit/e54bc9b) Remove version prefix from files (#8) +- [2ab94f7](https://github.com/kubedb/mongodb-restic-plugin/commit/2ab94f7) Add db version flag (#6) +- [d3e752d](https://github.com/kubedb/mongodb-restic-plugin/commit/d3e752d) Prepare for release v0.1.0-rc.0 (#7) +- 
[e0872f9](https://github.com/kubedb/mongodb-restic-plugin/commit/e0872f9) Use firecracker runners +- [a2e18e9](https://github.com/kubedb/mongodb-restic-plugin/commit/a2e18e9) Use github runner to push docker image +- [b32ebb2](https://github.com/kubedb/mongodb-restic-plugin/commit/b32ebb2) Build docker images for each db version (#5) +- [bc3219d](https://github.com/kubedb/mongodb-restic-plugin/commit/bc3219d) Update deps +- [8040cc0](https://github.com/kubedb/mongodb-restic-plugin/commit/8040cc0) MongoDB backup and restore addon (#2) +- [d9cd315](https://github.com/kubedb/mongodb-restic-plugin/commit/d9cd315) Update Readme and license (#1) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.31.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.31.0-rc.0) + +- [3c005b51](https://github.com/kubedb/mysql/commit/3c005b51) Prepare for release v0.31.0-rc.0 (#572) +- [bcdfaf4a](https://github.com/kubedb/mysql/commit/bcdfaf4a) Set Default Security Context for MySQL (#571) +- [9009bcac](https://github.com/kubedb/mysql/commit/9009bcac) Add git sync constants from apimachinery (#570) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.1.0-rc.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.1.0-rc.0) + +- [f79286a](https://github.com/kubedb/mysql-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mysql-archiver/commit/dcd2e30) Fix wal-g binary + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.16.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.16.0-rc.0) + +- [b5e481fc](https://github.com/kubedb/mysql-coordinator/commit/b5e481fc) Prepare for release v0.16.0-rc.0 (#89) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.1.0-rc.0) + +- [b255e47](https://github.com/kubedb/mysql-restic-plugin/commit/b255e47) Prepare for release v0.1.0-rc.0 (#11) +- [9a17360](https://github.com/kubedb/mysql-restic-plugin/commit/9a17360) Set DB version from env if empty (#10) +- [c67ba7c](https://github.com/kubedb/mysql-restic-plugin/commit/c67ba7c) Update snapshot time (#9) +- [abef89e](https://github.com/kubedb/mysql-restic-plugin/commit/abef89e) Fix binary name +- [db1bbbf](https://github.com/kubedb/mysql-restic-plugin/commit/db1bbbf) Move to kubedb org +- [746d13e](https://github.com/kubedb/mysql-restic-plugin/commit/746d13e) Update deps (#8) +- [569533a](https://github.com/kubedb/mysql-restic-plugin/commit/569533a) Add version flag + Refactor (#6) +- [f0abd94](https://github.com/kubedb/mysql-restic-plugin/commit/f0abd94) Prepare for release v0.1.0-rc.0 (#7) +- [01bff62](https://github.com/kubedb/mysql-restic-plugin/commit/01bff62) Remove arm64 image support +- [277fda8](https://github.com/kubedb/mysql-restic-plugin/commit/277fda8) Build docker images for each db version (#5) +- [94f000d](https://github.com/kubedb/mysql-restic-plugin/commit/94f000d) Use Go 1.21 +- [2e4f30d](https://github.com/kubedb/mysql-restic-plugin/commit/2e4f30d) Update Readme (#4) +- [272c8f9](https://github.com/kubedb/mysql-restic-plugin/commit/272c8f9) Add support for mysql backup and restore (#1) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.16.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.16.0-rc.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### 
[v0.25.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.25.0-rc.0) + +- [640fe280](https://github.com/kubedb/ops-manager/commit/640fe280) Prepare for release v0.25.0-rc.0 (#492) +- [9714e841](https://github.com/kubedb/ops-manager/commit/9714e841) Add kafka version 3.6.0 to daily test (#491) +- [dd18b17c](https://github.com/kubedb/ops-manager/commit/dd18b17c) postgres arbiter related changes and bug fixes (#483) +- [de52bda7](https://github.com/kubedb/ops-manager/commit/de52bda7) Remove default configuration and restart kafka with new config (#490) +- [f7850172](https://github.com/kubedb/ops-manager/commit/f7850172) Add prepare cluster installer before test runners (#489) +- [79e646ef](https://github.com/kubedb/ops-manager/commit/79e646ef) Update ServiceDNS for kafka (#488) +- [18851802](https://github.com/kubedb/ops-manager/commit/18851802) added daily postgres (#487) +- [0c2bdda1](https://github.com/kubedb/ops-manager/commit/0c2bdda1) added daily-postgres.yml (#486) +- [5cb75965](https://github.com/kubedb/ops-manager/commit/5cb75965) Fixed BUG in postgres reconfigureTLS opsreq (#485) +- [145e08d5](https://github.com/kubedb/ops-manager/commit/145e08d5) Failover before restarting primary on restart ops (#481) +- [e53a72ce](https://github.com/kubedb/ops-manager/commit/e53a72ce) Add Kafka daily yml (#475) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.25.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.25.0-rc.0) + +- [d374a542](https://github.com/kubedb/percona-xtradb/commit/d374a542) Prepare for release v0.25.0-rc.0 (#331) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.11.0-rc.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.11.0-rc.0) + +- [69e7d1e](https://github.com/kubedb/percona-xtradb-coordinator/commit/69e7d1e) Prepare for release v0.11.0-rc.0 (#49) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.22.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.22.0-rc.0) + +- [e4efa4db](https://github.com/kubedb/pg-coordinator/commit/e4efa4db) Prepare for release v0.22.0-rc.0 (#139) +- [7c862bcd](https://github.com/kubedb/pg-coordinator/commit/7c862bcd) Add support for arbiter (#136) +- [53ba32a9](https://github.com/kubedb/pg-coordinator/commit/53ba32a9) added postgres 16.0 support (#137) +- [24445f9b](https://github.com/kubedb/pg-coordinator/commit/24445f9b) Added & modified logs (#134) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.25.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.25.0-rc.0) + +- [21ba9f0f](https://github.com/kubedb/pgbouncer/commit/21ba9f0f) Prepare for release v0.25.0-rc.0 (#296) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.38.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.38.0-rc.0) + +- [8738ad73](https://github.com/kubedb/postgres/commit/8738ad73e) Prepare for release v0.38.0-rc.0 (#684) +- [adb69b02](https://github.com/kubedb/postgres/commit/adb69b02e) Implement PostgreSQL archiver (#628) +- [668e15dd](https://github.com/kubedb/postgres/commit/668e15dd4) Remove test directory (#683) +- [d857c354](https://github.com/kubedb/postgres/commit/d857c354a) added postgres arbiter support (#677) +- [8fc98e8e](https://github.com/kubedb/postgres/commit/8fc98e8ed) Fixed a bug for init container (#681) +- [a2b408ff](https://github.com/kubedb/postgres/commit/a2b408ffb) Bugfix for security context 
(#680) +- [fb14015e](https://github.com/kubedb/postgres/commit/fb14015e9) added nightly yml for postgres (#679) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.1.0-rc.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.1.0-rc.0) + + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.0) + +- [02a45da](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/02a45da) Prepare for release v0.1.0-rc.0 (#6) +- [1a6457c](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/1a6457c) Update flags and deps + Refactor (#5) +- [f32b56b](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f32b56b) Delete .idea folder +- [e7f8135](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/e7f8135) clean up (#4) +- [06e7e70](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/06e7e70) clean up (#3) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.1.0-rc.0) + +- [8208814](https://github.com/kubedb/postgres-restic-plugin/commit/8208814) Prepare for release v0.1.0-rc.0 (#4) +- [a56fcfa](https://github.com/kubedb/postgres-restic-plugin/commit/a56fcfa) Move to kubedb org (#3) +- [e8928c7](https://github.com/kubedb/postgres-restic-plugin/commit/e8928c7) Added postgres addon for kubestash (#2) +- [7c55105](https://github.com/kubedb/postgres-restic-plugin/commit/7c55105) Prepare for release v0.1.0-rc.0 (#1) +- [19eff67](https://github.com/kubedb/postgres-restic-plugin/commit/19eff67) Use gh runner token to publish docker image +- [6a71410](https://github.com/kubedb/postgres-restic-plugin/commit/6a71410) Use firecracker runner +- [e278d71](https://github.com/kubedb/postgres-restic-plugin/commit/e278d71) Use Go 1.21 +- [4899879](https://github.com/kubedb/postgres-restic-plugin/commit/4899879) Update readme + cleanup + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.38.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.38.0-rc.0) + +- [7e6099e0](https://github.com/kubedb/provisioner/commit/7e6099e0e) Prepare for release v0.38.0-rc.0 (#59) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.25.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.25.0-rc.0) + +- [c4775bf7](https://github.com/kubedb/proxysql/commit/c4775bf7) Prepare for release v0.25.0-rc.0 (#313) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.31.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.31.0-rc.0) + +- [966f14ca](https://github.com/kubedb/redis/commit/966f14ca) Prepare for release v0.31.0-rc.0 (#495) +- [b72d8319](https://github.com/kubedb/redis/commit/b72d8319) Run Redis and RedisSentinel as non root (#494) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.17.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.17.0-rc.0) + +- [9f724e43](https://github.com/kubedb/redis-coordinator/commit/9f724e43) Prepare for release v0.17.0-rc.0 (#80) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.1.0-rc.0) + +- [f8de18b](https://github.com/kubedb/redis-restic-plugin/commit/f8de18b) Prepare for 
release v0.1.0-rc.0 (#9) +- [a4c03d9](https://github.com/kubedb/redis-restic-plugin/commit/a4c03d9) Update snapshot time (#8) +- [404447d](https://github.com/kubedb/redis-restic-plugin/commit/404447d) Fix binary name +- [4dbc58b](https://github.com/kubedb/redis-restic-plugin/commit/4dbc58b) Move to kubedb org +- [e4a6fb2](https://github.com/kubedb/redis-restic-plugin/commit/e4a6fb2) Update deps (#7) +- [1b28954](https://github.com/kubedb/redis-restic-plugin/commit/1b28954) Remove maxConcurrency variable (#6) +- [4d13ee5](https://github.com/kubedb/redis-restic-plugin/commit/4d13ee5) Remove addon implementer + Refactor (#5) +- [44ac2c7](https://github.com/kubedb/redis-restic-plugin/commit/44ac2c7) Prepare for release v0.1.0-rc.0 (#4) +- [ce275bd](https://github.com/kubedb/redis-restic-plugin/commit/ce275bd) Use firecracker runner +- [bf39971](https://github.com/kubedb/redis-restic-plugin/commit/bf39971) Update deps +- [ef24891](https://github.com/kubedb/redis-restic-plugin/commit/ef24891) Use github runner to push docker image +- [6a6f6d6](https://github.com/kubedb/redis-restic-plugin/commit/6a6f6d6) Add support for redis backup and restore (#1) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.25.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.25.0-rc.0) + +- [77886a28](https://github.com/kubedb/replication-mode-detector/commit/77886a28) Prepare for release v0.25.0-rc.0 (#244) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.14.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.14.0-rc.0) + +- [893fe8d9](https://github.com/kubedb/schema-manager/commit/893fe8d9) Prepare for release v0.14.0-rc.0 (#85) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.23.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.23.0-rc.0) + +- [bfd1ec79](https://github.com/kubedb/tests/commit/bfd1ec79) Prepare for release v0.23.0-rc.0 (#270) +- [fab75dd1](https://github.com/kubedb/tests/commit/fab75dd1) Add disableDefault while deploying elasticsearch. 
(#269) +- [009399c7](https://github.com/kubedb/tests/commit/009399c7) Run tests in restricted PodSecurityStandard (#266) +- [4be89382](https://github.com/kubedb/tests/commit/4be89382) Fixed stash test and InnoDB issues in MySQL (#250) +- [f007f5f5](https://github.com/kubedb/tests/commit/f007f5f5) Added test for Standalone to HA scaling (#267) +- [017546ec](https://github.com/kubedb/tests/commit/017546ec) Add Postgres e2e tests (#233) +- [fbd16c88](https://github.com/kubedb/tests/commit/fbd16c88) Add kafka e2e tests (#254) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.14.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.14.0-rc.0) + +- [b59415fd](https://github.com/kubedb/ui-server/commit/b59415fd) Prepare for release v0.14.0-rc.0 (#94) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.14.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.14.0-rc.0) + +- [c36d61e5](https://github.com/kubedb/webhook-server/commit/c36d61e5) Prepare for release v0.14.0-rc.0 (#70) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.12.1-rc.1.md b/content/docs/v2024.1.31/CHANGELOG-v2023.12.1-rc.1.md new file mode 100644 index 0000000000..b74784bb88 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.12.1-rc.1.md @@ -0,0 +1,366 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.12.1-rc.1 + name: Changelog-v2023.12.1-rc.1 + parent: welcome + weight: 20231201 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.12.1-rc.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.12.1-rc.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.12.1-rc.1 (2023-12-01) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.38.0-rc.1](https://github.com/kubedb/apimachinery/releases/tag/v0.38.0-rc.1) + +- [de0bb4e2](https://github.com/kubedb/apimachinery/commit/de0bb4e2) Update kubestash apimachinery +- [545731a9](https://github.com/kubedb/apimachinery/commit/545731a9) Add default KubeBuilder client (#1081) +- [f260aa8e](https://github.com/kubedb/apimachinery/commit/f260aa8e) Add SecurityContext field in catalogs; Set default accordingly (#1080) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.23.0-rc.1](https://github.com/kubedb/autoscaler/releases/tag/v0.23.0-rc.1) + +- [193fb07b](https://github.com/kubedb/autoscaler/commit/193fb07b) Prepare for release v0.23.0-rc.1 (#159) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.38.0-rc.1](https://github.com/kubedb/cli/releases/tag/v0.38.0-rc.1) + +- [a99b2857](https://github.com/kubedb/cli/commit/a99b2857) Prepare for release v0.38.0-rc.1 (#738) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.14.0-rc.1](https://github.com/kubedb/dashboard/releases/tag/v0.14.0-rc.1) + +- [7031fb23](https://github.com/kubedb/dashboard/commit/7031fb23) Prepare for release v0.14.0-rc.1 (#86) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.38.0-rc.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.38.0-rc.1) + +- [bd0fd357](https://github.com/kubedb/elasticsearch/commit/bd0fd357e) Prepare for release v0.38.0-rc.1 (#680) + + + 
+## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.1.0-rc.1) + +- [f6a9e4c](https://github.com/kubedb/elasticsearch-restic-plugin/commit/f6a9e4c) Prepare for release v0.1.0-rc.1 (#9) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.12.1-rc.1](https://github.com/kubedb/installer/releases/tag/v2023.12.1-rc.1) + +- [876956a1](https://github.com/kubedb/installer/commit/876956a1) Prepare for release v2023.12.1-rc.1 (#727) +- [d021b61b](https://github.com/kubedb/installer/commit/d021b61b) Add `runAsUser` field in all catalogs (#725) +- [38e2ba3e](https://github.com/kubedb/installer/commit/38e2ba3e) Add `--default-seccomp-profile-type` flag (#724) +- [ee86f5bc](https://github.com/kubedb/installer/commit/ee86f5bc) Add `databases` flag for mysql addon (#723) +- [e67541cf](https://github.com/kubedb/installer/commit/e67541cf) Add postgres restic addon (#726) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.9.0-rc.1](https://github.com/kubedb/kafka/releases/tag/v0.9.0-rc.1) + +- [0516c18](https://github.com/kubedb/kafka/commit/0516c18) Prepare for release v0.9.0-rc.1 (#50) +- [6554778](https://github.com/kubedb/kafka/commit/6554778) Set default KubeBuilder client (#49) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.1.0-rc.1) + +- [4bd44b8](https://github.com/kubedb/kubedb-manifest-plugin/commit/4bd44b8) Prepare for release v0.1.0-rc.1 (#29) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.22.0-rc.1](https://github.com/kubedb/mariadb/releases/tag/v0.22.0-rc.1) + +- [9c157c66](https://github.com/kubedb/mariadb/commit/9c157c66) Prepare for release v0.22.0-rc.1 (#235) +- [1d0c2579](https://github.com/kubedb/mariadb/commit/1d0c2579) Pass version in SetDefaults func (#234) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-rc.1](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-rc.1) + +- [a2afbc9](https://github.com/kubedb/mariadb-archiver/commit/a2afbc9) Prepare for release v0.1.0-rc.1 (#4) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.18.0-rc.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.18.0-rc.1) + +- [118bcda4](https://github.com/kubedb/mariadb-coordinator/commit/118bcda4) Prepare for release v0.18.0-rc.1 (#93) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.31.0-rc.1](https://github.com/kubedb/memcached/releases/tag/v0.31.0-rc.1) + +- [fab2a879](https://github.com/kubedb/memcached/commit/fab2a879) Prepare for release v0.31.0-rc.1 (#406) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.31.0-rc.1](https://github.com/kubedb/mongodb/releases/tag/v0.31.0-rc.1) + +- [de48eeb7](https://github.com/kubedb/mongodb/commit/de48eeb7) Prepare for release v0.31.0-rc.1 (#582) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.1) + +- [92b28e8](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/92b28e8) Prepare for release v0.1.0-rc.1 (#4) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### 
[v0.1.0-rc.1](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.1.0-rc.1) + +- [1daa490](https://github.com/kubedb/mongodb-restic-plugin/commit/1daa490) Prepare for release v0.1.0-rc.1 (#14) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.31.0-rc.1](https://github.com/kubedb/mysql/releases/tag/v0.31.0-rc.1) + +- [79cb58c1](https://github.com/kubedb/mysql/commit/79cb58c1) Prepare for release v0.31.0-rc.1 (#574) +- [e5b37c00](https://github.com/kubedb/mysql/commit/e5b37c00) Pass version in SetDefaults func (#573) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.1.0-rc.1](https://github.com/kubedb/mysql-archiver/releases/tag/v0.1.0-rc.1) + +- [8c65d14](https://github.com/kubedb/mysql-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.16.0-rc.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.16.0-rc.1) + +- [63cb0a33](https://github.com/kubedb/mysql-coordinator/commit/63cb0a33) Prepare for release v0.16.0-rc.1 (#90) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.1.0-rc.1) + +- [f77476b](https://github.com/kubedb/mysql-restic-plugin/commit/f77476b) Prepare for release v0.1.0-rc.1 (#13) +- [81ceb55](https://github.com/kubedb/mysql-restic-plugin/commit/81ceb55) Add `databases` flag (#12) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.16.0-rc.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.16.0-rc.1) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.25.0-rc.1](https://github.com/kubedb/ops-manager/releases/tag/v0.25.0-rc.1) + +- [98dbd6c0](https://github.com/kubedb/ops-manager/commit/98dbd6c0) Prepare for release v0.25.0-rc.1 (#494) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.25.0-rc.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.25.0-rc.1) + +- [bad0b334](https://github.com/kubedb/percona-xtradb/commit/bad0b334) Prepare for release v0.25.0-rc.1 (#333) +- [b8447936](https://github.com/kubedb/percona-xtradb/commit/b8447936) Pass version in SetDefaults func (#332) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.11.0-rc.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.11.0-rc.1) + +- [7a66da3](https://github.com/kubedb/percona-xtradb-coordinator/commit/7a66da3) Prepare for release v0.11.0-rc.1 (#50) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.22.0-rc.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.22.0-rc.1) + +- [9f9614e6](https://github.com/kubedb/pg-coordinator/commit/9f9614e6) Prepare for release v0.22.0-rc.1 (#140) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.25.0-rc.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.25.0-rc.1) + +- [e3e9f84d](https://github.com/kubedb/pgbouncer/commit/e3e9f84d) Prepare for release v0.25.0-rc.1 (#297) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.38.0-rc.1](https://github.com/kubedb/postgres/releases/tag/v0.38.0-rc.1) + +- [0e493a1c](https://github.com/kubedb/postgres/commit/0e493a1cd) Prepare for release v0.38.0-rc.1 (#685) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + 
+### [v0.1.0-rc.1](https://github.com/kubedb/postgres-archiver/releases/tag/v0.1.0-rc.1) + + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.1) + +- [57a7bdf](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/57a7bdf) Prepare for release v0.1.0-rc.1 (#7) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.1.0-rc.1) + +- [584bbad](https://github.com/kubedb/postgres-restic-plugin/commit/584bbad) Prepare for release v0.1.0-rc.1 (#6) +- [da1ecd7](https://github.com/kubedb/postgres-restic-plugin/commit/da1ecd7) Refactor (#5) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.38.0-rc.1](https://github.com/kubedb/provisioner/releases/tag/v0.38.0-rc.1) + +- [086300d9](https://github.com/kubedb/provisioner/commit/086300d90) Prepare for release v0.38.0-rc.1 (#61) +- [0dfe3742](https://github.com/kubedb/provisioner/commit/0dfe37425) Ensure archiver CRDs (#60) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.25.0-rc.1](https://github.com/kubedb/proxysql/releases/tag/v0.25.0-rc.1) + +- [1f87cbc5](https://github.com/kubedb/proxysql/commit/1f87cbc5) Prepare for release v0.25.0-rc.1 (#314) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.31.0-rc.1](https://github.com/kubedb/redis/releases/tag/v0.31.0-rc.1) + +- [a3d4b7b8](https://github.com/kubedb/redis/commit/a3d4b7b8) Prepare for release v0.31.0-rc.1 (#497) +- [bb101b6a](https://github.com/kubedb/redis/commit/bb101b6a) Pass version in SetDefaults func (#496) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.17.0-rc.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.17.0-rc.1) + +- [7e5fbf31](https://github.com/kubedb/redis-coordinator/commit/7e5fbf31) Prepare for release v0.17.0-rc.1 (#81) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.1.0-rc.1) + +- [8cae5ef](https://github.com/kubedb/redis-restic-plugin/commit/8cae5ef) Prepare for release v0.1.0-rc.1 (#10) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.25.0-rc.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.25.0-rc.1) + +- [758906fe](https://github.com/kubedb/replication-mode-detector/commit/758906fe) Prepare for release v0.25.0-rc.1 (#245) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.14.0-rc.1](https://github.com/kubedb/schema-manager/releases/tag/v0.14.0-rc.1) + +- [f7e384b2](https://github.com/kubedb/schema-manager/commit/f7e384b2) Prepare for release v0.14.0-rc.1 (#86) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.23.0-rc.1](https://github.com/kubedb/tests/releases/tag/v0.23.0-rc.1) + +- [8f82cf9a](https://github.com/kubedb/tests/commit/8f82cf9a) Prepare for release v0.23.0-rc.1 (#272) +- [0bfcc3b6](https://github.com/kubedb/tests/commit/0bfcc3b6) Fix kafka restart-pods test (#271) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.14.0-rc.1](https://github.com/kubedb/ui-server/releases/tag/v0.14.0-rc.1) + +- [82f78763](https://github.com/kubedb/ui-server/commit/82f78763) Prepare 
for release v0.14.0-rc.1 (#95) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.14.0-rc.1](https://github.com/kubedb/webhook-server/releases/tag/v0.14.0-rc.1) + +- [01d13baa](https://github.com/kubedb/webhook-server/commit/01d13baa) Prepare for release v0.14.0-rc.1 (#72) +- [e869d0ce](https://github.com/kubedb/webhook-server/commit/e869d0ce) Initialize default KubeBuilder client (#71) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.12.11.md b/content/docs/v2024.1.31/CHANGELOG-v2023.12.11.md new file mode 100644 index 0000000000..0ba7851880 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.12.11.md @@ -0,0 +1,634 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.12.11 + name: Changelog-v2023.12.11 + parent: welcome + weight: 20231211 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.12.11/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.12.11/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.12.11 (2023-12-08) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.38.0](https://github.com/kubedb/apimachinery/releases/tag/v0.38.0) + +- [566c617f](https://github.com/kubedb/apimachinery/commit/566c617f) Update kafka webhook mutating verb (#1084) +- [c6ac8def](https://github.com/kubedb/apimachinery/commit/c6ac8def) Add IPS_LOCK and SYS_RESOURCE (#1083) +- [96238937](https://github.com/kubedb/apimachinery/commit/96238937) Add Postgres arbiter spec (#1082) +- [24013ada](https://github.com/kubedb/apimachinery/commit/24013ada) Fix update-crds wf +- [de0bb4e2](https://github.com/kubedb/apimachinery/commit/de0bb4e2) Update kubestash apimachinery +- [545731a9](https://github.com/kubedb/apimachinery/commit/545731a9) Add default KubeBuilder client (#1081) +- [f260aa8e](https://github.com/kubedb/apimachinery/commit/f260aa8e) Add SecurityContext field in catalogs; Set default accordingly (#1080) +- [e070a3ae](https://github.com/kubedb/apimachinery/commit/e070a3ae) Do not default the seccompProfile (#1079) +- [29c96031](https://github.com/kubedb/apimachinery/commit/29c96031) Set Default Security Context for MariaDB (#1077) +- [fc35d376](https://github.com/kubedb/apimachinery/commit/fc35d376) Set default SecurityContext for mysql (#1070) +- [ee71aca0](https://github.com/kubedb/apimachinery/commit/ee71aca0) Update dependencies +- [93b5ba51](https://github.com/kubedb/apimachinery/commit/93b5ba51) add encryptSecret to postgresArchiver (#1078) +- [2b06b6e5](https://github.com/kubedb/apimachinery/commit/2b06b6e5) Add mongodb & postgres archiver (#1016) +- [47793c9a](https://github.com/kubedb/apimachinery/commit/47793c9a) Set default SecurityContext for Elasticsearch. 
(#1072) +- [90567b46](https://github.com/kubedb/apimachinery/commit/90567b46) Set default SecurityContext for Kafka (#1068) +- [449a4e00](https://github.com/kubedb/apimachinery/commit/449a4e00) Remove redundant helper functions for Kafka and Update constants (#1074) +- [b28463f4](https://github.com/kubedb/apimachinery/commit/b28463f4) Set fsGroup to 999 to avoid mountedPath's files permission issue in different storageClass (#1075) +- [8e497b92](https://github.com/kubedb/apimachinery/commit/8e497b92) Set Default Security Context for Redis (#1073) +- [88ab93c7](https://github.com/kubedb/apimachinery/commit/88ab93c7) Set default SecurityContext for mongodb (#1067) +- [e7ac5d2e](https://github.com/kubedb/apimachinery/commit/e7ac5d2e) Set default for security Context for postgres (#1069) +- [f5de4a28](https://github.com/kubedb/apimachinery/commit/f5de4a28) Add support for init with git-sync; Add const (#1065) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.23.0](https://github.com/kubedb/autoscaler/releases/tag/v0.23.0) + +- [d7c1af24](https://github.com/kubedb/autoscaler/commit/d7c1af24) Prepare for release v0.23.0 (#160) +- [193fb07b](https://github.com/kubedb/autoscaler/commit/193fb07b) Prepare for release v0.23.0-rc.1 (#159) +- [a406fbda](https://github.com/kubedb/autoscaler/commit/a406fbda) Prepare for release v0.23.0-rc.0 (#158) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.38.0](https://github.com/kubedb/cli/releases/tag/v0.38.0) + +- [8c968939](https://github.com/kubedb/cli/commit/8c968939) Prepare for release v0.38.0 (#739) +- [a99b2857](https://github.com/kubedb/cli/commit/a99b2857) Prepare for release v0.38.0-rc.1 (#738) +- [3a4dcc47](https://github.com/kubedb/cli/commit/3a4dcc47) Prepare for release v0.38.0-rc.0 (#737) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.14.0](https://github.com/kubedb/dashboard/releases/tag/v0.14.0) + +- [7741d24d](https://github.com/kubedb/dashboard/commit/7741d24d) Prepare for release v0.14.0 (#87) +- [7031fb23](https://github.com/kubedb/dashboard/commit/7031fb23) Prepare for release v0.14.0-rc.1 (#86) +- [c2982e93](https://github.com/kubedb/dashboard/commit/c2982e93) Prepare for release v0.14.0-rc.0 (#85) +- [9a9e6cd9](https://github.com/kubedb/dashboard/commit/9a9e6cd9) Add container security context for elasticsearch dashboard. (#84) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.38.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.38.0) + +- [da1e77ef](https://github.com/kubedb/elasticsearch/commit/da1e77ef4) Prepare for release v0.38.0 (#681) +- [aec25e8a](https://github.com/kubedb/elasticsearch/commit/aec25e8a9) Add new version in elasticsearch yaml. 
(#679) +- [bd0fd357](https://github.com/kubedb/elasticsearch/commit/bd0fd357e) Prepare for release v0.38.0-rc.1 (#680) +- [6b2943f1](https://github.com/kubedb/elasticsearch/commit/6b2943f19) Prepare for release v0.38.0-rc.0 (#678) +- [7f1a37e1](https://github.com/kubedb/elasticsearch/commit/7f1a37e1a) Add prepare cluster installer before test runner (#677) +- [1d49f16d](https://github.com/kubedb/elasticsearch/commit/1d49f16d2) Remove `init-sysctl` container and add default containerSecurityContext (#676) +- [4bb15e48](https://github.com/kubedb/elasticsearch/commit/4bb15e48b) Update daily-opensearch workflow to provision v1.3.13 + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.1.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.1.0) + +- [1d1abdd](https://github.com/kubedb/elasticsearch-restic-plugin/commit/1d1abdd) Prepare for release v0.1.0 (#10) +- [f6a9e4c](https://github.com/kubedb/elasticsearch-restic-plugin/commit/f6a9e4c) Prepare for release v0.1.0-rc.1 (#9) +- [eb95c84](https://github.com/kubedb/elasticsearch-restic-plugin/commit/eb95c84) Prepare for release v0.1.0-rc.0 (#8) +- [fe82e1b](https://github.com/kubedb/elasticsearch-restic-plugin/commit/fe82e1b) Update component name (#7) +- [c155643](https://github.com/kubedb/elasticsearch-restic-plugin/commit/c155643) Update snapshot time (#6) +- [7093d5a](https://github.com/kubedb/elasticsearch-restic-plugin/commit/7093d5a) Move to kubedb org +- [a3a079e](https://github.com/kubedb/elasticsearch-restic-plugin/commit/a3a079e) Update deps (#5) +- [7a0fd38](https://github.com/kubedb/elasticsearch-restic-plugin/commit/7a0fd38) Refactor (#4) +- [b262635](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b262635) Add support for backup and restore (#1) +- [50bde7e](https://github.com/kubedb/elasticsearch-restic-plugin/commit/50bde7e) Fix build +- [b9686b7](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b9686b7) Prepare for release v0.1.0-rc.0 (#3) +- [ba0c0ed](https://github.com/kubedb/elasticsearch-restic-plugin/commit/ba0c0ed) Fix binary name +- [b0aa991](https://github.com/kubedb/elasticsearch-restic-plugin/commit/b0aa991) Use firecracker runner +- [a621400](https://github.com/kubedb/elasticsearch-restic-plugin/commit/a621400) Use Go 1.21 and restic 0.16.0 +- [f08e4e8](https://github.com/kubedb/elasticsearch-restic-plugin/commit/f08e4e8) Use github runner to push docker image + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.12.11](https://github.com/kubedb/installer/releases/tag/v2023.12.11) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.9.0](https://github.com/kubedb/kafka/releases/tag/v0.9.0) + +- [9c62eb1](https://github.com/kubedb/kafka/commit/9c62eb1) Prepare for release v0.9.0 (#52) +- [8ddb2b8](https://github.com/kubedb/kafka/commit/8ddb2b8) Remove hardcoded fsgroup from statefulset (#51) +- [0516c18](https://github.com/kubedb/kafka/commit/0516c18) Prepare for release v0.9.0-rc.1 (#50) +- [6554778](https://github.com/kubedb/kafka/commit/6554778) Set default KubeBuilder client (#49) +- [0770fff](https://github.com/kubedb/kafka/commit/0770fff) Prepare for release v0.9.0-rc.0 (#48) +- [ee3dcf5](https://github.com/kubedb/kafka/commit/ee3dcf5) Add condition for ssl.properties file (#47) +- [4bd632b](https://github.com/kubedb/kafka/commit/4bd632b) Reconfigure kafka for updated config properties (#45) +- [cc9795b](https://github.com/kubedb/kafka/commit/cc9795b) Upsert Init 
Containers with Kafka podtemplate.spec and update default test-profile (#43) +- [76e743c](https://github.com/kubedb/kafka/commit/76e743c) Update daily e2e tests yml (#42) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.1.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.1.0) + +- [2dd0a52](https://github.com/kubedb/kubedb-manifest-plugin/commit/2dd0a52) Prepare for release v0.1.0 (#30) +- [4bd44b8](https://github.com/kubedb/kubedb-manifest-plugin/commit/4bd44b8) Prepare for release v0.1.0-rc.1 (#29) +- [bef777c](https://github.com/kubedb/kubedb-manifest-plugin/commit/bef777c) Prepare for release v0.1.0-rc.0 (#28) +- [46ad967](https://github.com/kubedb/kubedb-manifest-plugin/commit/46ad967) Remove redundancy (#27) +- [4eaf765](https://github.com/kubedb/kubedb-manifest-plugin/commit/4eaf765) Update snapshot time (#26) +- [e8ace42](https://github.com/kubedb/kubedb-manifest-plugin/commit/e8ace42) Fix plugin binary name +- [d4e3c34](https://github.com/kubedb/kubedb-manifest-plugin/commit/d4e3c34) Move to kubedb org +- [15770b2](https://github.com/kubedb/kubedb-manifest-plugin/commit/15770b2) Update deps (#25) +- [f50a3af](https://github.com/kubedb/kubedb-manifest-plugin/commit/f50a3af) Fix directory cleanup (#24) +- [d41eba7](https://github.com/kubedb/kubedb-manifest-plugin/commit/d41eba7) Refactor +- [0e154e7](https://github.com/kubedb/kubedb-manifest-plugin/commit/0e154e7) Fix release workflow +- [35c6b95](https://github.com/kubedb/kubedb-manifest-plugin/commit/35c6b95) Prepare for release v0.2.0-rc.0 (#22) +- [da97d9a](https://github.com/kubedb/kubedb-manifest-plugin/commit/da97d9a) Use gh runner token to publish image +- [592c51f](https://github.com/kubedb/kubedb-manifest-plugin/commit/592c51f) Use firecracker runner +- [008042d](https://github.com/kubedb/kubedb-manifest-plugin/commit/008042d) Use Go 1.21 +- [985bcab](https://github.com/kubedb/kubedb-manifest-plugin/commit/985bcab) Set snapshot time after snapshot completed (#21) +- [6a8c682](https://github.com/kubedb/kubedb-manifest-plugin/commit/6a8c682) Refactor code (#20) +- [bcb944d](https://github.com/kubedb/kubedb-manifest-plugin/commit/bcb944d) Remove manifest option flags (#19) +- [5a47722](https://github.com/kubedb/kubedb-manifest-plugin/commit/5a47722) Fix secret restore issue (#18) +- [3ced8b7](https://github.com/kubedb/kubedb-manifest-plugin/commit/3ced8b7) Update `kmodules.xyz/client-go` version to `v0.25.27` (#17) +- [2ee1314](https://github.com/kubedb/kubedb-manifest-plugin/commit/2ee1314) Update Readme (#16) +- [42d0e52](https://github.com/kubedb/kubedb-manifest-plugin/commit/42d0e52) Set initial component status prior to backup and restore (#15) +- [31a64d6](https://github.com/kubedb/kubedb-manifest-plugin/commit/31a64d6) Remove redundant flags (#14) +- [a804ba8](https://github.com/kubedb/kubedb-manifest-plugin/commit/a804ba8) Pass Snapshot name for restore +- [99ca49f](https://github.com/kubedb/kubedb-manifest-plugin/commit/99ca49f) Set snapshot time, integrity and size (#12) +- [384bbb6](https://github.com/kubedb/kubedb-manifest-plugin/commit/384bbb6) Set backup error in component status + Refactor codebase (#11) +- [513eef5](https://github.com/kubedb/kubedb-manifest-plugin/commit/513eef5) Update for snapshot and restoresession API changes (#10) +- [4fb8f52](https://github.com/kubedb/kubedb-manifest-plugin/commit/4fb8f52) Add options for issuerref (#9) +- [2931d9e](https://github.com/kubedb/kubedb-manifest-plugin/commit/2931d9e) Update restic 
modules (#7) +- [3422ddf](https://github.com/kubedb/kubedb-manifest-plugin/commit/3422ddf) Fix bugs + Sync with updated snapshot api (#6) +- [b1a69b5](https://github.com/kubedb/kubedb-manifest-plugin/commit/b1a69b5) Prepare for release v0.1.0 (#5) +- [5344e9f](https://github.com/kubedb/kubedb-manifest-plugin/commit/5344e9f) Update modules (#4) +- [14b2797](https://github.com/kubedb/kubedb-manifest-plugin/commit/14b2797) Add CI badge +- [969eeda](https://github.com/kubedb/kubedb-manifest-plugin/commit/969eeda) Organize code structure (#3) +- [9fc3cbe](https://github.com/kubedb/kubedb-manifest-plugin/commit/9fc3cbe) Postgres manifest (#2) +- [8e2a56f](https://github.com/kubedb/kubedb-manifest-plugin/commit/8e2a56f) Merge pull request #1 from kubestash/mongodb-manifest +- [e80c1d0](https://github.com/kubedb/kubedb-manifest-plugin/commit/e80c1d0) update flag names. +- [80d3908](https://github.com/kubedb/kubedb-manifest-plugin/commit/80d3908) Add options for changing name in the restored files. +- [e7da42d](https://github.com/kubedb/kubedb-manifest-plugin/commit/e7da42d) Fix error. +- [70a0267](https://github.com/kubedb/kubedb-manifest-plugin/commit/70a0267) Sync with updated snapshot api +- [9d747d8](https://github.com/kubedb/kubedb-manifest-plugin/commit/9d747d8) Merge branch 'mongodb-manifest' of github.com:stashed/kubedb-manifest into mongodb-manifest +- [90e00e3](https://github.com/kubedb/kubedb-manifest-plugin/commit/90e00e3) Fix bugs. +- [9c3fc1e](https://github.com/kubedb/kubedb-manifest-plugin/commit/9c3fc1e) Sync with updated snapshot api +- [c321013](https://github.com/kubedb/kubedb-manifest-plugin/commit/c321013) update component path. +- [7f4bd17](https://github.com/kubedb/kubedb-manifest-plugin/commit/7f4bd17) Refactor. +- [2b61ff0](https://github.com/kubedb/kubedb-manifest-plugin/commit/2b61ff0) Specify component directory +- [6264cdf](https://github.com/kubedb/kubedb-manifest-plugin/commit/6264cdf) Support restoring particular mongo component. +- [0008570](https://github.com/kubedb/kubedb-manifest-plugin/commit/0008570) Fix restore component phase updating. +- [8bd4c95](https://github.com/kubedb/kubedb-manifest-plugin/commit/8bd4c95) Fix restore manifests. +- [7eda9f9](https://github.com/kubedb/kubedb-manifest-plugin/commit/7eda9f9) Update Snapshot phase calculation. +- [a2b52d2](https://github.com/kubedb/kubedb-manifest-plugin/commit/a2b52d2) Add core to runtime scheme. +- [9bd6bd5](https://github.com/kubedb/kubedb-manifest-plugin/commit/9bd6bd5) Fix bugs. +- [9e08774](https://github.com/kubedb/kubedb-manifest-plugin/commit/9e08774) Fix build +- [01225c6](https://github.com/kubedb/kubedb-manifest-plugin/commit/01225c6) Update module path +- [45d0e45](https://github.com/kubedb/kubedb-manifest-plugin/commit/45d0e45) updated flags. +- [fb0282f](https://github.com/kubedb/kubedb-manifest-plugin/commit/fb0282f) update docker file. +- [ad4c004](https://github.com/kubedb/kubedb-manifest-plugin/commit/ad4c004) refactor. +- [8f71d3a](https://github.com/kubedb/kubedb-manifest-plugin/commit/8f71d3a) Fix build +- [115ef23](https://github.com/kubedb/kubedb-manifest-plugin/commit/115ef23) update makefile. +- [a274690](https://github.com/kubedb/kubedb-manifest-plugin/commit/a274690) update backup and restore. +- [cff449f](https://github.com/kubedb/kubedb-manifest-plugin/commit/cff449f) Use yaml pkg from k8s.io. +- [dcbb399](https://github.com/kubedb/kubedb-manifest-plugin/commit/dcbb399) Use restic package from KubeStash. 
+- [596a498](https://github.com/kubedb/kubedb-manifest-plugin/commit/596a498) fix restore implementation. +- [6ebc19b](https://github.com/kubedb/kubedb-manifest-plugin/commit/6ebc19b) Implement restore. +- [3e8a869](https://github.com/kubedb/kubedb-manifest-plugin/commit/3e8a869) Start implementing restore. +- [e841113](https://github.com/kubedb/kubedb-manifest-plugin/commit/e841113) Add backup methods for mongodb. +- [b5961f7](https://github.com/kubedb/kubedb-manifest-plugin/commit/b5961f7) Continue implementing backup. +- [d943f6a](https://github.com/kubedb/kubedb-manifest-plugin/commit/d943f6a) Implement manifest backup for MongoDB. +- [e644c67](https://github.com/kubedb/kubedb-manifest-plugin/commit/e644c67) Implement kubedb-manifest plugin to MongoDB manifests. + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.22.0](https://github.com/kubedb/mariadb/releases/tag/v0.22.0) + +- [b6995945](https://github.com/kubedb/mariadb/commit/b6995945) Prepare for release v0.22.0 (#237) +- [25018ad7](https://github.com/kubedb/mariadb/commit/25018ad7) Fix Statefulset Security Context Assign (#236) +- [9c157c66](https://github.com/kubedb/mariadb/commit/9c157c66) Prepare for release v0.22.0-rc.1 (#235) +- [1d0c2579](https://github.com/kubedb/mariadb/commit/1d0c2579) Pass version in SetDefaults func (#234) +- [e360fd82](https://github.com/kubedb/mariadb/commit/e360fd82) Prepare for release v0.22.0-rc.0 (#233) +- [3956f18c](https://github.com/kubedb/mariadb/commit/3956f18c) Set Default Security Context for MariaDB (#232) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0) + +- [a014ffc](https://github.com/kubedb/mariadb-archiver/commit/a014ffc) Prepare for release v0.1.0 (#5) +- [a2afbc9](https://github.com/kubedb/mariadb-archiver/commit/a2afbc9) Prepare for release v0.1.0-rc.1 (#4) +- [65fd6bf](https://github.com/kubedb/mariadb-archiver/commit/65fd6bf) Prepare for release v0.1.0-rc.0 (#3) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.18.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.18.0) + +- [ec9782e7](https://github.com/kubedb/mariadb-coordinator/commit/ec9782e7) Prepare for release v0.18.0 (#94) +- [118bcda4](https://github.com/kubedb/mariadb-coordinator/commit/118bcda4) Prepare for release v0.18.0-rc.1 (#93) +- [bf515bfa](https://github.com/kubedb/mariadb-coordinator/commit/bf515bfa) Prepare for release v0.18.0-rc.0 (#92) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.31.0](https://github.com/kubedb/memcached/releases/tag/v0.31.0) + +- [da52b0a5](https://github.com/kubedb/memcached/commit/da52b0a5) Prepare for release v0.31.0 (#407) +- [fab2a879](https://github.com/kubedb/memcached/commit/fab2a879) Prepare for release v0.31.0-rc.1 (#406) +- [e44be0a6](https://github.com/kubedb/memcached/commit/e44be0a6) Prepare for release v0.31.0-rc.0 (#405) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.31.0](https://github.com/kubedb/mongodb/releases/tag/v0.31.0) + +- [32ab5a6a](https://github.com/kubedb/mongodb/commit/32ab5a6a) Prepare for release v0.31.0 (#584) +- [1a79be25](https://github.com/kubedb/mongodb/commit/1a79be25) Use args instead of cmd to work with latest walg image (#583) +- [de48eeb7](https://github.com/kubedb/mongodb/commit/de48eeb7) Prepare for release v0.31.0-rc.1 (#582) +- [c368ec94](https://github.com/kubedb/mongodb/commit/c368ec94) Prepare 
for release v0.31.0-rc.0 (#581) +- [020d5599](https://github.com/kubedb/mongodb/commit/020d5599) Set manifest component in restoreSession (#579) +- [95103a47](https://github.com/kubedb/mongodb/commit/95103a47) Implement mongodb archiver (#534) +- [fb01b593](https://github.com/kubedb/mongodb/commit/fb01b593) Update apimachinery deps for fsgroup defaulting (#578) +- [22a5bb29](https://github.com/kubedb/mongodb/commit/22a5bb29) Make changes to run containers as non-root user (#576) +- [8667f411](https://github.com/kubedb/mongodb/commit/8667f411) Rearrange the daily CI (#577) +- [7024a3ca](https://github.com/kubedb/mongodb/commit/7024a3ca) Add support for initialization with git-sync (#575) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.1.0) + + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.1.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.1.0) + +- [93b29cd](https://github.com/kubedb/mongodb-restic-plugin/commit/93b29cd) Prepare for release v0.1.0 (#15) +- [1daa490](https://github.com/kubedb/mongodb-restic-plugin/commit/1daa490) Prepare for release v0.1.0-rc.1 (#14) +- [745f5cb](https://github.com/kubedb/mongodb-restic-plugin/commit/745f5cb) Prepare for release v0.1.0-rc.0 (#13) +- [2c381ee](https://github.com/kubedb/mongodb-restic-plugin/commit/2c381ee) Rename `max-Concurrency` flag name to `max-concurrency` (#12) +- [769bb27](https://github.com/kubedb/mongodb-restic-plugin/commit/769bb27) Set DB version from env if empty (#11) +- [7f51333](https://github.com/kubedb/mongodb-restic-plugin/commit/7f51333) Update snapshot time (#10) +- [e5972d1](https://github.com/kubedb/mongodb-restic-plugin/commit/e5972d1) Move to kubedb org +- [004ef7e](https://github.com/kubedb/mongodb-restic-plugin/commit/004ef7e) Update deps (#9) +- [e54bc9b](https://github.com/kubedb/mongodb-restic-plugin/commit/e54bc9b) Remove version prefix from files (#8) +- [2ab94f7](https://github.com/kubedb/mongodb-restic-plugin/commit/2ab94f7) Add db version flag (#6) +- [d3e752d](https://github.com/kubedb/mongodb-restic-plugin/commit/d3e752d) Prepare for release v0.1.0-rc.0 (#7) +- [e0872f9](https://github.com/kubedb/mongodb-restic-plugin/commit/e0872f9) Use firecracker runners +- [a2e18e9](https://github.com/kubedb/mongodb-restic-plugin/commit/a2e18e9) Use github runner to push docker image +- [b32ebb2](https://github.com/kubedb/mongodb-restic-plugin/commit/b32ebb2) Build docker images for each db version (#5) +- [bc3219d](https://github.com/kubedb/mongodb-restic-plugin/commit/bc3219d) Update deps +- [8040cc0](https://github.com/kubedb/mongodb-restic-plugin/commit/8040cc0) MongoDB backup and restore addon (#2) +- [d9cd315](https://github.com/kubedb/mongodb-restic-plugin/commit/d9cd315) Update Readme and license (#1) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.31.0](https://github.com/kubedb/mysql/releases/tag/v0.31.0) + +- [9094f699](https://github.com/kubedb/mysql/commit/9094f699) Prepare for release v0.31.0 (#576) +- [fd0ebe09](https://github.com/kubedb/mysql/commit/fd0ebe09) Fix Statefulset Security Context Assign (#575) +- [79cb58c1](https://github.com/kubedb/mysql/commit/79cb58c1) Prepare for release v0.31.0-rc.1 (#574) +- [e5b37c00](https://github.com/kubedb/mysql/commit/e5b37c00) Pass version in SetDefaults func (#573) +- 
[3c005b51](https://github.com/kubedb/mysql/commit/3c005b51) Prepare for release v0.31.0-rc.0 (#572) +- [bcdfaf4a](https://github.com/kubedb/mysql/commit/bcdfaf4a) Set Default Security Context for MySQL (#571) +- [9009bcac](https://github.com/kubedb/mysql/commit/9009bcac) Add git sync constants from apimachinery (#570) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.1.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.1.0) + +- [721eaa8](https://github.com/kubedb/mysql-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mysql-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mysql-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mysql-archiver/commit/dcd2e30) Fix wal-g binary + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.16.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.16.0) + +- [c93152ea](https://github.com/kubedb/mysql-coordinator/commit/c93152ea) Prepare for release v0.16.0 (#91) +- [63cb0a33](https://github.com/kubedb/mysql-coordinator/commit/63cb0a33) Prepare for release v0.16.0-rc.1 (#90) +- [b5e481fc](https://github.com/kubedb/mysql-coordinator/commit/b5e481fc) Prepare for release v0.16.0-rc.0 (#89) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.1.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.1.0) + +- [9ed9b45](https://github.com/kubedb/mysql-restic-plugin/commit/9ed9b45) Prepare for release v0.1.0 (#14) +- [f77476b](https://github.com/kubedb/mysql-restic-plugin/commit/f77476b) Prepare for release v0.1.0-rc.1 (#13) +- [81ceb55](https://github.com/kubedb/mysql-restic-plugin/commit/81ceb55) Add `databases` flag (#12) +- [b255e47](https://github.com/kubedb/mysql-restic-plugin/commit/b255e47) Prepare for release v0.1.0-rc.0 (#11) +- [9a17360](https://github.com/kubedb/mysql-restic-plugin/commit/9a17360) Set DB version from env if empty (#10) +- [c67ba7c](https://github.com/kubedb/mysql-restic-plugin/commit/c67ba7c) Update snapshot time (#9) +- [abef89e](https://github.com/kubedb/mysql-restic-plugin/commit/abef89e) Fix binary name +- [db1bbbf](https://github.com/kubedb/mysql-restic-plugin/commit/db1bbbf) Move to kubedb org +- [746d13e](https://github.com/kubedb/mysql-restic-plugin/commit/746d13e) Update deps (#8) +- [569533a](https://github.com/kubedb/mysql-restic-plugin/commit/569533a) Add version flag + Refactor (#6) +- [f0abd94](https://github.com/kubedb/mysql-restic-plugin/commit/f0abd94) Prepare for release v0.1.0-rc.0 (#7) +- [01bff62](https://github.com/kubedb/mysql-restic-plugin/commit/01bff62) Remove arm64 image support +- [277fda8](https://github.com/kubedb/mysql-restic-plugin/commit/277fda8) Build docker images for each db version (#5) +- [94f000d](https://github.com/kubedb/mysql-restic-plugin/commit/94f000d) Use Go 1.21 +- [2e4f30d](https://github.com/kubedb/mysql-restic-plugin/commit/2e4f30d) Update Readme (#4) +- [272c8f9](https://github.com/kubedb/mysql-restic-plugin/commit/272c8f9) Add support for mysql backup and restore (#1) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.16.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.16.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.25.0](https://github.com/kubedb/ops-manager/releases/tag/v0.25.0) + +- 
[63d69118](https://github.com/kubedb/ops-manager/commit/63d69118) Prepare for release v0.25.0 (#497) +- [96f06e76](https://github.com/kubedb/ops-manager/commit/96f06e76) Update deps +- [387bd0b0](https://github.com/kubedb/ops-manager/commit/387bd0b0) Modify update version logic for mongo to run chown (#496) +- [3a1ee06c](https://github.com/kubedb/ops-manager/commit/3a1ee06c) Add support for arbiter vertical scaling & volume expansion (#495) +- [98dbd6c0](https://github.com/kubedb/ops-manager/commit/98dbd6c0) Prepare for release v0.25.0-rc.1 (#494) +- [640fe280](https://github.com/kubedb/ops-manager/commit/640fe280) Prepare for release v0.25.0-rc.0 (#492) +- [9714e841](https://github.com/kubedb/ops-manager/commit/9714e841) Add kafka version 3.6.0 to daily test (#491) +- [dd18b17c](https://github.com/kubedb/ops-manager/commit/dd18b17c) postgres arbiter related changes and bug fixes (#483) +- [de52bda7](https://github.com/kubedb/ops-manager/commit/de52bda7) Remove default configuration and restart kafka with new config (#490) +- [f7850172](https://github.com/kubedb/ops-manager/commit/f7850172) Add prepare cluster installer before test runners (#489) +- [79e646ef](https://github.com/kubedb/ops-manager/commit/79e646ef) Update ServiceDNS for kafka (#488) +- [18851802](https://github.com/kubedb/ops-manager/commit/18851802) added daily postgres (#487) +- [0c2bdda1](https://github.com/kubedb/ops-manager/commit/0c2bdda1) added daily-postgres.yml (#486) +- [5cb75965](https://github.com/kubedb/ops-manager/commit/5cb75965) Fixed BUG in postgres reconfigureTLS opsreq (#485) +- [145e08d5](https://github.com/kubedb/ops-manager/commit/145e08d5) Failover before restarting primary on restart ops (#481) +- [e53a72ce](https://github.com/kubedb/ops-manager/commit/e53a72ce) Add Kafka daily yml (#475) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.25.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.25.0) + +- [2780eea8](https://github.com/kubedb/percona-xtradb/commit/2780eea8) Prepare for release v0.25.0 (#335) +- [3a15a15e](https://github.com/kubedb/percona-xtradb/commit/3a15a15e) Fix Statefulset Security Context Assign (#334) +- [bad0b334](https://github.com/kubedb/percona-xtradb/commit/bad0b334) Prepare for release v0.25.0-rc.1 (#333) +- [b8447936](https://github.com/kubedb/percona-xtradb/commit/b8447936) Pass version in SetDefaults func (#332) +- [d374a542](https://github.com/kubedb/percona-xtradb/commit/d374a542) Prepare for release v0.25.0-rc.0 (#331) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.11.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.11.0) + +- [44e0fea](https://github.com/kubedb/percona-xtradb-coordinator/commit/44e0fea) Prepare for release v0.11.0 (#51) +- [7a66da3](https://github.com/kubedb/percona-xtradb-coordinator/commit/7a66da3) Prepare for release v0.11.0-rc.1 (#50) +- [69e7d1e](https://github.com/kubedb/percona-xtradb-coordinator/commit/69e7d1e) Prepare for release v0.11.0-rc.0 (#49) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.22.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.22.0) + +- [6eec739b](https://github.com/kubedb/pg-coordinator/commit/6eec739b) Prepare for release v0.22.0 (#141) +- [9f9614e6](https://github.com/kubedb/pg-coordinator/commit/9f9614e6) Prepare for release v0.22.0-rc.1 (#140) +- [e4efa4db](https://github.com/kubedb/pg-coordinator/commit/e4efa4db) Prepare for release 
v0.22.0-rc.0 (#139) +- [7c862bcd](https://github.com/kubedb/pg-coordinator/commit/7c862bcd) Add support for arbiter (#136) +- [53ba32a9](https://github.com/kubedb/pg-coordinator/commit/53ba32a9) added postgres 16.0 support (#137) +- [24445f9b](https://github.com/kubedb/pg-coordinator/commit/24445f9b) Added & modified logs (#134) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.25.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.25.0) + +- [6e148ed2](https://github.com/kubedb/pgbouncer/commit/6e148ed2) Prepare for release v0.25.0 (#299) +- [efa76519](https://github.com/kubedb/pgbouncer/commit/efa76519) Fix Statefulset Security Context Assign (#298) +- [e3e9f84d](https://github.com/kubedb/pgbouncer/commit/e3e9f84d) Prepare for release v0.25.0-rc.1 (#297) +- [21ba9f0f](https://github.com/kubedb/pgbouncer/commit/21ba9f0f) Prepare for release v0.25.0-rc.0 (#296) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.38.0](https://github.com/kubedb/postgres/releases/tag/v0.38.0) + +- [8fe3fe0e](https://github.com/kubedb/postgres/commit/8fe3fe0e1) Prepare for release v0.38.0 (#687) +- [7a853e00](https://github.com/kubedb/postgres/commit/7a853e001) Add postgres arbiter custom size limit (#686) +- [0e493a1c](https://github.com/kubedb/postgres/commit/0e493a1cd) Prepare for release v0.38.0-rc.1 (#685) +- [8738ad73](https://github.com/kubedb/postgres/commit/8738ad73e) Prepare for release v0.38.0-rc.0 (#684) +- [adb69b02](https://github.com/kubedb/postgres/commit/adb69b02e) Implement PostgreSQL archiver (#628) +- [668e15dd](https://github.com/kubedb/postgres/commit/668e15dd4) Remove test directory (#683) +- [d857c354](https://github.com/kubedb/postgres/commit/d857c354a) added postgres arbiter support (#677) +- [8fc98e8e](https://github.com/kubedb/postgres/commit/8fc98e8ed) Fixed a bug for init container (#681) +- [a2b408ff](https://github.com/kubedb/postgres/commit/a2b408ffb) Bugfix for security context (#680) +- [fb14015e](https://github.com/kubedb/postgres/commit/fb14015e9) added nightly yml for postgres (#679) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.1.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.1.0) + + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- [31f5fc5](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/31f5fc5) Prepare for release v0.1.0 (#8) +- [57a7bdf](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/57a7bdf) Prepare for release v0.1.0-rc.1 (#7) +- [02a45da](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/02a45da) Prepare for release v0.1.0-rc.0 (#6) +- [1a6457c](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/1a6457c) Update flags and deps + Refactor (#5) +- [f32b56b](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f32b56b) Delete .idea folder +- [e7f8135](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/e7f8135) clean up (#4) +- [06e7e70](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/06e7e70) clean up (#3) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.1.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.1.0) + +- [d5524a7](https://github.com/kubedb/postgres-restic-plugin/commit/d5524a7) Prepare for release v0.1.0 (#7) 
+- [584bbad](https://github.com/kubedb/postgres-restic-plugin/commit/584bbad) Prepare for release v0.1.0-rc.1 (#6) +- [da1ecd7](https://github.com/kubedb/postgres-restic-plugin/commit/da1ecd7) Refactor (#5) +- [8208814](https://github.com/kubedb/postgres-restic-plugin/commit/8208814) Prepare for release v0.1.0-rc.0 (#4) +- [a56fcfa](https://github.com/kubedb/postgres-restic-plugin/commit/a56fcfa) Move to kubedb org (#3) +- [e8928c7](https://github.com/kubedb/postgres-restic-plugin/commit/e8928c7) Added postgres addon for kubestash (#2) +- [7c55105](https://github.com/kubedb/postgres-restic-plugin/commit/7c55105) Prepare for release v0.1.0-rc.0 (#1) +- [19eff67](https://github.com/kubedb/postgres-restic-plugin/commit/19eff67) Use gh runner token to publish docker image +- [6a71410](https://github.com/kubedb/postgres-restic-plugin/commit/6a71410) Use firecracker runner +- [e278d71](https://github.com/kubedb/postgres-restic-plugin/commit/e278d71) Use Go 1.21 +- [4899879](https://github.com/kubedb/postgres-restic-plugin/commit/4899879) Update readme + cleanup + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.38.0](https://github.com/kubedb/provisioner/releases/tag/v0.38.0) + +- [284eef89](https://github.com/kubedb/provisioner/commit/284eef89b) Prepare for release v0.38.0 (#63) +- [4396cf5d](https://github.com/kubedb/provisioner/commit/4396cf5d5) Add storage, archiver, kubestash scheme (#62) +- [086300d9](https://github.com/kubedb/provisioner/commit/086300d90) Prepare for release v0.38.0-rc.1 (#61) +- [0dfe3742](https://github.com/kubedb/provisioner/commit/0dfe37425) Ensure archiver CRDs (#60) +- [7e6099e0](https://github.com/kubedb/provisioner/commit/7e6099e0e) Prepare for release v0.38.0-rc.0 (#59) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.25.0](https://github.com/kubedb/proxysql/releases/tag/v0.25.0) + +- [69d82b19](https://github.com/kubedb/proxysql/commit/69d82b19) Prepare for release v0.25.0 (#315) +- [1f87cbc5](https://github.com/kubedb/proxysql/commit/1f87cbc5) Prepare for release v0.25.0-rc.1 (#314) +- [c4775bf7](https://github.com/kubedb/proxysql/commit/c4775bf7) Prepare for release v0.25.0-rc.0 (#313) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.31.0](https://github.com/kubedb/redis/releases/tag/v0.31.0) + +- [de7b9f50](https://github.com/kubedb/redis/commit/de7b9f50) Prepare for release v0.31.0 (#500) +- [ffe5982e](https://github.com/kubedb/redis/commit/ffe5982e) Fix DB update from version for RedisSentinel (#499) +- [9f4f26ac](https://github.com/kubedb/redis/commit/9f4f26ac) Fix Statefulset Security Context Assign (#498) +- [a3d4b7b8](https://github.com/kubedb/redis/commit/a3d4b7b8) Prepare for release v0.31.0-rc.1 (#497) +- [bb101b6a](https://github.com/kubedb/redis/commit/bb101b6a) Pass version in SetDefaults func (#496) +- [966f14ca](https://github.com/kubedb/redis/commit/966f14ca) Prepare for release v0.31.0-rc.0 (#495) +- [b72d8319](https://github.com/kubedb/redis/commit/b72d8319) Run Redis and RedisSentinel as non root (#494) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.17.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.17.0) + +- [34f65113](https://github.com/kubedb/redis-coordinator/commit/34f65113) Prepare for release v0.17.0 (#82) +- [7e5fbf31](https://github.com/kubedb/redis-coordinator/commit/7e5fbf31) Prepare for release v0.17.0-rc.1 (#81) +- [9f724e43](https://github.com/kubedb/redis-coordinator/commit/9f724e43) Prepare 
for release v0.17.0-rc.0 (#80) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.1.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.1.0) + +- [79d23fd](https://github.com/kubedb/redis-restic-plugin/commit/79d23fd) Prepare for release v0.1.0 (#11) +- [8cae5ef](https://github.com/kubedb/redis-restic-plugin/commit/8cae5ef) Prepare for release v0.1.0-rc.1 (#10) +- [f8de18b](https://github.com/kubedb/redis-restic-plugin/commit/f8de18b) Prepare for release v0.1.0-rc.0 (#9) +- [a4c03d9](https://github.com/kubedb/redis-restic-plugin/commit/a4c03d9) Update snapshot time (#8) +- [404447d](https://github.com/kubedb/redis-restic-plugin/commit/404447d) Fix binary name +- [4dbc58b](https://github.com/kubedb/redis-restic-plugin/commit/4dbc58b) Move to kubedb org +- [e4a6fb2](https://github.com/kubedb/redis-restic-plugin/commit/e4a6fb2) Update deps (#7) +- [1b28954](https://github.com/kubedb/redis-restic-plugin/commit/1b28954) Remove maxConcurrency variable (#6) +- [4d13ee5](https://github.com/kubedb/redis-restic-plugin/commit/4d13ee5) Remove addon implementer + Refactor (#5) +- [44ac2c7](https://github.com/kubedb/redis-restic-plugin/commit/44ac2c7) Prepare for release v0.1.0-rc.0 (#4) +- [ce275bd](https://github.com/kubedb/redis-restic-plugin/commit/ce275bd) Use firecracker runner +- [bf39971](https://github.com/kubedb/redis-restic-plugin/commit/bf39971) Update deps +- [ef24891](https://github.com/kubedb/redis-restic-plugin/commit/ef24891) Use github runner to push docker image +- [6a6f6d6](https://github.com/kubedb/redis-restic-plugin/commit/6a6f6d6) Add support for redis backup and restore (#1) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.25.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.25.0) + +- [5189e007](https://github.com/kubedb/replication-mode-detector/commit/5189e007) Prepare for release v0.25.0 (#246) +- [758906fe](https://github.com/kubedb/replication-mode-detector/commit/758906fe) Prepare for release v0.25.0-rc.1 (#245) +- [77886a28](https://github.com/kubedb/replication-mode-detector/commit/77886a28) Prepare for release v0.25.0-rc.0 (#244) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.14.0](https://github.com/kubedb/schema-manager/releases/tag/v0.14.0) + +- [3707ea4f](https://github.com/kubedb/schema-manager/commit/3707ea4f) Prepare for release v0.14.0 (#87) +- [f7e384b2](https://github.com/kubedb/schema-manager/commit/f7e384b2) Prepare for release v0.14.0-rc.1 (#86) +- [893fe8d9](https://github.com/kubedb/schema-manager/commit/893fe8d9) Prepare for release v0.14.0-rc.0 (#85) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.23.0](https://github.com/kubedb/tests/releases/tag/v0.23.0) + +- [3c1ea68e](https://github.com/kubedb/tests/commit/3c1ea68e) Prepare for release v0.23.0 (#274) +- [ea46166e](https://github.com/kubedb/tests/commit/ea46166e) Fix postgres tls test cases (#273) +- [f76a34b2](https://github.com/kubedb/tests/commit/f76a34b2) Arbiter related test cases added (#268) +- [8f82cf9a](https://github.com/kubedb/tests/commit/8f82cf9a) Prepare for release v0.23.0-rc.1 (#272) +- [0bfcc3b6](https://github.com/kubedb/tests/commit/0bfcc3b6) Fix kafka restart-pods test (#271) +- [bfd1ec79](https://github.com/kubedb/tests/commit/bfd1ec79) Prepare for release v0.23.0-rc.0 (#270) +- [fab75dd1](https://github.com/kubedb/tests/commit/fab75dd1) Add disableDefault while deploying 
elasticsearch. (#269) +- [009399c7](https://github.com/kubedb/tests/commit/009399c7) Run tests in restricted PodSecurityStandard (#266) +- [4be89382](https://github.com/kubedb/tests/commit/4be89382) Fixed stash test and InnoDB issues in MySQL (#250) +- [f007f5f5](https://github.com/kubedb/tests/commit/f007f5f5) Added test for Standalone to HA scaling (#267) +- [017546ec](https://github.com/kubedb/tests/commit/017546ec) Add Postgres e2e tests (#233) +- [fbd16c88](https://github.com/kubedb/tests/commit/fbd16c88) Add kafka e2e tests (#254) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.14.0](https://github.com/kubedb/ui-server/releases/tag/v0.14.0) + +- [bfe213eb](https://github.com/kubedb/ui-server/commit/bfe213eb) Prepare for release v0.14.0 (#96) +- [82f78763](https://github.com/kubedb/ui-server/commit/82f78763) Prepare for release v0.14.0-rc.1 (#95) +- [b59415fd](https://github.com/kubedb/ui-server/commit/b59415fd) Prepare for release v0.14.0-rc.0 (#94) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.14.0](https://github.com/kubedb/webhook-server/releases/tag/v0.14.0) + +- [9ff121c7](https://github.com/kubedb/webhook-server/commit/9ff121c7) Prepare for release v0.14.0 (#73) +- [01d13baa](https://github.com/kubedb/webhook-server/commit/01d13baa) Prepare for release v0.14.0-rc.1 (#72) +- [e869d0ce](https://github.com/kubedb/webhook-server/commit/e869d0ce) Initialize default KubeBuilder client (#71) +- [c36d61e5](https://github.com/kubedb/webhook-server/commit/c36d61e5) Prepare for release v0.14.0-rc.0 (#70) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.12.21.md b/content/docs/v2024.1.31/CHANGELOG-v2023.12.21.md new file mode 100644 index 0000000000..0f6e4ff770 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.12.21.md @@ -0,0 +1,408 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.12.21 + name: Changelog-v2023.12.21 + parent: welcome + weight: 20231221 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.12.21/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.12.21/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.12.21 (2023-12-21) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.39.0](https://github.com/kubedb/apimachinery/releases/tag/v0.39.0) + +- [c99d3ab1](https://github.com/kubedb/apimachinery/commit/c99d3ab1) Update pg arbiter api (#1091) +- [1d455662](https://github.com/kubedb/apimachinery/commit/1d455662) Add nodeSelector, tolerations in es & kafka spec (#1089) +- [3878d59f](https://github.com/kubedb/apimachinery/commit/3878d59f) Update deps +- [ecc6001f](https://github.com/kubedb/apimachinery/commit/ecc6001f) Update deps +- [bf7aa205](https://github.com/kubedb/apimachinery/commit/bf7aa205) Configure node topology for autoscaling compute resources (#1085) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.24.0](https://github.com/kubedb/autoscaler/releases/tag/v0.24.0) + +- [f2e9be5d](https://github.com/kubedb/autoscaler/commit/f2e9be5d) Prepare for release v0.24.0 (#166) +- [98f7ce9e](https://github.com/kubedb/autoscaler/commit/98f7ce9e) Utilize topologyInfo in compute autoscalers (#162) +- 
[fc29fd8a](https://github.com/kubedb/autoscaler/commit/fc29fd8a) Send hourly audit events (#165) +- [ee3e323f](https://github.com/kubedb/autoscaler/commit/ee3e323f) Update autoscaler & ops apis (#161) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.39.0](https://github.com/kubedb/cli/releases/tag/v0.39.0) + +- [3d619254](https://github.com/kubedb/cli/commit/3d619254) Prepare for release v0.39.0 (#741) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.15.0](https://github.com/kubedb/dashboard/releases/tag/v0.15.0) + +- [1f0ffd6f](https://github.com/kubedb/dashboard/commit/1f0ffd6f) Prepare for release v0.15.0 (#89) +- [0d69c977](https://github.com/kubedb/dashboard/commit/0d69c977) Send hourly audit events (#88) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.39.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.39.0) + +- [944aac8b](https://github.com/kubedb/elasticsearch/commit/944aac8bf) Prepare for release v0.39.0 (#686) +- [090de217](https://github.com/kubedb/elasticsearch/commit/090de2176) Send hourly audit events (#685) +- [3f18a9e4](https://github.com/kubedb/elasticsearch/commit/3f18a9e40) Set tolerations & nodeSelectors from esNode (#682) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.2.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.2.0) + +- [89c9f39](https://github.com/kubedb/elasticsearch-restic-plugin/commit/89c9f39) Prepare for release v0.2.0 (#12) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.12.21](https://github.com/kubedb/installer/releases/tag/v2023.12.21) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.10.0](https://github.com/kubedb/kafka/releases/tag/v0.10.0) + +- [89c3fbe](https://github.com/kubedb/kafka/commit/89c3fbe) Prepare for release v0.10.0 (#55) +- [ca738d8](https://github.com/kubedb/kafka/commit/ca738d8) Send hourly audit events (#54) +- [cfd9ea2](https://github.com/kubedb/kafka/commit/cfd9ea2) Set tolerations & nodeSelectors from kafka topology nodes (#53) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.2.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.2.0) + +- [e561ae8](https://github.com/kubedb/kubedb-manifest-plugin/commit/e561ae8) Prepare for release v0.2.0 (#32) +- [86311ba](https://github.com/kubedb/kubedb-manifest-plugin/commit/86311ba) Add mysql and mariadb manifest backup and restore support (#31) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.23.0](https://github.com/kubedb/mariadb/releases/tag/v0.23.0) + +- [e6cae3c7](https://github.com/kubedb/mariadb/commit/e6cae3c7) Prepare for release v0.23.0 (#240) +- [b0c9a5a9](https://github.com/kubedb/mariadb/commit/b0c9a5a9) Send hourly audit events (#239) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.2.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.2.0) + +- [1c1bb1d](https://github.com/kubedb/mariadb-archiver/commit/1c1bb1d) Prepare for release v0.2.0 (#7) +- [e1ada03](https://github.com/kubedb/mariadb-archiver/commit/e1ada03) Use appscode-images as base image (#6) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.19.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.19.0) + +- [a82c76e8](https://github.com/kubedb/mariadb-coordinator/commit/a82c76e8) 
Prepare for release v0.19.0 (#95) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.32.0](https://github.com/kubedb/memcached/releases/tag/v0.32.0) + +- [28a0d9b6](https://github.com/kubedb/memcached/commit/28a0d9b6) Prepare for release v0.32.0 (#410) +- [5b0e2cf7](https://github.com/kubedb/memcached/commit/5b0e2cf7) Send hourly audit events (#409) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.32.0](https://github.com/kubedb/mongodb/releases/tag/v0.32.0) + +- [6b7b6be2](https://github.com/kubedb/mongodb/commit/6b7b6be2) Prepare for release v0.32.0 (#589) +- [7c9d0105](https://github.com/kubedb/mongodb/commit/7c9d0105) Send hourly audit events (#588) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0) + +- [2bad72d](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/2bad72d) Prepare for release v0.2.0 (#6) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.2.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.2.0) + +- [16cdbac](https://github.com/kubedb/mongodb-restic-plugin/commit/16cdbac) Prepare for release v0.2.0 (#17) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.32.0](https://github.com/kubedb/mysql/releases/tag/v0.32.0) + +- [1d875c1c](https://github.com/kubedb/mysql/commit/1d875c1c) Prepare for release v0.32.0 (#581) +- [d4323211](https://github.com/kubedb/mysql/commit/d4323211) Send hourly audit events (#580) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.2.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0) + +- [e800623](https://github.com/kubedb/mysql-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mysql-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mysql-archiver/commit/c46d991) Use appscode-images as base image (#6) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.17.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.17.0) + +- [eb942605](https://github.com/kubedb/mysql-coordinator/commit/eb942605) Prepare for release v0.17.0 (#92) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.2.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.2.0) + +- [91eb451](https://github.com/kubedb/mysql-restic-plugin/commit/91eb451) Prepare for release v0.2.0 (#16) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.17.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.17.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.26.0](https://github.com/kubedb/ops-manager/releases/tag/v0.26.0) + +- [328b13d9](https://github.com/kubedb/ops-manager/commit/328b13d9) Prepare for release v0.26.0 (#503) +- [10100aa9](https://github.com/kubedb/ops-manager/commit/10100aa9) Set tolerations & nodeSelectors while verticalScaling (#500) +- [14fe79e3](https://github.com/kubedb/ops-manager/commit/14fe79e3) Send hourly audit events (#502) +- [b7a5522f](https://github.com/kubedb/ops-manager/commit/b7a5522f) Update opsRequest api (#499) +- [304855b3](https://github.com/kubedb/ops-manager/commit/304855b3) Update daily-opensearch.yml +- 
[50c7ff53](https://github.com/kubedb/ops-manager/commit/50c7ff53) Update daily workflow for ES and Kafka. (#493) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.26.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.26.0) + +- [35495dd3](https://github.com/kubedb/percona-xtradb/commit/35495dd3) Prepare for release v0.26.0 (#338) +- [7bac5129](https://github.com/kubedb/percona-xtradb/commit/7bac5129) Send hourly audit events (#337) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.12.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.12.0) + +- [1dc1fbf](https://github.com/kubedb/percona-xtradb-coordinator/commit/1dc1fbf) Prepare for release v0.12.0 (#52) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.23.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.23.0) + +- [d18365e6](https://github.com/kubedb/pg-coordinator/commit/d18365e6) Prepare for release v0.23.0 (#142) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.26.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.26.0) + +- [ad28cfa4](https://github.com/kubedb/pgbouncer/commit/ad28cfa4) Prepare for release v0.26.0 (#303) +- [dbe23148](https://github.com/kubedb/pgbouncer/commit/dbe23148) Send hourly audit events (#302) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.39.0](https://github.com/kubedb/postgres/releases/tag/v0.39.0) + +- [448e81f0](https://github.com/kubedb/postgres/commit/448e81f04) Prepare for release v0.39.0 (#694) +- [745c6555](https://github.com/kubedb/postgres/commit/745c6555d) Send hourly audit events (#693) +- [e4016868](https://github.com/kubedb/postgres/commit/e4016868e) Send hourly audit events (#691) +- [26f68fef](https://github.com/kubedb/postgres/commit/26f68fefa) Set toleration & nodeSelector fields from arbiter spec (#689) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0) + +- [c4f7e11](https://github.com/kubedb/postgres-archiver/commit/c4f7e11) Fix formatting + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0) + +- [bce9779](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/bce9779) Prepare for release v0.2.0 (#9) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.2.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.2.0) + +- [7e449e3](https://github.com/kubedb/postgres-restic-plugin/commit/7e449e3) Prepare for release v0.2.0 (#9) + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.1.0](https://github.com/kubedb/provider-aws/releases/tag/v0.1.0) + +- [3cdbabe](https://github.com/kubedb/provider-aws/commit/3cdbabe) Fix makefile + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.1.0](https://github.com/kubedb/provider-azure/releases/tag/v0.1.0) + + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.1.0](https://github.com/kubedb/provider-gcp/releases/tag/v0.1.0) + + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.39.0](https://github.com/kubedb/provisioner/releases/tag/v0.39.0) + +- 
[6ec88b2b](https://github.com/kubedb/provisioner/commit/6ec88b2b0) Prepare for release v0.39.0 (#65) +- [bbb9417d](https://github.com/kubedb/provisioner/commit/bbb9417da) Send hourly audit events (#64) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.26.0](https://github.com/kubedb/proxysql/releases/tag/v0.26.0) + +- [71c51c63](https://github.com/kubedb/proxysql/commit/71c51c63) Prepare for release v0.26.0 (#317) +- [30119f2c](https://github.com/kubedb/proxysql/commit/30119f2c) Send hourly audit events (#316) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.32.0](https://github.com/kubedb/redis/releases/tag/v0.32.0) + +- [c18c7bbf](https://github.com/kubedb/redis/commit/c18c7bbf) Prepare for release v0.32.0 (#504) +- [8716c93c](https://github.com/kubedb/redis/commit/8716c93c) Send hourly audit events (#503) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.18.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.18.0) + +- [a5ddc00b](https://github.com/kubedb/redis-coordinator/commit/a5ddc00b) Prepare for release v0.18.0 (#83) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.2.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.2.0) + +- [352a231](https://github.com/kubedb/redis-restic-plugin/commit/352a231) Prepare for release v0.2.0 (#13) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.26.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.26.0) + +- [9fbf8da6](https://github.com/kubedb/replication-mode-detector/commit/9fbf8da6) Prepare for release v0.26.0 (#247) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.15.0](https://github.com/kubedb/schema-manager/releases/tag/v0.15.0) + +- [bb65f133](https://github.com/kubedb/schema-manager/commit/bb65f133) Prepare for release v0.15.0 (#90) +- [96ddacfe](https://github.com/kubedb/schema-manager/commit/96ddacfe) Send hourly audit events (#89) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.24.0](https://github.com/kubedb/tests/releases/tag/v0.24.0) + +- [7bd88b9f](https://github.com/kubedb/tests/commit/7bd88b9f) Prepare for release v0.24.0 (#275) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.15.0](https://github.com/kubedb/ui-server/releases/tag/v0.15.0) + +- [7b2351b0](https://github.com/kubedb/ui-server/commit/7b2351b0) Prepare for release v0.15.0 (#99) +- [956aae83](https://github.com/kubedb/ui-server/commit/956aae83) Send hourly audit events (#98) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.15.0](https://github.com/kubedb/webhook-server/releases/tag/v0.15.0) + +- [3afa1398](https://github.com/kubedb/webhook-server/commit/3afa1398) Prepare for release v0.15.0 (#76) +- [96da1acd](https://github.com/kubedb/webhook-server/commit/96da1acd) Send hourly audit events (#75) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2023.12.28.md b/content/docs/v2024.1.31/CHANGELOG-v2023.12.28.md new file mode 100644 index 0000000000..ad7dcaf99c --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2023.12.28.md @@ -0,0 +1,500 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2023.12.28 + name: Changelog-v2023.12.28 + parent: welcome + weight: 20231228 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: 
welcome +url: /docs/v2024.1.31/welcome/changelog-v2023.12.28/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2023.12.28/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2023.12.28 (2023-12-27) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.40.0](https://github.com/kubedb/apimachinery/releases/tag/v0.40.0) + +- [000dfa1a](https://github.com/kubedb/apimachinery/commit/000dfa1a6) Use kubestash v0.3.0 +- [541ddfd4](https://github.com/kubedb/apimachinery/commit/541ddfd45) Update client-go deps +- [b6912d25](https://github.com/kubedb/apimachinery/commit/b6912d25a) Defaulting compute autoscaler fields (#1097) +- [61b590f7](https://github.com/kubedb/apimachinery/commit/61b590f7b) Add mysql archiver apis (#1086) +- [750f6385](https://github.com/kubedb/apimachinery/commit/750f6385b) Add scaleUp & scaleDown diffPercentage fields to autoscaler (#1092) +- [0922ff18](https://github.com/kubedb/apimachinery/commit/0922ff18c) Add default resource for initContainer (#1094) +- [da96ad5f](https://github.com/kubedb/apimachinery/commit/da96ad5fe) Revert "Add kubestash controller for changing kubeDB phase (#1076)" +- [d6368a16](https://github.com/kubedb/apimachinery/commit/d6368a16f) Add kubestash controller for changing kubeDB phase (#1076) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.25.0](https://github.com/kubedb/autoscaler/releases/tag/v0.25.0) + +- [557a5503](https://github.com/kubedb/autoscaler/commit/557a5503) Prepare for release v0.25.0 (#169) +- [346ed2f0](https://github.com/kubedb/autoscaler/commit/346ed2f0) Implement nodePool jumping when topology specified (#168) +- [5d464e14](https://github.com/kubedb/autoscaler/commit/5d464e14) Add Dockerfile for dbg; Use Go 1.21 (#167) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.40.0](https://github.com/kubedb/cli/releases/tag/v0.40.0) + +- [94aedfc9](https://github.com/kubedb/cli/commit/94aedfc9) Prepare for release v0.40.0 (#742) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.16.0](https://github.com/kubedb/dashboard/releases/tag/v0.16.0) + +- [67058d0b](https://github.com/kubedb/dashboard/commit/67058d0b) Prepare for release v0.16.0 (#90) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.40.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.40.0) + +- [745c7022](https://github.com/kubedb/elasticsearch/commit/745c70225) Prepare for release v0.40.0 (#687) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.3.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.3.0) + +- [231c402](https://github.com/kubedb/elasticsearch-restic-plugin/commit/231c402) Prepare for release v0.3.0 (#13) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2023.12.28](https://github.com/kubedb/installer/releases/tag/v2023.12.28) + +- [8c5db03d](https://github.com/kubedb/installer/commit/8c5db03d) Prepare for release v2023.12.28 (#758) +- [2e323fab](https://github.com/kubedb/installer/commit/2e323fab) mongodb-csisnapshotter -> mongodb-csi-snapshotter +- [c8226e7d](https://github.com/kubedb/installer/commit/c8226e7d) postgres-csisnapshotter -> postgres-csi-snapshotter +- [e17deeb2](https://github.com/kubedb/installer/commit/e17deeb2) Add 
NodeTopology crd to autoscaler chart (#757) +- [84673762](https://github.com/kubedb/installer/commit/84673762) Add mysql archiver specs (#755) +- [76713c53](https://github.com/kubedb/installer/commit/76713c53) Templatize wal-g images (#756) +- [74e57f6e](https://github.com/kubedb/installer/commit/74e57f6e) Add image.* templates to kubestash catalog chart +- [66efd1f3](https://github.com/kubedb/installer/commit/66efd1f3) Update crds for kubedb/apimachinery@61b590f7 (#754) +- [dc051893](https://github.com/kubedb/installer/commit/dc051893) Update crds for kubedb/apimachinery@750f6385 (#753) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.11.0](https://github.com/kubedb/kafka/releases/tag/v0.11.0) + +- [65e61f0](https://github.com/kubedb/kafka/commit/65e61f0) Prepare for release v0.11.0 (#56) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.3.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.3.0) + +- [c664d92](https://github.com/kubedb/kubedb-manifest-plugin/commit/c664d92) Prepare for release v0.3.0 (#33) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.24.0](https://github.com/kubedb/mariadb/releases/tag/v0.24.0) + +- [94f03b1b](https://github.com/kubedb/mariadb/commit/94f03b1b) Prepare for release v0.24.0 (#241) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0) + +- [910b7ce](https://github.com/kubedb/mariadb-archiver/commit/910b7ce) Prepare for release v0.1.0 (#1) +- [3801668](https://github.com/kubedb/mariadb-archiver/commit/3801668) mysql -> mariadb +- [4e905fb](https://github.com/kubedb/mariadb-archiver/commit/4e905fb) Implement new algorithm for archiver and restorer (#5) +- [22701c8](https://github.com/kubedb/mariadb-archiver/commit/22701c8) Fix 5.7.x build +- [6da2b1c](https://github.com/kubedb/mariadb-archiver/commit/6da2b1c) Update build matrix +- [e2f6244](https://github.com/kubedb/mariadb-archiver/commit/e2f6244) Use separate dockerfile per mysql version (#9) +- [e800623](https://github.com/kubedb/mariadb-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mariadb-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mariadb-archiver/commit/c46d991) Use appscode-images as base image (#6) +- [721eaa8](https://github.com/kubedb/mariadb-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mariadb-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mariadb-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mariadb-archiver/commit/dcd2e30) Fix wal-g binary +- [6c20a4a](https://github.com/kubedb/mariadb-archiver/commit/6c20a4a) Fix build +- [f034e7b](https://github.com/kubedb/mariadb-archiver/commit/f034e7b) Add build script (#1) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.20.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.20.0) + +- [ff2b45fc](https://github.com/kubedb/mariadb-coordinator/commit/ff2b45fc) Prepare for release v0.20.0 (#96) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- 
[933e138](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/933e138) Prepare for release v0.1.0 (#2) +- [5d38f94](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/5d38f94) Enable GH actions +- [2a97178](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/2a97178) Replace mysql with mariadb + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.33.0](https://github.com/kubedb/memcached/releases/tag/v0.33.0) + +- [bf3329f4](https://github.com/kubedb/memcached/commit/bf3329f4) Prepare for release v0.33.0 (#411) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.33.0](https://github.com/kubedb/mongodb/releases/tag/v0.33.0) + +- [30a34a1c](https://github.com/kubedb/mongodb/commit/30a34a1c) Prepare for release v0.33.0 (#592) +- [71c092df](https://github.com/kubedb/mongodb/commit/71c092df) Trigger backupSession once while backupConfig created (#591) +- [57c8a367](https://github.com/kubedb/mongodb/commit/57c8a367) Set Default initContainer resource (#590) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- [fc233a7](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/fc233a7) Prepare for release v0.1.0 (#7) +- [2bad72d](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/2bad72d) Prepare for release v0.2.0 (#6) +- [c2fcb4f](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/c2fcb4f) Prepare for release v0.1.0 (#5) +- [92b28e8](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/92b28e8) Prepare for release v0.1.0-rc.1 (#4) +- [f06d344](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/f06d344) Prepare for release v0.1.0-rc.0 (#3) +- [df1a966](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/df1a966) Update flags + Refactor (#2) +- [7eb7cea](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/7eb7cea) Fix issues +- [1f189b4](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/1f189b4) Test against K8s 1.27.0 (#1) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.3.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.3.0) + +- [efac8ef](https://github.com/kubedb/mongodb-restic-plugin/commit/efac8ef) Prepare for release v0.3.0 (#18) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.33.0](https://github.com/kubedb/mysql/releases/tag/v0.33.0) + +- [74a63d01](https://github.com/kubedb/mysql/commit/74a63d01) Prepare for release v0.33.0 (#583) +- [272eb81c](https://github.com/kubedb/mysql/commit/272eb81c) fix typos and second-last snapshot time (#582) +- [7efda78a](https://github.com/kubedb/mysql/commit/7efda78a) Fix error return +- [a6cd4fe9](https://github.com/kubedb/mysql/commit/a6cd4fe9) Add support for archiver (#577) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.1.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.1.0) + +- [c956cb9](https://github.com/kubedb/mysql-archiver/commit/c956cb9) Prepare for release v0.1.0 (#10) +- [4e905fb](https://github.com/kubedb/mysql-archiver/commit/4e905fb) Implement new algorithm for archiver and restorer (#5) +- [22701c8](https://github.com/kubedb/mysql-archiver/commit/22701c8) Fix 5.7.x build +- [6da2b1c](https://github.com/kubedb/mysql-archiver/commit/6da2b1c) Update build matrix +- 
[e2f6244](https://github.com/kubedb/mysql-archiver/commit/e2f6244) Use separate dockerfile per mysql version (#9) +- [e800623](https://github.com/kubedb/mysql-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mysql-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mysql-archiver/commit/c46d991) Use appscode-images as base image (#6) +- [721eaa8](https://github.com/kubedb/mysql-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mysql-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mysql-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mysql-archiver/commit/dcd2e30) Fix wal-g binary +- [6c20a4a](https://github.com/kubedb/mysql-archiver/commit/6c20a4a) Fix build +- [f034e7b](https://github.com/kubedb/mysql-archiver/commit/f034e7b) Add build script (#1) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.18.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.18.0) + +- [8d6d3073](https://github.com/kubedb/mysql-coordinator/commit/8d6d3073) Prepare for release v0.18.0 (#93) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- [34bd9fd](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/34bd9fd) Prepare for release v0.1.0 (#1) +- [a0ddb4a](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/a0ddb4a) Enable GH actions + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.3.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.3.0) + +- [a364862](https://github.com/kubedb/mysql-restic-plugin/commit/a364862) Prepare for release v0.3.0 (#17) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.18.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.18.0) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.27.0](https://github.com/kubedb/ops-manager/releases/tag/v0.27.0) + +- [abbbea47](https://github.com/kubedb/ops-manager/commit/abbbea47) Prepare for release v0.27.0 (#505) +- [5955ffd7](https://github.com/kubedb/ops-manager/commit/5955ffd7) Delete pod if it was found in CrashLoopBackOff while restarting (#504) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.27.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.27.0) + +- [117ce794](https://github.com/kubedb/percona-xtradb/commit/117ce794) Prepare for release v0.27.0 (#340) +- [56a4b354](https://github.com/kubedb/percona-xtradb/commit/56a4b354) Fix initContainer resource (#339) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.13.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.13.0) + +- [8ce147f](https://github.com/kubedb/percona-xtradb-coordinator/commit/8ce147f) Prepare for release v0.13.0 (#53) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.24.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.24.0) + +- [e3f8df76](https://github.com/kubedb/pg-coordinator/commit/e3f8df76) Prepare for release v0.24.0 (#143) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### 
[v0.27.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.27.0) + +- [0abff0c8](https://github.com/kubedb/pgbouncer/commit/0abff0c8) Prepare for release v0.27.0 (#304) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.40.0](https://github.com/kubedb/postgres/releases/tag/v0.40.0) + +- [17d39368](https://github.com/kubedb/postgres/commit/17d393689) Prepare for release v0.40.0 (#695) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.1.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.1.0) + +- [12cb5f0](https://github.com/kubedb/postgres-archiver/commit/12cb5f0) Prepare for release v0.1.0 (#14) +- [91c52a5](https://github.com/kubedb/postgres-archiver/commit/91c52a5) Add tls support for connection string (#13) +- [c4f7e11](https://github.com/kubedb/postgres-archiver/commit/c4f7e11) Fix formatting +- [1feeaeb](https://github.com/kubedb/postgres-archiver/commit/1feeaeb) Fix wal-g version +- [c86ede7](https://github.com/kubedb/postgres-archiver/commit/c86ede7) Update readme +- [f5b4fb3](https://github.com/kubedb/postgres-archiver/commit/f5b4fb3) Rename to postgres-archiver +- [302fbc1](https://github.com/kubedb/postgres-archiver/commit/302fbc1) Merge pull request #12 from kubedb/cleanup +- [020b817](https://github.com/kubedb/postgres-archiver/commit/020b817) clean up +- [5ae6dee](https://github.com/kubedb/postgres-archiver/commit/5ae6dee) Add ca-certificates into docker image +- [2a9e7b5](https://github.com/kubedb/postgres-archiver/commit/2a9e7b5) Build images parallelly +- [ec05751](https://github.com/kubedb/postgres-archiver/commit/ec05751) Build bookworm images +- [1ed24d1](https://github.com/kubedb/postgres-archiver/commit/1ed24d1) Build multi version docker images +- [57dd7e5](https://github.com/kubedb/postgres-archiver/commit/57dd7e5) Format repo (#11) +- [adc5e71](https://github.com/kubedb/postgres-archiver/commit/adc5e71) Implement archiver command (#7) +- [7d0adba](https://github.com/kubedb/postgres-archiver/commit/7d0adba) Test against K8s 1.27.0 (#10) +- [9b2a242](https://github.com/kubedb/postgres-archiver/commit/9b2a242) Update Makefile +- [cbbe124](https://github.com/kubedb/postgres-archiver/commit/cbbe124) Use ghcr.io for appscode/golang-dev (#9) +- [03877ad](https://github.com/kubedb/postgres-archiver/commit/03877ad) Update workflows (Go 1.20, k8s 1.26) (#8) +- [ad607ec](https://github.com/kubedb/postgres-archiver/commit/ad607ec) Use Go 1.18 (#5) +- [32d1866](https://github.com/kubedb/postgres-archiver/commit/32d1866) Use Go 1.18 (#4) +- [42ae1cb](https://github.com/kubedb/postgres-archiver/commit/42ae1cb) make fmt (#3) +- [1a6fe8d](https://github.com/kubedb/postgres-archiver/commit/1a6fe8d) Update repository config (#2) +- [8100920](https://github.com/kubedb/postgres-archiver/commit/8100920) Update repository config (#1) +- [8e3c29d](https://github.com/kubedb/postgres-archiver/commit/8e3c29d) Add License and Makefile +- [0097568](https://github.com/kubedb/postgres-archiver/commit/0097568) fix: added proper wal-handler need to do: fix kill container +- [5ebca34](https://github.com/kubedb/postgres-archiver/commit/5ebca34) added check for primary Signed-off-by: Emon46 +- [0829021](https://github.com/kubedb/postgres-archiver/commit/0829021) added: Different wal dir for different base-backup +- [2e66200](https://github.com/kubedb/postgres-archiver/commit/2e66200) added: basebackup handler update: initial listing func +- [b9c938f](https://github.com/kubedb/postgres-archiver/commit/b9c938f) update: 
added walg base-backup in bucket storage Signed-off-by: Emon46 +- [3c8c8da](https://github.com/kubedb/postgres-archiver/commit/3c8c8da) fix: fix go routine for bucket listing update: added ticker in go routine update: combined two go-routine for listing file and bucket queue +- [21c1076](https://github.com/kubedb/postgres-archiver/commit/21c1076) added: wal-g push need to fix : filter list is not working +- [bc41b06](https://github.com/kubedb/postgres-archiver/commit/bc41b06) added - getExistingWalFiles func() updated - UpdateBucketPushQueue() Signed-off-by: Emon46 +- [b93fa5b](https://github.com/kubedb/postgres-archiver/commit/b93fa5b) added functions Signed-off-by: Emon331046 +- [7ded016](https://github.com/kubedb/postgres-archiver/commit/7ded016) added-func-name +- [cc7544d](https://github.com/kubedb/postgres-archiver/commit/cc7544d) file watcher local postgres watching Signed-off-by: Emon331046 + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- [b141665](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/b141665) Prepare for release v0.1.0 (#10) +- [bce9779](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/bce9779) Prepare for release v0.2.0 (#9) +- [31f5fc5](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/31f5fc5) Prepare for release v0.1.0 (#8) +- [57a7bdf](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/57a7bdf) Prepare for release v0.1.0-rc.1 (#7) +- [02a45da](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/02a45da) Prepare for release v0.1.0-rc.0 (#6) +- [1a6457c](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/1a6457c) Update flags and deps + Refactor (#5) +- [f32b56b](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f32b56b) Delete .idea folder +- [e7f8135](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/e7f8135) clean up (#4) +- [06e7e70](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/06e7e70) clean up (#3) +- [b23dd63](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/b23dd63) Add build scripts +- [2e1dff2](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/2e1dff2) Add Postgres backup plugin (#1) +- [d0d156b](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/d0d156b) Test against K8s 1.27.0 (#2) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.3.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.3.0) + +- [4a0356a](https://github.com/kubedb/postgres-restic-plugin/commit/4a0356a) Prepare for release v0.3.0 (#10) + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.2.0](https://github.com/kubedb/provider-aws/releases/tag/v0.2.0) + +- [ec4459c](https://github.com/kubedb/provider-aws/commit/ec4459c) Add dynamically start crd reconciler (#9) + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.2.0](https://github.com/kubedb/provider-azure/releases/tag/v0.2.0) + +- [0d449ff](https://github.com/kubedb/provider-azure/commit/0d449ff) Add dynamically start crd reconciler (#3) + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.2.0](https://github.com/kubedb/provider-gcp/releases/tag/v0.2.0) + +- [a3de663](https://github.com/kubedb/provider-gcp/commit/a3de663) Add dynamically 
start crd reconciler (#3) + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.40.0](https://github.com/kubedb/provisioner/releases/tag/v0.40.0) + +- [715f4be8](https://github.com/kubedb/provisioner/commit/715f4be87) Prepare for release v0.40.0 (#66) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.27.0](https://github.com/kubedb/proxysql/releases/tag/v0.27.0) + +- [1abe8cd0](https://github.com/kubedb/proxysql/commit/1abe8cd0) Prepare for release v0.27.0 (#318) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.33.0](https://github.com/kubedb/redis/releases/tag/v0.33.0) + +- [9e36ab06](https://github.com/kubedb/redis/commit/9e36ab06) Prepare for release v0.33.0 (#506) +- [58b47ecb](https://github.com/kubedb/redis/commit/58b47ecb) Fix initContainer resources (#505) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.19.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.19.0) + +- [c4d1d8b7](https://github.com/kubedb/redis-coordinator/commit/c4d1d8b7) Prepare for release v0.19.0 (#84) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.3.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.3.0) + +- [c7105ef](https://github.com/kubedb/redis-restic-plugin/commit/c7105ef) Prepare for release v0.3.0 (#14) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.27.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.27.0) + +- [125a1972](https://github.com/kubedb/replication-mode-detector/commit/125a1972) Prepare for release v0.27.0 (#248) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.16.0](https://github.com/kubedb/schema-manager/releases/tag/v0.16.0) + +- [4aef1f64](https://github.com/kubedb/schema-manager/commit/4aef1f64) Prepare for release v0.16.0 (#91) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.25.0](https://github.com/kubedb/tests/releases/tag/v0.25.0) + +- [a8a640dd](https://github.com/kubedb/tests/commit/a8a640dd) Prepare for release v0.25.0 (#278) +- [7149cc0c](https://github.com/kubedb/tests/commit/7149cc0c) Fix elasticsearch vertical scaling tests. 
(#277) +- [3c71eea9](https://github.com/kubedb/tests/commit/3c71eea9) Fix build for autoscaler & verticalOps breaking api-changes (#276) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.16.0](https://github.com/kubedb/ui-server/releases/tag/v0.16.0) + +- [4e1c32a2](https://github.com/kubedb/ui-server/commit/4e1c32a2) Prepare for release v0.16.0 (#100) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.16.0](https://github.com/kubedb/webhook-server/releases/tag/v0.16.0) + +- [17659ce8](https://github.com/kubedb/webhook-server/commit/17659ce8) Prepare for release v0.16.0 (#77) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2024.1.19-beta.1.md b/content/docs/v2024.1.31/CHANGELOG-v2024.1.19-beta.1.md new file mode 100644 index 0000000000..f770981442 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2024.1.19-beta.1.md @@ -0,0 +1,584 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2024.1.19-beta.1 + name: Changelog-v2024.1.19-beta.1 + parent: welcome + weight: 20240119 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2024.1.19-beta.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2024.1.19-beta.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2024.1.19-beta.1 (2024-01-20) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.41.0-beta.1](https://github.com/kubedb/apimachinery/releases/tag/v0.41.0-beta.1) + +- [ef49cbfa](https://github.com/kubedb/apimachinery/commit/ef49cbfa8) Update deps +- [f85d1410](https://github.com/kubedb/apimachinery/commit/f85d14100) Without non-root (#1122) +- [79fd675a](https://github.com/kubedb/apimachinery/commit/79fd675a0) Add `PausedBackups` field into `OpsRequestStatus` (#1114) +- [778a1af2](https://github.com/kubedb/apimachinery/commit/778a1af25) Add FerretDB Apis (#1119) +- [329083aa](https://github.com/kubedb/apimachinery/commit/329083aa6) Add missing entries while ignoring openapi schema (#1121) +- [0f8ac911](https://github.com/kubedb/apimachinery/commit/0f8ac9110) Fix API for new Databases (#1120) +- [b625c64c](https://github.com/kubedb/apimachinery/commit/b625c64c5) Fix issues with Pgpool HealthChecker field and version check in webhook (#1118) +- [e78c6ff7](https://github.com/kubedb/apimachinery/commit/e78c6ff74) Remove unnecessary apis for singlestore (#1117) +- [6e98cd41](https://github.com/kubedb/apimachinery/commit/6e98cd41c) Add Rabbitmq API (#1109) +- [e7a088fa](https://github.com/kubedb/apimachinery/commit/e7a088faf) Remove api call from Solr setDefaults. 
(#1116) +- [a73a825b](https://github.com/kubedb/apimachinery/commit/a73a825b7) Add Solr API (#1110) +- [9d687049](https://github.com/kubedb/apimachinery/commit/9d6870498) Pgpool Backend Set to Required (#1113) +- [72d44aef](https://github.com/kubedb/apimachinery/commit/72d44aef7) Fix ElasticsearchDashboard constants +- [0c40a769](https://github.com/kubedb/apimachinery/commit/0c40a7698) Change dashboard api group to elasticsearch (#1112) +- [85e4ae23](https://github.com/kubedb/apimachinery/commit/85e4ae232) Add ZooKeeper API (#1104) +- [ee446682](https://github.com/kubedb/apimachinery/commit/ee446682d) Add Pgpool apis (#1103) +- [4995ebf3](https://github.com/kubedb/apimachinery/commit/4995ebf3d) Add Druid API (#1111) +- [556a36df](https://github.com/kubedb/apimachinery/commit/556a36dfe) Add SingleStore APIs (#1108) +- [a72bb1ff](https://github.com/kubedb/apimachinery/commit/a72bb1ffc) Add runAsGroup field in mgVersion api (#1107) +- [1ee5ee41](https://github.com/kubedb/apimachinery/commit/1ee5ee41d) Add Kafka Connect Cluster and Connector APIs (#1066) +- [2fd99ee8](https://github.com/kubedb/apimachinery/commit/2fd99ee82) Fix replica count for arbiter & hidden node (#1106) +- [4e194f0a](https://github.com/kubedb/apimachinery/commit/4e194f0a2) Implement validator for autoscalers (#1105) +- [6a454592](https://github.com/kubedb/apimachinery/commit/6a4545928) Add kubestash controller for changing kubeDB phase (#1096) +- [44757753](https://github.com/kubedb/apimachinery/commit/447577539) Ignore validators.autoscaling.kubedb.com webhook handlers + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.26.0-beta.1](https://github.com/kubedb/autoscaler/releases/tag/v0.26.0-beta.1) + +- [7cef99b3](https://github.com/kubedb/autoscaler/commit/7cef99b3) Prepare for release v0.26.0-beta.1 (#181) +- [621bf52c](https://github.com/kubedb/autoscaler/commit/621bf52c) Use RestMapper to check for crd availability (#180) +- [2ae4e01e](https://github.com/kubedb/autoscaler/commit/2ae4e01e) Initialize kubebuilder client for webhooks; cleanup (#179) +- [e536b856](https://github.com/kubedb/autoscaler/commit/e536b856) Conditionally check for vpa & checkpoints (#178) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.41.0-beta.1](https://github.com/kubedb/cli/releases/tag/v0.41.0-beta.1) + +- [234b7051](https://github.com/kubedb/cli/commit/234b7051) Prepare for release v0.41.0-beta.1 (#748) +- [1ebdd532](https://github.com/kubedb/cli/commit/1ebdd532) Update deps + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.17.0-beta.1](https://github.com/kubedb/dashboard/releases/tag/v0.17.0-beta.1) + +- [999f215f](https://github.com/kubedb/dashboard/commit/999f215f) Prepare for release v0.17.0-beta.1 (#100) +- [80780e17](https://github.com/kubedb/dashboard/commit/80780e17) Change dashboard api group to elasticsearch (#99) +- [b362ecb6](https://github.com/kubedb/dashboard/commit/b362ecb6) Use Go client from db-client-go lib (#98) + + + +## [kubedb/druid](https://github.com/kubedb/druid) + +### [v0.0.1](https://github.com/kubedb/druid/releases/tag/v0.0.1) + +- [46c4387](https://github.com/kubedb/druid/commit/46c4387) Prepare for release v0.0.1 (#2) +- [3a9e0dd](https://github.com/kubedb/druid/commit/3a9e0dd) Add Druid Controller (#1) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.41.0-beta.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.41.0-beta.1) + +- [c410b39f](https://github.com/kubedb/elasticsearch/commit/c410b39f5) 
Prepare for release v0.41.0-beta.1 (#699) +- [3394f1d1](https://github.com/kubedb/elasticsearch/commit/3394f1d13) Use ptr.Deref(); Update deps +- [f00ee052](https://github.com/kubedb/elasticsearch/commit/f00ee052e) Update ci & makefile for crd-manager (#698) +- [e37e6d63](https://github.com/kubedb/elasticsearch/commit/e37e6d631) Add catalog client in scheme. (#697) +- [a46bfd41](https://github.com/kubedb/elasticsearch/commit/a46bfd41b) Add Support for DB phase change for restoring using KubeStash (#696) +- [9cbac2fc](https://github.com/kubedb/elasticsearch/commit/9cbac2fc4) Update makefile for dynamic crd installer (#695) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.4.0-beta.1) + +- [584dfd9](https://github.com/kubedb/elasticsearch-restic-plugin/commit/584dfd9) Prepare for release v0.4.0-beta.1 (#16) + + + +## [kubedb/ferretdb](https://github.com/kubedb/ferretdb) + +### [v0.0.1](https://github.com/kubedb/ferretdb/releases/tag/v0.0.1) + +- [68618ec](https://github.com/kubedb/ferretdb/commit/68618ec) Prepare for release v0.0.1 (#4) +- [9443437](https://github.com/kubedb/ferretdb/commit/9443437) Add github workflow files (#3) +- [0287771](https://github.com/kubedb/ferretdb/commit/0287771) Add FerretDB Controller (#2) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2024.1.19-beta.1](https://github.com/kubedb/installer/releases/tag/v2024.1.19-beta.1) + +- [a58a71f1](https://github.com/kubedb/installer/commit/a58a71f1) Prepare for release v2024.1.19-beta.1 (#813) +- [fad71f4d](https://github.com/kubedb/installer/commit/fad71f4d) Use appscode built opensearch images (#798) +- [016898c4](https://github.com/kubedb/installer/commit/016898c4) Update webhook values for solr. (#812) +- [238d29a9](https://github.com/kubedb/installer/commit/238d29a9) Add necessary cluster-role for kubestash (#811) +- [e476675f](https://github.com/kubedb/installer/commit/e476675f) Add Druid (#807) +- [ba594a40](https://github.com/kubedb/installer/commit/ba594a40) Add Pgpool (#809) +- [2fb21fa0](https://github.com/kubedb/installer/commit/2fb21fa0) Add ferretdb (#806) +- [1f285c40](https://github.com/kubedb/installer/commit/1f285c40) Update solr version crds. (#808) +- [588e078f](https://github.com/kubedb/installer/commit/588e078f) Revert "Update Solr webhook helm charts. (#796)" +- [f4db4314](https://github.com/kubedb/installer/commit/f4db4314) Update Solr webhook helm charts. 
(#796) +- [a33a050d](https://github.com/kubedb/installer/commit/a33a050d) Add Redis version 7.2.4 and 7.0.15 (#797) +- [9074c79c](https://github.com/kubedb/installer/commit/9074c79c) Add Singlestore (#782) +- [eec84b67](https://github.com/kubedb/installer/commit/eec84b67) Add rabbitmq crd (#785) +- [06495dbc](https://github.com/kubedb/installer/commit/06495dbc) Update crds for kubedb/apimachinery@0f8ac911 (#805) +- [bb4786ea](https://github.com/kubedb/installer/commit/bb4786ea) Update crds for kubedb/apimachinery@e78c6ff7 (#804) +- [55ff929e](https://github.com/kubedb/installer/commit/55ff929e) Add ZooKeeper Versions (#776) +- [c3283eb4](https://github.com/kubedb/installer/commit/c3283eb4) Add mongodb perconaserver 7.0.4; Deprecate 4.2.7 & 4.4.10 (#802) +- [4ad98522](https://github.com/kubedb/installer/commit/4ad98522) Add percona versions for mongodb (#775) +- [0bbf1794](https://github.com/kubedb/installer/commit/0bbf1794) Update kubestash backup and restore task names (#766) +- [08e4002d](https://github.com/kubedb/installer/commit/08e4002d) Use kafka featureGate for kafkaConnector; Remove PSP (#792) +- [eafb83d0](https://github.com/kubedb/installer/commit/eafb83d0) Change dashboard api group to elasticsearch (#794) +- [625cc6e8](https://github.com/kubedb/installer/commit/625cc6e8) Remove kubedb-dashboard charts from the kubedb/kubedb-one chart (#793) +- [4b1a8f0c](https://github.com/kubedb/installer/commit/4b1a8f0c) Add if condition to ApiService creation for kafka (#786) +- [7cd2242d](https://github.com/kubedb/installer/commit/7cd2242d) Add Kafka connector (#784) +- [696850fc](https://github.com/kubedb/installer/commit/696850fc) Add runAsGroup; Mongo 7.0.4 -> 7.0.5 (#780) +- [34708816](https://github.com/kubedb/installer/commit/34708816) Update crds for kubedb/apimachinery@a72bb1ff (#781) +- [3bc7789a](https://github.com/kubedb/installer/commit/3bc7789a) Add mgversion for mongodb 7.0.4 (#763) +- [94e2d5b2](https://github.com/kubedb/installer/commit/94e2d5b2) Add MySQL 5.7.42-debian +- [04b2def1](https://github.com/kubedb/installer/commit/04b2def1) Add validator for autoscaler (#777) + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.12.0-beta.1](https://github.com/kubedb/kafka/releases/tag/v0.12.0-beta.1) + +- [34f4967f](https://github.com/kubedb/kafka/commit/34f4967f) Prepare for release v0.12.0-beta.1 (#68) +- [7176931c](https://github.com/kubedb/kafka/commit/7176931c) Move Kafka PodTemplate to offshoot-api v2 (#66) +- [9454adf6](https://github.com/kubedb/kafka/commit/9454adf6) Update ci & makefile for crd-manager (#67) +- [fda770d8](https://github.com/kubedb/kafka/commit/fda770d8) Add kafka connector controller (#65) +- [6ed0ccd4](https://github.com/kubedb/kafka/commit/6ed0ccd4) Add Kafka connect controller (#44) +- [18e9a45c](https://github.com/kubedb/kafka/commit/18e9a45c) update deps (#64) +- [a7dfb409](https://github.com/kubedb/kafka/commit/a7dfb409) Update makefile for dynamic crd installer (#63) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.4.0-beta.1) + +- [c77b4ae](https://github.com/kubedb/kubedb-manifest-plugin/commit/c77b4ae) Prepare for release v0.4.0-beta.1 (#37) +- [6a8a822](https://github.com/kubedb/kubedb-manifest-plugin/commit/6a8a822) Update component name (#35) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.25.0-beta.1](https://github.com/kubedb/mariadb/releases/tag/v0.25.0-beta.1) + +- 
[c4d4942f](https://github.com/kubedb/mariadb/commit/c4d4942f8) Prepare for release v0.25.0-beta.1 (#250) +- [25fe3917](https://github.com/kubedb/mariadb/commit/25fe39177) Use ptr.Deref(); Update deps +- [c76704cc](https://github.com/kubedb/mariadb/commit/c76704cc8) Fix ci & makefile for crd-manager (#249) +- [67396abb](https://github.com/kubedb/mariadb/commit/67396abb9) Incorporate with apimachinery package name change from `stash` to `restore` (#248) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-beta.1](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-beta.1) + +- [e8564fe](https://github.com/kubedb/mariadb-archiver/commit/e8564fe) Prepare for release v0.1.0-beta.1 (#5) +- [e5e8945](https://github.com/kubedb/mariadb-archiver/commit/e5e8945) Don't use fail-fast + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.21.0-beta.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.21.0-beta.1) + +- [1c30e710](https://github.com/kubedb/mariadb-coordinator/commit/1c30e710) Prepare for release v0.21.0-beta.1 (#101) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0-beta.1](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0-beta.1) + +- [adac38d](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/adac38d) Prepare for release v0.1.0-beta.1 (#5) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.34.0-beta.1](https://github.com/kubedb/memcached/releases/tag/v0.34.0-beta.1) + +- [754ba398](https://github.com/kubedb/memcached/commit/754ba398) Prepare for release v0.34.0-beta.1 (#418) +- [abd9dbb6](https://github.com/kubedb/memcached/commit/abd9dbb6) Incorporate with apimachinery package name change from stash to restore (#417) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.34.0-beta.1](https://github.com/kubedb/mongodb/releases/tag/v0.34.0-beta.1) + +- [c0c58448](https://github.com/kubedb/mongodb/commit/c0c58448b) Prepare for release v0.34.0-beta.1 (#606) +- [5df39d09](https://github.com/kubedb/mongodb/commit/5df39d09f) Update ci mgVersion; Fix pointer dereference issue (#605) +- [e2781eae](https://github.com/kubedb/mongodb/commit/e2781eaea) Run ci with specific crd-manager branch (#604) +- [b57bc47a](https://github.com/kubedb/mongodb/commit/b57bc47ae) Add kubestash for health check (#603) +- [62cb9c81](https://github.com/kubedb/mongodb/commit/62cb9c816) Install crd-manager specifiying DATABASE (#602) +- [6bf45fe7](https://github.com/kubedb/mongodb/commit/6bf45fe72) 7.0.4 -> 7.0.5; update deps +- [e5b9841e](https://github.com/kubedb/mongodb/commit/e5b9841e5) Fix oplog backup directory (#601) +- [452b785f](https://github.com/kubedb/mongodb/commit/452b785f0) Add Support for DB phase change for restoring using `KubeStash` (#586) +- [35d93d0b](https://github.com/kubedb/mongodb/commit/35d93d0bc) add ssl/tls args command (#595) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0-beta.1](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.1) + +- [5680265](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/5680265) Prepare for release v0.2.0-beta.1 (#12) +- [72693c8](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/72693c8) Fix component driver status (#11) +- 
[0ea73ee](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/0ea73ee) Update deps (#10) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.4.0-beta.1) + +- [6ae8ae2](https://github.com/kubedb/mongodb-restic-plugin/commit/6ae8ae2) Prepare for release v0.4.0-beta.1 (#23) +- [d8e1636](https://github.com/kubedb/mongodb-restic-plugin/commit/d8e1636) Reorder the execution of cleanup funcs (#22) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.34.0-beta.1](https://github.com/kubedb/mysql/releases/tag/v0.34.0-beta.1) + +- [e9dbf269](https://github.com/kubedb/mysql/commit/e9dbf269c) Prepare for release v0.34.0-beta.1 (#599) +- [44eda2d2](https://github.com/kubedb/mysql/commit/44eda2d25) Prepare for release v0.34.0-beta.1 (#598) +- [16dd4637](https://github.com/kubedb/mysql/commit/16dd46377) Fix pointer dereference issue (#597) +- [334c1a1d](https://github.com/kubedb/mysql/commit/334c1a1dd) Update ci & makefile for crd-manager (#596) +- [edb9b1a1](https://github.com/kubedb/mysql/commit/edb9b1a11) Fix binlog backup directory (#587) +- [fc6d7030](https://github.com/kubedb/mysql/commit/fc6d70303) Add Support for DB phase change for restoring using KubeStash (#594) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.2.0-beta.1](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0-beta.1) + +- [e5bdae3](https://github.com/kubedb/mysql-archiver/commit/e5bdae3) Prepare for release v0.2.0-beta.1 (#15) +- [7ef752c](https://github.com/kubedb/mysql-archiver/commit/7ef752c) Refactor + Cleanup wal-g example files (#14) +- [5857a8d](https://github.com/kubedb/mysql-archiver/commit/5857a8d) Don't use fail-fast + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.19.0-beta.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.19.0-beta.1) + +- [59a11671](https://github.com/kubedb/mysql-coordinator/commit/59a11671) Prepare for release v0.19.0-beta.1 (#98) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.2.0-beta.1](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.1) + +- [d5771cf](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d5771cf) Prepare for release v0.2.0-beta.1 (#5) +- [b4ffc6f](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/b4ffc6f) Fix component driver status & Update deps (#3) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.4.0-beta.1) + +- [105888a](https://github.com/kubedb/mysql-restic-plugin/commit/105888a) Prepare for release v0.4.0-beta.1 (#21) +- [b42d0cf](https://github.com/kubedb/mysql-restic-plugin/commit/b42d0cf) Removed `--all-databases` flag for restoring (#20) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.19.0-beta.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.19.0-beta.1) + + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.28.0-beta.1](https://github.com/kubedb/ops-manager/releases/tag/v0.28.0-beta.1) + +- [5976d8ed](https://github.com/kubedb/ops-manager/commit/5976d8ed0) Prepare for release v0.28.0-beta.1 (#529) +- [90e4c315](https://github.com/kubedb/ops-manager/commit/90e4c3159) Update deps; Add license 
+- [d6c0e148](https://github.com/kubedb/ops-manager/commit/d6c0e1487) Add backupConfiguration `Pause` & `Resume` support for Kubestash (#528) +- [e9b4bfea](https://github.com/kubedb/ops-manager/commit/e9b4bfea0) Fix kafka vertical scaling ops request for offshoot-api v2 (#527) +- [b230d6bb](https://github.com/kubedb/ops-manager/commit/b230d6bb6) Made crd-manager non-required +- [439031ae](https://github.com/kubedb/ops-manager/commit/439031aea) Fix operator installation in ci (#526) +- [88014501](https://github.com/kubedb/ops-manager/commit/88014501f) Separate mongo ci according to profiles; Change `daily`'s schedule (#525) +- [335a3e49](https://github.com/kubedb/ops-manager/commit/335a3e49f) Add TLS support for Kafka Connect Cluster (#518) +- [69de3f3e](https://github.com/kubedb/ops-manager/commit/69de3f3e8) Run new mongo versions to ci (#524) +- [e5fbed83](https://github.com/kubedb/ops-manager/commit/e5fbed839) Incorporate with apimachinery package name change from stash to restore (#523) +- [6320384b](https://github.com/kubedb/ops-manager/commit/6320384bf) Reorganize recommendation pkg +- [8f2b36d7](https://github.com/kubedb/ops-manager/commit/8f2b36d72) Update wait condition in makefile (#522) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.28.0-beta.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.28.0-beta.1) + +- [475a5e32](https://github.com/kubedb/percona-xtradb/commit/475a5e328) Prepare for release v0.28.0-beta.1 (#348) +- [4c1380ab](https://github.com/kubedb/percona-xtradb/commit/4c1380ab7) Incorporate with apimachinery package name change from `stash` to `restore` (#347) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.14.0-beta.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.14.0-beta.1) + +- [560bc5c3](https://github.com/kubedb/percona-xtradb-coordinator/commit/560bc5c3) Prepare for release v0.14.0-beta.1 (#58) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.25.0-beta.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.25.0-beta.1) + +- [bc296307](https://github.com/kubedb/pg-coordinator/commit/bc296307) Prepare for release v0.25.0-beta.1 (#148) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.28.0-beta.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.28.0-beta.1) + +- [55c248d5](https://github.com/kubedb/pgbouncer/commit/55c248d5) Prepare for release v0.28.0-beta.1 (#312) +- [1b86664a](https://github.com/kubedb/pgbouncer/commit/1b86664a) Incorporate with apimachinery package name change from stash to restore (#311) + + + +## [kubedb/pgpool](https://github.com/kubedb/pgpool) + +### [v0.0.1](https://github.com/kubedb/pgpool/releases/tag/v0.0.1) + +- [dbb333b](https://github.com/kubedb/pgpool/commit/dbb333b) Prepare for release v0.0.1 (#3) +- [b9c96e2](https://github.com/kubedb/pgpool/commit/b9c96e2) Pgpool operator (#2) +- [7c878e7](https://github.com/kubedb/pgpool/commit/7c878e7) C1:bootstrap Initialization project and basic api design +- [c437da3](https://github.com/kubedb/pgpool/commit/c437da3) C1:bootstrap Initialization project and basic api design + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.41.0-beta.1](https://github.com/kubedb/postgres/releases/tag/v0.41.0-beta.1) + +- [72a1ee29](https://github.com/kubedb/postgres/commit/72a1ee294) Prepare for release v0.41.0-beta.1 (#708) +- 
[026598f4](https://github.com/kubedb/postgres/commit/026598f44) Prepare for release v0.41.0-beta.1 (#707) +- [8af305aa](https://github.com/kubedb/postgres/commit/8af305aa4) Use ptr.Deref(); Update deps +- [c7c0652d](https://github.com/kubedb/postgres/commit/c7c0652dc) Update ci & makefile for crd-manager (#706) +- [d468bdb3](https://github.com/kubedb/postgres/commit/d468bdb34) Fix wal backup directory (#705) +- [c6992bed](https://github.com/kubedb/postgres/commit/c6992bed8) Add Support for DB phase change for restoring using KubeStash (#704) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0-beta.1](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0-beta.1) + +- [c4405c1](https://github.com/kubedb/postgres-archiver/commit/c4405c1) Prepare for release v0.2.0-beta.1 (#17) + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0-beta.1](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.1) + +- [dc4f85e](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/dc4f85e) Prepare for release v0.2.0-beta.1 (#15) +- [098365a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/098365a) Update README.md (#14) +- [5ef571f](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/5ef571f) Update deps (#13) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.4.0-beta.1) + +- [4ed2b4a](https://github.com/kubedb/postgres-restic-plugin/commit/4ed2b4a) Prepare for release v0.4.0-beta.1 (#14) + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.3.0-beta.1](https://github.com/kubedb/provider-aws/releases/tag/v0.3.0-beta.1) + + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.3.0-beta.1](https://github.com/kubedb/provider-azure/releases/tag/v0.3.0-beta.1) + + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.3.0-beta.1](https://github.com/kubedb/provider-gcp/releases/tag/v0.3.0-beta.1) + + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.41.0-beta.1](https://github.com/kubedb/provisioner/releases/tag/v0.41.0-beta.1) + +- [52cb0fa9](https://github.com/kubedb/provisioner/commit/52cb0fa9c) Prepare for release v0.41.0-beta.1 (#75) +- [92f05e8e](https://github.com/kubedb/provisioner/commit/92f05e8e7) Add New Database support (#74) +- [514709fc](https://github.com/kubedb/provisioner/commit/514709fc9) Add ElasticsearchDashboard controllers (#73) +- [b826a5f1](https://github.com/kubedb/provisioner/commit/b826a5f1e) Add Support for DB phase change for restoring using KubeStash (#72) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.28.0-beta.1](https://github.com/kubedb/proxysql/releases/tag/v0.28.0-beta.1) + +- [213ebfc4](https://github.com/kubedb/proxysql/commit/213ebfc43) Prepare for release v0.28.0-beta.1 (#327) +- [8427158e](https://github.com/kubedb/proxysql/commit/8427158ec) Incorporate with apimachinery package name change from stash to restore (#325) + + + +## [kubedb/rabbitmq](https://github.com/kubedb/rabbitmq) + +### [v0.0.1](https://github.com/kubedb/rabbitmq/releases/tag/v0.0.1) + +- [48d2ec95](https://github.com/kubedb/rabbitmq/commit/48d2ec95) Prepare for release v0.0.1 (#2) +- [d9dcec0f](https://github.com/kubedb/rabbitmq/commit/d9dcec0f) 
Add Rabbitmq controller (#1) +- [6844a9cf](https://github.com/kubedb/rabbitmq/commit/6844a9cf) Add Appscode Community license and release workflows + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.34.0-beta.1](https://github.com/kubedb/redis/releases/tag/v0.34.0-beta.1) + +- [01290634](https://github.com/kubedb/redis/commit/01290634) Prepare for release v0.34.0-beta.1 (#517) +- [e51f93e1](https://github.com/kubedb/redis/commit/e51f93e1) Fix panic (#516) +- [dc75c163](https://github.com/kubedb/redis/commit/dc75c163) Update ci & makefile for crd-manager (#515) +- [09688f35](https://github.com/kubedb/redis/commit/09688f35) Add Support for DB phase change for restoring using KubeStash (#514) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.20.0-beta.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.20.0-beta.1) + +- [fd3b2112](https://github.com/kubedb/redis-coordinator/commit/fd3b2112) Prepare for release v0.20.0-beta.1 (#89) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.4.0-beta.1](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.4.0-beta.1) + +- [fac6226](https://github.com/kubedb/redis-restic-plugin/commit/fac6226) Prepare for release v0.4.0-beta.1 (#17) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.28.0-beta.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.28.0-beta.1) + +- [f948a650](https://github.com/kubedb/replication-mode-detector/commit/f948a650) Prepare for release v0.28.0-beta.1 (#253) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.17.0-beta.1](https://github.com/kubedb/schema-manager/releases/tag/v0.17.0-beta.1) + +- [f14516a9](https://github.com/kubedb/schema-manager/commit/f14516a9) Prepare for release v0.17.0-beta.1 (#97) + + + +## [kubedb/singlestore](https://github.com/kubedb/singlestore) + +### [v0.0.1](https://github.com/kubedb/singlestore/releases/tag/v0.0.1) + +- [8feeb79](https://github.com/kubedb/singlestore/commit/8feeb79) Prepare for release v0.0.1 (#5) +- [fb79ff9](https://github.com/kubedb/singlestore/commit/fb79ff9) Add Singlestore Operator (#4) + + + +## [kubedb/solr](https://github.com/kubedb/solr) + +### [v0.0.1](https://github.com/kubedb/solr/releases/tag/v0.0.1) + +- [58fb5b4](https://github.com/kubedb/solr/commit/58fb5b4) Prepare for release v0.0.1 (#1) +- [6b7c3ef](https://github.com/kubedb/solr/commit/6b7c3ef) Add release workflows +- [9db6c84](https://github.com/kubedb/solr/commit/9db6c84) Disable ferret db in catalog helm command. (#5) +- [19553e7](https://github.com/kubedb/solr/commit/19553e7) Add solr operator. 
(#3) +- [ff4b9ae](https://github.com/kubedb/solr/commit/ff4b9ae) Reset master (#4) +- [7804b0a](https://github.com/kubedb/solr/commit/7804b0a) Add initial controller implementation (#2) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.26.0-beta.1](https://github.com/kubedb/tests/releases/tag/v0.26.0-beta.1) + +- [3cfc1212](https://github.com/kubedb/tests/commit/3cfc1212) Prepare for release v0.26.0-beta.1 (#292) +- [b810e690](https://github.com/kubedb/tests/commit/b810e690) increase cpu limit for vertical scaling (#289) +- [c43985ba](https://github.com/kubedb/tests/commit/c43985ba) Change dashboard api group (#291) +- [1b96881e](https://github.com/kubedb/tests/commit/1b96881e) Fix error logging +- [33f78143](https://github.com/kubedb/tests/commit/33f78143) forceCleanup PVCs for mongo (#288) +- [0dcd3e38](https://github.com/kubedb/tests/commit/0dcd3e38) Add PostgreSQL logical replication tests (#202) +- [2f403c85](https://github.com/kubedb/tests/commit/2f403c85) Find profiles in array, Don't match with string (#286) +- [5aca2293](https://github.com/kubedb/tests/commit/5aca2293) Give time to PDB status to be updated (#285) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.17.0-beta.1](https://github.com/kubedb/ui-server/releases/tag/v0.17.0-beta.1) + +- [98c1a6dd](https://github.com/kubedb/ui-server/commit/98c1a6dd) Prepare for release v0.17.0-beta.1 (#105) +- [8173cfc2](https://github.com/kubedb/ui-server/commit/8173cfc2) Implement SingularNameProvider + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.17.0-beta.1](https://github.com/kubedb/webhook-server/releases/tag/v0.17.0-beta.1) + +- [2a84cedb](https://github.com/kubedb/webhook-server/commit/2a84cedb) Prepare for release v0.17.0-beta.1 (#89) +- [bb4a5c22](https://github.com/kubedb/webhook-server/commit/bb4a5c22) Add kafka connect-cluster (#87) +- [c46c6662](https://github.com/kubedb/webhook-server/commit/c46c6662) Add new Database support (#88) +- [c6387e9e](https://github.com/kubedb/webhook-server/commit/c6387e9e) Set default kubebuilder client for autoscaler (#86) +- [14c07899](https://github.com/kubedb/webhook-server/commit/14c07899) Incorporate apimachinery (#85) +- [266c79a0](https://github.com/kubedb/webhook-server/commit/266c79a0) Add kafka ops request validator (#84) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2024.1.26-rc.0.md b/content/docs/v2024.1.31/CHANGELOG-v2024.1.26-rc.0.md new file mode 100644 index 0000000000..dfdf6ecb08 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2024.1.26-rc.0.md @@ -0,0 +1,837 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2024.1.26-rc.0 + name: Changelog-v2024.1.26-rc.0 + parent: welcome + weight: 20240126 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2024.1.26-rc.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2024.1.26-rc.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2024.1.26-rc.0 (2024-01-27) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.41.0-rc.0](https://github.com/kubedb/apimachinery/releases/tag/v0.41.0-rc.0) + +- [32a0f294](https://github.com/kubedb/apimachinery/commit/32a0f2944) Update deps +- 
[c389dcb1](https://github.com/kubedb/apimachinery/commit/c389dcb17) Add Singlestore Config Type (#1136) +- [ef7f62fb](https://github.com/kubedb/apimachinery/commit/ef7f62fbd) Defaulting RunAsGroup (#1134) +- [e08f63ba](https://github.com/kubedb/apimachinery/commit/e08f63ba0) Minor fixes in release (#1135) +- [760f1c55](https://github.com/kubedb/apimachinery/commit/760f1c554) Ferretdb webhook and apis updated (#1132) +- [958de8ec](https://github.com/kubedb/apimachinery/commit/958de8ec3) Fix spelling mistakes in dashboard. (#1133) +- [f614ab97](https://github.com/kubedb/apimachinery/commit/f614ab976) Fix release issues and add version 28.0.1 (#1131) +- [df53756a](https://github.com/kubedb/apimachinery/commit/df53756a3) Fix dashboard config merger command. (#1126) +- [4b8a46ab](https://github.com/kubedb/apimachinery/commit/4b8a46ab1) Add kafka connector webhook (#1128) +- [3e06dc03](https://github.com/kubedb/apimachinery/commit/3e06dc03a) Update Rabbitmq helpers and webhooks (#1130) +- [23153f41](https://github.com/kubedb/apimachinery/commit/23153f41f) Add ZooKeeper Standalone Mode (#1129) +- [650406ba](https://github.com/kubedb/apimachinery/commit/650406ba8) Remove replica condition for Pgpool (#1127) +- [dbd8e067](https://github.com/kubedb/apimachinery/commit/dbd8e0679) Update docker/docker +- [a28b2662](https://github.com/kubedb/apimachinery/commit/a28b2662e) Add validator to check negative number of replicas. (#1124) +- [cc189c3c](https://github.com/kubedb/apimachinery/commit/cc189c3c8) Add utilities to extract databaseInfo (#1123) +- [ceef191e](https://github.com/kubedb/apimachinery/commit/ceef191e0) Fix short name for FerretDBVersion +- [ef49cbfa](https://github.com/kubedb/apimachinery/commit/ef49cbfa8) Update deps +- [f85d1410](https://github.com/kubedb/apimachinery/commit/f85d14100) Without non-root (#1122) +- [79fd675a](https://github.com/kubedb/apimachinery/commit/79fd675a0) Add `PausedBackups` field into `OpsRequestStatus` (#1114) +- [778a1af2](https://github.com/kubedb/apimachinery/commit/778a1af25) Add FerretDB Apis (#1119) +- [329083aa](https://github.com/kubedb/apimachinery/commit/329083aa6) Add missing entries while ignoring openapi schema (#1121) +- [0f8ac911](https://github.com/kubedb/apimachinery/commit/0f8ac9110) Fix API for new Databases (#1120) +- [b625c64c](https://github.com/kubedb/apimachinery/commit/b625c64c5) Fix issues with Pgpool HealthChecker field and version check in webhook (#1118) +- [e78c6ff7](https://github.com/kubedb/apimachinery/commit/e78c6ff74) Remove unnecessary apis for singlestore (#1117) +- [6e98cd41](https://github.com/kubedb/apimachinery/commit/6e98cd41c) Add Rabbitmq API (#1109) +- [e7a088fa](https://github.com/kubedb/apimachinery/commit/e7a088faf) Remove api call from Solr setDefaults. 
(#1116) +- [a73a825b](https://github.com/kubedb/apimachinery/commit/a73a825b7) Add Solr API (#1110) +- [9d687049](https://github.com/kubedb/apimachinery/commit/9d6870498) Pgpool Backend Set to Required (#1113) +- [72d44aef](https://github.com/kubedb/apimachinery/commit/72d44aef7) Fix ElasticsearchDashboard constants +- [0c40a769](https://github.com/kubedb/apimachinery/commit/0c40a7698) Change dashboard api group to elasticsearch (#1112) +- [85e4ae23](https://github.com/kubedb/apimachinery/commit/85e4ae232) Add ZooKeeper API (#1104) +- [ee446682](https://github.com/kubedb/apimachinery/commit/ee446682d) Add Pgpool apis (#1103) +- [4995ebf3](https://github.com/kubedb/apimachinery/commit/4995ebf3d) Add Druid API (#1111) +- [556a36df](https://github.com/kubedb/apimachinery/commit/556a36dfe) Add SingleStore APIS (#1108) +- [a72bb1ff](https://github.com/kubedb/apimachinery/commit/a72bb1ffc) Add runAsGroup field in mgVersion api (#1107) +- [1ee5ee41](https://github.com/kubedb/apimachinery/commit/1ee5ee41d) Add Kafka Connect Cluster and Connector APIs (#1066) +- [2fd99ee8](https://github.com/kubedb/apimachinery/commit/2fd99ee82) Fix replica count for arbiter & hidden node (#1106) +- [4e194f0a](https://github.com/kubedb/apimachinery/commit/4e194f0a2) Implement validator for autoscalers (#1105) +- [6a454592](https://github.com/kubedb/apimachinery/commit/6a4545928) Add kubestash controller for changing kubeDB phase (#1096) +- [44757753](https://github.com/kubedb/apimachinery/commit/447577539) Ignore validators.autoscaling.kubedb.com webhook handlers +- [45cbf75e](https://github.com/kubedb/apimachinery/commit/45cbf75e3) Update deps +- [dc224c1a](https://github.com/kubedb/apimachinery/commit/dc224c1a1) Remove crd informer (#1102) +- [87c402a1](https://github.com/kubedb/apimachinery/commit/87c402a1a) Remove discovery.ResourceMapper (#1101) +- [a1d475ce](https://github.com/kubedb/apimachinery/commit/a1d475ceb) Replace deprecated PollImmediate (#1100) +- [75db4a37](https://github.com/kubedb/apimachinery/commit/75db4a378) Add ConfigureOpenAPI helper (#1099) +- [83be295b](https://github.com/kubedb/apimachinery/commit/83be295b0) update sidekick deps +- [032b2721](https://github.com/kubedb/apimachinery/commit/032b27211) Fix linter +- [389a934c](https://github.com/kubedb/apimachinery/commit/389a934c7) Use k8s 1.29 client libs (#1093) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.26.0-rc.0](https://github.com/kubedb/autoscaler/releases/tag/v0.26.0-rc.0) + + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.41.0-rc.0](https://github.com/kubedb/cli/releases/tag/v0.41.0-rc.0) + +- [64ad0b63](https://github.com/kubedb/cli/commit/64ad0b63) Prepare for release v0.41.0-rc.0 (#749) +- [d188eae6](https://github.com/kubedb/cli/commit/d188eae6) Grafana dashboard's metric checking CLI (#740) +- [234b7051](https://github.com/kubedb/cli/commit/234b7051) Prepare for release v0.41.0-beta.1 (#748) +- [1ebdd532](https://github.com/kubedb/cli/commit/1ebdd532) Update deps +- [c0165e83](https://github.com/kubedb/cli/commit/c0165e83) Prepare for release v0.41.0-beta.0 (#747) +- [d9c905e5](https://github.com/kubedb/cli/commit/d9c905e5) Update deps (#746) +- [bc415a1d](https://github.com/kubedb/cli/commit/bc415a1d) Update deps (#745) + + + +## [kubedb/crd-manager](https://github.com/kubedb/crd-manager) + +### [v0.0.2](https://github.com/kubedb/crd-manager/releases/tag/v0.0.2) + +- [5c6b4d6](https://github.com/kubedb/crd-manager/commit/5c6b4d6) Prepare for release v0.0.2 (#10) +- 
[e6e03ae](https://github.com/kubedb/crd-manager/commit/e6e03ae) Add --remove-unused-crds (#9) +- [6b48b3d](https://github.com/kubedb/crd-manager/commit/6b48b3d) Hide new databases +- [a872af9](https://github.com/kubedb/crd-manager/commit/a872af9) Fix Apimachinery module (#8) +- [f7fccb6](https://github.com/kubedb/crd-manager/commit/f7fccb6) Install kubestash crds for ops_manager (#7) +- [514f51c](https://github.com/kubedb/crd-manager/commit/514f51c) Set multiple values to true in featureGates (#5) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.17.0-rc.0](https://github.com/kubedb/dashboard/releases/tag/v0.17.0-rc.0) + + + + +## [kubedb/db-client-go](https://github.com/kubedb/db-client-go) + +### [v0.0.9](https://github.com/kubedb/db-client-go/releases/tag/v0.0.9) + +- [b254eda7](https://github.com/kubedb/db-client-go/commit/b254eda7) Prepare for release v0.0.9 (#83) +- [22edae9f](https://github.com/kubedb/db-client-go/commit/22edae9f) Add support for Opensearch Dashboard client (#82) +- [dd2b92a0](https://github.com/kubedb/db-client-go/commit/dd2b92a0) Add backup and restore methods for kibana dashboard (#81) +- [649baaf6](https://github.com/kubedb/db-client-go/commit/649baaf6) Add release workflow +- [34b87965](https://github.com/kubedb/db-client-go/commit/34b87965) Add release tracker script +- [7f4d5847](https://github.com/kubedb/db-client-go/commit/7f4d5847) Add Pgpool DB-Client (#80) +- [60162574](https://github.com/kubedb/db-client-go/commit/60162574) Change dashboard api group to elasticsearch (#79) +- [3b88c8fa](https://github.com/kubedb/db-client-go/commit/3b88c8fa) Add Singlestore db-client (#73) +- [70c5b516](https://github.com/kubedb/db-client-go/commit/70c5b516) Add client libraries for kafka and kafka connect (#74) +- [d8bc9aa1](https://github.com/kubedb/db-client-go/commit/d8bc9aa1) Add Go client for ElasticsearchDashboard (#78) +- [49a0c0b6](https://github.com/kubedb/db-client-go/commit/49a0c0b6) Update deps (#77) +- [cd32078b](https://github.com/kubedb/db-client-go/commit/cd32078b) Update deps (#76) +- [986266b2](https://github.com/kubedb/db-client-go/commit/986266b2) Use k8s 1.29 client libs (#75) + + + +## [kubedb/druid](https://github.com/kubedb/druid) + +### [v0.0.2](https://github.com/kubedb/druid/releases/tag/v0.0.2) + +- [8fb5537](https://github.com/kubedb/druid/commit/8fb5537) Prepare for release v0.0.2 (#6) +- [91f4519](https://github.com/kubedb/druid/commit/91f4519) Remove cassandra, clickhouse, etcd flags +- [3cc3281](https://github.com/kubedb/druid/commit/3cc3281) Updates for running Druid as non-root (#5) +- [125a642](https://github.com/kubedb/druid/commit/125a642) Fix release issues and add version 28.0.1 (#4) +- [9d8305b](https://github.com/kubedb/druid/commit/9d8305b) Update install recipes to install zookeeper also (#1) +- [956d511](https://github.com/kubedb/druid/commit/956d511) Remove manager binary (#3) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.41.0-rc.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.41.0-rc.0) + +- [69735e9e](https://github.com/kubedb/elasticsearch/commit/69735e9e1) Prepare for release v0.41.0-rc.0 (#700) +- [c410b39f](https://github.com/kubedb/elasticsearch/commit/c410b39f5) Prepare for release v0.41.0-beta.1 (#699) +- [3394f1d1](https://github.com/kubedb/elasticsearch/commit/3394f1d13) Use ptr.Deref(); Update deps +- [f00ee052](https://github.com/kubedb/elasticsearch/commit/f00ee052e) Update ci & makefile for crd-manager (#698) +- 
[e37e6d63](https://github.com/kubedb/elasticsearch/commit/e37e6d631) Add catalog client in scheme. (#697) +- [a46bfd41](https://github.com/kubedb/elasticsearch/commit/a46bfd41b) Add Support for DB phase change for restoring using KubeStash (#696) +- [9cbac2fc](https://github.com/kubedb/elasticsearch/commit/9cbac2fc4) Update makefile for dynamic crd installer (#695) +- [3ab4d77d](https://github.com/kubedb/elasticsearch/commit/3ab4d77d2) Prepare for release v0.41.0-beta.0 (#694) +- [c38c61cb](https://github.com/kubedb/elasticsearch/commit/c38c61cbc) Dynamically start crd controller (#693) +- [6a798d30](https://github.com/kubedb/elasticsearch/commit/6a798d309) Update deps (#692) +- [bdf034a4](https://github.com/kubedb/elasticsearch/commit/bdf034a49) Update deps (#691) +- [ea22eecb](https://github.com/kubedb/elasticsearch/commit/ea22eecb2) Add openapi configuration for webhook server (#690) +- [b97636cd](https://github.com/kubedb/elasticsearch/commit/b97636cd1) Update lint command +- [0221ac14](https://github.com/kubedb/elasticsearch/commit/0221ac14e) Update deps +- [b4cb8d60](https://github.com/kubedb/elasticsearch/commit/b4cb8d603) Use k8s 1.29 client libs (#689) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.4.0-rc.0) + +- [18ea6da](https://github.com/kubedb/elasticsearch-restic-plugin/commit/18ea6da) Prepare for release v0.4.0-rc.0 (#17) +- [584dfd9](https://github.com/kubedb/elasticsearch-restic-plugin/commit/584dfd9) Prepare for release v0.4.0-beta.1 (#16) +- [5e9aef5](https://github.com/kubedb/elasticsearch-restic-plugin/commit/5e9aef5) Prepare for release v0.4.0-beta.0 (#15) +- [2fdcafa](https://github.com/kubedb/elasticsearch-restic-plugin/commit/2fdcafa) Use k8s 1.29 client libs (#14) + + + +## [kubedb/ferretdb](https://github.com/kubedb/ferretdb) + +### [v0.0.2](https://github.com/kubedb/ferretdb/releases/tag/v0.0.2) + +- [4ffe133](https://github.com/kubedb/ferretdb/commit/4ffe133) Prepare for release v0.0.2 (#6) +- [9df7b8f](https://github.com/kubedb/ferretdb/commit/9df7b8f) Remove cassandra, clickhouse, etcd flags +- [23ec3b8](https://github.com/kubedb/ferretdb/commit/23ec3b8) Update install recipes in makefile (#5) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2024.1.26-rc.0](https://github.com/kubedb/installer/releases/tag/v2024.1.26-rc.0) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.12.0-rc.0](https://github.com/kubedb/kafka/releases/tag/v0.12.0-rc.0) + +- [9d73e3ce](https://github.com/kubedb/kafka/commit/9d73e3ce) Prepare for release v0.12.0-rc.0 (#71) +- [c1d08f75](https://github.com/kubedb/kafka/commit/c1d08f75) Remove cassandra, clickhouse, etcd flags +- [e7283583](https://github.com/kubedb/kafka/commit/e7283583) Fix podtemplate containers reference issue (#70) +- [6d04bf0f](https://github.com/kubedb/kafka/commit/6d04bf0f) Add termination policy for kafka and connect cluster (#69) +- [34f4967f](https://github.com/kubedb/kafka/commit/34f4967f) Prepare for release v0.12.0-beta.1 (#68) +- [7176931c](https://github.com/kubedb/kafka/commit/7176931c) Move Kafka Podtemplate to offshoot-api v2 (#66) +- [9454adf6](https://github.com/kubedb/kafka/commit/9454adf6) Update ci & makefile for crd-manager (#67) +- [fda770d8](https://github.com/kubedb/kafka/commit/fda770d8) Add kafka connector controller (#65) +- [6ed0ccd4](https://github.com/kubedb/kafka/commit/6ed0ccd4) Add Kafka connect 
controller (#44) +- [18e9a45c](https://github.com/kubedb/kafka/commit/18e9a45c) update deps (#64) +- [a7dfb409](https://github.com/kubedb/kafka/commit/a7dfb409) Update makefile for dynamic crd installer (#63) +- [f9350578](https://github.com/kubedb/kafka/commit/f9350578) Prepare for release v0.12.0-beta.0 (#62) +- [692f2bef](https://github.com/kubedb/kafka/commit/692f2bef) Dynamically start crd controller (#61) +- [a50dc8b4](https://github.com/kubedb/kafka/commit/a50dc8b4) Update deps (#60) +- [7ff28ed7](https://github.com/kubedb/kafka/commit/7ff28ed7) Update deps (#59) +- [16130571](https://github.com/kubedb/kafka/commit/16130571) Add openapi configuration for webhook server (#58) +- [cc465de9](https://github.com/kubedb/kafka/commit/cc465de9) Use k8s 1.29 client libs (#57) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.4.0-rc.0) + +- [b7ec4a4](https://github.com/kubedb/kubedb-manifest-plugin/commit/b7ec4a4) Prepare for release v0.4.0-rc.0 (#38) +- [c77b4ae](https://github.com/kubedb/kubedb-manifest-plugin/commit/c77b4ae) Prepare for release v0.4.0-beta.1 (#37) +- [6a8a822](https://github.com/kubedb/kubedb-manifest-plugin/commit/6a8a822) Update component name (#35) +- [c315615](https://github.com/kubedb/kubedb-manifest-plugin/commit/c315615) Prepare for release v0.4.0-beta.0 (#36) +- [5ce328d](https://github.com/kubedb/kubedb-manifest-plugin/commit/5ce328d) Use k8s 1.29 client libs (#34) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.25.0-rc.0](https://github.com/kubedb/mariadb/releases/tag/v0.25.0-rc.0) + +- [4bdcd6cc](https://github.com/kubedb/mariadb/commit/4bdcd6cca) Prepare for release v0.25.0-rc.0 (#252) +- [c4d4942f](https://github.com/kubedb/mariadb/commit/c4d4942f8) Prepare for release v0.25.0-beta.1 (#250) +- [25fe3917](https://github.com/kubedb/mariadb/commit/25fe39177) Use ptr.Deref(); Update deps +- [c76704cc](https://github.com/kubedb/mariadb/commit/c76704cc8) Fix ci & makefile for crd-manager (#249) +- [67396abb](https://github.com/kubedb/mariadb/commit/67396abb9) Incorporate with apimachinery package name change from `stash` to `restore` (#248) +- [b93ddce3](https://github.com/kubedb/mariadb/commit/b93ddce3d) Prepare for release v0.25.0-beta.0 (#247) +- [8099af6d](https://github.com/kubedb/mariadb/commit/8099af6d9) Dynamically start crd controller (#246) +- [0a9dd9e0](https://github.com/kubedb/mariadb/commit/0a9dd9e03) Update deps (#245) +- [5c548629](https://github.com/kubedb/mariadb/commit/5c548629e) Update deps (#244) +- [0f9ea4f2](https://github.com/kubedb/mariadb/commit/0f9ea4f20) Update deps +- [89641d3c](https://github.com/kubedb/mariadb/commit/89641d3c7) Use k8s 1.29 client libs (#242) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-rc.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-rc.0) + +- [90b9d66](https://github.com/kubedb/mariadb-archiver/commit/90b9d66) Prepare for release v0.1.0-rc.0 (#6) +- [e8564fe](https://github.com/kubedb/mariadb-archiver/commit/e8564fe) Prepare for release v0.1.0-beta.1 (#5) +- [e5e8945](https://github.com/kubedb/mariadb-archiver/commit/e5e8945) Don't use fail-fast +- [8c8e09a](https://github.com/kubedb/mariadb-archiver/commit/8c8e09a) Prepare for release v0.1.0-beta.0 (#4) +- [90ae04c](https://github.com/kubedb/mariadb-archiver/commit/90ae04c) Use k8s 1.29 client libs (#3) +- 
[b3067c8](https://github.com/kubedb/mariadb-archiver/commit/b3067c8) Fix binlog command +- [5cc0b6a](https://github.com/kubedb/mariadb-archiver/commit/5cc0b6a) Fix release workflow +- [910b7ce](https://github.com/kubedb/mariadb-archiver/commit/910b7ce) Prepare for release v0.1.0 (#1) +- [3801668](https://github.com/kubedb/mariadb-archiver/commit/3801668) mysql -> mariadb +- [4e905fb](https://github.com/kubedb/mariadb-archiver/commit/4e905fb) Implement new algorithm for archiver and restorer (#5) +- [22701c8](https://github.com/kubedb/mariadb-archiver/commit/22701c8) Fix 5.7.x build +- [6da2b1c](https://github.com/kubedb/mariadb-archiver/commit/6da2b1c) Update build matrix +- [e2f6244](https://github.com/kubedb/mariadb-archiver/commit/e2f6244) Use separate dockerfile per mysql version (#9) +- [e800623](https://github.com/kubedb/mariadb-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mariadb-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mariadb-archiver/commit/c46d991) Use appscode-images as base image (#6) +- [721eaa8](https://github.com/kubedb/mariadb-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mariadb-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mariadb-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mariadb-archiver/commit/dcd2e30) Fix wal-g binary +- [6c20a4a](https://github.com/kubedb/mariadb-archiver/commit/6c20a4a) Fix build +- [f034e7b](https://github.com/kubedb/mariadb-archiver/commit/f034e7b) Add build script (#1) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.21.0-rc.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.21.0-rc.0) + +- [15a83758](https://github.com/kubedb/mariadb-coordinator/commit/15a83758) Prepare for release v0.21.0-rc.0 (#102) +- [1c30e710](https://github.com/kubedb/mariadb-coordinator/commit/1c30e710) Prepare for release v0.21.0-beta.1 (#101) +- [28677618](https://github.com/kubedb/mariadb-coordinator/commit/28677618) Prepare for release v0.21.0-beta.0 (#100) +- [655a2c66](https://github.com/kubedb/mariadb-coordinator/commit/655a2c66) Update deps (#99) +- [ef206cfe](https://github.com/kubedb/mariadb-coordinator/commit/ef206cfe) Update deps (#98) +- [ef72c98b](https://github.com/kubedb/mariadb-coordinator/commit/ef72c98b) Use k8s 1.29 client libs (#97) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0-rc.0](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.0) + +- [ebd73c7](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/ebd73c7) Prepare for release v0.1.0-rc.0 (#6) +- [adac38d](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/adac38d) Prepare for release v0.1.0-beta.1 (#5) +- [09f68b7](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/09f68b7) Prepare for release v0.1.0-beta.0 (#4) +- [7407444](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/7407444) Use k8s 1.29 client libs (#3) +- [933e138](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/933e138) Prepare for release v0.1.0 (#2) +- [5d38f94](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/5d38f94) Enable GH actions +- [2a97178](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/2a97178) Replace 
mysql with mariadb + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.34.0-rc.0](https://github.com/kubedb/memcached/releases/tag/v0.34.0-rc.0) + +- [3ae5739b](https://github.com/kubedb/memcached/commit/3ae5739b) Prepare for release v0.34.0-rc.0 (#419) +- [754ba398](https://github.com/kubedb/memcached/commit/754ba398) Prepare for release v0.34.0-beta.1 (#418) +- [abd9dbb6](https://github.com/kubedb/memcached/commit/abd9dbb6) Incorporate with apimachinery package name change from stash to restore (#417) +- [6fe1686a](https://github.com/kubedb/memcached/commit/6fe1686a) Prepare for release v0.34.0-beta.0 (#416) +- [1cfb0544](https://github.com/kubedb/memcached/commit/1cfb0544) Dynamically start crd controller (#415) +- [171faff2](https://github.com/kubedb/memcached/commit/171faff2) Update deps (#414) +- [639495c7](https://github.com/kubedb/memcached/commit/639495c7) Update deps (#413) +- [223d295a](https://github.com/kubedb/memcached/commit/223d295a) Use k8s 1.29 client libs (#412) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.34.0-rc.0](https://github.com/kubedb/mongodb/releases/tag/v0.34.0-rc.0) + +- [278ce846](https://github.com/kubedb/mongodb/commit/278ce846b) Prepare for release v0.34.0-rc.0 (#607) +- [c0c58448](https://github.com/kubedb/mongodb/commit/c0c58448b) Prepare for release v0.34.0-beta.1 (#606) +- [5df39d09](https://github.com/kubedb/mongodb/commit/5df39d09f) Update ci mgVersion; Fix pointer dereference issue (#605) +- [e2781eae](https://github.com/kubedb/mongodb/commit/e2781eaea) Run ci with specific crd-manager branch (#604) +- [b57bc47a](https://github.com/kubedb/mongodb/commit/b57bc47ae) Add kubestash for health check (#603) +- [62cb9c81](https://github.com/kubedb/mongodb/commit/62cb9c816) Install crd-manager specifying DATABASE (#602) +- [6bf45fe7](https://github.com/kubedb/mongodb/commit/6bf45fe72) 7.0.4 -> 7.0.5; update deps +- [e5b9841e](https://github.com/kubedb/mongodb/commit/e5b9841e5) Fix oplog backup directory (#601) +- [452b785f](https://github.com/kubedb/mongodb/commit/452b785f0) Add Support for DB phase change for restoring using `KubeStash` (#586) +- [35d93d0b](https://github.com/kubedb/mongodb/commit/35d93d0bc) add ssl/tls args command (#595) +- [7ff67238](https://github.com/kubedb/mongodb/commit/7ff672382) Prepare for release v0.34.0-beta.0 (#600) +- [beca63a4](https://github.com/kubedb/mongodb/commit/beca63a48) Dynamically start crd controller (#599) +- [17d90616](https://github.com/kubedb/mongodb/commit/17d90616d) Update deps (#598) +- [bc25ca00](https://github.com/kubedb/mongodb/commit/bc25ca001) Update deps (#597) +- [4ce5a94a](https://github.com/kubedb/mongodb/commit/4ce5a94a4) Configure openapi for webhook server (#596) +- [8d8206db](https://github.com/kubedb/mongodb/commit/8d8206db3) Update ci versions +- [bfdd519f](https://github.com/kubedb/mongodb/commit/bfdd519fc) Update deps +- [01a7c268](https://github.com/kubedb/mongodb/commit/01a7c2685) Use k8s 1.29 client libs (#594) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0-rc.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.0) + +- [afd4fdb](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/afd4fdb) Prepare for release v0.2.0-rc.0 (#13) +- [5680265](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/5680265) Prepare for release v0.2.0-beta.1 (#12) +- 
[72693c8](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/72693c8) Fix component driver status (#11) +- [0ea73ee](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/0ea73ee) Update deps (#10) +- [ef74421](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/ef74421) Prepare for release v0.2.0-beta.0 (#9) +- [c2c9bd4](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/c2c9bd4) Use k8s 1.29 client libs (#8) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.4.0-rc.0) + +- [bff5aa4](https://github.com/kubedb/mongodb-restic-plugin/commit/bff5aa4) Prepare for release v0.4.0-rc.0 (#24) +- [6ae8ae2](https://github.com/kubedb/mongodb-restic-plugin/commit/6ae8ae2) Prepare for release v0.4.0-beta.1 (#23) +- [d8e1636](https://github.com/kubedb/mongodb-restic-plugin/commit/d8e1636) Reorder the execution of cleanup funcs (#22) +- [4f0b021](https://github.com/kubedb/mongodb-restic-plugin/commit/4f0b021) Prepare for release v0.4.0-beta.0 (#20) +- [91ee7c0](https://github.com/kubedb/mongodb-restic-plugin/commit/91ee7c0) Use k8s 1.29 client libs (#19) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.34.0-rc.0](https://github.com/kubedb/mysql/releases/tag/v0.34.0-rc.0) + +- [aaaf3aad](https://github.com/kubedb/mysql/commit/aaaf3aad0) Prepare for release v0.34.0-rc.0 (#604) +- [d2f2eba7](https://github.com/kubedb/mysql/commit/d2f2eba7d) Refactor (#602) +- [fa00fc42](https://github.com/kubedb/mysql/commit/fa00fc424) Fix provider env in sidekick (#601) +- [e75f6e26](https://github.com/kubedb/mysql/commit/e75f6e26e) Fix restore service selector (#600) +- [e9dbf269](https://github.com/kubedb/mysql/commit/e9dbf269c) Prepare for release v0.34.0-beta.1 (#599) +- [44eda2d2](https://github.com/kubedb/mysql/commit/44eda2d25) Prepare for release v0.34.0-beta.1 (#598) +- [16dd4637](https://github.com/kubedb/mysql/commit/16dd46377) Fix pointer dereference issue (#597) +- [334c1a1d](https://github.com/kubedb/mysql/commit/334c1a1dd) Update ci & makefile for crd-manager (#596) +- [edb9b1a1](https://github.com/kubedb/mysql/commit/edb9b1a11) Fix binlog backup directory (#587) +- [fc6d7030](https://github.com/kubedb/mysql/commit/fc6d70303) Add Support for DB phase change for restoring using KubeStash (#594) +- [354f6f3e](https://github.com/kubedb/mysql/commit/354f6f3e1) Prepare for release v0.34.0-beta.0 (#593) +- [01498d02](https://github.com/kubedb/mysql/commit/01498d025) Dynamically start crd controller (#592) +- [e68015cf](https://github.com/kubedb/mysql/commit/e68015cfd) Update deps (#591) +- [67029acc](https://github.com/kubedb/mysql/commit/67029acc9) Update deps (#590) +- [87d2de4a](https://github.com/kubedb/mysql/commit/87d2de4a1) Include kubestash catalog chart in makefile (#588) +- [e5874ffb](https://github.com/kubedb/mysql/commit/e5874ffb7) Add openapi configuration for webhook server (#589) +- [977d3cd3](https://github.com/kubedb/mysql/commit/977d3cd38) Update deps +- [3df86853](https://github.com/kubedb/mysql/commit/3df868533) Use k8s 1.29 client libs (#586) +- [d159ad05](https://github.com/kubedb/mysql/commit/d159ad052) Ensure MySQLArchiver crd (#585) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.2.0-rc.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0-rc.0) + +- [a6fdf50](https://github.com/kubedb/mysql-archiver/commit/a6fdf50) Prepare for release v0.2.0-rc.0 (#18) 
+- [718511e](https://github.com/kubedb/mysql-archiver/commit/718511e) Remove obsolete files (#16) +- [07fc1eb](https://github.com/kubedb/mysql-archiver/commit/07fc1eb) Fix mysql-community-common version in docker file +- [e5bdae3](https://github.com/kubedb/mysql-archiver/commit/e5bdae3) Prepare for release v0.2.0-beta.1 (#15) +- [7ef752c](https://github.com/kubedb/mysql-archiver/commit/7ef752c) Refactor + Cleanup wal-g example files (#14) +- [5857a8d](https://github.com/kubedb/mysql-archiver/commit/5857a8d) Don't use fail-fast +- [5833776](https://github.com/kubedb/mysql-archiver/commit/5833776) Prepare for release v0.2.0-beta.0 (#12) +- [f3e68b2](https://github.com/kubedb/mysql-archiver/commit/f3e68b2) Use k8s 1.29 client libs (#11) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.19.0-rc.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.19.0-rc.0) + +- [1bc71d04](https://github.com/kubedb/mysql-coordinator/commit/1bc71d04) Prepare for release v0.19.0-rc.0 (#99) +- [59a11671](https://github.com/kubedb/mysql-coordinator/commit/59a11671) Prepare for release v0.19.0-beta.1 (#98) +- [e0cc149f](https://github.com/kubedb/mysql-coordinator/commit/e0cc149f) Prepare for release v0.19.0-beta.0 (#97) +- [67aeb229](https://github.com/kubedb/mysql-coordinator/commit/67aeb229) Update deps (#96) +- [2fa4423f](https://github.com/kubedb/mysql-coordinator/commit/2fa4423f) Update deps (#95) +- [b0735769](https://github.com/kubedb/mysql-coordinator/commit/b0735769) Use k8s 1.29 client libs (#94) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.2.0-rc.0](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.0) + +- [21e9470](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/21e9470) Prepare for release v0.2.0-rc.0 (#6) +- [d5771cf](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d5771cf) Prepare for release v0.2.0-beta.1 (#5) +- [b4ffc6f](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/b4ffc6f) Fix component driver status & Update deps (#3) +- [d285eff](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d285eff) Prepare for release v0.2.0-beta.0 (#4) +- [7a46441](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/7a46441) Use k8s 1.29 client libs (#2) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.4.0-rc.0) + +- [eedf2e7](https://github.com/kubedb/mysql-restic-plugin/commit/eedf2e7) Prepare for release v0.4.0-rc.0 (#22) +- [105888a](https://github.com/kubedb/mysql-restic-plugin/commit/105888a) Prepare for release v0.4.0-beta.1 (#21) +- [b42d0cf](https://github.com/kubedb/mysql-restic-plugin/commit/b42d0cf) Removed `--all-databases` flag for restoring (#20) +- [742d2ce](https://github.com/kubedb/mysql-restic-plugin/commit/742d2ce) Prepare for release v0.4.0-beta.0 (#19) +- [0402847](https://github.com/kubedb/mysql-restic-plugin/commit/0402847) Use k8s 1.29 client libs (#18) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.19.0-rc.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.19.0-rc.0) + +- [85f8c6f](https://github.com/kubedb/mysql-router-init/commit/85f8c6f) Update deps (#38) +- [7dd201c](https://github.com/kubedb/mysql-router-init/commit/7dd201c) Use k8s 1.29 client libs (#37) + + + +## 
[kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.28.0-rc.0](https://github.com/kubedb/ops-manager/releases/tag/v0.28.0-rc.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.28.0-rc.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.28.0-rc.0) + +- [80cd3a03](https://github.com/kubedb/percona-xtradb/commit/80cd3a030) Prepare for release v0.28.0-rc.0 (#350) +- [475a5e32](https://github.com/kubedb/percona-xtradb/commit/475a5e328) Prepare for release v0.28.0-beta.1 (#348) +- [4c1380ab](https://github.com/kubedb/percona-xtradb/commit/4c1380ab7) Incorporate with apimachinery package name change from `stash` to `restore` (#347) +- [0ceb3028](https://github.com/kubedb/percona-xtradb/commit/0ceb30284) Prepare for release v0.28.0-beta.0 (#346) +- [e7d35606](https://github.com/kubedb/percona-xtradb/commit/e7d356062) Dynamically start crd controller (#345) +- [5d07b565](https://github.com/kubedb/percona-xtradb/commit/5d07b5655) Update deps (#344) +- [1a639f84](https://github.com/kubedb/percona-xtradb/commit/1a639f840) Update deps (#343) +- [4f8b24ab](https://github.com/kubedb/percona-xtradb/commit/4f8b24aba) Update deps +- [e5254020](https://github.com/kubedb/percona-xtradb/commit/e52540202) Use k8s 1.29 client libs (#341) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.14.0-rc.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.14.0-rc.0) + +- [7581630e](https://github.com/kubedb/percona-xtradb-coordinator/commit/7581630e) Prepare for release v0.14.0-rc.0 (#59) +- [560bc5c3](https://github.com/kubedb/percona-xtradb-coordinator/commit/560bc5c3) Prepare for release v0.14.0-beta.1 (#58) +- [963756eb](https://github.com/kubedb/percona-xtradb-coordinator/commit/963756eb) Prepare for release v0.14.0-beta.0 (#57) +- [5489bb8c](https://github.com/kubedb/percona-xtradb-coordinator/commit/5489bb8c) Update deps (#56) +- [a8424e18](https://github.com/kubedb/percona-xtradb-coordinator/commit/a8424e18) Update deps (#55) +- [ee4add86](https://github.com/kubedb/percona-xtradb-coordinator/commit/ee4add86) Use k8s 1.29 client libs (#54) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.25.0-rc.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.25.0-rc.0) + +- [41cc97b6](https://github.com/kubedb/pg-coordinator/commit/41cc97b6) Prepare for release v0.25.0-rc.0 (#150) +- [5298a177](https://github.com/kubedb/pg-coordinator/commit/5298a177) Fixed (#149) +- [bc296307](https://github.com/kubedb/pg-coordinator/commit/bc296307) Prepare for release v0.25.0-beta.1 (#148) +- [30973540](https://github.com/kubedb/pg-coordinator/commit/30973540) Prepare for release v0.25.0-beta.0 (#147) +- [7b84e198](https://github.com/kubedb/pg-coordinator/commit/7b84e198) Update deps (#146) +- [f1bfe818](https://github.com/kubedb/pg-coordinator/commit/f1bfe818) Update deps (#145) +- [1de05a6e](https://github.com/kubedb/pg-coordinator/commit/1de05a6e) Use k8s 1.29 client libs (#144) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.28.0-rc.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.28.0-rc.0) + +- [e69aa743](https://github.com/kubedb/pgbouncer/commit/e69aa743) Prepare for release v0.28.0-rc.0 (#313) +- [55c248d5](https://github.com/kubedb/pgbouncer/commit/55c248d5) Prepare for release v0.28.0-beta.1 (#312) +- [1b86664a](https://github.com/kubedb/pgbouncer/commit/1b86664a) Incorporate with apimachinery package 
name change from stash to restore (#311) +- [3c6bc335](https://github.com/kubedb/pgbouncer/commit/3c6bc335) Prepare for release v0.28.0-beta.0 (#310) +- [73c5f6fb](https://github.com/kubedb/pgbouncer/commit/73c5f6fb) Dynamically start crd controller (#309) +- [f9edc2cd](https://github.com/kubedb/pgbouncer/commit/f9edc2cd) Update deps (#308) +- [d54251c0](https://github.com/kubedb/pgbouncer/commit/d54251c0) Update deps (#307) +- [de40a35e](https://github.com/kubedb/pgbouncer/commit/de40a35e) Update deps +- [8c325577](https://github.com/kubedb/pgbouncer/commit/8c325577) Use k8s 1.29 client libs (#305) + + + +## [kubedb/pgpool](https://github.com/kubedb/pgpool) + +### [v0.0.2](https://github.com/kubedb/pgpool/releases/tag/v0.0.2) + +- [21d8639](https://github.com/kubedb/pgpool/commit/21d8639) Prepare for release v0.0.2 (#7) +- [e7dab5e](https://github.com/kubedb/pgpool/commit/e7dab5e) Remove cassandra, clickhouse, etcd flags +- [2678231](https://github.com/kubedb/pgpool/commit/2678231) Fix log (#6) +- [e4a54e0](https://github.com/kubedb/pgpool/commit/e4a54e0) Fix xorm client issue (#5) +- [258da9b](https://github.com/kubedb/pgpool/commit/258da9b) Update install recipes in makefile (#4) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.41.0-rc.0](https://github.com/kubedb/postgres/releases/tag/v0.41.0-rc.0) + +- [8135d351](https://github.com/kubedb/postgres/commit/8135d3511) Prepare for release v0.41.0-rc.0 (#709) +- [72a1ee29](https://github.com/kubedb/postgres/commit/72a1ee294) Prepare for release v0.41.0-beta.1 (#708) +- [026598f4](https://github.com/kubedb/postgres/commit/026598f44) Prepare for release v0.41.0-beta.1 (#707) +- [8af305aa](https://github.com/kubedb/postgres/commit/8af305aa4) Use ptr.Deref(); Update deps +- [c7c0652d](https://github.com/kubedb/postgres/commit/c7c0652dc) Update ci & makefile for crd-manager (#706) +- [d468bdb3](https://github.com/kubedb/postgres/commit/d468bdb34) Fix wal backup directory (#705) +- [c6992bed](https://github.com/kubedb/postgres/commit/c6992bed8) Add Support for DB phase change for restoring using KubeStash (#704) +- [d1bd909b](https://github.com/kubedb/postgres/commit/d1bd909ba) Prepare for release v0.41.0-beta.0 (#703) +- [5e8101e3](https://github.com/kubedb/postgres/commit/5e8101e39) Dynamically start crd controller (#702) +- [47dbbff5](https://github.com/kubedb/postgres/commit/47dbbff53) Update deps (#701) +- [84f99c58](https://github.com/kubedb/postgres/commit/84f99c58b) Disable fairness api +- [a715765d](https://github.com/kubedb/postgres/commit/a715765dc) Set --restricted=false for ci tests (#700) +- [fe9af597](https://github.com/kubedb/postgres/commit/fe9af5977) Add Postgres test fix (#699) +- [8bae8886](https://github.com/kubedb/postgres/commit/8bae88860) Configure openapi for webhook server (#698) +- [9ce2efce](https://github.com/kubedb/postgres/commit/9ce2efce5) Update deps +- [24e4e9ca](https://github.com/kubedb/postgres/commit/24e4e9ca5) Use k8s 1.29 client libs (#697) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0-rc.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0-rc.0) + +- [bff75cb](https://github.com/kubedb/postgres-archiver/commit/bff75cb) Prepare for release v0.2.0-rc.0 (#19) +- [bb8c342](https://github.com/kubedb/postgres-archiver/commit/bb8c342) Create directory for wal-backup (#18) +- [c4405c1](https://github.com/kubedb/postgres-archiver/commit/c4405c1) Prepare for release v0.2.0-beta.1 (#17) +- 
[c353dcd](https://github.com/kubedb/postgres-archiver/commit/c353dcd) Don't use fail-fast +- [a9cbe08](https://github.com/kubedb/postgres-archiver/commit/a9cbe08) Prepare for release v0.2.0-beta.0 (#16) +- [183e97c](https://github.com/kubedb/postgres-archiver/commit/183e97c) Use k8s 1.29 client libs (#15) + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0-rc.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.0) + +- [87240d8](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/87240d8) Prepare for release v0.2.0-rc.0 (#16) +- [dc4f85e](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/dc4f85e) Prepare for release v0.2.0-beta.1 (#15) +- [098365a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/098365a) Update README.md (#14) +- [5ef571f](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/5ef571f) Update deps (#13) +- [f0e546a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f0e546a) Prepare for release v0.2.0-beta.0 (#12) +- [aae7294](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/aae7294) Use k8s 1.29 client libs (#11) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.4.0-rc.0) + + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.3.0-rc.0](https://github.com/kubedb/provider-aws/releases/tag/v0.3.0-rc.0) + + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.3.0-rc.0](https://github.com/kubedb/provider-azure/releases/tag/v0.3.0-rc.0) + +- [ebba4fa](https://github.com/kubedb/provider-azure/commit/ebba4fa) Checkout fake release branch for release workflow + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.3.0-rc.0](https://github.com/kubedb/provider-gcp/releases/tag/v0.3.0-rc.0) + +- [82f52c3](https://github.com/kubedb/provider-gcp/commit/82f52c3) Checkout fake release branch for release workflow + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.41.0-rc.0](https://github.com/kubedb/provisioner/releases/tag/v0.41.0-rc.0) + + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.28.0-rc.0](https://github.com/kubedb/proxysql/releases/tag/v0.28.0-rc.0) + +- [2fa5679d](https://github.com/kubedb/proxysql/commit/2fa5679d7) Prepare for release v0.28.0-rc.0 (#331) +- [2cc59016](https://github.com/kubedb/proxysql/commit/2cc590165) Update ci & makefile for crd-manager (#326) +- [79e29efd](https://github.com/kubedb/proxysql/commit/79e29efdb) Handle MySQL URL Parsing (#330) +- [b3372a53](https://github.com/kubedb/proxysql/commit/b3372a53d) Fix MySQL Client and sync_user (#328) +- [213ebfc4](https://github.com/kubedb/proxysql/commit/213ebfc43) Prepare for release v0.28.0-beta.1 (#327) +- [8427158e](https://github.com/kubedb/proxysql/commit/8427158ec) Incorporate with apimachinery package name change from stash to restore (#325) +- [c0805050](https://github.com/kubedb/proxysql/commit/c0805050e) Prepare for release v0.28.0-beta.0 (#324) +- [88ef1f1d](https://github.com/kubedb/proxysql/commit/88ef1f1de) Dynamically start crd controller (#323) +- [8c0a96ac](https://github.com/kubedb/proxysql/commit/8c0a96ac7) Update deps (#322) +- [e96797e4](https://github.com/kubedb/proxysql/commit/e96797e48) Update deps (#321) +- 
[e8fd529b](https://github.com/kubedb/proxysql/commit/e8fd529b2) Update deps +- [b2e9a1df](https://github.com/kubedb/proxysql/commit/b2e9a1df8) Use k8s 1.29 client libs (#319) + + + +## [kubedb/rabbitmq](https://github.com/kubedb/rabbitmq) + +### [v0.0.2](https://github.com/kubedb/rabbitmq/releases/tag/v0.0.2) + +- [3eef0623](https://github.com/kubedb/rabbitmq/commit/3eef0623) Prepare for release v0.0.2 (#6) +- [8b7c36a5](https://github.com/kubedb/rabbitmq/commit/8b7c36a5) Remove cassandra, clickhouse, etcd flags +- [6628a5a9](https://github.com/kubedb/rabbitmq/commit/6628a5a9) Add Appbinding (#5) +- [017a24b0](https://github.com/kubedb/rabbitmq/commit/017a24b0) Fix health checker (#4) +- [673275ba](https://github.com/kubedb/rabbitmq/commit/673275ba) Update install recipes in makefile (#3) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.34.0-rc.0](https://github.com/kubedb/redis/releases/tag/v0.34.0-rc.0) + +- [0703a513](https://github.com/kubedb/redis/commit/0703a513) Prepare for release v0.34.0-rc.0 (#519) +- [b1a296b7](https://github.com/kubedb/redis/commit/b1a296b7) Init sentinel before secret watcher (#518) +- [01290634](https://github.com/kubedb/redis/commit/01290634) Prepare for release v0.34.0-beta.1 (#517) +- [e51f93e1](https://github.com/kubedb/redis/commit/e51f93e1) Fix panic (#516) +- [dc75c163](https://github.com/kubedb/redis/commit/dc75c163) Update ci & makefile for crd-manager (#515) +- [09688f35](https://github.com/kubedb/redis/commit/09688f35) Add Support for DB phase change for restoring using KubeStash (#514) +- [7e844ab1](https://github.com/kubedb/redis/commit/7e844ab1) Prepare for release v0.34.0-beta.0 (#513) +- [6318d04f](https://github.com/kubedb/redis/commit/6318d04f) Dynamically start crd controller (#512) +- [92b8a3a9](https://github.com/kubedb/redis/commit/92b8a3a9) Update deps (#511) +- [f0fb4c69](https://github.com/kubedb/redis/commit/f0fb4c69) Update deps (#510) +- [c99d9498](https://github.com/kubedb/redis/commit/c99d9498) Update deps +- [90299544](https://github.com/kubedb/redis/commit/90299544) Use k8s 1.29 client libs (#508) +- [fced7010](https://github.com/kubedb/redis/commit/fced7010) Update redis versions in nightly tests (#507) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.20.0-rc.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.20.0-rc.0) + +- [f09062c4](https://github.com/kubedb/redis-coordinator/commit/f09062c4) Prepare for release v0.20.0-rc.0 (#90) +- [fd3b2112](https://github.com/kubedb/redis-coordinator/commit/fd3b2112) Prepare for release v0.20.0-beta.1 (#89) +- [4c36accd](https://github.com/kubedb/redis-coordinator/commit/4c36accd) Prepare for release v0.20.0-beta.0 (#88) +- [c8658380](https://github.com/kubedb/redis-coordinator/commit/c8658380) Update deps (#87) +- [c99c2e9b](https://github.com/kubedb/redis-coordinator/commit/c99c2e9b) Update deps (#86) +- [22c7beb4](https://github.com/kubedb/redis-coordinator/commit/22c7beb4) Use k8s 1.29 client libs (#85) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.4.0-rc.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.4.0-rc.0) + +- [968da13](https://github.com/kubedb/redis-restic-plugin/commit/968da13) Prepare for release v0.4.0-rc.0 (#18) +- [fac6226](https://github.com/kubedb/redis-restic-plugin/commit/fac6226) Prepare for release v0.4.0-beta.1 (#17) +- [da2796a](https://github.com/kubedb/redis-restic-plugin/commit/da2796a) Prepare for release 
v0.4.0-beta.0 (#16) +- [0553c6f](https://github.com/kubedb/redis-restic-plugin/commit/0553c6f) Use k8s 1.29 client libs (#15) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.28.0-rc.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.28.0-rc.0) + +- [d55f7e69](https://github.com/kubedb/replication-mode-detector/commit/d55f7e69) Prepare for release v0.28.0-rc.0 (#254) +- [f948a650](https://github.com/kubedb/replication-mode-detector/commit/f948a650) Prepare for release v0.28.0-beta.1 (#253) +- [572668c8](https://github.com/kubedb/replication-mode-detector/commit/572668c8) Prepare for release v0.28.0-beta.0 (#252) +- [39ba3ce0](https://github.com/kubedb/replication-mode-detector/commit/39ba3ce0) Update deps (#251) +- [d3d2ad96](https://github.com/kubedb/replication-mode-detector/commit/d3d2ad96) Update deps (#250) +- [633d7b76](https://github.com/kubedb/replication-mode-detector/commit/633d7b76) Use k8s 1.29 client libs (#249) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.17.0-rc.0](https://github.com/kubedb/schema-manager/releases/tag/v0.17.0-rc.0) + + + + +## [kubedb/singlestore](https://github.com/kubedb/singlestore) + +### [v0.0.2](https://github.com/kubedb/singlestore/releases/tag/v0.0.2) + +- [62d006e](https://github.com/kubedb/singlestore/commit/62d006e) Prepare for release v0.0.2 (#9) +- [968f8b7](https://github.com/kubedb/singlestore/commit/968f8b7) Add AppBinding Config (#8) +- [4ca70af](https://github.com/kubedb/singlestore/commit/4ca70af) Fix Appbinding Scheme (#7) +- [501a7bf](https://github.com/kubedb/singlestore/commit/501a7bf) Remove cassandra, clickhouse, etcd flags +- [1555746](https://github.com/kubedb/singlestore/commit/1555746) Update install recipes in makefile (#6) + + + +## [kubedb/singlestore-coordinator](https://github.com/kubedb/singlestore-coordinator) + +### [v0.0.2](https://github.com/kubedb/singlestore-coordinator/releases/tag/v0.0.2) + +- [ded7a50](https://github.com/kubedb/singlestore-coordinator/commit/ded7a50) Prepare for release v0.0.2 (#3) + + + +## [kubedb/solr](https://github.com/kubedb/solr) + +### [v0.0.2](https://github.com/kubedb/solr/releases/tag/v0.0.2) + +- [e78ab6d](https://github.com/kubedb/solr/commit/e78ab6d) Prepare for release v0.0.2 (#6) +- [6c2dfff](https://github.com/kubedb/solr/commit/6c2dfff) Remove cassandra, clickhouse, etcd flags +- [6e36a4f](https://github.com/kubedb/solr/commit/6e36a4f) Fix install recipes for Solr (#3) +- [203d9f0](https://github.com/kubedb/solr/commit/203d9f0) Start health check using a struct. 
(#5) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.26.0-rc.0](https://github.com/kubedb/tests/releases/tag/v0.26.0-rc.0) + +- [1730fd31](https://github.com/kubedb/tests/commit/1730fd31) Prepare for release v0.26.0-rc.0 (#296) +- [d1805668](https://github.com/kubedb/tests/commit/d1805668) Add ZooKeeper Tests (#294) +- [4c27754c](https://github.com/kubedb/tests/commit/4c27754c) Fix kafka env-variable tests (#293) +- [3cfc1212](https://github.com/kubedb/tests/commit/3cfc1212) Prepare for release v0.26.0-beta.1 (#292) +- [b810e690](https://github.com/kubedb/tests/commit/b810e690) increase cpu limit for vertical scaling (#289) +- [c43985ba](https://github.com/kubedb/tests/commit/c43985ba) Change dashboard api group (#291) +- [1b96881e](https://github.com/kubedb/tests/commit/1b96881e) Fix error logging +- [33f78143](https://github.com/kubedb/tests/commit/33f78143) forceCleanup PVCs for mongo (#288) +- [0dcd3e38](https://github.com/kubedb/tests/commit/0dcd3e38) Add PostgreSQL logical replication tests (#202) +- [2f403c85](https://github.com/kubedb/tests/commit/2f403c85) Find profiles in array, Don't match with string (#286) +- [5aca2293](https://github.com/kubedb/tests/commit/5aca2293) Give time to PDB status to be updated (#285) +- [5f3fabd7](https://github.com/kubedb/tests/commit/5f3fabd7) Prepare for release v0.26.0-beta.0 (#284) +- [27a24dff](https://github.com/kubedb/tests/commit/27a24dff) Update deps (#283) +- [b9021186](https://github.com/kubedb/tests/commit/b9021186) Update deps (#282) +- [589ca51c](https://github.com/kubedb/tests/commit/589ca51c) mongodb vertical scaling fix (#281) +- [feaa0f6a](https://github.com/kubedb/tests/commit/feaa0f6a) Add `--restricted` flag (#280) +- [2423ee38](https://github.com/kubedb/tests/commit/2423ee38) Fix linter errors +- [dcd64c7c](https://github.com/kubedb/tests/commit/dcd64c7c) Update lint command +- [c3ef1fa4](https://github.com/kubedb/tests/commit/c3ef1fa4) Use k8s 1.29 client libs (#279) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.17.0-rc.0](https://github.com/kubedb/ui-server/releases/tag/v0.17.0-rc.0) + +- [3046f685](https://github.com/kubedb/ui-server/commit/3046f685) Prepare for release v0.17.0-rc.0 (#106) +- [98c1a6dd](https://github.com/kubedb/ui-server/commit/98c1a6dd) Prepare for release v0.17.0-beta.1 (#105) +- [8173cfc2](https://github.com/kubedb/ui-server/commit/8173cfc2) Implement SingularNameProvider +- [6e8f80dc](https://github.com/kubedb/ui-server/commit/6e8f80dc) Prepare for release v0.17.0-beta.0 (#104) +- [6a05721f](https://github.com/kubedb/ui-server/commit/6a05721f) Update deps (#103) +- [3c24fd5e](https://github.com/kubedb/ui-server/commit/3c24fd5e) Update deps (#102) +- [25e29443](https://github.com/kubedb/ui-server/commit/25e29443) Use k8s 1.29 client libs (#101) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.17.0-rc.0](https://github.com/kubedb/webhook-server/releases/tag/v0.17.0-rc.0) + +- [f9cf0b11](https://github.com/kubedb/webhook-server/commit/f9cf0b11) Prepare for release v0.17.0-rc.0 (#91) +- [98914ade](https://github.com/kubedb/webhook-server/commit/98914ade) Add kafka connector webhook apitypes (#90) +- [1184db7a](https://github.com/kubedb/webhook-server/commit/1184db7a) Fix solr webhook +- [2a84cedb](https://github.com/kubedb/webhook-server/commit/2a84cedb) Prepare for release v0.17.0-beta.1 (#89) +- [bb4a5c22](https://github.com/kubedb/webhook-server/commit/bb4a5c22) Add kafka connect-cluster (#87) +- 
[c46c6662](https://github.com/kubedb/webhook-server/commit/c46c6662) Add new Database support (#88) +- [c6387e9e](https://github.com/kubedb/webhook-server/commit/c6387e9e) Set default kubebuilder client for autoscaler (#86) +- [14c07899](https://github.com/kubedb/webhook-server/commit/14c07899) Incorporate apimachinery (#85) +- [266c79a0](https://github.com/kubedb/webhook-server/commit/266c79a0) Add kafka ops request validator (#84) +- [528b8463](https://github.com/kubedb/webhook-server/commit/528b8463) Fix webhook handlers (#83) +- [dfdeb6c3](https://github.com/kubedb/webhook-server/commit/dfdeb6c3) Prepare for release v0.17.0-beta.0 (#82) +- [bf54df2a](https://github.com/kubedb/webhook-server/commit/bf54df2a) Update deps (#81) +- [c7d17faa](https://github.com/kubedb/webhook-server/commit/c7d17faa) Update deps (#79) +- [170573b1](https://github.com/kubedb/webhook-server/commit/170573b1) Use k8s 1.29 client libs (#78) + + + +## [kubedb/zookeeper](https://github.com/kubedb/zookeeper) + +### [v0.0.2](https://github.com/kubedb/zookeeper/releases/tag/v0.0.2) + +- [6efd3a5](https://github.com/kubedb/zookeeper/commit/6efd3a5) Prepare for release v0.0.2 (#6) +- [4c7340e](https://github.com/kubedb/zookeeper/commit/4c7340e) Remove cassandra, clickhouse, etcd flags +- [33727fc](https://github.com/kubedb/zookeeper/commit/33727fc) Add ZooKeeper Standalone (#5) +- [5225286](https://github.com/kubedb/zookeeper/commit/5225286) Add e2e test workflow (#4) +- [59426c9](https://github.com/kubedb/zookeeper/commit/59426c9) Update install recipes in makefile (#3) +- [e7b05a1](https://github.com/kubedb/zookeeper/commit/e7b05a1) Limit ZooKeeper Health Logs (#2) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2024.1.28-rc.1.md b/content/docs/v2024.1.31/CHANGELOG-v2024.1.28-rc.1.md new file mode 100644 index 0000000000..213ac1c7fd --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2024.1.28-rc.1.md @@ -0,0 +1,520 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2024.1.28-rc.1 + name: Changelog-v2024.1.28-rc.1 + parent: welcome + weight: 20240128 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2024.1.28-rc.1/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2024.1.28-rc.1/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2024.1.28-rc.1 (2024-01-29) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.41.0-rc.1](https://github.com/kubedb/apimachinery/releases/tag/v0.41.0-rc.1) + +- [2a63b8b1](https://github.com/kubedb/apimachinery/commit/2a63b8b1a) Update deps +- [2293744e](https://github.com/kubedb/apimachinery/commit/2293744e3) Update deps + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.26.0-rc.1](https://github.com/kubedb/autoscaler/releases/tag/v0.26.0-rc.1) + +- [09c7c1b3](https://github.com/kubedb/autoscaler/commit/09c7c1b3) Prepare for release v0.26.0-rc.1 (#186) +- [edae8156](https://github.com/kubedb/autoscaler/commit/edae8156) Update deps (#185) +- [b063256c](https://github.com/kubedb/autoscaler/commit/b063256c) Update deps (#184) + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.41.0-rc.1](https://github.com/kubedb/cli/releases/tag/v0.41.0-rc.1) + +- 
[a67dadc9](https://github.com/kubedb/cli/commit/a67dadc9) Prepare for release v0.41.0-rc.1 (#753) +- [d1b206ee](https://github.com/kubedb/cli/commit/d1b206ee) Update deps (#752) +- [50f15d19](https://github.com/kubedb/cli/commit/50f15d19) Update deps (#751) + + + +## [kubedb/crd-manager](https://github.com/kubedb/crd-manager) + +### [v0.0.3](https://github.com/kubedb/crd-manager/releases/tag/v0.0.3) + +- [c4d0c24](https://github.com/kubedb/crd-manager/commit/c4d0c24) Prepare for release v0.0.3 (#11) +- [5a149ca](https://github.com/kubedb/crd-manager/commit/5a149ca) Add new db to the list + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.17.0-rc.1](https://github.com/kubedb/dashboard/releases/tag/v0.17.0-rc.1) + +- [1720f9d5](https://github.com/kubedb/dashboard/commit/1720f9d5) Prepare for release v0.17.0-rc.1 (#105) +- [b0f55bcf](https://github.com/kubedb/dashboard/commit/b0f55bcf) Update deps (#104) +- [e342f79c](https://github.com/kubedb/dashboard/commit/e342f79c) Update deps (#103) + + + +## [kubedb/druid](https://github.com/kubedb/druid) + +### [v0.0.3](https://github.com/kubedb/druid/releases/tag/v0.0.3) + +- [f564e7c](https://github.com/kubedb/druid/commit/f564e7c) Prepare for release v0.0.3 (#7) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.41.0-rc.1](https://github.com/kubedb/elasticsearch/releases/tag/v0.41.0-rc.1) + +- [f287ef9f](https://github.com/kubedb/elasticsearch/commit/f287ef9f1) Prepare for release v0.41.0-rc.1 (#703) +- [fcabe6ba](https://github.com/kubedb/elasticsearch/commit/fcabe6bae) Update deps (#702) +- [861d01f3](https://github.com/kubedb/elasticsearch/commit/861d01f30) Update deps (#701) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.4.0-rc.1) + +- [675caf7](https://github.com/kubedb/elasticsearch-restic-plugin/commit/675caf7) Prepare for release v0.4.0-rc.1 (#18) + + + +## [kubedb/ferretdb](https://github.com/kubedb/ferretdb) + +### [v0.0.3](https://github.com/kubedb/ferretdb/releases/tag/v0.0.3) + +- [be756f6](https://github.com/kubedb/ferretdb/commit/be756f6) Prepare for release v0.0.3 (#7) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2024.1.28-rc.1](https://github.com/kubedb/installer/releases/tag/v2024.1.28-rc.1) + +- [12e4d649](https://github.com/kubedb/installer/commit/12e4d649) Prepare for release v2024.1.28-rc.1 (#836) +- [ce5ca0dc](https://github.com/kubedb/installer/commit/ce5ca0dc) Remove kubedb-one chart + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.12.0-rc.1](https://github.com/kubedb/kafka/releases/tag/v0.12.0-rc.1) + +- [511914c2](https://github.com/kubedb/kafka/commit/511914c2) Prepare for release v0.12.0-rc.1 (#75) +- [fb908cf2](https://github.com/kubedb/kafka/commit/fb908cf2) Update deps (#74) +- [cccaf86c](https://github.com/kubedb/kafka/commit/cccaf86c) Update deps (#73) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.4.0-rc.1) + +- [5daf9ce](https://github.com/kubedb/kubedb-manifest-plugin/commit/5daf9ce) Prepare for release v0.4.0-rc.1 (#39) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.25.0-rc.1](https://github.com/kubedb/mariadb/releases/tag/v0.25.0-rc.1) + +- [2ca44131](https://github.com/kubedb/mariadb/commit/2ca441314) 
Prepare for release v0.25.0-rc.1 (#255) +- [dad5b837](https://github.com/kubedb/mariadb/commit/dad5b837a) Update deps (#254) +- [a210c867](https://github.com/kubedb/mariadb/commit/a210c8675) Fix appbinding scheme (#251) +- [81f985cd](https://github.com/kubedb/mariadb/commit/81f985cd7) Update deps (#253) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-rc.1](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-rc.1) + +- [8743316](https://github.com/kubedb/mariadb-archiver/commit/8743316) Prepare for release v0.1.0-rc.1 (#7) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.21.0-rc.1](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.21.0-rc.1) + +- [4289bcd1](https://github.com/kubedb/mariadb-coordinator/commit/4289bcd1) Prepare for release v0.21.0-rc.1 (#105) +- [34f610f7](https://github.com/kubedb/mariadb-coordinator/commit/34f610f7) Update deps (#104) +- [13dbe3f7](https://github.com/kubedb/mariadb-coordinator/commit/13dbe3f7) Update deps (#103) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0-rc.1](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0-rc.1) + +- [066b41c](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/066b41c) Prepare for release v0.1.0-rc.1 (#7) + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.34.0-rc.1](https://github.com/kubedb/memcached/releases/tag/v0.34.0-rc.1) + +- [928f00e2](https://github.com/kubedb/memcached/commit/928f00e2) Prepare for release v0.34.0-rc.1 (#422) +- [42aa5dcc](https://github.com/kubedb/memcached/commit/42aa5dcc) Update deps (#421) +- [d9fd6358](https://github.com/kubedb/memcached/commit/d9fd6358) Update deps (#420) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.34.0-rc.1](https://github.com/kubedb/mongodb/releases/tag/v0.34.0-rc.1) + +- [2e96a733](https://github.com/kubedb/mongodb/commit/2e96a7338) Prepare for release v0.34.0-rc.1 (#610) +- [9a7e16bd](https://github.com/kubedb/mongodb/commit/9a7e16bdb) Update deps (#609) +- [369b51e8](https://github.com/kubedb/mongodb/commit/369b51e8a) Update deps (#608) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0-rc.1](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.1) + +- [5b28353](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/5b28353) Prepare for release v0.2.0-rc.1 (#14) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.4.0-rc.1) + +- [3fa6286](https://github.com/kubedb/mongodb-restic-plugin/commit/3fa6286) Prepare for release v0.4.0-rc.1 (#25) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.34.0-rc.1](https://github.com/kubedb/mysql/releases/tag/v0.34.0-rc.1) + +- [f3133290](https://github.com/kubedb/mysql/commit/f31332908) Prepare for release v0.34.0-rc.1 (#607) +- [0f3ddf23](https://github.com/kubedb/mysql/commit/0f3ddf233) Update deps (#606) +- [1a5d7bd1](https://github.com/kubedb/mysql/commit/1a5d7bd13) Fix appbinding scheme (#603) +- [c643ded9](https://github.com/kubedb/mysql/commit/c643ded94) Update deps (#605) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### 
[v0.2.0-rc.1](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0-rc.1) + +- [d0fbef5](https://github.com/kubedb/mysql-archiver/commit/d0fbef5) Prepare for release v0.2.0-rc.1 (#19) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.19.0-rc.1](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.19.0-rc.1) + +- [40a2b6eb](https://github.com/kubedb/mysql-coordinator/commit/40a2b6eb) Prepare for release v0.19.0-rc.1 (#102) +- [8b80958d](https://github.com/kubedb/mysql-coordinator/commit/8b80958d) Update deps (#101) +- [df438239](https://github.com/kubedb/mysql-coordinator/commit/df438239) Update deps (#100) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.2.0-rc.1](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.1) + +- [ac60bf4](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/ac60bf4) Prepare for release v0.2.0-rc.1 (#7) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.4.0-rc.1) + +- [55897ab](https://github.com/kubedb/mysql-restic-plugin/commit/55897ab) Prepare for release v0.4.0-rc.1 (#23) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.19.0-rc.1](https://github.com/kubedb/mysql-router-init/releases/tag/v0.19.0-rc.1) + +- [6a5deed](https://github.com/kubedb/mysql-router-init/commit/6a5deed) Update deps (#40) +- [0078f09](https://github.com/kubedb/mysql-router-init/commit/0078f09) Update deps (#39) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.28.0-rc.1](https://github.com/kubedb/ops-manager/releases/tag/v0.28.0-rc.1) + +- [67604c6b](https://github.com/kubedb/ops-manager/commit/67604c6b0) Prepare for release v0.28.0-rc.1 (#535) +- [283d07cf](https://github.com/kubedb/ops-manager/commit/283d07cf4) Update deps (#534) +- [81a5f6e3](https://github.com/kubedb/ops-manager/commit/81a5f6e3c) Update deps (#533) + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.28.0-rc.1](https://github.com/kubedb/percona-xtradb/releases/tag/v0.28.0-rc.1) + +- [b567db53](https://github.com/kubedb/percona-xtradb/commit/b567db53a) Prepare for release v0.28.0-rc.1 (#353) +- [c0ddb330](https://github.com/kubedb/percona-xtradb/commit/c0ddb330b) Update deps (#352) +- [d461df3e](https://github.com/kubedb/percona-xtradb/commit/d461df3ed) Fix appbinding scheme (#349) +- [2752f7e3](https://github.com/kubedb/percona-xtradb/commit/2752f7e36) Update deps (#351) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.14.0-rc.1](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.14.0-rc.1) + +- [da619fa3](https://github.com/kubedb/percona-xtradb-coordinator/commit/da619fa3) Prepare for release v0.14.0-rc.1 (#62) +- [c39daf56](https://github.com/kubedb/percona-xtradb-coordinator/commit/c39daf56) Update deps (#61) +- [42dc1a95](https://github.com/kubedb/percona-xtradb-coordinator/commit/42dc1a95) Update deps (#60) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.25.0-rc.1](https://github.com/kubedb/pg-coordinator/releases/tag/v0.25.0-rc.1) + +- [9a720273](https://github.com/kubedb/pg-coordinator/commit/9a720273) Prepare for release v0.25.0-rc.1 (#153) +- [f103f1fc](https://github.com/kubedb/pg-coordinator/commit/f103f1fc) 
Update deps (#152) +- [84b92d89](https://github.com/kubedb/pg-coordinator/commit/84b92d89) Update deps (#151) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.28.0-rc.1](https://github.com/kubedb/pgbouncer/releases/tag/v0.28.0-rc.1) + +- [0ca01e53](https://github.com/kubedb/pgbouncer/commit/0ca01e53) Prepare for release v0.28.0-rc.1 (#316) +- [4b76d2cb](https://github.com/kubedb/pgbouncer/commit/4b76d2cb) Update deps (#315) +- [f32676bc](https://github.com/kubedb/pgbouncer/commit/f32676bc) Update deps (#314) + + + +## [kubedb/pgpool](https://github.com/kubedb/pgpool) + +### [v0.0.3](https://github.com/kubedb/pgpool/releases/tag/v0.0.3) + +- [3696365](https://github.com/kubedb/pgpool/commit/3696365) Prepare for release v0.0.3 (#8) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.41.0-rc.1](https://github.com/kubedb/postgres/releases/tag/v0.41.0-rc.1) + +- [071b2645](https://github.com/kubedb/postgres/commit/071b26455) Prepare for release v0.41.0-rc.1 (#712) +- [c9f1d5b6](https://github.com/kubedb/postgres/commit/c9f1d5b68) Update deps (#711) +- [723fb80c](https://github.com/kubedb/postgres/commit/723fb80c0) Update deps (#710) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0-rc.1](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0-rc.1) + +- [8f66790](https://github.com/kubedb/postgres-archiver/commit/8f66790) Prepare for release v0.2.0-rc.1 (#20) + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0-rc.1](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0-rc.1) + +- [369c9a3](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/369c9a3) Prepare for release v0.2.0-rc.1 (#17) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.4.0-rc.1) + +- [e5c6d21](https://github.com/kubedb/postgres-restic-plugin/commit/e5c6d21) Prepare for release v0.4.0-rc.1 (#16) + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.3.0-rc.1](https://github.com/kubedb/provider-aws/releases/tag/v0.3.0-rc.1) + + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.3.0-rc.1](https://github.com/kubedb/provider-azure/releases/tag/v0.3.0-rc.1) + + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.3.0-rc.1](https://github.com/kubedb/provider-gcp/releases/tag/v0.3.0-rc.1) + + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.41.0-rc.1](https://github.com/kubedb/provisioner/releases/tag/v0.41.0-rc.1) + +- [bb8fd5aa](https://github.com/kubedb/provisioner/commit/bb8fd5aad) Prepare for release v0.41.0-rc.1 (#80) +- [9d4fa8ab](https://github.com/kubedb/provisioner/commit/9d4fa8abd) Update deps (#79) +- [154d0403](https://github.com/kubedb/provisioner/commit/154d0403b) Update deps (#78) + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.28.0-rc.1](https://github.com/kubedb/proxysql/releases/tag/v0.28.0-rc.1) + +- [0ff6f90d](https://github.com/kubedb/proxysql/commit/0ff6f90d2) Prepare for release v0.28.0-rc.1 (#334) +- [382d3283](https://github.com/kubedb/proxysql/commit/382d3283e) Update deps (#333) +- [0b4da810](https://github.com/kubedb/proxysql/commit/0b4da8101) Update deps (#332) + + + +## [kubedb/rabbitmq](https://github.com/kubedb/rabbitmq) 
+ +### [v0.0.3](https://github.com/kubedb/rabbitmq/releases/tag/v0.0.3) + +- [3f6ecf3f](https://github.com/kubedb/rabbitmq/commit/3f6ecf3f) Prepare for release v0.0.3 (#7) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.34.0-rc.1](https://github.com/kubedb/redis/releases/tag/v0.34.0-rc.1) + +- [5e171587](https://github.com/kubedb/redis/commit/5e171587) Prepare for release v0.34.0-rc.1 (#522) +- [71665b9b](https://github.com/kubedb/redis/commit/71665b9b) Update deps (#521) +- [302f1f19](https://github.com/kubedb/redis/commit/302f1f19) Update deps (#520) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.20.0-rc.1](https://github.com/kubedb/redis-coordinator/releases/tag/v0.20.0-rc.1) + +- [055ceaf1](https://github.com/kubedb/redis-coordinator/commit/055ceaf1) Prepare for release v0.20.0-rc.1 (#93) +- [79575d26](https://github.com/kubedb/redis-coordinator/commit/79575d26) Update deps (#92) +- [a5b4c4b4](https://github.com/kubedb/redis-coordinator/commit/a5b4c4b4) Update deps (#91) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.4.0-rc.1](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.4.0-rc.1) + +- [67a8942](https://github.com/kubedb/redis-restic-plugin/commit/67a8942) Prepare for release v0.4.0-rc.1 (#19) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.28.0-rc.1](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.28.0-rc.1) + +- [de39974e](https://github.com/kubedb/replication-mode-detector/commit/de39974e) Prepare for release v0.28.0-rc.1 (#257) +- [e1ef5191](https://github.com/kubedb/replication-mode-detector/commit/e1ef5191) Update deps (#256) +- [7b4e4149](https://github.com/kubedb/replication-mode-detector/commit/7b4e4149) Update deps (#255) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.17.0-rc.1](https://github.com/kubedb/schema-manager/releases/tag/v0.17.0-rc.1) + +- [c6293601](https://github.com/kubedb/schema-manager/commit/c6293601) Prepare for release v0.17.0-rc.1 (#101) +- [09585d2f](https://github.com/kubedb/schema-manager/commit/09585d2f) Update deps (#100) +- [d3a90582](https://github.com/kubedb/schema-manager/commit/d3a90582) Update deps (#99) + + + +## [kubedb/singlestore](https://github.com/kubedb/singlestore) + +### [v0.0.3](https://github.com/kubedb/singlestore/releases/tag/v0.0.3) + +- [fe72e9f](https://github.com/kubedb/singlestore/commit/fe72e9f) Prepare for release v0.0.3 (#10) + + + +## [kubedb/singlestore-coordinator](https://github.com/kubedb/singlestore-coordinator) + +### [v0.0.3](https://github.com/kubedb/singlestore-coordinator/releases/tag/v0.0.3) + +- [7b99fd6](https://github.com/kubedb/singlestore-coordinator/commit/7b99fd6) Prepare for release v0.0.3 (#4) + + + +## [kubedb/solr](https://github.com/kubedb/solr) + +### [v0.0.3](https://github.com/kubedb/solr/releases/tag/v0.0.3) + +- [c4ef8d7](https://github.com/kubedb/solr/commit/c4ef8d7) Prepare for release v0.0.3 (#7) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.26.0-rc.1](https://github.com/kubedb/tests/releases/tag/v0.26.0-rc.1) + +- [5a527051](https://github.com/kubedb/tests/commit/5a527051) Prepare for release v0.26.0-rc.1 (#299) +- [03d71b6d](https://github.com/kubedb/tests/commit/03d71b6d) Update deps (#298) +- [2d928008](https://github.com/kubedb/tests/commit/2d928008) Update deps (#297) + + + +## 
[kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.17.0-rc.1](https://github.com/kubedb/ui-server/releases/tag/v0.17.0-rc.1) + +- [ed2c04e7](https://github.com/kubedb/ui-server/commit/ed2c04e7) Prepare for release v0.17.0-rc.1 (#109) +- [645c4ac2](https://github.com/kubedb/ui-server/commit/645c4ac2) Update deps (#108) +- [e75f0f9e](https://github.com/kubedb/ui-server/commit/e75f0f9e) Update deps (#107) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.17.0-rc.1](https://github.com/kubedb/webhook-server/releases/tag/v0.17.0-rc.1) + +- [a49ecca7](https://github.com/kubedb/webhook-server/commit/a49ecca7) Prepare for release v0.17.0-rc.1 (#94) +- [5f8de57b](https://github.com/kubedb/webhook-server/commit/5f8de57b) Update deps (#93) +- [8c22ce2d](https://github.com/kubedb/webhook-server/commit/8c22ce2d) Update deps (#92) + + + +## [kubedb/zookeeper](https://github.com/kubedb/zookeeper) + +### [v0.0.3](https://github.com/kubedb/zookeeper/releases/tag/v0.0.3) + +- [d77faed](https://github.com/kubedb/zookeeper/commit/d77faed) Prepare for release v0.0.3 (#7) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2024.1.31.md b/content/docs/v2024.1.31/CHANGELOG-v2024.1.31.md new file mode 100644 index 0000000000..d572666bfb --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2024.1.31.md @@ -0,0 +1,937 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2024.1.31 + name: Changelog-v2024.1.31 + parent: welcome + weight: 20240131 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2024.1.31/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2024.1.31/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2024.1.31 (2024-02-02) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.41.0](https://github.com/kubedb/apimachinery/releases/tag/v0.41.0) + +- [447a890a](https://github.com/kubedb/apimachinery/commit/447a890af) Update kubestash +- [a81b9dc2](https://github.com/kubedb/apimachinery/commit/a81b9dc28) Increase CPU resource for mongo versions >= 6 (#1140) +- [c711b3ab](https://github.com/kubedb/apimachinery/commit/c711b3abb) ferretdb apm fix (#1138) +- [02bd64df](https://github.com/kubedb/apimachinery/commit/02bd64dfa) Update security context based on version (#1137) +- [2a63b8b1](https://github.com/kubedb/apimachinery/commit/2a63b8b1a) Update deps +- [2293744e](https://github.com/kubedb/apimachinery/commit/2293744e3) Update deps +- [32a0f294](https://github.com/kubedb/apimachinery/commit/32a0f2944) Update deps +- [c389dcb1](https://github.com/kubedb/apimachinery/commit/c389dcb17) Add Singlestore Config Type (#1136) +- [ef7f62fb](https://github.com/kubedb/apimachinery/commit/ef7f62fbd) Defaulting RunAsGroup (#1134) +- [e08f63ba](https://github.com/kubedb/apimachinery/commit/e08f63ba0) Minor fixes in release (#1135) +- [760f1c55](https://github.com/kubedb/apimachinery/commit/760f1c554) Ferretdb webhook and apis updated (#1132) +- [958de8ec](https://github.com/kubedb/apimachinery/commit/958de8ec3) Fix spelling mistakes in dashboard. 
(#1133) +- [f614ab97](https://github.com/kubedb/apimachinery/commit/f614ab976) Fix release issues and add version 28.0.1 (#1131) +- [df53756a](https://github.com/kubedb/apimachinery/commit/df53756a3) Fix dashboard config merger command. (#1126) +- [4b8a46ab](https://github.com/kubedb/apimachinery/commit/4b8a46ab1) Add kafka connector webhook (#1128) +- [3e06dc03](https://github.com/kubedb/apimachinery/commit/3e06dc03a) Update Rabbitmq helpers and webhooks (#1130) +- [23153f41](https://github.com/kubedb/apimachinery/commit/23153f41f) Add ZooKeeper Standalone Mode (#1129) +- [650406ba](https://github.com/kubedb/apimachinery/commit/650406ba8) Remove replica condition for Pgpool (#1127) +- [dbd8e067](https://github.com/kubedb/apimachinery/commit/dbd8e0679) Update docker/docker +- [a28b2662](https://github.com/kubedb/apimachinery/commit/a28b2662e) Add validator to check negative number of replicas. (#1124) +- [cc189c3c](https://github.com/kubedb/apimachinery/commit/cc189c3c8) Add utilities to extract databaseInfo (#1123) +- [ceef191e](https://github.com/kubedb/apimachinery/commit/ceef191e0) Fix short name for FerretDBVersion +- [ef49cbfa](https://github.com/kubedb/apimachinery/commit/ef49cbfa8) Update deps +- [f85d1410](https://github.com/kubedb/apimachinery/commit/f85d14100) Without non-root (#1122) +- [79fd675a](https://github.com/kubedb/apimachinery/commit/79fd675a0) Add `PausedBackups` field into `OpsRequestStatus` (#1114) +- [778a1af2](https://github.com/kubedb/apimachinery/commit/778a1af25) Add FerretDB Apis (#1119) +- [329083aa](https://github.com/kubedb/apimachinery/commit/329083aa6) Add missing entries while ignoring openapi schema (#1121) +- [0f8ac911](https://github.com/kubedb/apimachinery/commit/0f8ac9110) Fix API for new Databases (#1120) +- [b625c64c](https://github.com/kubedb/apimachinery/commit/b625c64c5) Fix issues with Pgpool HealthChecker field and version check in webhook (#1118) +- [e78c6ff7](https://github.com/kubedb/apimachinery/commit/e78c6ff74) Remove unnecessary apis for singlestore (#1117) +- [6e98cd41](https://github.com/kubedb/apimachinery/commit/6e98cd41c) Add Rabbitmq API (#1109) +- [e7a088fa](https://github.com/kubedb/apimachinery/commit/e7a088faf) Remove api call from Solr setDefaults. 
(#1116) +- [a73a825b](https://github.com/kubedb/apimachinery/commit/a73a825b7) Add Solr API (#1110) +- [9d687049](https://github.com/kubedb/apimachinery/commit/9d6870498) Pgpool Backend Set to Required (#1113) +- [72d44aef](https://github.com/kubedb/apimachinery/commit/72d44aef7) Fix ElasticsearchDashboard constants +- [0c40a769](https://github.com/kubedb/apimachinery/commit/0c40a7698) Change dashboard api group to elasticsearch (#1112) +- [85e4ae23](https://github.com/kubedb/apimachinery/commit/85e4ae232) Add ZooKeeper API (#1104) +- [ee446682](https://github.com/kubedb/apimachinery/commit/ee446682d) Add Pgpool apis (#1103) +- [4995ebf3](https://github.com/kubedb/apimachinery/commit/4995ebf3d) Add Druid API (#1111) +- [556a36df](https://github.com/kubedb/apimachinery/commit/556a36dfe) Add SingleStore APIS (#1108) +- [a72bb1ff](https://github.com/kubedb/apimachinery/commit/a72bb1ffc) Add runAsGroup field in mgVersion api (#1107) +- [1ee5ee41](https://github.com/kubedb/apimachinery/commit/1ee5ee41d) Add Kafka Connect Cluster and Connector APIs (#1066) +- [2fd99ee8](https://github.com/kubedb/apimachinery/commit/2fd99ee82) Fix replica count for arbiter & hidden node (#1106) +- [4e194f0a](https://github.com/kubedb/apimachinery/commit/4e194f0a2) Implement validator for autoscalers (#1105) +- [6a454592](https://github.com/kubedb/apimachinery/commit/6a4545928) Add kubestash controller for changing kubeDB phase (#1096) +- [44757753](https://github.com/kubedb/apimachinery/commit/447577539) Ignore validators.autoscaling.kubedb.com webhook handlers +- [45cbf75e](https://github.com/kubedb/apimachinery/commit/45cbf75e3) Update deps +- [dc224c1a](https://github.com/kubedb/apimachinery/commit/dc224c1a1) Remove crd informer (#1102) +- [87c402a1](https://github.com/kubedb/apimachinery/commit/87c402a1a) Remove discovery.ResourceMapper (#1101) +- [a1d475ce](https://github.com/kubedb/apimachinery/commit/a1d475ceb) Replace deprecated PollImmediate (#1100) +- [75db4a37](https://github.com/kubedb/apimachinery/commit/75db4a378) Add ConfigureOpenAPI helper (#1099) +- [83be295b](https://github.com/kubedb/apimachinery/commit/83be295b0) update sidekick deps +- [032b2721](https://github.com/kubedb/apimachinery/commit/032b27211) Fix linter +- [389a934c](https://github.com/kubedb/apimachinery/commit/389a934c7) Use k8s 1.29 client libs (#1093) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.26.0](https://github.com/kubedb/autoscaler/releases/tag/v0.26.0) + + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.41.0](https://github.com/kubedb/cli/releases/tag/v0.41.0) + +- [8ff0608c](https://github.com/kubedb/cli/commit/8ff0608c) Prepare for release v0.41.0 (#755) +- [7aeaa861](https://github.com/kubedb/cli/commit/7aeaa861) Monitor CLI added for check-connection and aggregate all monitor CLI (#754) +- [a67dadc9](https://github.com/kubedb/cli/commit/a67dadc9) Prepare for release v0.41.0-rc.1 (#753) +- [d1b206ee](https://github.com/kubedb/cli/commit/d1b206ee) Update deps (#752) +- [50f15d19](https://github.com/kubedb/cli/commit/50f15d19) Update deps (#751) +- [64ad0b63](https://github.com/kubedb/cli/commit/64ad0b63) Prepare for release v0.41.0-rc.0 (#749) +- [d188eae6](https://github.com/kubedb/cli/commit/d188eae6) Grafana dashboard's metric checking CLI (#740) +- [234b7051](https://github.com/kubedb/cli/commit/234b7051) Prepare for release v0.41.0-beta.1 (#748) +- [1ebdd532](https://github.com/kubedb/cli/commit/1ebdd532) Update deps +- 
[c0165e83](https://github.com/kubedb/cli/commit/c0165e83) Prepare for release v0.41.0-beta.0 (#747) +- [d9c905e5](https://github.com/kubedb/cli/commit/d9c905e5) Update deps (#746) +- [bc415a1d](https://github.com/kubedb/cli/commit/bc415a1d) Update deps (#745) + + + +## [kubedb/crd-manager](https://github.com/kubedb/crd-manager) + +### [v0.0.4](https://github.com/kubedb/crd-manager/releases/tag/v0.0.4) + +- [a45ec91](https://github.com/kubedb/crd-manager/commit/a45ec91) Prepare for release v0.0.4 (#13) +- [39cec60](https://github.com/kubedb/crd-manager/commit/39cec60) Fix deploy-to-kind make target + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.17.0](https://github.com/kubedb/dashboard/releases/tag/v0.17.0) + + + + +## [kubedb/db-client-go](https://github.com/kubedb/db-client-go) + +### [v0.0.10](https://github.com/kubedb/db-client-go/releases/tag/v0.0.10) + +- [902c39a0](https://github.com/kubedb/db-client-go/commit/902c39a0) Prepare for release v0.0.10 (#86) +- [377f8591](https://github.com/kubedb/db-client-go/commit/377f8591) Update deps +- [67567b71](https://github.com/kubedb/db-client-go/commit/67567b71) Update deps (#85) +- [4e2471e3](https://github.com/kubedb/db-client-go/commit/4e2471e3) Update deps (#84) + + + +## [kubedb/druid](https://github.com/kubedb/druid) + +### [v0.0.4](https://github.com/kubedb/druid/releases/tag/v0.0.4) + +- [8d4fdb6](https://github.com/kubedb/druid/commit/8d4fdb6) Prepare for release v0.0.4 (#8) + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.41.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.41.0) + +- [3d0feb70](https://github.com/kubedb/elasticsearch/commit/3d0feb70f) Prepare for release v0.41.0 (#704) +- [f287ef9f](https://github.com/kubedb/elasticsearch/commit/f287ef9f1) Prepare for release v0.41.0-rc.1 (#703) +- [fcabe6ba](https://github.com/kubedb/elasticsearch/commit/fcabe6bae) Update deps (#702) +- [861d01f3](https://github.com/kubedb/elasticsearch/commit/861d01f30) Update deps (#701) +- [69735e9e](https://github.com/kubedb/elasticsearch/commit/69735e9e1) Prepare for release v0.41.0-rc.0 (#700) +- [c410b39f](https://github.com/kubedb/elasticsearch/commit/c410b39f5) Prepare for release v0.41.0-beta.1 (#699) +- [3394f1d1](https://github.com/kubedb/elasticsearch/commit/3394f1d13) Use ptr.Deref(); Update deps +- [f00ee052](https://github.com/kubedb/elasticsearch/commit/f00ee052e) Update ci & makefile for crd-manager (#698) +- [e37e6d63](https://github.com/kubedb/elasticsearch/commit/e37e6d631) Add catalog client in scheme. 
(#697) +- [a46bfd41](https://github.com/kubedb/elasticsearch/commit/a46bfd41b) Add Support for DB phase change for restoring using KubeStash (#696) +- [9cbac2fc](https://github.com/kubedb/elasticsearch/commit/9cbac2fc4) Update makefile for dynamic crd installer (#695) +- [3ab4d77d](https://github.com/kubedb/elasticsearch/commit/3ab4d77d2) Prepare for release v0.41.0-beta.0 (#694) +- [c38c61cb](https://github.com/kubedb/elasticsearch/commit/c38c61cbc) Dynamically start crd controller (#693) +- [6a798d30](https://github.com/kubedb/elasticsearch/commit/6a798d309) Update deps (#692) +- [bdf034a4](https://github.com/kubedb/elasticsearch/commit/bdf034a49) Update deps (#691) +- [ea22eecb](https://github.com/kubedb/elasticsearch/commit/ea22eecb2) Add openapi configuration for webhook server (#690) +- [b97636cd](https://github.com/kubedb/elasticsearch/commit/b97636cd1) Update lint command +- [0221ac14](https://github.com/kubedb/elasticsearch/commit/0221ac14e) Update deps +- [b4cb8d60](https://github.com/kubedb/elasticsearch/commit/b4cb8d603) Use k8s 1.29 client libs (#689) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.4.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.4.0) + +- [11a8a76](https://github.com/kubedb/elasticsearch-restic-plugin/commit/11a8a76) Prepare for release v0.4.0 (#19) +- [675caf7](https://github.com/kubedb/elasticsearch-restic-plugin/commit/675caf7) Prepare for release v0.4.0-rc.1 (#18) +- [18ea6da](https://github.com/kubedb/elasticsearch-restic-plugin/commit/18ea6da) Prepare for release v0.4.0-rc.0 (#17) +- [584dfd9](https://github.com/kubedb/elasticsearch-restic-plugin/commit/584dfd9) Prepare for release v0.4.0-beta.1 (#16) +- [5e9aef5](https://github.com/kubedb/elasticsearch-restic-plugin/commit/5e9aef5) Prepare for release v0.4.0-beta.0 (#15) +- [2fdcafa](https://github.com/kubedb/elasticsearch-restic-plugin/commit/2fdcafa) Use k8s 1.29 client libs (#14) + + + +## [kubedb/ferretdb](https://github.com/kubedb/ferretdb) + +### [v0.0.4](https://github.com/kubedb/ferretdb/releases/tag/v0.0.4) + +- [19ac254](https://github.com/kubedb/ferretdb/commit/19ac254) Prepare for release v0.0.4 (#9) +- [1135ae9](https://github.com/kubedb/ferretdb/commit/1135ae9) Auth secret name change and codebase Refactor (#8) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2024.1.31](https://github.com/kubedb/installer/releases/tag/v2024.1.31) + + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.12.0](https://github.com/kubedb/kafka/releases/tag/v0.12.0) + +- [8ffe42e6](https://github.com/kubedb/kafka/commit/8ffe42e6) Prepare for release v0.12.0 (#76) +- [511914c2](https://github.com/kubedb/kafka/commit/511914c2) Prepare for release v0.12.0-rc.1 (#75) +- [fb908cf2](https://github.com/kubedb/kafka/commit/fb908cf2) Update deps (#74) +- [cccaf86c](https://github.com/kubedb/kafka/commit/cccaf86c) Update deps (#73) +- [9d73e3ce](https://github.com/kubedb/kafka/commit/9d73e3ce) Prepare for release v0.12.0-rc.0 (#71) +- [c1d08f75](https://github.com/kubedb/kafka/commit/c1d08f75) Remove cassandra, clickhouse, etcd flags +- [e7283583](https://github.com/kubedb/kafka/commit/e7283583) Fix podtemplate containers reference issue (#70) +- [6d04bf0f](https://github.com/kubedb/kafka/commit/6d04bf0f) Add termination policy for kafka and connect cluster (#69) +- [34f4967f](https://github.com/kubedb/kafka/commit/34f4967f) Prepare for release v0.12.0-beta.1 (#68) +- 
[7176931c](https://github.com/kubedb/kafka/commit/7176931c) Move Kafka Podtemplate to offshoot-api v2 (#66) +- [9454adf6](https://github.com/kubedb/kafka/commit/9454adf6) Update ci & makefile for crd-manager (#67) +- [fda770d8](https://github.com/kubedb/kafka/commit/fda770d8) Add kafka connector controller (#65) +- [6ed0ccd4](https://github.com/kubedb/kafka/commit/6ed0ccd4) Add Kafka connect controller (#44) +- [18e9a45c](https://github.com/kubedb/kafka/commit/18e9a45c) update deps (#64) +- [a7dfb409](https://github.com/kubedb/kafka/commit/a7dfb409) Update makefile for dynamic crd installer (#63) +- [f9350578](https://github.com/kubedb/kafka/commit/f9350578) Prepare for release v0.12.0-beta.0 (#62) +- [692f2bef](https://github.com/kubedb/kafka/commit/692f2bef) Dynamically start crd controller (#61) +- [a50dc8b4](https://github.com/kubedb/kafka/commit/a50dc8b4) Update deps (#60) +- [7ff28ed7](https://github.com/kubedb/kafka/commit/7ff28ed7) Update deps (#59) +- [16130571](https://github.com/kubedb/kafka/commit/16130571) Add openapi configuration for webhook server (#58) +- [cc465de9](https://github.com/kubedb/kafka/commit/cc465de9) Use k8s 1.29 client libs (#57) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.4.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.4.0) + +- [7d51761](https://github.com/kubedb/kubedb-manifest-plugin/commit/7d51761) Prepare for release v0.4.0 (#40) +- [5daf9ce](https://github.com/kubedb/kubedb-manifest-plugin/commit/5daf9ce) Prepare for release v0.4.0-rc.1 (#39) +- [b7ec4a4](https://github.com/kubedb/kubedb-manifest-plugin/commit/b7ec4a4) Prepare for release v0.4.0-rc.0 (#38) +- [c77b4ae](https://github.com/kubedb/kubedb-manifest-plugin/commit/c77b4ae) Prepare for release v0.4.0-beta.1 (#37) +- [6a8a822](https://github.com/kubedb/kubedb-manifest-plugin/commit/6a8a822) Update component name (#35) +- [c315615](https://github.com/kubedb/kubedb-manifest-plugin/commit/c315615) Prepare for release v0.4.0-beta.0 (#36) +- [5ce328d](https://github.com/kubedb/kubedb-manifest-plugin/commit/5ce328d) Use k8s 1.29 client libs (#34) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.25.0](https://github.com/kubedb/mariadb/releases/tag/v0.25.0) + +- [bcf4484c](https://github.com/kubedb/mariadb/commit/bcf4484c9) Fix mariadb health check (#256) +- [fcfa0e96](https://github.com/kubedb/mariadb/commit/fcfa0e966) Prepare for release v0.25.0 (#257) +- [2ca44131](https://github.com/kubedb/mariadb/commit/2ca441314) Prepare for release v0.25.0-rc.1 (#255) +- [dad5b837](https://github.com/kubedb/mariadb/commit/dad5b837a) Update deps (#254) +- [a210c867](https://github.com/kubedb/mariadb/commit/a210c8675) Fix appbinding scheme (#251) +- [81f985cd](https://github.com/kubedb/mariadb/commit/81f985cd7) Update deps (#253) +- [4bdcd6cc](https://github.com/kubedb/mariadb/commit/4bdcd6cca) Prepare for release v0.25.0-rc.0 (#252) +- [c4d4942f](https://github.com/kubedb/mariadb/commit/c4d4942f8) Prepare for release v0.25.0-beta.1 (#250) +- [25fe3917](https://github.com/kubedb/mariadb/commit/25fe39177) Use ptr.Deref(); Update deps +- [c76704cc](https://github.com/kubedb/mariadb/commit/c76704cc8) Fix ci & makefile for crd-manager (#249) +- [67396abb](https://github.com/kubedb/mariadb/commit/67396abb9) Incorporate with apimachinery package name change from `stash` to `restore` (#248) +- [b93ddce3](https://github.com/kubedb/mariadb/commit/b93ddce3d) Prepare for release v0.25.0-beta.0 (#247) +- 
[8099af6d](https://github.com/kubedb/mariadb/commit/8099af6d9) Dynamically start crd controller (#246) +- [0a9dd9e0](https://github.com/kubedb/mariadb/commit/0a9dd9e03) Update deps (#245) +- [5c548629](https://github.com/kubedb/mariadb/commit/5c548629e) Update deps (#244) +- [0f9ea4f2](https://github.com/kubedb/mariadb/commit/0f9ea4f20) Update deps +- [89641d3c](https://github.com/kubedb/mariadb/commit/89641d3c7) Use k8s 1.29 client libs (#242) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0) + +- [bd6798f](https://github.com/kubedb/mariadb-archiver/commit/bd6798f) Prepare for release v0.1.0 (#8) +- [8743316](https://github.com/kubedb/mariadb-archiver/commit/8743316) Prepare for release v0.1.0-rc.1 (#7) +- [90b9d66](https://github.com/kubedb/mariadb-archiver/commit/90b9d66) Prepare for release v0.1.0-rc.0 (#6) +- [e8564fe](https://github.com/kubedb/mariadb-archiver/commit/e8564fe) Prepare for release v0.1.0-beta.1 (#5) +- [e5e8945](https://github.com/kubedb/mariadb-archiver/commit/e5e8945) Don't use fail-fast +- [8c8e09a](https://github.com/kubedb/mariadb-archiver/commit/8c8e09a) Prepare for release v0.1.0-beta.0 (#4) +- [90ae04c](https://github.com/kubedb/mariadb-archiver/commit/90ae04c) Use k8s 1.29 client libs (#3) +- [b3067c8](https://github.com/kubedb/mariadb-archiver/commit/b3067c8) Fix binlog command +- [5cc0b6a](https://github.com/kubedb/mariadb-archiver/commit/5cc0b6a) Fix release workflow +- [910b7ce](https://github.com/kubedb/mariadb-archiver/commit/910b7ce) Prepare for release v0.1.0 (#1) +- [3801668](https://github.com/kubedb/mariadb-archiver/commit/3801668) mysql -> mariadb +- [4e905fb](https://github.com/kubedb/mariadb-archiver/commit/4e905fb) Implement new algorithm for archiver and restorer (#5) +- [22701c8](https://github.com/kubedb/mariadb-archiver/commit/22701c8) Fix 5.7.x build +- [6da2b1c](https://github.com/kubedb/mariadb-archiver/commit/6da2b1c) Update build matrix +- [e2f6244](https://github.com/kubedb/mariadb-archiver/commit/e2f6244) Use separate dockerfile per mysql version (#9) +- [e800623](https://github.com/kubedb/mariadb-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mariadb-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mariadb-archiver/commit/c46d991) Use appscode-images as base image (#6) +- [721eaa8](https://github.com/kubedb/mariadb-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mariadb-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mariadb-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mariadb-archiver/commit/dcd2e30) Fix wal-g binary +- [6c20a4a](https://github.com/kubedb/mariadb-archiver/commit/6c20a4a) Fix build +- [f034e7b](https://github.com/kubedb/mariadb-archiver/commit/f034e7b) Add build script (#1) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.21.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.21.0) + +- [6e2b4dee](https://github.com/kubedb/mariadb-coordinator/commit/6e2b4dee) Prepare for release v0.21.0 (#107) +- [e0e9c489](https://github.com/kubedb/mariadb-coordinator/commit/e0e9c489) Fix MariaDB Health Check (#106) +- [4289bcd1](https://github.com/kubedb/mariadb-coordinator/commit/4289bcd1) Prepare for release 
v0.21.0-rc.1 (#105) +- [34f610f7](https://github.com/kubedb/mariadb-coordinator/commit/34f610f7) Update deps (#104) +- [13dbe3f7](https://github.com/kubedb/mariadb-coordinator/commit/13dbe3f7) Update deps (#103) +- [15a83758](https://github.com/kubedb/mariadb-coordinator/commit/15a83758) Prepare for release v0.21.0-rc.0 (#102) +- [1c30e710](https://github.com/kubedb/mariadb-coordinator/commit/1c30e710) Prepare for release v0.21.0-beta.1 (#101) +- [28677618](https://github.com/kubedb/mariadb-coordinator/commit/28677618) Prepare for release v0.21.0-beta.0 (#100) +- [655a2c66](https://github.com/kubedb/mariadb-coordinator/commit/655a2c66) Update deps (#99) +- [ef206cfe](https://github.com/kubedb/mariadb-coordinator/commit/ef206cfe) Update deps (#98) +- [ef72c98b](https://github.com/kubedb/mariadb-coordinator/commit/ef72c98b) Use k8s 1.29 client libs (#97) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0) + +- [d28c59f](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/d28c59f) Prepare for release v0.1.0 (#11) +- [299687e](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/299687e) Update README.md (#9) +- [00e8552](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/00e8552) Update deps (#8) +- [ac2caaf](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/ac2caaf) Update pause and resume queries (#1) +- [066b41c](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/066b41c) Prepare for release v0.1.0-rc.1 (#7) +- [ebd73c7](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/ebd73c7) Prepare for release v0.1.0-rc.0 (#6) +- [adac38d](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/adac38d) Prepare for release v0.1.0-beta.1 (#5) +- [09f68b7](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/09f68b7) Prepare for release v0.1.0-beta.0 (#4) +- [7407444](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/7407444) Use k8s 1.29 client libs (#3) +- [933e138](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/933e138) Prepare for release v0.1.0 (#2) +- [5d38f94](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/5d38f94) Enable GH actions +- [2a97178](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/2a97178) Replace mysql with mariadb + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.34.0](https://github.com/kubedb/memcached/releases/tag/v0.34.0) + +- [c43923ba](https://github.com/kubedb/memcached/commit/c43923ba) Prepare for release v0.34.0 (#423) +- [928f00e2](https://github.com/kubedb/memcached/commit/928f00e2) Prepare for release v0.34.0-rc.1 (#422) +- [42aa5dcc](https://github.com/kubedb/memcached/commit/42aa5dcc) Update deps (#421) +- [d9fd6358](https://github.com/kubedb/memcached/commit/d9fd6358) Update deps (#420) +- [3ae5739b](https://github.com/kubedb/memcached/commit/3ae5739b) Prepare for release v0.34.0-rc.0 (#419) +- [754ba398](https://github.com/kubedb/memcached/commit/754ba398) Prepare for release v0.34.0-beta.1 (#418) +- [abd9dbb6](https://github.com/kubedb/memcached/commit/abd9dbb6) Incorporate with apimachinery package name change from stash to restore (#417) +- [6fe1686a](https://github.com/kubedb/memcached/commit/6fe1686a) Prepare for release v0.34.0-beta.0 (#416) +- [1cfb0544](https://github.com/kubedb/memcached/commit/1cfb0544) Dynamically start crd 
controller (#415) +- [171faff2](https://github.com/kubedb/memcached/commit/171faff2) Update deps (#414) +- [639495c7](https://github.com/kubedb/memcached/commit/639495c7) Update deps (#413) +- [223d295a](https://github.com/kubedb/memcached/commit/223d295a) Use k8s 1.29 client libs (#412) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.34.0](https://github.com/kubedb/mongodb/releases/tag/v0.34.0) + +- [92691f3f](https://github.com/kubedb/mongodb/commit/92691f3f0) Prepare for release v0.34.0 (#611) +- [2e96a733](https://github.com/kubedb/mongodb/commit/2e96a7338) Prepare for release v0.34.0-rc.1 (#610) +- [9a7e16bd](https://github.com/kubedb/mongodb/commit/9a7e16bdb) Update deps (#609) +- [369b51e8](https://github.com/kubedb/mongodb/commit/369b51e8a) Update deps (#608) +- [278ce846](https://github.com/kubedb/mongodb/commit/278ce846b) Prepare for release v0.34.0-rc.0 (#607) +- [c0c58448](https://github.com/kubedb/mongodb/commit/c0c58448b) Prepare for release v0.34.0-beta.1 (#606) +- [5df39d09](https://github.com/kubedb/mongodb/commit/5df39d09f) Update ci mgVersion; Fix pointer dereference issue (#605) +- [e2781eae](https://github.com/kubedb/mongodb/commit/e2781eaea) Run ci with specific crd-manager branch (#604) +- [b57bc47a](https://github.com/kubedb/mongodb/commit/b57bc47ae) Add kubestash for health check (#603) +- [62cb9c81](https://github.com/kubedb/mongodb/commit/62cb9c816) Install crd-manager specifiying DATABASE (#602) +- [6bf45fe7](https://github.com/kubedb/mongodb/commit/6bf45fe72) 7.0.4 -> 7.0.5; update deps +- [e5b9841e](https://github.com/kubedb/mongodb/commit/e5b9841e5) Fix oplog backup directory (#601) +- [452b785f](https://github.com/kubedb/mongodb/commit/452b785f0) Add Support for DB phase change for restoring using `KubeStash` (#586) +- [35d93d0b](https://github.com/kubedb/mongodb/commit/35d93d0bc) add ssl/tls args command (#595) +- [7ff67238](https://github.com/kubedb/mongodb/commit/7ff672382) Prepare for release v0.34.0-beta.0 (#600) +- [beca63a4](https://github.com/kubedb/mongodb/commit/beca63a48) Dynamically start crd controller (#599) +- [17d90616](https://github.com/kubedb/mongodb/commit/17d90616d) Update deps (#598) +- [bc25ca00](https://github.com/kubedb/mongodb/commit/bc25ca001) Update deps (#597) +- [4ce5a94a](https://github.com/kubedb/mongodb/commit/4ce5a94a4) Configure openapi for webhook server (#596) +- [8d8206db](https://github.com/kubedb/mongodb/commit/8d8206db3) Update ci versions +- [bfdd519f](https://github.com/kubedb/mongodb/commit/bfdd519fc) Update deps +- [01a7c268](https://github.com/kubedb/mongodb/commit/01a7c2685) Use k8s 1.29 client libs (#594) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0) + +- [06a2057](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/06a2057) Prepare for release v0.2.0 (#15) +- [5b28353](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/5b28353) Prepare for release v0.2.0-rc.1 (#14) +- [afd4fdb](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/afd4fdb) Prepare for release v0.2.0-rc.0 (#13) +- [5680265](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/5680265) Prepare for release v0.2.0-beta.1 (#12) +- [72693c8](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/72693c8) Fix component driver status (#11) +- 
[0ea73ee](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/0ea73ee) Update deps (#10) +- [ef74421](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/ef74421) Prepare for release v0.2.0-beta.0 (#9) +- [c2c9bd4](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/c2c9bd4) Use k8s 1.29 client libs (#8) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.4.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.4.0) + +- [26a2566](https://github.com/kubedb/mongodb-restic-plugin/commit/26a2566) Prepare for release v0.4.0 (#26) +- [3fa6286](https://github.com/kubedb/mongodb-restic-plugin/commit/3fa6286) Prepare for release v0.4.0-rc.1 (#25) +- [bff5aa4](https://github.com/kubedb/mongodb-restic-plugin/commit/bff5aa4) Prepare for release v0.4.0-rc.0 (#24) +- [6ae8ae2](https://github.com/kubedb/mongodb-restic-plugin/commit/6ae8ae2) Prepare for release v0.4.0-beta.1 (#23) +- [d8e1636](https://github.com/kubedb/mongodb-restic-plugin/commit/d8e1636) Reorder the execution of cleanup funcs (#22) +- [4f0b021](https://github.com/kubedb/mongodb-restic-plugin/commit/4f0b021) Prepare for release v0.4.0-beta.0 (#20) +- [91ee7c0](https://github.com/kubedb/mongodb-restic-plugin/commit/91ee7c0) Use k8s 1.29 client libs (#19) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.34.0](https://github.com/kubedb/mysql/releases/tag/v0.34.0) + +- [ed8d2fcd](https://github.com/kubedb/mysql/commit/ed8d2fcd8) Prepare for release v0.34.0 (#609) +- [6088a1f9](https://github.com/kubedb/mysql/commit/6088a1f9d) Add azure, gcs support for backup and restore archiver (#595) +- [f3133290](https://github.com/kubedb/mysql/commit/f31332908) Prepare for release v0.34.0-rc.1 (#607) +- [0f3ddf23](https://github.com/kubedb/mysql/commit/0f3ddf233) Update deps (#606) +- [1a5d7bd1](https://github.com/kubedb/mysql/commit/1a5d7bd13) Fix appbinding scheme (#603) +- [c643ded9](https://github.com/kubedb/mysql/commit/c643ded94) Update deps (#605) +- [aaaf3aad](https://github.com/kubedb/mysql/commit/aaaf3aad0) Prepare for release v0.34.0-rc.0 (#604) +- [d2f2eba7](https://github.com/kubedb/mysql/commit/d2f2eba7d) Refactor (#602) +- [fa00fc42](https://github.com/kubedb/mysql/commit/fa00fc424) Fix provider env in sidekick (#601) +- [e75f6e26](https://github.com/kubedb/mysql/commit/e75f6e26e) Fix restore service selector (#600) +- [e9dbf269](https://github.com/kubedb/mysql/commit/e9dbf269c) Prepare for release v0.34.0-beta.1 (#599) +- [44eda2d2](https://github.com/kubedb/mysql/commit/44eda2d25) Prepare for release v0.34.0-beta.1 (#598) +- [16dd4637](https://github.com/kubedb/mysql/commit/16dd46377) Fix pointer dereference issue (#597) +- [334c1a1d](https://github.com/kubedb/mysql/commit/334c1a1dd) Update ci & makefile for crd-manager (#596) +- [edb9b1a1](https://github.com/kubedb/mysql/commit/edb9b1a11) Fix binlog backup directory (#587) +- [fc6d7030](https://github.com/kubedb/mysql/commit/fc6d70303) Add Support for DB phase change for restoring using KubeStash (#594) +- [354f6f3e](https://github.com/kubedb/mysql/commit/354f6f3e1) Prepare for release v0.34.0-beta.0 (#593) +- [01498d02](https://github.com/kubedb/mysql/commit/01498d025) Dynamically start crd controller (#592) +- [e68015cf](https://github.com/kubedb/mysql/commit/e68015cfd) Update deps (#591) +- [67029acc](https://github.com/kubedb/mysql/commit/67029acc9) Update deps (#590) +- [87d2de4a](https://github.com/kubedb/mysql/commit/87d2de4a1) Include kubestash catalog chart in 
makefile (#588) +- [e5874ffb](https://github.com/kubedb/mysql/commit/e5874ffb7) Add openapi configuration for webhook server (#589) +- [977d3cd3](https://github.com/kubedb/mysql/commit/977d3cd38) Update deps +- [3df86853](https://github.com/kubedb/mysql/commit/3df868533) Use k8s 1.29 client libs (#586) +- [d159ad05](https://github.com/kubedb/mysql/commit/d159ad052) Ensure MySQLArchiver crd (#585) + + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.2.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0) + +- [ed748c7](https://github.com/kubedb/mysql-archiver/commit/ed748c7) Prepare for release v0.2.0 (#20) +- [4d1c341](https://github.com/kubedb/mysql-archiver/commit/4d1c341) Remove example files (#17) +- [9c0545e](https://github.com/kubedb/mysql-archiver/commit/9c0545e) Add azure, gcs support (#13) +- [d0fbef5](https://github.com/kubedb/mysql-archiver/commit/d0fbef5) Prepare for release v0.2.0-rc.1 (#19) +- [a6fdf50](https://github.com/kubedb/mysql-archiver/commit/a6fdf50) Prepare for release v0.2.0-rc.0 (#18) +- [718511e](https://github.com/kubedb/mysql-archiver/commit/718511e) Remove obsolete files (#16) +- [07fc1eb](https://github.com/kubedb/mysql-archiver/commit/07fc1eb) Fix mysql-community-common version in docker file +- [e5bdae3](https://github.com/kubedb/mysql-archiver/commit/e5bdae3) Prepare for release v0.2.0-beta.1 (#15) +- [7ef752c](https://github.com/kubedb/mysql-archiver/commit/7ef752c) Refactor + Cleanup wal-g example files (#14) +- [5857a8d](https://github.com/kubedb/mysql-archiver/commit/5857a8d) Don't use fail-fast +- [5833776](https://github.com/kubedb/mysql-archiver/commit/5833776) Prepare for release v0.2.0-beta.0 (#12) +- [f3e68b2](https://github.com/kubedb/mysql-archiver/commit/f3e68b2) Use k8s 1.29 client libs (#11) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.19.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.19.0) + +- [ef84bd37](https://github.com/kubedb/mysql-coordinator/commit/ef84bd37) Prepare for release v0.19.0 (#103) +- [40a2b6eb](https://github.com/kubedb/mysql-coordinator/commit/40a2b6eb) Prepare for release v0.19.0-rc.1 (#102) +- [8b80958d](https://github.com/kubedb/mysql-coordinator/commit/8b80958d) Update deps (#101) +- [df438239](https://github.com/kubedb/mysql-coordinator/commit/df438239) Update deps (#100) +- [1bc71d04](https://github.com/kubedb/mysql-coordinator/commit/1bc71d04) Prepare for release v0.19.0-rc.0 (#99) +- [59a11671](https://github.com/kubedb/mysql-coordinator/commit/59a11671) Prepare for release v0.19.0-beta.1 (#98) +- [e0cc149f](https://github.com/kubedb/mysql-coordinator/commit/e0cc149f) Prepare for release v0.19.0-beta.0 (#97) +- [67aeb229](https://github.com/kubedb/mysql-coordinator/commit/67aeb229) Update deps (#96) +- [2fa4423f](https://github.com/kubedb/mysql-coordinator/commit/2fa4423f) Update deps (#95) +- [b0735769](https://github.com/kubedb/mysql-coordinator/commit/b0735769) Use k8s 1.29 client libs (#94) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.2.0](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.2.0) + +- [61d40c2](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/61d40c2) Prepare for release v0.2.0 (#8) +- [ac60bf4](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/ac60bf4) Prepare for release v0.2.0-rc.1 (#7) +- [21e9470](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/21e9470) 
Prepare for release v0.2.0-rc.0 (#6) +- [d5771cf](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d5771cf) Prepare for release v0.2.0-beta.1 (#5) +- [b4ffc6f](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/b4ffc6f) Fix component driver status & Update deps (#3) +- [d285eff](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d285eff) Prepare for release v0.2.0-beta.0 (#4) +- [7a46441](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/7a46441) Use k8s 1.29 client libs (#2) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.4.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.4.0) + +- [416d3cd](https://github.com/kubedb/mysql-restic-plugin/commit/416d3cd) Prepare for release v0.4.0 (#24) +- [55897ab](https://github.com/kubedb/mysql-restic-plugin/commit/55897ab) Prepare for release v0.4.0-rc.1 (#23) +- [eedf2e7](https://github.com/kubedb/mysql-restic-plugin/commit/eedf2e7) Prepare for release v0.4.0-rc.0 (#22) +- [105888a](https://github.com/kubedb/mysql-restic-plugin/commit/105888a) Prepare for release v0.4.0-beta.1 (#21) +- [b42d0cf](https://github.com/kubedb/mysql-restic-plugin/commit/b42d0cf) Removed `--all-databases` flag for restoring (#20) +- [742d2ce](https://github.com/kubedb/mysql-restic-plugin/commit/742d2ce) Prepare for release v0.4.0-beta.0 (#19) +- [0402847](https://github.com/kubedb/mysql-restic-plugin/commit/0402847) Use k8s 1.29 client libs (#18) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.19.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.19.0) + +- [6a5deed](https://github.com/kubedb/mysql-router-init/commit/6a5deed) Update deps (#40) +- [0078f09](https://github.com/kubedb/mysql-router-init/commit/0078f09) Update deps (#39) +- [85f8c6f](https://github.com/kubedb/mysql-router-init/commit/85f8c6f) Update deps (#38) +- [7dd201c](https://github.com/kubedb/mysql-router-init/commit/7dd201c) Use k8s 1.29 client libs (#37) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.28.0](https://github.com/kubedb/ops-manager/releases/tag/v0.28.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.28.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.28.0) + +- [279330e0](https://github.com/kubedb/percona-xtradb/commit/279330e09) Prepare for release v0.28.0 (#354) +- [b567db53](https://github.com/kubedb/percona-xtradb/commit/b567db53a) Prepare for release v0.28.0-rc.1 (#353) +- [c0ddb330](https://github.com/kubedb/percona-xtradb/commit/c0ddb330b) Update deps (#352) +- [d461df3e](https://github.com/kubedb/percona-xtradb/commit/d461df3ed) Fix appbinding scheme (#349) +- [2752f7e3](https://github.com/kubedb/percona-xtradb/commit/2752f7e36) Update deps (#351) +- [80cd3a03](https://github.com/kubedb/percona-xtradb/commit/80cd3a030) Prepare for release v0.28.0-rc.0 (#350) +- [475a5e32](https://github.com/kubedb/percona-xtradb/commit/475a5e328) Prepare for release v0.28.0-beta.1 (#348) +- [4c1380ab](https://github.com/kubedb/percona-xtradb/commit/4c1380ab7) Incorporate with apimachinery package name change from `stash` to `restore` (#347) +- [0ceb3028](https://github.com/kubedb/percona-xtradb/commit/0ceb30284) Prepare for release v0.28.0-beta.0 (#346) +- [e7d35606](https://github.com/kubedb/percona-xtradb/commit/e7d356062) Dynamically start crd controller (#345) +- [5d07b565](https://github.com/kubedb/percona-xtradb/commit/5d07b5655) Update deps 
(#344) +- [1a639f84](https://github.com/kubedb/percona-xtradb/commit/1a639f840) Update deps (#343) +- [4f8b24ab](https://github.com/kubedb/percona-xtradb/commit/4f8b24aba) Update deps +- [e5254020](https://github.com/kubedb/percona-xtradb/commit/e52540202) Use k8s 1.29 client libs (#341) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.14.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.14.0) + +- [6fd3b3cd](https://github.com/kubedb/percona-xtradb-coordinator/commit/6fd3b3cd) Prepare for release v0.14.0 (#63) +- [da619fa3](https://github.com/kubedb/percona-xtradb-coordinator/commit/da619fa3) Prepare for release v0.14.0-rc.1 (#62) +- [c39daf56](https://github.com/kubedb/percona-xtradb-coordinator/commit/c39daf56) Update deps (#61) +- [42dc1a95](https://github.com/kubedb/percona-xtradb-coordinator/commit/42dc1a95) Update deps (#60) +- [7581630e](https://github.com/kubedb/percona-xtradb-coordinator/commit/7581630e) Prepare for release v0.14.0-rc.0 (#59) +- [560bc5c3](https://github.com/kubedb/percona-xtradb-coordinator/commit/560bc5c3) Prepare for release v0.14.0-beta.1 (#58) +- [963756eb](https://github.com/kubedb/percona-xtradb-coordinator/commit/963756eb) Prepare for release v0.14.0-beta.0 (#57) +- [5489bb8c](https://github.com/kubedb/percona-xtradb-coordinator/commit/5489bb8c) Update deps (#56) +- [a8424e18](https://github.com/kubedb/percona-xtradb-coordinator/commit/a8424e18) Update deps (#55) +- [ee4add86](https://github.com/kubedb/percona-xtradb-coordinator/commit/ee4add86) Use k8s 1.29 client libs (#54) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.25.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.25.0) + +- [7e7d32d9](https://github.com/kubedb/pg-coordinator/commit/7e7d32d9) Prepare for release v0.25.0 (#154) +- [9a720273](https://github.com/kubedb/pg-coordinator/commit/9a720273) Prepare for release v0.25.0-rc.1 (#153) +- [f103f1fc](https://github.com/kubedb/pg-coordinator/commit/f103f1fc) Update deps (#152) +- [84b92d89](https://github.com/kubedb/pg-coordinator/commit/84b92d89) Update deps (#151) +- [41cc97b6](https://github.com/kubedb/pg-coordinator/commit/41cc97b6) Prepare for release v0.25.0-rc.0 (#150) +- [5298a177](https://github.com/kubedb/pg-coordinator/commit/5298a177) Fixed (#149) +- [bc296307](https://github.com/kubedb/pg-coordinator/commit/bc296307) Prepare for release v0.25.0-beta.1 (#148) +- [30973540](https://github.com/kubedb/pg-coordinator/commit/30973540) Prepare for release v0.25.0-beta.0 (#147) +- [7b84e198](https://github.com/kubedb/pg-coordinator/commit/7b84e198) Update deps (#146) +- [f1bfe818](https://github.com/kubedb/pg-coordinator/commit/f1bfe818) Update deps (#145) +- [1de05a6e](https://github.com/kubedb/pg-coordinator/commit/1de05a6e) Use k8s 1.29 client libs (#144) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.28.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.28.0) + +- [f79abcdd](https://github.com/kubedb/pgbouncer/commit/f79abcdd) Prepare for release v0.28.0 (#317) +- [0ca01e53](https://github.com/kubedb/pgbouncer/commit/0ca01e53) Prepare for release v0.28.0-rc.1 (#316) +- [4b76d2cb](https://github.com/kubedb/pgbouncer/commit/4b76d2cb) Update deps (#315) +- [f32676bc](https://github.com/kubedb/pgbouncer/commit/f32676bc) Update deps (#314) +- [e69aa743](https://github.com/kubedb/pgbouncer/commit/e69aa743) Prepare for release v0.28.0-rc.0 (#313) +- 
[55c248d5](https://github.com/kubedb/pgbouncer/commit/55c248d5) Prepare for release v0.28.0-beta.1 (#312) +- [1b86664a](https://github.com/kubedb/pgbouncer/commit/1b86664a) Incorporate with apimachinery package name change from stash to restore (#311) +- [3c6bc335](https://github.com/kubedb/pgbouncer/commit/3c6bc335) Prepare for release v0.28.0-beta.0 (#310) +- [73c5f6fb](https://github.com/kubedb/pgbouncer/commit/73c5f6fb) Dynamically start crd controller (#309) +- [f9edc2cd](https://github.com/kubedb/pgbouncer/commit/f9edc2cd) Update deps (#308) +- [d54251c0](https://github.com/kubedb/pgbouncer/commit/d54251c0) Update deps (#307) +- [de40a35e](https://github.com/kubedb/pgbouncer/commit/de40a35e) Update deps +- [8c325577](https://github.com/kubedb/pgbouncer/commit/8c325577) Use k8s 1.29 client libs (#305) + + + +## [kubedb/pgpool](https://github.com/kubedb/pgpool) + +### [v0.0.4](https://github.com/kubedb/pgpool/releases/tag/v0.0.4) + +- [b6546d3](https://github.com/kubedb/pgpool/commit/b6546d3) Prepare for release v0.0.4 (#12) +- [6f7ebca](https://github.com/kubedb/pgpool/commit/6f7ebca) Add daily to workflows (#10) +- [18c06cd](https://github.com/kubedb/pgpool/commit/18c06cd) Fix InitConfiguration issue (#9) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.41.0](https://github.com/kubedb/postgres/releases/tag/v0.41.0) + +- [cf1b2726](https://github.com/kubedb/postgres/commit/cf1b27268) Prepare for release v0.41.0 (#715) +- [29aaa191](https://github.com/kubedb/postgres/commit/29aaa191b) Add sub path for sidekick (#714) +- [9f487d98](https://github.com/kubedb/postgres/commit/9f487d984) Add postmaster arguments for exporter image (#713) +- [071b2645](https://github.com/kubedb/postgres/commit/071b26455) Prepare for release v0.41.0-rc.1 (#712) +- [c9f1d5b6](https://github.com/kubedb/postgres/commit/c9f1d5b68) Update deps (#711) +- [723fb80c](https://github.com/kubedb/postgres/commit/723fb80c0) Update deps (#710) +- [8135d351](https://github.com/kubedb/postgres/commit/8135d3511) Prepare for release v0.41.0-rc.0 (#709) +- [72a1ee29](https://github.com/kubedb/postgres/commit/72a1ee294) Prepare for release v0.41.0-beta.1 (#708) +- [026598f4](https://github.com/kubedb/postgres/commit/026598f44) Prepare for release v0.41.0-beta.1 (#707) +- [8af305aa](https://github.com/kubedb/postgres/commit/8af305aa4) Use ptr.Deref(); Update deps +- [c7c0652d](https://github.com/kubedb/postgres/commit/c7c0652dc) Update ci & makefile for crd-manager (#706) +- [d468bdb3](https://github.com/kubedb/postgres/commit/d468bdb34) Fix wal backup directory (#705) +- [c6992bed](https://github.com/kubedb/postgres/commit/c6992bed8) Add Support for DB phase change for restoring using KubeStash (#704) +- [d1bd909b](https://github.com/kubedb/postgres/commit/d1bd909ba) Prepare for release v0.41.0-beta.0 (#703) +- [5e8101e3](https://github.com/kubedb/postgres/commit/5e8101e39) Dynamically start crd controller (#702) +- [47dbbff5](https://github.com/kubedb/postgres/commit/47dbbff53) Update deps (#701) +- [84f99c58](https://github.com/kubedb/postgres/commit/84f99c58b) Disable fairness api +- [a715765d](https://github.com/kubedb/postgres/commit/a715765dc) Set --restricted=false for ci tests (#700) +- [fe9af597](https://github.com/kubedb/postgres/commit/fe9af5977) Add Postgres test fix (#699) +- [8bae8886](https://github.com/kubedb/postgres/commit/8bae88860) Configure openapi for webhook server (#698) +- [9ce2efce](https://github.com/kubedb/postgres/commit/9ce2efce5) Update deps +- 
[24e4e9ca](https://github.com/kubedb/postgres/commit/24e4e9ca5) Use k8s 1.29 client libs (#697) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0) + +- [a55baa8](https://github.com/kubedb/postgres-archiver/commit/a55baa8) Prepare for release v0.2.0 (#21) +- [8f66790](https://github.com/kubedb/postgres-archiver/commit/8f66790) Prepare for release v0.2.0-rc.1 (#20) +- [bff75cb](https://github.com/kubedb/postgres-archiver/commit/bff75cb) Prepare for release v0.2.0-rc.0 (#19) +- [bb8c342](https://github.com/kubedb/postgres-archiver/commit/bb8c342) Create directory for wal-backup (#18) +- [c4405c1](https://github.com/kubedb/postgres-archiver/commit/c4405c1) Prepare for release v0.2.0-beta.1 (#17) +- [c353dcd](https://github.com/kubedb/postgres-archiver/commit/c353dcd) Don't use fail-fast +- [a9cbe08](https://github.com/kubedb/postgres-archiver/commit/a9cbe08) Prepare for release v0.2.0-beta.0 (#16) +- [183e97c](https://github.com/kubedb/postgres-archiver/commit/183e97c) Use k8s 1.29 client libs (#15) + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0) + +- [5e0031f](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/5e0031f) Prepare for release v0.2.0 (#18) +- [369c9a3](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/369c9a3) Prepare for release v0.2.0-rc.1 (#17) +- [87240d8](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/87240d8) Prepare for release v0.2.0-rc.0 (#16) +- [dc4f85e](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/dc4f85e) Prepare for release v0.2.0-beta.1 (#15) +- [098365a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/098365a) Update README.md (#14) +- [5ef571f](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/5ef571f) Update deps (#13) +- [f0e546a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f0e546a) Prepare for release v0.2.0-beta.0 (#12) +- [aae7294](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/aae7294) Use k8s 1.29 client libs (#11) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### [v0.4.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.4.0) + + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.3.0](https://github.com/kubedb/provider-aws/releases/tag/v0.3.0) + + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.3.0](https://github.com/kubedb/provider-azure/releases/tag/v0.3.0) + +- [ebba4fa](https://github.com/kubedb/provider-azure/commit/ebba4fa) Checkout fake release branch for release workflow + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.3.0](https://github.com/kubedb/provider-gcp/releases/tag/v0.3.0) + +- [82f52c3](https://github.com/kubedb/provider-gcp/commit/82f52c3) Checkout fake release branch for release workflow + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.41.0](https://github.com/kubedb/provisioner/releases/tag/v0.41.0) + + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.28.0](https://github.com/kubedb/proxysql/releases/tag/v0.28.0) + +- [b0d0e92c](https://github.com/kubedb/proxysql/commit/b0d0e92cf) Prepare for release v0.28.0 (#335) +- 
[0ff6f90d](https://github.com/kubedb/proxysql/commit/0ff6f90d2) Prepare for release v0.28.0-rc.1 (#334) +- [382d3283](https://github.com/kubedb/proxysql/commit/382d3283e) Update deps (#333) +- [0b4da810](https://github.com/kubedb/proxysql/commit/0b4da8101) Update deps (#332) +- [2fa5679d](https://github.com/kubedb/proxysql/commit/2fa5679d7) Prepare for release v0.28.0-rc.0 (#331) +- [2cc59016](https://github.com/kubedb/proxysql/commit/2cc590165) Update ci & makefile for crd-manager (#326) +- [79e29efd](https://github.com/kubedb/proxysql/commit/79e29efdb) Handle MySQL URL Parsing (#330) +- [b3372a53](https://github.com/kubedb/proxysql/commit/b3372a53d) Fix MySQL Client and sync_user (#328) +- [213ebfc4](https://github.com/kubedb/proxysql/commit/213ebfc43) Prepare for release v0.28.0-beta.1 (#327) +- [8427158e](https://github.com/kubedb/proxysql/commit/8427158ec) Incorporate with apimachinery package name change from stash to restore (#325) +- [c0805050](https://github.com/kubedb/proxysql/commit/c0805050e) Prepare for release v0.28.0-beta.0 (#324) +- [88ef1f1d](https://github.com/kubedb/proxysql/commit/88ef1f1de) Dynamically start crd controller (#323) +- [8c0a96ac](https://github.com/kubedb/proxysql/commit/8c0a96ac7) Update deps (#322) +- [e96797e4](https://github.com/kubedb/proxysql/commit/e96797e48) Update deps (#321) +- [e8fd529b](https://github.com/kubedb/proxysql/commit/e8fd529b2) Update deps +- [b2e9a1df](https://github.com/kubedb/proxysql/commit/b2e9a1df8) Use k8s 1.29 client libs (#319) + + + +## [kubedb/rabbitmq](https://github.com/kubedb/rabbitmq) + +### [v0.0.4](https://github.com/kubedb/rabbitmq/releases/tag/v0.0.4) + +- [cbcc9132](https://github.com/kubedb/rabbitmq/commit/cbcc9132) Prepare for release v0.0.4 (#9) +- [89636ce7](https://github.com/kubedb/rabbitmq/commit/89636ce7) Remove standby service and fix init container security context (#8) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.34.0](https://github.com/kubedb/redis/releases/tag/v0.34.0) + +- [bd9d152b](https://github.com/kubedb/redis/commit/bd9d152b) Prepare for release v0.34.0 (#523) +- [5e171587](https://github.com/kubedb/redis/commit/5e171587) Prepare for release v0.34.0-rc.1 (#522) +- [71665b9b](https://github.com/kubedb/redis/commit/71665b9b) Update deps (#521) +- [302f1f19](https://github.com/kubedb/redis/commit/302f1f19) Update deps (#520) +- [0703a513](https://github.com/kubedb/redis/commit/0703a513) Prepare for release v0.34.0-rc.0 (#519) +- [b1a296b7](https://github.com/kubedb/redis/commit/b1a296b7) Init sentinel before secret watcher (#518) +- [01290634](https://github.com/kubedb/redis/commit/01290634) Prepare for release v0.34.0-beta.1 (#517) +- [e51f93e1](https://github.com/kubedb/redis/commit/e51f93e1) Fix panic (#516) +- [dc75c163](https://github.com/kubedb/redis/commit/dc75c163) Update ci & makefile for crd-manager (#515) +- [09688f35](https://github.com/kubedb/redis/commit/09688f35) Add Support for DB phase change for restoring using KubeStash (#514) +- [7e844ab1](https://github.com/kubedb/redis/commit/7e844ab1) Prepare for release v0.34.0-beta.0 (#513) +- [6318d04f](https://github.com/kubedb/redis/commit/6318d04f) Dynamically start crd controller (#512) +- [92b8a3a9](https://github.com/kubedb/redis/commit/92b8a3a9) Update deps (#511) +- [f0fb4c69](https://github.com/kubedb/redis/commit/f0fb4c69) Update deps (#510) +- [c99d9498](https://github.com/kubedb/redis/commit/c99d9498) Update deps +- [90299544](https://github.com/kubedb/redis/commit/90299544) Use k8s 1.29 client libs 
(#508) +- [fced7010](https://github.com/kubedb/redis/commit/fced7010) Update redis versions in nightly tests (#507) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.20.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.20.0) + +- [cd9b64c3](https://github.com/kubedb/redis-coordinator/commit/cd9b64c3) Prepare for release v0.20.0 (#94) +- [055ceaf1](https://github.com/kubedb/redis-coordinator/commit/055ceaf1) Prepare for release v0.20.0-rc.1 (#93) +- [79575d26](https://github.com/kubedb/redis-coordinator/commit/79575d26) Update deps (#92) +- [a5b4c4b4](https://github.com/kubedb/redis-coordinator/commit/a5b4c4b4) Update deps (#91) +- [f09062c4](https://github.com/kubedb/redis-coordinator/commit/f09062c4) Prepare for release v0.20.0-rc.0 (#90) +- [fd3b2112](https://github.com/kubedb/redis-coordinator/commit/fd3b2112) Prepare for release v0.20.0-beta.1 (#89) +- [4c36accd](https://github.com/kubedb/redis-coordinator/commit/4c36accd) Prepare for release v0.20.0-beta.0 (#88) +- [c8658380](https://github.com/kubedb/redis-coordinator/commit/c8658380) Update deps (#87) +- [c99c2e9b](https://github.com/kubedb/redis-coordinator/commit/c99c2e9b) Update deps (#86) +- [22c7beb4](https://github.com/kubedb/redis-coordinator/commit/22c7beb4) Use k8s 1.29 client libs (#85) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.4.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.4.0) + +- [5436cb6](https://github.com/kubedb/redis-restic-plugin/commit/5436cb6) Prepare for release v0.4.0 (#20) +- [67a8942](https://github.com/kubedb/redis-restic-plugin/commit/67a8942) Prepare for release v0.4.0-rc.1 (#19) +- [968da13](https://github.com/kubedb/redis-restic-plugin/commit/968da13) Prepare for release v0.4.0-rc.0 (#18) +- [fac6226](https://github.com/kubedb/redis-restic-plugin/commit/fac6226) Prepare for release v0.4.0-beta.1 (#17) +- [da2796a](https://github.com/kubedb/redis-restic-plugin/commit/da2796a) Prepare for release v0.4.0-beta.0 (#16) +- [0553c6f](https://github.com/kubedb/redis-restic-plugin/commit/0553c6f) Use k8s 1.29 client libs (#15) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.28.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.28.0) + +- [4cce748f](https://github.com/kubedb/replication-mode-detector/commit/4cce748f) Prepare for release v0.28.0 (#258) +- [de39974e](https://github.com/kubedb/replication-mode-detector/commit/de39974e) Prepare for release v0.28.0-rc.1 (#257) +- [e1ef5191](https://github.com/kubedb/replication-mode-detector/commit/e1ef5191) Update deps (#256) +- [7b4e4149](https://github.com/kubedb/replication-mode-detector/commit/7b4e4149) Update deps (#255) +- [d55f7e69](https://github.com/kubedb/replication-mode-detector/commit/d55f7e69) Prepare for release v0.28.0-rc.0 (#254) +- [f948a650](https://github.com/kubedb/replication-mode-detector/commit/f948a650) Prepare for release v0.28.0-beta.1 (#253) +- [572668c8](https://github.com/kubedb/replication-mode-detector/commit/572668c8) Prepare for release v0.28.0-beta.0 (#252) +- [39ba3ce0](https://github.com/kubedb/replication-mode-detector/commit/39ba3ce0) Update deps (#251) +- [d3d2ad96](https://github.com/kubedb/replication-mode-detector/commit/d3d2ad96) Update deps (#250) +- [633d7b76](https://github.com/kubedb/replication-mode-detector/commit/633d7b76) Use k8s 1.29 client libs (#249) + + + +## 
[kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.17.0](https://github.com/kubedb/schema-manager/releases/tag/v0.17.0) + + + + +## [kubedb/singlestore](https://github.com/kubedb/singlestore) + +### [v0.0.4](https://github.com/kubedb/singlestore/releases/tag/v0.0.4) + +- [e8cf66f](https://github.com/kubedb/singlestore/commit/e8cf66f) Prepare for release v0.0.4 (#11) + + + +## [kubedb/singlestore-coordinator](https://github.com/kubedb/singlestore-coordinator) + +### [v0.0.4](https://github.com/kubedb/singlestore-coordinator/releases/tag/v0.0.4) + +- [b451944](https://github.com/kubedb/singlestore-coordinator/commit/b451944) Prepare for release v0.0.4 (#5) + + + +## [kubedb/solr](https://github.com/kubedb/solr) + +### [v0.0.4](https://github.com/kubedb/solr/releases/tag/v0.0.4) + +- [8be74c4](https://github.com/kubedb/solr/commit/8be74c4) Prepare for release v0.0.4 (#12) +- [0647ccf](https://github.com/kubedb/solr/commit/0647ccf) Remove overseer discovery service. (#10) +- [4901117](https://github.com/kubedb/solr/commit/4901117) Add daily yml. (#9) +- [3abb79b](https://github.com/kubedb/solr/commit/3abb79b) Add auth secret reference in appbinding. (#8) + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.26.0](https://github.com/kubedb/tests/releases/tag/v0.26.0) + +- [16543a0f](https://github.com/kubedb/tests/commit/16543a0f) Prepare for release v0.26.0 (#304) +- [92607278](https://github.com/kubedb/tests/commit/92607278) Add dependencies flag (#301) +- [17bbf43c](https://github.com/kubedb/tests/commit/17bbf43c) Add Reconfigure with Vertical Scaling (#300) +- [f3e3fba1](https://github.com/kubedb/tests/commit/f3e3fba1) Add Singlestore Provisioning Test (#287) +- [5a527051](https://github.com/kubedb/tests/commit/5a527051) Prepare for release v0.26.0-rc.1 (#299) +- [03d71b6d](https://github.com/kubedb/tests/commit/03d71b6d) Update deps (#298) +- [2d928008](https://github.com/kubedb/tests/commit/2d928008) Update deps (#297) +- [1730fd31](https://github.com/kubedb/tests/commit/1730fd31) Prepare for release v0.26.0-rc.0 (#296) +- [d1805668](https://github.com/kubedb/tests/commit/d1805668) Add ZooKeeper Tests (#294) +- [4c27754c](https://github.com/kubedb/tests/commit/4c27754c) Fix kafka env-variable tests (#293) +- [3cfc1212](https://github.com/kubedb/tests/commit/3cfc1212) Prepare for release v0.26.0-beta.1 (#292) +- [b810e690](https://github.com/kubedb/tests/commit/b810e690) increase cpu limit for vertical scaling (#289) +- [c43985ba](https://github.com/kubedb/tests/commit/c43985ba) Change dashboard api group (#291) +- [1b96881e](https://github.com/kubedb/tests/commit/1b96881e) Fix error logging +- [33f78143](https://github.com/kubedb/tests/commit/33f78143) forceCleanup PVCs for mongo (#288) +- [0dcd3e38](https://github.com/kubedb/tests/commit/0dcd3e38) Add PostgreSQL logical replication tests (#202) +- [2f403c85](https://github.com/kubedb/tests/commit/2f403c85) Find profiles in array, Don't match with string (#286) +- [5aca2293](https://github.com/kubedb/tests/commit/5aca2293) Give time to PDB status to be updated (#285) +- [5f3fabd7](https://github.com/kubedb/tests/commit/5f3fabd7) Prepare for release v0.26.0-beta.0 (#284) +- [27a24dff](https://github.com/kubedb/tests/commit/27a24dff) Update deps (#283) +- [b9021186](https://github.com/kubedb/tests/commit/b9021186) Update deps (#282) +- [589ca51c](https://github.com/kubedb/tests/commit/589ca51c) mongodb vertical scaling fix (#281) +- [feaa0f6a](https://github.com/kubedb/tests/commit/feaa0f6a) Add 
`--restricted` flag (#280) +- [2423ee38](https://github.com/kubedb/tests/commit/2423ee38) Fix linter errors +- [dcd64c7c](https://github.com/kubedb/tests/commit/dcd64c7c) Update lint command +- [c3ef1fa4](https://github.com/kubedb/tests/commit/c3ef1fa4) Use k8s 1.29 client libs (#279) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.17.0](https://github.com/kubedb/ui-server/releases/tag/v0.17.0) + +- [7a1d7c5e](https://github.com/kubedb/ui-server/commit/7a1d7c5e) Prepare for release v0.17.0 (#110) +- [ed2c04e7](https://github.com/kubedb/ui-server/commit/ed2c04e7) Prepare for release v0.17.0-rc.1 (#109) +- [645c4ac2](https://github.com/kubedb/ui-server/commit/645c4ac2) Update deps (#108) +- [e75f0f9e](https://github.com/kubedb/ui-server/commit/e75f0f9e) Update deps (#107) +- [3046f685](https://github.com/kubedb/ui-server/commit/3046f685) Prepare for release v0.17.0-rc.0 (#106) +- [98c1a6dd](https://github.com/kubedb/ui-server/commit/98c1a6dd) Prepare for release v0.17.0-beta.1 (#105) +- [8173cfc2](https://github.com/kubedb/ui-server/commit/8173cfc2) Implement SingularNameProvider +- [6e8f80dc](https://github.com/kubedb/ui-server/commit/6e8f80dc) Prepare for release v0.17.0-beta.0 (#104) +- [6a05721f](https://github.com/kubedb/ui-server/commit/6a05721f) Update deps (#103) +- [3c24fd5e](https://github.com/kubedb/ui-server/commit/3c24fd5e) Update deps (#102) +- [25e29443](https://github.com/kubedb/ui-server/commit/25e29443) Use k8s 1.29 client libs (#101) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.17.0](https://github.com/kubedb/webhook-server/releases/tag/v0.17.0) + +- [93116fb5](https://github.com/kubedb/webhook-server/commit/93116fb5) Prepare for release v0.17.0 (#95) +- [a49ecca7](https://github.com/kubedb/webhook-server/commit/a49ecca7) Prepare for release v0.17.0-rc.1 (#94) +- [5f8de57b](https://github.com/kubedb/webhook-server/commit/5f8de57b) Update deps (#93) +- [8c22ce2d](https://github.com/kubedb/webhook-server/commit/8c22ce2d) Update deps (#92) +- [f9cf0b11](https://github.com/kubedb/webhook-server/commit/f9cf0b11) Prepare for release v0.17.0-rc.0 (#91) +- [98914ade](https://github.com/kubedb/webhook-server/commit/98914ade) Add kafka connector webhook apitypes (#90) +- [1184db7a](https://github.com/kubedb/webhook-server/commit/1184db7a) Fix solr webhook +- [2a84cedb](https://github.com/kubedb/webhook-server/commit/2a84cedb) Prepare for release v0.17.0-beta.1 (#89) +- [bb4a5c22](https://github.com/kubedb/webhook-server/commit/bb4a5c22) Add kafka connect-cluster (#87) +- [c46c6662](https://github.com/kubedb/webhook-server/commit/c46c6662) Add new Database support (#88) +- [c6387e9e](https://github.com/kubedb/webhook-server/commit/c6387e9e) Set default kubebuilder client for autoscaler (#86) +- [14c07899](https://github.com/kubedb/webhook-server/commit/14c07899) Incorporate apimachinery (#85) +- [266c79a0](https://github.com/kubedb/webhook-server/commit/266c79a0) Add kafka ops request validator (#84) +- [528b8463](https://github.com/kubedb/webhook-server/commit/528b8463) Fix webhook handlers (#83) +- [dfdeb6c3](https://github.com/kubedb/webhook-server/commit/dfdeb6c3) Prepare for release v0.17.0-beta.0 (#82) +- [bf54df2a](https://github.com/kubedb/webhook-server/commit/bf54df2a) Update deps (#81) +- [c7d17faa](https://github.com/kubedb/webhook-server/commit/c7d17faa) Update deps (#79) +- [170573b1](https://github.com/kubedb/webhook-server/commit/170573b1) Use k8s 1.29 client libs (#78) + + + +## 
[kubedb/zookeeper](https://github.com/kubedb/zookeeper) + +### [v0.0.4](https://github.com/kubedb/zookeeper/releases/tag/v0.0.4) + +- [7347527](https://github.com/kubedb/zookeeper/commit/7347527) Prepare for release v0.0.4 (#8) + + + + diff --git a/content/docs/v2024.1.31/CHANGELOG-v2024.1.7-beta.0.md b/content/docs/v2024.1.31/CHANGELOG-v2024.1.7-beta.0.md new file mode 100644 index 0000000000..4b6d7d10e1 --- /dev/null +++ b/content/docs/v2024.1.31/CHANGELOG-v2024.1.7-beta.0.md @@ -0,0 +1,529 @@ +--- +title: Changelog | KubeDB +description: Changelog +menu: + docs_v2024.1.31: + identifier: changelog-kubedb-v2024.1.7-beta.0 + name: Changelog-v2024.1.7-beta.0 + parent: welcome + weight: 20240107 +product_name: kubedb +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/changelog-v2024.1.7-beta.0/ +aliases: +- /docs/v2024.1.31/CHANGELOG-v2024.1.7-beta.0/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# KubeDB v2024.1.7-beta.0 (2024-01-08) + + +## [kubedb/apimachinery](https://github.com/kubedb/apimachinery) + +### [v0.41.0-beta.0](https://github.com/kubedb/apimachinery/releases/tag/v0.41.0-beta.0) + +- [45cbf75e](https://github.com/kubedb/apimachinery/commit/45cbf75e3) Update deps +- [dc224c1a](https://github.com/kubedb/apimachinery/commit/dc224c1a1) Remove crd informer (#1102) +- [87c402a1](https://github.com/kubedb/apimachinery/commit/87c402a1a) Remove discovery.ResourceMapper (#1101) +- [a1d475ce](https://github.com/kubedb/apimachinery/commit/a1d475ceb) Replace deprecated PollImmediate (#1100) +- [75db4a37](https://github.com/kubedb/apimachinery/commit/75db4a378) Add ConfigureOpenAPI helper (#1099) +- [83be295b](https://github.com/kubedb/apimachinery/commit/83be295b0) update sidekick deps +- [032b2721](https://github.com/kubedb/apimachinery/commit/032b27211) Fix linter +- [389a934c](https://github.com/kubedb/apimachinery/commit/389a934c7) Use k8s 1.29 client libs (#1093) + + + +## [kubedb/autoscaler](https://github.com/kubedb/autoscaler) + +### [v0.26.0-beta.0](https://github.com/kubedb/autoscaler/releases/tag/v0.26.0-beta.0) + + + + +## [kubedb/cli](https://github.com/kubedb/cli) + +### [v0.41.0-beta.0](https://github.com/kubedb/cli/releases/tag/v0.41.0-beta.0) + +- [c0165e83](https://github.com/kubedb/cli/commit/c0165e83) Prepare for release v0.41.0-beta.0 (#747) +- [d9c905e5](https://github.com/kubedb/cli/commit/d9c905e5) Update deps (#746) +- [bc415a1d](https://github.com/kubedb/cli/commit/bc415a1d) Update deps (#745) + + + +## [kubedb/dashboard](https://github.com/kubedb/dashboard) + +### [v0.17.0-beta.0](https://github.com/kubedb/dashboard/releases/tag/v0.17.0-beta.0) + + + + +## [kubedb/elasticsearch](https://github.com/kubedb/elasticsearch) + +### [v0.41.0-beta.0](https://github.com/kubedb/elasticsearch/releases/tag/v0.41.0-beta.0) + +- [3ab4d77d](https://github.com/kubedb/elasticsearch/commit/3ab4d77d2) Prepare for release v0.41.0-beta.0 (#694) +- [c38c61cb](https://github.com/kubedb/elasticsearch/commit/c38c61cbc) Dynamically start crd controller (#693) +- [6a798d30](https://github.com/kubedb/elasticsearch/commit/6a798d309) Update deps (#692) +- [bdf034a4](https://github.com/kubedb/elasticsearch/commit/bdf034a49) Update deps (#691) +- [ea22eecb](https://github.com/kubedb/elasticsearch/commit/ea22eecb2) Add openapi configuration for webhook server (#690) +- 
[b97636cd](https://github.com/kubedb/elasticsearch/commit/b97636cd1) Update lint command +- [0221ac14](https://github.com/kubedb/elasticsearch/commit/0221ac14e) Update deps +- [b4cb8d60](https://github.com/kubedb/elasticsearch/commit/b4cb8d603) Use k8s 1.29 client libs (#689) + + + +## [kubedb/elasticsearch-restic-plugin](https://github.com/kubedb/elasticsearch-restic-plugin) + +### [v0.4.0-beta.0](https://github.com/kubedb/elasticsearch-restic-plugin/releases/tag/v0.4.0-beta.0) + +- [5e9aef5](https://github.com/kubedb/elasticsearch-restic-plugin/commit/5e9aef5) Prepare for release v0.4.0-beta.0 (#15) +- [2fdcafa](https://github.com/kubedb/elasticsearch-restic-plugin/commit/2fdcafa) Use k8s 1.29 client libs (#14) + + + +## [kubedb/installer](https://github.com/kubedb/installer) + +### [v2024.1.7-beta.0](https://github.com/kubedb/installer/releases/tag/v2024.1.7-beta.0) + +- [45c11e3e](https://github.com/kubedb/installer/commit/45c11e3e) Prepare for release v2024.1.7-beta.0 (#773) +- [d8634da5](https://github.com/kubedb/installer/commit/d8634da5) Fix CI +- [b8e850cc](https://github.com/kubedb/installer/commit/b8e850cc) Fix linter +- [69dff8a6](https://github.com/kubedb/installer/commit/69dff8a6) Selectively disable feature gates (#771) +- [0f925055](https://github.com/kubedb/installer/commit/0f925055) Use common feature gates across charts (#770) +- [4856a14e](https://github.com/kubedb/installer/commit/4856a14e) Update crds for kubedb/apimachinery@87c402a1 (#768) +- [570bfc35](https://github.com/kubedb/installer/commit/570bfc35) Configure crd-manager features (#769) +- [91a890d3](https://github.com/kubedb/installer/commit/91a890d3) Add crd-manager chart (#767) +- [bd03b2b4](https://github.com/kubedb/installer/commit/bd03b2b4) Require kubernetes 1.20+ + + + +## [kubedb/kafka](https://github.com/kubedb/kafka) + +### [v0.12.0-beta.0](https://github.com/kubedb/kafka/releases/tag/v0.12.0-beta.0) + +- [f9350578](https://github.com/kubedb/kafka/commit/f9350578) Prepare for release v0.12.0-beta.0 (#62) +- [692f2bef](https://github.com/kubedb/kafka/commit/692f2bef) Dynamically start crd controller (#61) +- [a50dc8b4](https://github.com/kubedb/kafka/commit/a50dc8b4) Update deps (#60) +- [7ff28ed7](https://github.com/kubedb/kafka/commit/7ff28ed7) Update deps (#59) +- [16130571](https://github.com/kubedb/kafka/commit/16130571) Add openapi configuration for webhook server (#58) +- [cc465de9](https://github.com/kubedb/kafka/commit/cc465de9) Use k8s 1.29 client libs (#57) + + + +## [kubedb/kubedb-manifest-plugin](https://github.com/kubedb/kubedb-manifest-plugin) + +### [v0.4.0-beta.0](https://github.com/kubedb/kubedb-manifest-plugin/releases/tag/v0.4.0-beta.0) + +- [c315615](https://github.com/kubedb/kubedb-manifest-plugin/commit/c315615) Prepare for release v0.4.0-beta.0 (#36) +- [5ce328d](https://github.com/kubedb/kubedb-manifest-plugin/commit/5ce328d) Use k8s 1.29 client libs (#34) + + + +## [kubedb/mariadb](https://github.com/kubedb/mariadb) + +### [v0.25.0-beta.0](https://github.com/kubedb/mariadb/releases/tag/v0.25.0-beta.0) + +- [b93ddce3](https://github.com/kubedb/mariadb/commit/b93ddce3d) Prepare for release v0.25.0-beta.0 (#247) +- [8099af6d](https://github.com/kubedb/mariadb/commit/8099af6d9) Dynamically start crd controller (#246) +- [0a9dd9e0](https://github.com/kubedb/mariadb/commit/0a9dd9e03) Update deps (#245) +- [5c548629](https://github.com/kubedb/mariadb/commit/5c548629e) Update deps (#244) +- [0f9ea4f2](https://github.com/kubedb/mariadb/commit/0f9ea4f20) Update deps +- 
[89641d3c](https://github.com/kubedb/mariadb/commit/89641d3c7) Use k8s 1.29 client libs (#242) + + + +## [kubedb/mariadb-archiver](https://github.com/kubedb/mariadb-archiver) + +### [v0.1.0-beta.0](https://github.com/kubedb/mariadb-archiver/releases/tag/v0.1.0-beta.0) + +- [8c8e09a](https://github.com/kubedb/mariadb-archiver/commit/8c8e09a) Prepare for release v0.1.0-beta.0 (#4) +- [90ae04c](https://github.com/kubedb/mariadb-archiver/commit/90ae04c) Use k8s 1.29 client libs (#3) +- [b3067c8](https://github.com/kubedb/mariadb-archiver/commit/b3067c8) Fix binlog command +- [5cc0b6a](https://github.com/kubedb/mariadb-archiver/commit/5cc0b6a) Fix release workflow +- [910b7ce](https://github.com/kubedb/mariadb-archiver/commit/910b7ce) Prepare for release v0.1.0 (#1) +- [3801668](https://github.com/kubedb/mariadb-archiver/commit/3801668) mysql -> mariadb +- [4e905fb](https://github.com/kubedb/mariadb-archiver/commit/4e905fb) Implement new algorithm for archiver and restorer (#5) +- [22701c8](https://github.com/kubedb/mariadb-archiver/commit/22701c8) Fix 5.7.x build +- [6da2b1c](https://github.com/kubedb/mariadb-archiver/commit/6da2b1c) Update build matrix +- [e2f6244](https://github.com/kubedb/mariadb-archiver/commit/e2f6244) Use separate dockerfile per mysql version (#9) +- [e800623](https://github.com/kubedb/mariadb-archiver/commit/e800623) Prepare for release v0.2.0 (#8) +- [b9f6ec5](https://github.com/kubedb/mariadb-archiver/commit/b9f6ec5) Install mysqlbinlog (#7) +- [c46d991](https://github.com/kubedb/mariadb-archiver/commit/c46d991) Use appscode-images as base image (#6) +- [721eaa8](https://github.com/kubedb/mariadb-archiver/commit/721eaa8) Prepare for release v0.1.0 (#4) +- [8c65d14](https://github.com/kubedb/mariadb-archiver/commit/8c65d14) Prepare for release v0.1.0-rc.1 (#3) +- [f79286a](https://github.com/kubedb/mariadb-archiver/commit/f79286a) Prepare for release v0.1.0-rc.0 (#2) +- [dcd2e30](https://github.com/kubedb/mariadb-archiver/commit/dcd2e30) Fix wal-g binary +- [6c20a4a](https://github.com/kubedb/mariadb-archiver/commit/6c20a4a) Fix build +- [f034e7b](https://github.com/kubedb/mariadb-archiver/commit/f034e7b) Add build script (#1) + + + +## [kubedb/mariadb-coordinator](https://github.com/kubedb/mariadb-coordinator) + +### [v0.21.0-beta.0](https://github.com/kubedb/mariadb-coordinator/releases/tag/v0.21.0-beta.0) + +- [28677618](https://github.com/kubedb/mariadb-coordinator/commit/28677618) Prepare for release v0.21.0-beta.0 (#100) +- [655a2c66](https://github.com/kubedb/mariadb-coordinator/commit/655a2c66) Update deps (#99) +- [ef206cfe](https://github.com/kubedb/mariadb-coordinator/commit/ef206cfe) Update deps (#98) +- [ef72c98b](https://github.com/kubedb/mariadb-coordinator/commit/ef72c98b) Use k8s 1.29 client libs (#97) + + + +## [kubedb/mariadb-csi-snapshotter-plugin](https://github.com/kubedb/mariadb-csi-snapshotter-plugin) + +### [v0.1.0-beta.0](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/releases/tag/v0.1.0-beta.0) + +- [09f68b7](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/09f68b7) Prepare for release v0.1.0-beta.0 (#4) +- [7407444](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/7407444) Use k8s 1.29 client libs (#3) +- [933e138](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/933e138) Prepare for release v0.1.0 (#2) +- [5d38f94](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/5d38f94) Enable GH actions +-
[2a97178](https://github.com/kubedb/mariadb-csi-snapshotter-plugin/commit/2a97178) Replace mysql with mariadb + + + +## [kubedb/memcached](https://github.com/kubedb/memcached) + +### [v0.34.0-beta.0](https://github.com/kubedb/memcached/releases/tag/v0.34.0-beta.0) + +- [6fe1686a](https://github.com/kubedb/memcached/commit/6fe1686a) Prepare for release v0.34.0-beta.0 (#416) +- [1cfb0544](https://github.com/kubedb/memcached/commit/1cfb0544) Dynamically start crd controller (#415) +- [171faff2](https://github.com/kubedb/memcached/commit/171faff2) Update deps (#414) +- [639495c7](https://github.com/kubedb/memcached/commit/639495c7) Update deps (#413) +- [223d295a](https://github.com/kubedb/memcached/commit/223d295a) Use k8s 1.29 client libs (#412) + + + +## [kubedb/mongodb](https://github.com/kubedb/mongodb) + +### [v0.34.0-beta.0](https://github.com/kubedb/mongodb/releases/tag/v0.34.0-beta.0) + +- [7ff67238](https://github.com/kubedb/mongodb/commit/7ff672382) Prepare for release v0.34.0-beta.0 (#600) +- [beca63a4](https://github.com/kubedb/mongodb/commit/beca63a48) Dynamically start crd controller (#599) +- [17d90616](https://github.com/kubedb/mongodb/commit/17d90616d) Update deps (#598) +- [bc25ca00](https://github.com/kubedb/mongodb/commit/bc25ca001) Update deps (#597) +- [4ce5a94a](https://github.com/kubedb/mongodb/commit/4ce5a94a4) Configure openapi for webhook server (#596) +- [8d8206db](https://github.com/kubedb/mongodb/commit/8d8206db3) Update ci versions +- [bfdd519f](https://github.com/kubedb/mongodb/commit/bfdd519fc) Update deps +- [01a7c268](https://github.com/kubedb/mongodb/commit/01a7c2685) Use k8s 1.29 client libs (#594) + + + +## [kubedb/mongodb-csi-snapshotter-plugin](https://github.com/kubedb/mongodb-csi-snapshotter-plugin) + +### [v0.2.0-beta.0](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.0) + +- [ef74421](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/ef74421) Prepare for release v0.2.0-beta.0 (#9) +- [c2c9bd4](https://github.com/kubedb/mongodb-csi-snapshotter-plugin/commit/c2c9bd4) Use k8s 1.29 client libs (#8) + + + +## [kubedb/mongodb-restic-plugin](https://github.com/kubedb/mongodb-restic-plugin) + +### [v0.4.0-beta.0](https://github.com/kubedb/mongodb-restic-plugin/releases/tag/v0.4.0-beta.0) + +- [4f0b021](https://github.com/kubedb/mongodb-restic-plugin/commit/4f0b021) Prepare for release v0.4.0-beta.0 (#20) +- [91ee7c0](https://github.com/kubedb/mongodb-restic-plugin/commit/91ee7c0) Use k8s 1.29 client libs (#19) + + + +## [kubedb/mysql](https://github.com/kubedb/mysql) + +### [v0.34.0-beta.0](https://github.com/kubedb/mysql/releases/tag/v0.34.0-beta.0) + +- [354f6f3e](https://github.com/kubedb/mysql/commit/354f6f3e1) Prepare for release v0.34.0-beta.0 (#593) +- [01498d02](https://github.com/kubedb/mysql/commit/01498d025) Dynamically start crd controller (#592) +- [e68015cf](https://github.com/kubedb/mysql/commit/e68015cfd) Update deps (#591) +- [67029acc](https://github.com/kubedb/mysql/commit/67029acc9) Update deps (#590) +- [87d2de4a](https://github.com/kubedb/mysql/commit/87d2de4a1) Include kubestash catalog chart in makefile (#588) +- [e5874ffb](https://github.com/kubedb/mysql/commit/e5874ffb7) Add openapi configuration for webhook server (#589) +- [977d3cd3](https://github.com/kubedb/mysql/commit/977d3cd38) Update deps +- [3df86853](https://github.com/kubedb/mysql/commit/3df868533) Use k8s 1.29 client libs (#586) +- [d159ad05](https://github.com/kubedb/mysql/commit/d159ad052) Ensure MySQLArchiver crd (#585) 
+ + + +## [kubedb/mysql-archiver](https://github.com/kubedb/mysql-archiver) + +### [v0.2.0-beta.0](https://github.com/kubedb/mysql-archiver/releases/tag/v0.2.0-beta.0) + +- [5833776](https://github.com/kubedb/mysql-archiver/commit/5833776) Prepare for release v0.2.0-beta.0 (#12) +- [f3e68b2](https://github.com/kubedb/mysql-archiver/commit/f3e68b2) Use k8s 1.29 client libs (#11) + + + +## [kubedb/mysql-coordinator](https://github.com/kubedb/mysql-coordinator) + +### [v0.19.0-beta.0](https://github.com/kubedb/mysql-coordinator/releases/tag/v0.19.0-beta.0) + +- [e0cc149f](https://github.com/kubedb/mysql-coordinator/commit/e0cc149f) Prepare for release v0.19.0-beta.0 (#97) +- [67aeb229](https://github.com/kubedb/mysql-coordinator/commit/67aeb229) Update deps (#96) +- [2fa4423f](https://github.com/kubedb/mysql-coordinator/commit/2fa4423f) Update deps (#95) +- [b0735769](https://github.com/kubedb/mysql-coordinator/commit/b0735769) Use k8s 1.29 client libs (#94) + + + +## [kubedb/mysql-csi-snapshotter-plugin](https://github.com/kubedb/mysql-csi-snapshotter-plugin) + +### [v0.2.0-beta.0](https://github.com/kubedb/mysql-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.0) + +- [d285eff](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/d285eff) Prepare for release v0.2.0-beta.0 (#4) +- [7a46441](https://github.com/kubedb/mysql-csi-snapshotter-plugin/commit/7a46441) Use k8s 1.29 client libs (#2) + + + +## [kubedb/mysql-restic-plugin](https://github.com/kubedb/mysql-restic-plugin) + +### [v0.4.0-beta.0](https://github.com/kubedb/mysql-restic-plugin/releases/tag/v0.4.0-beta.0) + +- [742d2ce](https://github.com/kubedb/mysql-restic-plugin/commit/742d2ce) Prepare for release v0.4.0-beta.0 (#19) +- [0402847](https://github.com/kubedb/mysql-restic-plugin/commit/0402847) Use k8s 1.29 client libs (#18) + + + +## [kubedb/mysql-router-init](https://github.com/kubedb/mysql-router-init) + +### [v0.19.0-beta.0](https://github.com/kubedb/mysql-router-init/releases/tag/v0.19.0-beta.0) + +- [85f8c6f](https://github.com/kubedb/mysql-router-init/commit/85f8c6f) Update deps (#38) +- [7dd201c](https://github.com/kubedb/mysql-router-init/commit/7dd201c) Use k8s 1.29 client libs (#37) + + + +## [kubedb/ops-manager](https://github.com/kubedb/ops-manager) + +### [v0.28.0-beta.0](https://github.com/kubedb/ops-manager/releases/tag/v0.28.0-beta.0) + + + + +## [kubedb/percona-xtradb](https://github.com/kubedb/percona-xtradb) + +### [v0.28.0-beta.0](https://github.com/kubedb/percona-xtradb/releases/tag/v0.28.0-beta.0) + +- [0ceb3028](https://github.com/kubedb/percona-xtradb/commit/0ceb30284) Prepare for release v0.28.0-beta.0 (#346) +- [e7d35606](https://github.com/kubedb/percona-xtradb/commit/e7d356062) Dynamically start crd controller (#345) +- [5d07b565](https://github.com/kubedb/percona-xtradb/commit/5d07b5655) Update deps (#344) +- [1a639f84](https://github.com/kubedb/percona-xtradb/commit/1a639f840) Update deps (#343) +- [4f8b24ab](https://github.com/kubedb/percona-xtradb/commit/4f8b24aba) Update deps +- [e5254020](https://github.com/kubedb/percona-xtradb/commit/e52540202) Use k8s 1.29 client libs (#341) + + + +## [kubedb/percona-xtradb-coordinator](https://github.com/kubedb/percona-xtradb-coordinator) + +### [v0.14.0-beta.0](https://github.com/kubedb/percona-xtradb-coordinator/releases/tag/v0.14.0-beta.0) + +- [963756eb](https://github.com/kubedb/percona-xtradb-coordinator/commit/963756eb) Prepare for release v0.14.0-beta.0 (#57) +- [5489bb8c](https://github.com/kubedb/percona-xtradb-coordinator/commit/5489bb8c) 
Update deps (#56) +- [a8424e18](https://github.com/kubedb/percona-xtradb-coordinator/commit/a8424e18) Update deps (#55) +- [ee4add86](https://github.com/kubedb/percona-xtradb-coordinator/commit/ee4add86) Use k8s 1.29 client libs (#54) + + + +## [kubedb/pg-coordinator](https://github.com/kubedb/pg-coordinator) + +### [v0.25.0-beta.0](https://github.com/kubedb/pg-coordinator/releases/tag/v0.25.0-beta.0) + +- [30973540](https://github.com/kubedb/pg-coordinator/commit/30973540) Prepare for release v0.25.0-beta.0 (#147) +- [7b84e198](https://github.com/kubedb/pg-coordinator/commit/7b84e198) Update deps (#146) +- [f1bfe818](https://github.com/kubedb/pg-coordinator/commit/f1bfe818) Update deps (#145) +- [1de05a6e](https://github.com/kubedb/pg-coordinator/commit/1de05a6e) Use k8s 1.29 client libs (#144) + + + +## [kubedb/pgbouncer](https://github.com/kubedb/pgbouncer) + +### [v0.28.0-beta.0](https://github.com/kubedb/pgbouncer/releases/tag/v0.28.0-beta.0) + +- [3c6bc335](https://github.com/kubedb/pgbouncer/commit/3c6bc335) Prepare for release v0.28.0-beta.0 (#310) +- [73c5f6fb](https://github.com/kubedb/pgbouncer/commit/73c5f6fb) Dynamically start crd controller (#309) +- [f9edc2cd](https://github.com/kubedb/pgbouncer/commit/f9edc2cd) Update deps (#308) +- [d54251c0](https://github.com/kubedb/pgbouncer/commit/d54251c0) Update deps (#307) +- [de40a35e](https://github.com/kubedb/pgbouncer/commit/de40a35e) Update deps +- [8c325577](https://github.com/kubedb/pgbouncer/commit/8c325577) Use k8s 1.29 client libs (#305) + + + +## [kubedb/postgres](https://github.com/kubedb/postgres) + +### [v0.41.0-beta.0](https://github.com/kubedb/postgres/releases/tag/v0.41.0-beta.0) + +- [d1bd909b](https://github.com/kubedb/postgres/commit/d1bd909ba) Prepare for release v0.41.0-beta.0 (#703) +- [5e8101e3](https://github.com/kubedb/postgres/commit/5e8101e39) Dynamically start crd controller (#702) +- [47dbbff5](https://github.com/kubedb/postgres/commit/47dbbff53) Update deps (#701) +- [84f99c58](https://github.com/kubedb/postgres/commit/84f99c58b) Disable fairness api +- [a715765d](https://github.com/kubedb/postgres/commit/a715765dc) Set --restricted=false for ci tests (#700) +- [fe9af597](https://github.com/kubedb/postgres/commit/fe9af5977) Add Postgres test fix (#699) +- [8bae8886](https://github.com/kubedb/postgres/commit/8bae88860) Configure openapi for webhook server (#698) +- [9ce2efce](https://github.com/kubedb/postgres/commit/9ce2efce5) Update deps +- [24e4e9ca](https://github.com/kubedb/postgres/commit/24e4e9ca5) Use k8s 1.29 client libs (#697) + + + +## [kubedb/postgres-archiver](https://github.com/kubedb/postgres-archiver) + +### [v0.2.0-beta.0](https://github.com/kubedb/postgres-archiver/releases/tag/v0.2.0-beta.0) + +- [a9cbe08](https://github.com/kubedb/postgres-archiver/commit/a9cbe08) Prepare for release v0.2.0-beta.0 (#16) +- [183e97c](https://github.com/kubedb/postgres-archiver/commit/183e97c) Use k8s 1.29 client libs (#15) + + + +## [kubedb/postgres-csi-snapshotter-plugin](https://github.com/kubedb/postgres-csi-snapshotter-plugin) + +### [v0.2.0-beta.0](https://github.com/kubedb/postgres-csi-snapshotter-plugin/releases/tag/v0.2.0-beta.0) + +- [f0e546a](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/f0e546a) Prepare for release v0.2.0-beta.0 (#12) +- [aae7294](https://github.com/kubedb/postgres-csi-snapshotter-plugin/commit/aae7294) Use k8s 1.29 client libs (#11) + + + +## [kubedb/postgres-restic-plugin](https://github.com/kubedb/postgres-restic-plugin) + +### 
[v0.4.0-beta.0](https://github.com/kubedb/postgres-restic-plugin/releases/tag/v0.4.0-beta.0) + + + + +## [kubedb/provider-aws](https://github.com/kubedb/provider-aws) + +### [v0.3.0-beta.0](https://github.com/kubedb/provider-aws/releases/tag/v0.3.0-beta.0) + + + + +## [kubedb/provider-azure](https://github.com/kubedb/provider-azure) + +### [v0.3.0-beta.0](https://github.com/kubedb/provider-azure/releases/tag/v0.3.0-beta.0) + + + + +## [kubedb/provider-gcp](https://github.com/kubedb/provider-gcp) + +### [v0.3.0-beta.0](https://github.com/kubedb/provider-gcp/releases/tag/v0.3.0-beta.0) + + + + +## [kubedb/provisioner](https://github.com/kubedb/provisioner) + +### [v0.41.0-beta.0](https://github.com/kubedb/provisioner/releases/tag/v0.41.0-beta.0) + + + + +## [kubedb/proxysql](https://github.com/kubedb/proxysql) + +### [v0.28.0-beta.0](https://github.com/kubedb/proxysql/releases/tag/v0.28.0-beta.0) + +- [c0805050](https://github.com/kubedb/proxysql/commit/c0805050e) Prepare for release v0.28.0-beta.0 (#324) +- [88ef1f1d](https://github.com/kubedb/proxysql/commit/88ef1f1de) Dynamically start crd controller (#323) +- [8c0a96ac](https://github.com/kubedb/proxysql/commit/8c0a96ac7) Update deps (#322) +- [e96797e4](https://github.com/kubedb/proxysql/commit/e96797e48) Update deps (#321) +- [e8fd529b](https://github.com/kubedb/proxysql/commit/e8fd529b2) Update deps +- [b2e9a1df](https://github.com/kubedb/proxysql/commit/b2e9a1df8) Use k8s 1.29 client libs (#319) + + + +## [kubedb/redis](https://github.com/kubedb/redis) + +### [v0.34.0-beta.0](https://github.com/kubedb/redis/releases/tag/v0.34.0-beta.0) + +- [7e844ab1](https://github.com/kubedb/redis/commit/7e844ab1) Prepare for release v0.34.0-beta.0 (#513) +- [6318d04f](https://github.com/kubedb/redis/commit/6318d04f) Dynamically start crd controller (#512) +- [92b8a3a9](https://github.com/kubedb/redis/commit/92b8a3a9) Update deps (#511) +- [f0fb4c69](https://github.com/kubedb/redis/commit/f0fb4c69) Update deps (#510) +- [c99d9498](https://github.com/kubedb/redis/commit/c99d9498) Update deps +- [90299544](https://github.com/kubedb/redis/commit/90299544) Use k8s 1.29 client libs (#508) +- [fced7010](https://github.com/kubedb/redis/commit/fced7010) Update redis versions in nightly tests (#507) + + + +## [kubedb/redis-coordinator](https://github.com/kubedb/redis-coordinator) + +### [v0.20.0-beta.0](https://github.com/kubedb/redis-coordinator/releases/tag/v0.20.0-beta.0) + +- [4c36accd](https://github.com/kubedb/redis-coordinator/commit/4c36accd) Prepare for release v0.20.0-beta.0 (#88) +- [c8658380](https://github.com/kubedb/redis-coordinator/commit/c8658380) Update deps (#87) +- [c99c2e9b](https://github.com/kubedb/redis-coordinator/commit/c99c2e9b) Update deps (#86) +- [22c7beb4](https://github.com/kubedb/redis-coordinator/commit/22c7beb4) Use k8s 1.29 client libs (#85) + + + +## [kubedb/redis-restic-plugin](https://github.com/kubedb/redis-restic-plugin) + +### [v0.4.0-beta.0](https://github.com/kubedb/redis-restic-plugin/releases/tag/v0.4.0-beta.0) + +- [da2796a](https://github.com/kubedb/redis-restic-plugin/commit/da2796a) Prepare for release v0.4.0-beta.0 (#16) +- [0553c6f](https://github.com/kubedb/redis-restic-plugin/commit/0553c6f) Use k8s 1.29 client libs (#15) + + + +## [kubedb/replication-mode-detector](https://github.com/kubedb/replication-mode-detector) + +### [v0.28.0-beta.0](https://github.com/kubedb/replication-mode-detector/releases/tag/v0.28.0-beta.0) + +- [572668c8](https://github.com/kubedb/replication-mode-detector/commit/572668c8) 
Prepare for release v0.28.0-beta.0 (#252) +- [39ba3ce0](https://github.com/kubedb/replication-mode-detector/commit/39ba3ce0) Update deps (#251) +- [d3d2ad96](https://github.com/kubedb/replication-mode-detector/commit/d3d2ad96) Update deps (#250) +- [633d7b76](https://github.com/kubedb/replication-mode-detector/commit/633d7b76) Use k8s 1.29 client libs (#249) + + + +## [kubedb/schema-manager](https://github.com/kubedb/schema-manager) + +### [v0.17.0-beta.0](https://github.com/kubedb/schema-manager/releases/tag/v0.17.0-beta.0) + + + + +## [kubedb/tests](https://github.com/kubedb/tests) + +### [v0.26.0-beta.0](https://github.com/kubedb/tests/releases/tag/v0.26.0-beta.0) + +- [5f3fabd7](https://github.com/kubedb/tests/commit/5f3fabd7) Prepare for release v0.26.0-beta.0 (#284) +- [27a24dff](https://github.com/kubedb/tests/commit/27a24dff) Update deps (#283) +- [b9021186](https://github.com/kubedb/tests/commit/b9021186) Update deps (#282) +- [589ca51c](https://github.com/kubedb/tests/commit/589ca51c) mongodb vertical scaling fix (#281) +- [feaa0f6a](https://github.com/kubedb/tests/commit/feaa0f6a) Add `--restricted` flag (#280) +- [2423ee38](https://github.com/kubedb/tests/commit/2423ee38) Fix linter errors +- [dcd64c7c](https://github.com/kubedb/tests/commit/dcd64c7c) Update lint command +- [c3ef1fa4](https://github.com/kubedb/tests/commit/c3ef1fa4) Use k8s 1.29 client libs (#279) + + + +## [kubedb/ui-server](https://github.com/kubedb/ui-server) + +### [v0.17.0-beta.0](https://github.com/kubedb/ui-server/releases/tag/v0.17.0-beta.0) + +- [6e8f80dc](https://github.com/kubedb/ui-server/commit/6e8f80dc) Prepare for release v0.17.0-beta.0 (#104) +- [6a05721f](https://github.com/kubedb/ui-server/commit/6a05721f) Update deps (#103) +- [3c24fd5e](https://github.com/kubedb/ui-server/commit/3c24fd5e) Update deps (#102) +- [25e29443](https://github.com/kubedb/ui-server/commit/25e29443) Use k8s 1.29 client libs (#101) + + + +## [kubedb/webhook-server](https://github.com/kubedb/webhook-server) + +### [v0.17.0-beta.0](https://github.com/kubedb/webhook-server/releases/tag/v0.17.0-beta.0) + +- [dfdeb6c3](https://github.com/kubedb/webhook-server/commit/dfdeb6c3) Prepare for release v0.17.0-beta.0 (#82) +- [bf54df2a](https://github.com/kubedb/webhook-server/commit/bf54df2a) Update deps (#81) +- [c7d17faa](https://github.com/kubedb/webhook-server/commit/c7d17faa) Update deps (#79) +- [170573b1](https://github.com/kubedb/webhook-server/commit/170573b1) Use k8s 1.29 client libs (#78) + + + + diff --git a/content/docs/v2024.1.31/CONTRIBUTING.md b/content/docs/v2024.1.31/CONTRIBUTING.md new file mode 100644 index 0000000000..af274a4159 --- /dev/null +++ b/content/docs/v2024.1.31/CONTRIBUTING.md @@ -0,0 +1,42 @@ +--- +title: Contributing | KubeDB +description: Contributing +menu: + docs_v2024.1.31: + identifier: contributing-cli + name: Contributing + parent: welcome + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: welcome +url: /docs/v2024.1.31/welcome/contributing/ +aliases: +- /docs/v2024.1.31/CONTRIBUTING/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# Contribution Guidelines + +Want to contribute to KubeDB? + +## Getting Help + +To speak with us, please leave a message on [our website](https://appscode.com/contact/). To receive product announcements, follow us on [Twitter](https://twitter.com/KubeDB). 
+
+## Bugs/Feature request
+
+If you have found a bug with KubeDB or want to request new features, please [file an issue](https://github.com/kubedb/project/issues/new).
+
+## Spread the word
+
+If you have written a blog post or tutorial on KubeDB, please share it with us on [Twitter](https://twitter.com/KubeDB).
diff --git a/content/docs/v2024.1.31/README.md b/content/docs/v2024.1.31/README.md
new file mode 100644
index 0000000000..01d8b0acec
--- /dev/null
+++ b/content/docs/v2024.1.31/README.md
@@ -0,0 +1,41 @@
+---
+title: Welcome | KubeDB
+description: Welcome to KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: readme-cli
+    name: Readme
+    parent: welcome
+    weight: -1
+menu_name: docs_v2024.1.31
+section_menu_id: welcome
+url: /docs/v2024.1.31/welcome/
+aliases:
+- /docs/v2024.1.31/
+- /docs/v2024.1.31/README/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Welcome
+
+From here you can learn all about KubeDB's architecture and how to deploy and use KubeDB.
+
+- [Overview](/docs/v2024.1.31/overview/). Overview explains what KubeDB does and how it does it.
+
+- [Setup](/docs/v2024.1.31/setup/). Setup contains instructions for installing KubeDB on various cloud providers.
+
+- [Guides](/docs/v2024.1.31/guides/). Guides show you how to perform tasks with KubeDB.
+
+- [Reference](/docs/v2024.1.31/reference/). Detailed, exhaustive lists of command-line options, configuration options, API definitions, and procedures.
+
+We're always looking for help improving our documentation, so please don't hesitate to [file an issue](https://github.com/kubedb/project/issues/new) if you see a problem. Or better yet, submit your own [contributions](/docs/v2024.1.31/CONTRIBUTING) to help make our docs better.
diff --git a/content/docs/v2024.1.31/_index.md b/content/docs/v2024.1.31/_index.md new file mode 100644 index 0000000000..fd6495ef40 --- /dev/null +++ b/content/docs/v2024.1.31/_index.md @@ -0,0 +1,21 @@ +--- +title: KubeDB +menu: + docs_v2024.1.31: + identifier: welcome + name: Welcome + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/examples/elasticsearch/cli/elasticsearch-demo.yaml b/content/docs/v2024.1.31/examples/elasticsearch/cli/elasticsearch-demo.yaml new file mode 100644 index 0000000000..4f0e209cbd --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/cli/elasticsearch-demo.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: elasticsearch-demo + namespace: demo +spec: + version: xpack-8.11.1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/clustering/multi-node-es.yaml b/content/docs/v2024.1.31/examples/elasticsearch/clustering/multi-node-es.yaml new file mode 100644 index 0000000000..f6593e3df6 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/clustering/multi-node-es.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: multi-node-es + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/clustering/topology-es.yaml b/content/docs/v2024.1.31/examples/elasticsearch/clustering/topology-es.yaml new file mode 100644 index 0000000000..1522b9f866 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/clustering/topology-es.yaml @@ -0,0 +1,39 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: topology-es + namespace: demo +spec: + version: xpack-8.11.1 + storageType: Durable + topology: + master: + suffix: master + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + suffix: ingest + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/client-config.yml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/client-config.yml new file mode 100644 index 0000000000..9319a299a9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/client-config.yml @@ -0,0 +1,4 @@ +node: + name: es-node-client +path: + data: ["/usr/share/elasticsearch/data/elasticsearch/client-datadir"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/common-config.yml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/common-config.yml new file mode 100644 index 0000000000..a00a0f6bd9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/common-config.yml @@ -0,0 +1,2 @@ +path: + 
logs: /usr/share/elasticsearch/data/elasticsearch/common-logdir diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/data-config.yml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/data-config.yml new file mode 100644 index 0000000000..57a45dc699 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/data-config.yml @@ -0,0 +1,4 @@ +node: + name: es-node-data +path: + data: ["/usr/share/elasticsearch/data/data-datadir"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom-with-topology.yaml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom-with-topology.yaml new file mode 100644 index 0000000000..d20a480604 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom-with-topology.yaml @@ -0,0 +1,40 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: custom-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + configSecret: + name: es-custom-config + topology: + master: + suffix: master + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + suffix: ingest + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom.yaml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom.yaml new file mode 100644 index 0000000000..53e58159b9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/es-custom.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: custom-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 2 + configSecret: + name: es-custom-config + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-config/master-config.yml b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/master-config.yml new file mode 100644 index 0000000000..d28585c5b2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-config/master-config.yml @@ -0,0 +1,4 @@ +node: + name: es-node-master +path: + data: ["/usr/share/elasticsearch/data/master-datadir"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db-two.yaml b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db-two.yaml new file mode 100644 index 0000000000..f62381d39b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db-two.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: minute-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db.yaml b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db.yaml new 
file mode 100644 index 0000000000..36b246721d --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: quick-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-role.yaml b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-role.yaml new file mode 100644 index 0000000000..0291e50440 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/custom-rbac/es-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - elasticsearch-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/elasticsearch/es-overview.yaml b/content/docs/v2024.1.31/examples/elasticsearch/es-overview.yaml new file mode 100644 index 0000000000..f9543d4e02 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/es-overview.yaml @@ -0,0 +1,66 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: e1 + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 2 + enableSSL: true + authSecret: + name: e1-auth + storageType: "Durable" + storage: + storageClassName: standard + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: es-init-script + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + app: kubedb + interval: 10s + configSecret: + name: es-custom-config + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + controller: + annotations: + passMe: ToStatefulSet + spec: + schedulerName: my-scheduler + nodeSelector: + disktype: ssd + imagePullSecrets: + - name: myregistrykey + env: + - name: ES_JAVA_OPTS + value: "-Xms128m -Xmx128m" + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + serviceTemplates: + - alias: primary + metadata: + annotations: + passMe: ToService + spec: + type: NodePort + ports: + - name: http + port: 9200 + terminationPolicy: "DoNotTerminate" diff --git a/content/docs/v2024.1.31/examples/elasticsearch/initialization/recovered-es.yaml b/content/docs/v2024.1.31/examples/elasticsearch/initialization/recovered-es.yaml new file mode 100644 index 0000000000..5b9defde48 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/initialization/recovered-es.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: recovered-es + namespace: demo +spec: + version: xpack-8.11.1 + authSecret: + name: instant-elasticsearch-auth + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + waitForInitialRestore: true diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/common-config.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/common-config.yml new file mode 100644 index 0000000000..e094c56e75 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/common-config.yml @@ -0,0 +1,2 @@ 
+xpack.security.enabled: false +searchguard.restapi.roles_enabled: ["sg_all_access","sg_kibana_user"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/es-kibana-demo.yaml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/es-kibana-demo.yaml new file mode 100644 index 0000000000..0eefcc1dd3 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/es-kibana-demo.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-kibana-demo + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 1 + authSecret: + name: es-auth + configSecret: + name: es-custom-config + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana-deployment.yaml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana-deployment.yaml new file mode 100644 index 0000000000..5936a73342 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana-deployment.yaml @@ -0,0 +1,25 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: kibana + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: kibana + template: + metadata: + labels: + app: kibana + spec: + containers: + - name: kibana + image: kubedb/kibana:6.3.0 + volumeMounts: + - name: kibana-config + mountPath: /usr/share/kibana/config + volumes: + - name: kibana-config + configMap: + name: kibana-config diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana.yml new file mode 100644 index 0000000000..0271b4ae50 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/kibana.yml @@ -0,0 +1,9 @@ +xpack.security.enabled: false +server.host: 0.0.0.0 + +elasticsearch.url: "http://es-kibana-demo.demo.svc:9200" +elasticsearch.username: "kibanauser" +elasticsearch.password: "kibana@secret" + +searchguard.auth.type: "basicauth" +searchguard.cookie.secure: false diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_action_groups.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_action_groups.yml new file mode 100644 index 0000000000..cd9aedd575 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_action_groups.yml @@ -0,0 +1,36 @@ +UNLIMITED: + readonly: true + permissions: + - "*" + +###### INDEX LEVEL ###### + +INDICES_ALL: + readonly: true + permissions: + - "indices:*" + +###### CLUSTER LEVEL ###### +CLUSTER_MONITOR: + readonly: true + permissions: + - "cluster:monitor/*" + +CLUSTER_COMPOSITE_OPS_RO: + readonly: true + permissions: + - "indices:data/read/mget" + - "indices:data/read/msearch" + - "indices:data/read/mtv" + - "indices:data/read/coordinate-msearch*" + - "indices:admin/aliases/exists*" + - "indices:admin/aliases/get*" + - "indices:data/read/scroll" + +CLUSTER_COMPOSITE_OPS: + readonly: true + permissions: + - "indices:data/write/bulk" + - "indices:admin/aliases*" + - "indices:data/write/reindex" + - CLUSTER_COMPOSITE_OPS_RO diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_config.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_config.yml new file mode 100644 index 0000000000..614a8b5d17 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_config.yml @@ -0,0 +1,20 @@ +searchguard: + dynamic: + authc: + kibana_auth_domain: + enabled: true + order: 0 + http_authenticator: 
+ type: basic + challenge: false + authentication_backend: + type: internal + basic_internal_auth_domain: + http_enabled: true + transport_enabled: true + order: 1 + http_authenticator: + type: basic + challenge: true + authentication_backend: + type: internal diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_internal_users.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_internal_users.yml new file mode 100644 index 0000000000..8f97baae2a --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_internal_users.yml @@ -0,0 +1,16 @@ +# This is the internal user database +# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh + +#password is: admin@secret +admin: + readonly: true + hash: $2y$12$skma87wuFFtxtGWegeAiIeTtUH1nnOfIRZzwwhBlzXjg0DdM4gLeG + roles: + - admin + +#password is: kibana@secret +kibanauser: + readonly: true + hash: $2y$12$dk2UrPTjhgCRbFOm/gThX.aJ47yH0zyQcYEuWiNiyw6NlVmeOjM7a + roles: + - kibanauser \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles.yml new file mode 100644 index 0000000000..af359fd48c --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles.yml @@ -0,0 +1,23 @@ +sg_all_access: + readonly: true + cluster: + - UNLIMITED + indices: + '*': + '*': + - UNLIMITED + tenants: + admin_tenant: RW + +# For the kibana user +sg_kibana_user: + readonly: true + cluster: + - CLUSTER_MONITOR + - CLUSTER_COMPOSITE_OPS + - cluster:admin/xpack/monitoring* + - indices:admin/template* + indices: + '*': + '*': + - INDICES_ALL \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles_mapping.yml b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles_mapping.yml new file mode 100644 index 0000000000..c156273643 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/kibana/sg_roles_mapping.yml @@ -0,0 +1,11 @@ +# In this file users, backendroles and hosts can be mapped to Search Guard roles. 
+# Permissions for Search Guard roles are configured in sg_roles.yml +sg_all_access: + readonly: true + backendroles: + - admin + +sg_kibana_user: + readonly: true + backendroles: + - kibanauser \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/monitoring/builtin-prom-es.yaml b/content/docs/v2024.1.31/examples/elasticsearch/monitoring/builtin-prom-es.yaml new file mode 100644 index 0000000000..848c15a26b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/monitoring/builtin-prom-es.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: builtin-prom-es + namespace: demo +spec: + version: xpack-8.11.1 + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/elasticsearch/monitoring/coreos-prom-es.yaml b/content/docs/v2024.1.31/examples/elasticsearch/monitoring/coreos-prom-es.yaml new file mode 100644 index 0000000000..6e0a0fe869 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/monitoring/coreos-prom-es.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: coreos-prom-es + namespace: demo +spec: + version: xpack-8.11.1 + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/elasticsearch/private-registry/private-registry.yaml b/content/docs/v2024.1.31/examples/elasticsearch/private-registry/private-registry.yaml new file mode 100644 index 0000000000..37b3c7fccc --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/private-registry/private-registry.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: searchguard-793 + namespace: demo +spec: + version: xpack-8.11.1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/elasticsearch/private-registry/pvt-elasticsearchversion.yaml b/content/docs/v2024.1.31/examples/elasticsearch/private-registry/pvt-elasticsearchversion.yaml new file mode 100644 index 0000000000..41f36983bc --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/private-registry/pvt-elasticsearchversion.yaml @@ -0,0 +1,26 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: ElasticsearchVersion +metadata: + name: xpack-8.11.1 +spec: + authPlugin: SearchGuard + db: + image: PRIVATE_REGISTRY/elasticsearch:7.9.3-searchguard + distribution: SearchGuard + exporter: + image: PRIVATE_REGISTRY/elasticsearch_exporter:1.1.0 + initContainer: + image: PRIVATE_REGISTRY/toybox:0.8.4 + yqImage: PRIVATE_REGISTRY/elasticsearch-init:7.9.3-searchguard + podSecurityPolicies: + databasePolicyName: elasticsearch-db + stash: + addon: + backupTask: + name: elasticsearch-backup-7.3.2 + params: + - name: args + value: --match=^(?![.])(?!searchguard).+ + restoreTask: + name: elasticsearch-restore-7.3.2 + version: 7.9.3 diff --git a/content/docs/v2024.1.31/examples/elasticsearch/quickstart/instant-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/quickstart/instant-elasticsearch.yaml 
new file mode 100644 index 0000000000..74d5a9f5cf --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/quickstart/instant-elasticsearch.yaml @@ -0,0 +1,9 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: instant-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Ephemeral diff --git a/content/docs/v2024.1.31/examples/elasticsearch/quickstart/quick-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/quickstart/quick-elasticsearch.yaml new file mode 100644 index 0000000000..a4770dfb8b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/quickstart/quick-elasticsearch.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-quickstart + namespace: demo +spec: + version: xpack-8.2.3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/config-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/config-elasticsearch.yaml new file mode 100644 index 0000000000..ea4bcc3d3b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/config-elasticsearch.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: config-elasticsearch + namespace: demo +spec: + version: searchguard-7.9.3 + authSecret: + name: config-elasticsearch-auth + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/es-sg-disabled.yaml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/es-sg-disabled.yaml new file mode 100644 index 0000000000..1c67e6968b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/es-sg-disabled.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-sg-disabled + namespace: demo +spec: + version: searchguard-7.9.3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-ca.ini b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-ca.ini new file mode 100644 index 0000000000..a133ab2e61 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-ca.ini @@ -0,0 +1,17 @@ +[ ca ] +default_ca = CA_default + +[ CA_default ] +private_key = root-key.pem +default_days = 1000 # how long to certify for +default_md = sha256 # use public key default MD +copy_extensions = copy # Required to copy SANs from CSR to cert + +[ req ] +prompt = no +default_bits = 4096 +distinguished_name = ca_distinguished_name + +[ ca_distinguished_name ] +O = Elasticsearch Operator +CN = KubeDB Com. 
Root CA
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-client.ini b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-client.ini
new file mode 100644
index 0000000000..f667d75158
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-client.ini
@@ -0,0 +1,18 @@
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = client_distinguished_name
+req_extensions = client_req_extensions
+
+[ client_distinguished_name ]
+O = Elasticsearch Operator
+CN = sg-elasticsearch
+
+[ client_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+DNS.2 = sg-elasticsearch.demo.svc
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-node.ini b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-node.ini
new file mode 100644
index 0000000000..6243b843a3
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-node.ini
@@ -0,0 +1,18 @@
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = node_distinguished_name
+req_extensions = node_req_extensions
+
+[ node_distinguished_name ]
+O = Elasticsearch Operator
+CN = sg-elasticsearch
+
+[ node_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+RID.1=1.2.3.4.5.5
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sgadmin.ini b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sgadmin.ini
new file mode 100644
index 0000000000..eee30065d7
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sgadmin.ini
@@ -0,0 +1,17 @@
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = sgadmin_distinguished_name
+req_extensions = sgadmin_req_extensions
+
+[ sgadmin_distinguished_name ]
+O = Elasticsearch Operator
+CN = sgadmin
+
+[ sgadmin_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sign.ini b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sign.ini
new file mode 100644
index 0000000000..8de6de8d34
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/openssl-config/openssl-sign.ini
@@ -0,0 +1,36 @@
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+base_dir = .
+certificate = $base_dir/root.pem # The CA certificate
+private_key = $base_dir/root-key.pem # The CA private key
+new_certs_dir = $base_dir # Location for new certs after signing
+database = $base_dir/index.txt # Database index file
+serial = $base_dir/serial.txt # The current serial number
+unique_subject = no # Set to 'no' to allow creation of several certificates with same subject.
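+# Editorial note (not part of the upstream example): this config is meant to be fed to
+# `openssl ca`; with index.txt and serial.txt initialized (e.g. `touch index.txt; echo 01 > serial.txt`),
+# a CSR could be signed along the lines of (file names below are placeholders):
+#   openssl ca -config openssl-sign.ini -policy signing_policy -extensions signing_req -in node-csr.pem -out node.pem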
+ +default_days = 1000 # how long to certify for +default_md = sha256 # use public key default MD +email_in_dn = no +copy_extensions = copy # Required to copy SANs from CSR to cert + +[ req ] +default_bits = 4096 +default_keyfile = root-key.pem +distinguished_name = ca_distinguished_name + +[ ca_distinguished_name ] +O = Elasticsearch Operator +CN = KubeDB Com. Root CA + +[ signing_req ] +keyUsage = digitalSignature, keyEncipherment + +[ signing_policy ] +organizationName = optional +commonName = supplied diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_action_groups.yml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_action_groups.yml new file mode 100644 index 0000000000..310e07f9a9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_action_groups.yml @@ -0,0 +1,20 @@ +UNLIMITED: + - "*" + +READ: + - "indices:data/read*" + - "indices:admin/mappings/fields/get*" + +CLUSTER_COMPOSITE_OPS_RO: + - "indices:data/read/mget" + - "indices:data/read/msearch" + - "indices:data/read/mtv" + - "indices:data/read/coordinate-msearch*" + - "indices:admin/aliases/exists*" + - "indices:admin/aliases/get*" + +CLUSTER_KUBEDB_SNAPSHOT: + - "indices:data/read/scroll*" + +INDICES_KUBEDB_SNAPSHOT: + - "indices:admin/get" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_config.yml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_config.yml new file mode 100644 index 0000000000..c464fb2d4b --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_config.yml @@ -0,0 +1,11 @@ +searchguard: + dynamic: + authc: + basic_internal_auth_domain: + enabled: true + order: 4 + http_authenticator: + type: basic + challenge: true + authentication_backend: + type: internal diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_internal_users.yml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_internal_users.yml new file mode 100644 index 0000000000..90d6f667d4 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_internal_users.yml @@ -0,0 +1,5 @@ +admin: + hash: $ADMIN_PASSWORD_HASHED + +readall: + hash: $READALL_PASSWORD_HASHED diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles.yml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles.yml new file mode 100644 index 0000000000..e28666015c --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles.yml @@ -0,0 +1,20 @@ +sg_all_access: + cluster: + - UNLIMITED + indices: + '*': + '*': + - UNLIMITED + tenants: + adm_tenant: RW + test_tenant_ro: RW + +sg_readall: + cluster: + - CLUSTER_COMPOSITE_OPS_RO + - CLUSTER_KUBEDB_SNAPSHOT + indices: + '*': + '*': + - READ + - INDICES_KUBEDB_SNAPSHOT \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles_mapping.yml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles_mapping.yml new file mode 100644 index 0000000000..8ac902a838 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-config/sg_roles_mapping.yml @@ -0,0 +1,7 @@ +sg_all_access: + users: + - admin + +sg_readall: + users: + - readall \ No newline at end of file diff --git 
a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-elasticsearch.yaml new file mode 100644 index 0000000000..d54cb81938 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/sg-elasticsearch.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: sg-elasticsearch + namespace: demo +spec: + version: searchguard-7.9.3 + enableSSL: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/search-guard/ssl-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/ssl-elasticsearch.yaml new file mode 100644 index 0000000000..7de3a3c76a --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/search-guard/ssl-elasticsearch.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: ssl-elasticsearch + namespace: demo +spec: + version: searchguard-7.9.3 + replicas: 2 + enableSSL: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/common-config.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/common-config.yaml new file mode 100644 index 0000000000..c94e701f27 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/common-config.yaml @@ -0,0 +1,10 @@ +xpack.security.enabled: false +xpack.monitoring.enabled: true +xpack.monitoring.collection.enabled: true +xpack.monitoring.exporters: + my-http-exporter: + type: http + host: ["http://127.0.0.1:9200"] + auth: + username: monitor + password: monitor@secret \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/config-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/config-elasticsearch.yaml new file mode 100644 index 0000000000..487f3924c5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/config-elasticsearch.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: config-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/custom-certificate-es-ssl.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/custom-certificate-es-ssl.yaml new file mode 100644 index 0000000000..4f296e0f00 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/custom-certificate-es-ssl.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: custom-certificate-es-ssl + namespace: demo +spec: + version: xpack-8.11.1 + enableSSL: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-mon-demo.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-mon-demo.yaml new file mode 100644 index 0000000000..12195473d2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-mon-demo.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-mon-demo + namespace: demo +spec: + version: xpack-8.11.1 + 
replicas: 1 + authSecret: + name: es-auth + configSecret: + name: es-custom-config + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-xpack-disabled.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-xpack-disabled.yaml new file mode 100644 index 0000000000..8b3b2a196e --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/es-xpack-disabled.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-xpack-disabled + namespace: demo +spec: + version: xpack-8.11.1 + disableSecurity: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/esversion-none.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/esversion-none.yaml new file mode 100644 index 0000000000..49332f2959 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/esversion-none.yaml @@ -0,0 +1,26 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: ElasticsearchVersion +metadata: + name: xpack-8.11.1 +spec: + authPlugin: SearchGuard + db: + image: kubedb/elasticsearch:7.9.3-searchguard + distribution: SearchGuard + exporter: + image: kubedb/elasticsearch_exporter:1.1.0 + initContainer: + image: kubedb/toybox:0.8.4 + yqImage: kubedb/elasticsearch-init:7.9.3-searchguard + podSecurityPolicies: + databasePolicyName: elasticsearch-db + stash: + addon: + backupTask: + name: elasticsearch-backup-7.3.2 + params: + - name: args + value: --match=^(?![.])(?!searchguard).+ + restoreTask: + name: elasticsearch-restore-7.3.2 + version: 7.9.3 diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/kibana.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/kibana.yml new file mode 100644 index 0000000000..8af03b3502 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/kibana.yml @@ -0,0 +1,13 @@ +xpack.security.enabled: false +xpack.monitoring.enabled: true +xpack.monitoring.kibana.collection.enabled: true +xpack.monitoring.ui.enabled: true + +server.host: 0.0.0.0 + +elasticsearch.url: "http://es-mon-demo.demo.svc:9200" +elasticsearch.username: "monitor" +elasticsearch.password: "monitor@secret" + +searchguard.auth.type: "basicauth" +searchguard.cookie.secure: false diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-ca.ini b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-ca.ini new file mode 100644 index 0000000000..a133ab2e61 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-ca.ini @@ -0,0 +1,17 @@ +[ ca ] +default_ca = CA_default + +[ CA_default ] +private_key = root-key.pem +default_days = 1000 # how long to certify for +default_md = sha256 # use public key default MD +copy_extensions = copy # Required to copy SANs from CSR to cert + +[ req ] +prompt = no +default_bits = 4096 +distinguished_name = ca_distinguished_name + +[ ca_distinguished_name ] +O = Elasticsearch Operator +CN = KubeDB Com. 
Root CA
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-client.ini b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-client.ini
new file mode 100644
index 0000000000..4dbfd398a4
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-client.ini
@@ -0,0 +1,18 @@
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = client_distinguished_name
+req_extensions = client_req_extensions
+
+[ client_distinguished_name ]
+O = Elasticsearch Operator
+CN = custom-certificate-es-ssl
+
+[ client_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+DNS.2 = custom-certificate-es-ssl.demo.svc
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-node.ini b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-node.ini
new file mode 100644
index 0000000000..b26de35fbc
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-node.ini
@@ -0,0 +1,18 @@
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = node_distinguished_name
+req_extensions = node_req_extensions
+
+[ node_distinguished_name ]
+O = Elasticsearch Operator
+CN = custom-certificate-es-ssl
+
+[ node_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+RID.1=1.2.3.4.5.5
diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-sign.ini b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-sign.ini
new file mode 100644
index 0000000000..8de6de8d34
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/openssl-config/openssl-sign.ini
@@ -0,0 +1,32 @@
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+base_dir = .
+certificate = $base_dir/root.pem # The CA certificate
+private_key = $base_dir/root-key.pem # The CA private key
+new_certs_dir = $base_dir # Location for new certs after signing
+database = $base_dir/index.txt # Database index file
+serial = $base_dir/serial.txt # The current serial number
+unique_subject = no # Set to 'no' to allow creation of several certificates with same subject.
+
+default_days = 1000 # how long to certify for
+default_md = sha256 # use public key default MD
+email_in_dn = no
+copy_extensions = copy # Required to copy SANs from CSR to cert
+
+[ req ]
+default_bits = 4096
+default_keyfile = root-key.pem
+distinguished_name = ca_distinguished_name
+
+[ ca_distinguished_name ]
+O = Elasticsearch Operator
+CN = KubeDB Com.
Root CA + +[ signing_req ] +keyUsage = digitalSignature, keyEncipherment + +[ signing_policy ] +organizationName = optional +commonName = supplied diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_action_groups.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_action_groups.yml new file mode 100644 index 0000000000..381d8b73a2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_action_groups.yml @@ -0,0 +1,43 @@ +###### UNLIMITED ###### +UNLIMITED: + readonly: true + permissions: + - "*" + +###### CLUSTER LEVEL ##### +CLUSTER_MONITOR: + readonly: true + permissions: + - "cluster:monitor/*" + +CLUSTER_COMPOSITE_OPS_RO: + readonly: true + permissions: + - "indices:data/read/mget" + - "indices:data/read/msearch" + - "indices:data/read/mtv" + - "indices:data/read/coordinate-msearch*" + - "indices:admin/aliases/exists*" + - "indices:admin/aliases/get*" + - "indices:data/read/scroll" + +CLUSTER_COMPOSITE_OPS: + readonly: true + permissions: + - "indices:data/write/bulk" + - "indices:admin/aliases*" + - "indices:data/write/reindex" + - CLUSTER_COMPOSITE_OPS_RO + +###### INDEX LEVEL ###### +INDICES_ALL: + readonly: true + permissions: + - "indices:*" + +READ: + readonly: true + permissions: + - "indices:data/read*" + - "indices:admin/mappings/fields/get*" + - "indices:admin/mappings/get*" diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_config.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_config.yml new file mode 100644 index 0000000000..15409669c6 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_config.yml @@ -0,0 +1,20 @@ +searchguard: + dynamic: + authc: + kibana_auth_domain: + enabled: true + order: 0 + http_authenticator: + type: basic + challenge: false + authentication_backend: + type: internal + basic_internal_auth_domain: + http_enabled: true + transport_enabled: true + order: 1 + http_authenticator: + type: basic + challenge: true + authentication_backend: + type: internal \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_internal_users.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_internal_users.yml new file mode 100644 index 0000000000..155a80840f --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_internal_users.yml @@ -0,0 +1,16 @@ +# This is the internal user database +# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh + +#password is: admin@secret +admin: + readonly: true + hash: $2y$12$skma87wuFFtxtGWegeAiIeTtUH1nnOfIRZzwwhBlzXjg0DdM4gLeG + roles: + - admin + +#password is: monitor@secret +monitor: + readonly: true + hash: $2y$12$JDTXih3AqV/1MDRYQ.KIY.u68CkzCIq.xiiqwtRJx3cjN0YmFavTe + roles: + - monitor \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles.yml new file mode 100644 index 0000000000..5370e1e430 --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles.yml @@ -0,0 +1,35 @@ +### Admin +sg_all_access: + readonly: true + cluster: + - UNLIMITED + indices: + '*': + '*': + - UNLIMITED + tenants: + admin_tenant: RW + +### X-Pack Monitoring +sg_xp_monitoring: + cluster: + - cluster:admin/xpack/monitoring/* + - cluster:admin/ingest/pipeline/put + - cluster:admin/ingest/pipeline/get + - indices:admin/template/get + - indices:admin/template/put + - CLUSTER_MONITOR + - CLUSTER_COMPOSITE_OPS + 
indices: + '?monitor*': + '*': + - INDICES_ALL + '?marvel*': + '*': + - INDICES_ALL + '?kibana*': + '*': + - READ + '*': + '*': + - indices:data/read/field_caps \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles_mapping.yml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles_mapping.yml new file mode 100644 index 0000000000..5ce157b3eb --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/sg_roles_mapping.yml @@ -0,0 +1,11 @@ +# In this file users, backendroles and hosts can be mapped to Search Guard roles. +# Permissions for Search Guard roles are configured in sg_roles.yml +sg_all_access: + readonly: true + backendroles: + - admin + +sg_xp_monitoring: + readonly: true + backendroles: + - monitor \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/elasticsearch/x-pack/ssl-elasticsearch.yaml b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/ssl-elasticsearch.yaml new file mode 100644 index 0000000000..8393c7d96c --- /dev/null +++ b/content/docs/v2024.1.31/examples/elasticsearch/x-pack/ssl-elasticsearch.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: ssl-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 2 + enableSSL: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/kafka/clustering/kf-multinode.yaml b/content/docs/v2024.1.31/examples/kafka/clustering/kf-multinode.yaml new file mode 100644 index 0000000000..db15b1b2e1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/kafka/clustering/kf-multinode.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-multinode + namespace: demo +spec: + replicas: 3 + version: 3.3.2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/kafka/clustering/kf-standalone.yaml b/content/docs/v2024.1.31/examples/kafka/clustering/kf-standalone.yaml new file mode 100644 index 0000000000..5cb3697dd7 --- /dev/null +++ b/content/docs/v2024.1.31/examples/kafka/clustering/kf-standalone.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-standalone + namespace: demo +spec: + replicas: 1 + version: 3.3.2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/kafka/clustering/kf-topology.yaml b/content/docs/v2024.1.31/examples/kafka/clustering/kf-topology.yaml new file mode 100644 index 0000000000..21de7703e9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/kafka/clustering/kf-topology.yaml @@ -0,0 +1,34 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.3.2 + enableSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + name: kafka-ca-issuer + kind: Issuer + topology: + broker: + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + 
storageType: Durable + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/kafka/monitoring/kf-with-monitoring.yaml b/content/docs/v2024.1.31/examples/kafka/monitoring/kf-with-monitoring.yaml new file mode 100644 index 0000000000..d152140823 --- /dev/null +++ b/content/docs/v2024.1.31/examples/kafka/monitoring/kf-with-monitoring.yaml @@ -0,0 +1,32 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka + namespace: demo +spec: + enableSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + name: kafka-ca-issuer + kind: Issuer + replicas: 3 + version: 3.4.0 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + monitor: + agent: prometheus.io/operator + prometheus: + exporter: + port: 9091 + serviceMonitor: + labels: + release: prometheus + interval: 10s + storageType: Durable + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/kafka/tls/kf-Issuer.yaml b/content/docs/v2024.1.31/examples/kafka/tls/kf-Issuer.yaml new file mode 100644 index 0000000000..6dd38b41e1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/kafka/tls/kf-Issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: kafka-ca-issuer + namespace: demo +spec: + ca: + secretName: kafka-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/memcached/cli/memcached-demo.yaml b/content/docs/v2024.1.31/examples/memcached/cli/memcached-demo.yaml new file mode 100644 index 0000000000..597f5ec30e --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/cli/memcached-demo.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: memcached-demo + namespace: demo +spec: + replicas: 3 + version: "1.6.22" + podTemplate: + spec: + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi diff --git a/content/docs/v2024.1.31/examples/memcached/custom-config/mc-custom.yaml b/content/docs/v2024.1.31/examples/memcached/custom-config/mc-custom.yaml new file mode 100644 index 0000000000..cda2747ab2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/custom-config/mc-custom.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: custom-memcached + namespace: demo +spec: + replicas: 1 + version: "1.6.22" + configSecret: + name: mc-custom-config + podTemplate: + spec: + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi diff --git a/content/docs/v2024.1.31/examples/memcached/custom-config/memcached.conf b/content/docs/v2024.1.31/examples/memcached/custom-config/memcached.conf new file mode 100644 index 0000000000..4bf81425dc --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/custom-config/memcached.conf @@ -0,0 +1,3 @@ +-c 500 +# maximum allowed memory in MB +-m 128 diff --git a/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db-two.yaml b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db-two.yaml new file mode 100644 index 0000000000..93e78f82e8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db-two.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: minute-memcached + namespace: demo +spec: + replicas: 3 + version: "1.6.22" + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + resources: + limits: + cpu: 500m + memory: 
128Mi + requests: + cpu: 250m + memory: 64Mi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db.yaml b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db.yaml new file mode 100644 index 0000000000..4ebcc1a81c --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: quick-memcached + namespace: demo +spec: + replicas: 3 + version: "1.6.22" + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-role.yaml b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-role.yaml new file mode 100644 index 0000000000..a5b7ed6c45 --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/custom-rbac/mc-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - memcached-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/memcached/demo-0.yaml b/content/docs/v2024.1.31/examples/memcached/demo-0.yaml new file mode 100644 index 0000000000..1d9f59eb36 --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/demo-0.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes diff --git a/content/docs/v2024.1.31/examples/memcached/monitoring/builtin-prom-memcd.yaml b/content/docs/v2024.1.31/examples/memcached/monitoring/builtin-prom-memcd.yaml new file mode 100644 index 0000000000..034b413ebc --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/monitoring/builtin-prom-memcd.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: builtin-prom-memcd + namespace: demo +spec: + replicas: 1 + version: "1.6.22" + terminationPolicy: WipeOut + podTemplate: + spec: + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/memcached/monitoring/coreos-prom-memcd.yaml b/content/docs/v2024.1.31/examples/memcached/monitoring/coreos-prom-memcd.yaml new file mode 100644 index 0000000000..844ea377c9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/monitoring/coreos-prom-memcd.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Memcached +metadata: + name: coreos-prom-memcd + namespace: demo +spec: + replicas: 3 + version: "1.6.22" + terminationPolicy: WipeOut + podTemplate: + spec: + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/memcached/private-registry/demo-1.yaml b/content/docs/v2024.1.31/examples/memcached/private-registry/demo-1.yaml new file mode 100644 index 0000000000..b1d8a6568c --- /dev/null +++ b/content/docs/v2024.1.31/examples/memcached/private-registry/demo-1.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: myregistrykey + namespace: demo +data: + .dockerconfigjson: PGJhc2UtNjQtZW5jb2RlZC1qc29uLWhlcmU+ +type: 
kubernetes.io/dockerconfigjson
diff --git a/content/docs/v2024.1.31/examples/memcached/private-registry/demo-2.yaml b/content/docs/v2024.1.31/examples/memcached/private-registry/demo-2.yaml
new file mode 100644
index 0000000000..3d218f2e4b
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/memcached/private-registry/demo-2.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: memcd-pvt-reg
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+      imagePullSecrets:
+      - name: myregistrykey
diff --git a/content/docs/v2024.1.31/examples/memcached/quickstart/demo-1.yaml b/content/docs/v2024.1.31/examples/memcached/quickstart/demo-1.yaml
new file mode 100644
index 0000000000..027a14c575
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/memcached/quickstart/demo-1.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: memcd-quickstart
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  terminationPolicy: DoNotTerminate
diff --git a/content/docs/v2024.1.31/examples/mongodb/Initialization/demo-1.yaml b/content/docs/v2024.1.31/examples/mongodb/Initialization/demo-1.yaml
new file mode 100644
index 0000000000..3a0f99606d
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/mongodb/Initialization/demo-1.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-init-script
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: mg-init-script
diff --git a/content/docs/v2024.1.31/examples/mongodb/arbiter/replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/arbiter/replicaset.yaml
new file mode 100644
index 0000000000..b9e71c6266
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/mongodb/arbiter/replicaset.yaml
@@ -0,0 +1,22 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-arb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "rs0"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 500Mi
+  arbiter:
+    podTemplate: {}
+  terminationPolicy: WipeOut
+
diff --git a/content/docs/v2024.1.31/examples/mongodb/arbiter/sharding.yaml b/content/docs/v2024.1.31/examples/mongodb/arbiter/sharding.yaml
new file mode 100644
index 0000000000..5fc81ed684
--- /dev/null
+++ b/content/docs/v2024.1.31/examples/mongodb/arbiter/sharding.yaml
@@ -0,0 +1,39 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-sh-arb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 500Mi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 2
+      podTemplate:
+        spec:
+          resources:
+            requests:
+              cpu: "400m"
+              memory: "300Mi"
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 500Mi
+        storageClassName: standard
+  arbiter:
+    podTemplate:
+      spec:
+        resources:
+          requests:
+            cpu: "200m"
+            memory: "200Mi"
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-rs.yaml
b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-rs.yaml new file mode 100644 index 0000000000..f2936d8b80 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-rs.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-shard.yaml new file mode 100644 index 0000000000..8f298fedfa --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-shard.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-sh + namespace: demo +spec: + databaseRef: + name: mg-sh + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + shard: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-standalone.yaml new file mode 100644 index 0000000000..eac02b0818 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-as-standalone.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as + namespace: demo +spec: + databaseRef: + name: mg-standalone + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + standalone: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-rs.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-rs.yaml new file mode 100644 index 0000000000..ccd96941a0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-rs.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-sh.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-sh.yaml new file mode 100644 index 0000000000..a31103482e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-sh.yaml @@ -0,0 +1,43 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sh + namespace: demo +spec: + 
version: "4.4.26" + storageType: Durable + shardTopology: + configServer: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + mongos: + replicas: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + shard: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + shards: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-standalone.yaml new file mode 100644 index 0000000000..5ff2b0c233 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/compute/mg-standalone.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-rs.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-rs.yaml new file mode 100644 index 0000000000..440c683205 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-rs.yaml @@ -0,0 +1,13 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + storage: + replicaSet: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-sh.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-sh.yaml new file mode 100644 index 0000000000..40e0ed3088 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-sh.yaml @@ -0,0 +1,13 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-sh + namespace: demo +spec: + databaseRef: + name: mg-sh + storage: + shard: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-standalone.yaml new file mode 100644 index 0000000000..3c197eedea --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-as-standalone.yaml @@ -0,0 +1,13 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as + namespace: demo +spec: + databaseRef: + name: mg-standalone + storage: + standalone: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-rs.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-rs.yaml new file mode 100644 index 0000000000..db194ef615 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-rs.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: 
topolvm-provisioner + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-sh.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-sh.yaml new file mode 100644 index 0000000000..f4d6e876cb --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-sh.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sh + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + shardTopology: + configServer: + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + replicas: 3 + mongos: + replicas: 2 + shard: + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + replicas: 3 + shards: 2 + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-standalone.yaml new file mode 100644 index 0000000000..3250b67466 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/autoscaling/storage/mg-standalone.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/cli/mongodb-demo.yaml b/content/docs/v2024.1.31/examples/mongodb/cli/mongodb-demo.yaml new file mode 100644 index 0000000000..b21005cbbb --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/cli/mongodb-demo.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongodb-demo + namespace: demo +spec: + version: "4.4.26" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mongodb/clustering/mongo-sharding.yaml b/content/docs/v2024.1.31/examples/mongodb/clustering/mongo-sharding.yaml new file mode 100644 index 0000000000..a1158fba09 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/clustering/mongo-sharding.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo-sh + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 3 + shards: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/clustering/replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/clustering/replicaset.yaml new file mode 100644 index 0000000000..0b3607ba0a --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/clustering/replicaset.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-replicaset + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/clustering/standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/clustering/standalone.yaml new file mode 100644 index 0000000000..2ad99d0c82 
--- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/clustering/standalone.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-alone + namespace: demo +spec: + version: "4.4.26" + podTemplate: + spec: + resources: + requests: + cpu: "300m" + memory: "400Mi" + storage: + resources: + requests: + storage: 500Mi + storageClassName: standard + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mongodb/configuration/demo-1.yaml b/content/docs/v2024.1.31/examples/mongodb/configuration/demo-1.yaml new file mode 100644 index 0000000000..066b831b4c --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/configuration/demo-1.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-custom-config + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-configuration diff --git a/content/docs/v2024.1.31/examples/mongodb/configuration/mgo-misc-config.yaml b/content/docs/v2024.1.31/examples/mongodb/configuration/mgo-misc-config.yaml new file mode 100644 index 0000000000..a164b01a63 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/configuration/mgo-misc-config.yaml @@ -0,0 +1,24 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-misc-config + namespace: demo +spec: + version: "4.4.26" + storageType: "Durable" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + args: + - --maxConns=100 + resources: + requests: + memory: "1Gi" + cpu: "250m" + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/mongodb/configuration/mongod.conf b/content/docs/v2024.1.31/examples/mongodb/configuration/mongod.conf new file mode 100644 index 0000000000..0334ab5619 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/configuration/mongod.conf @@ -0,0 +1,2 @@ +net: + maxIncomingConnections: 10000 diff --git a/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db-two.yaml b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db-two.yaml new file mode 100644 index 0000000000..bd85719647 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db-two.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: minute-mongodb + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db.yaml b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db.yaml new file mode 100644 index 0000000000..d224d1166a --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: quick-mongodb + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git 
a/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-role.yaml b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-role.yaml new file mode 100644 index 0000000000..cea2e81796 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/custom-rbac/mg-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - mongodb-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/mongodb/demo-0.yaml b/content/docs/v2024.1.31/examples/mongodb/demo-0.yaml new file mode 100644 index 0000000000..1d9f59eb36 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/demo-0.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes diff --git a/content/docs/v2024.1.31/examples/mongodb/hidden-node/replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/hidden-node/replicaset.yaml new file mode 100644 index 0000000000..ccaa9c4d0e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/hidden-node/replicaset.yaml @@ -0,0 +1,36 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo-rs-hid + namespace: demo +spec: + version: "percona-4.4.10" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "600m" + memory: "600Mi" + replicas: 3 + storageEngine: inMemory + storageType: Ephemeral + ephemeralStorage: + sizeLimit: "900Mi" + hidden: + podTemplate: + spec: + resources: + requests: + cpu: "400m" + memory: "400Mi" + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/hidden-node/sharding.yaml b/content/docs/v2024.1.31/examples/mongodb/hidden-node/sharding.yaml new file mode 100644 index 0000000000..9ea4703cba --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/hidden-node/sharding.yaml @@ -0,0 +1,35 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo-sh-hid + namespace: demo +spec: + version: "percona-4.4.10" + shardTopology: + configServer: + replicas: 3 + ephemeralStorage: {} + mongos: + replicas: 2 + shard: + replicas: 3 + shards: 2 + ephemeralStorage: {} + storageEngine: inMemory + storageType: Ephemeral + hidden: + podTemplate: + spec: + resources: + requests: + cpu: "400m" + memory: "400Mi" + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/monitoring/builtin-prom-mgo.yaml b/content/docs/v2024.1.31/examples/mongodb/monitoring/builtin-prom-mgo.yaml new file mode 100644 index 0000000000..a3a595a49b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/monitoring/builtin-prom-mgo.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: builtin-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/mongodb/monitoring/coreos-prom-mgo.yaml b/content/docs/v2024.1.31/examples/mongodb/monitoring/coreos-prom-mgo.yaml new file 
mode 100644 index 0000000000..55a607da18 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/monitoring/coreos-prom-mgo.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: coreos-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/mongodb/private-registry/demo-1.yaml b/content/docs/v2024.1.31/examples/mongodb/private-registry/demo-1.yaml new file mode 100644 index 0000000000..1bdd213e59 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/private-registry/demo-1.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-pvt-reg + namespace: demo +spec: + version: 4.4.26 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/mongodb/quickstart/replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/quickstart/replicaset.yaml new file mode 100644 index 0000000000..bfef48b556 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/quickstart/replicaset.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-quickstart + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "rs1" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/issuer.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/issuer.yaml new file mode 100644 index 0000000000..b001fb7268 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mg-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mg-replicaset.yaml new file mode 100644 index 0000000000..399b9daaa0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mg-replicaset.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-add-tls.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-add-tls.yaml new file mode 100644 index 0000000000..25ed1b09e4 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-add-tls.yaml @@ -0,0 +1,21 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - mongo + organizationalUnits: + - client diff --git 
a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-change-issuer.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-change-issuer.yaml new file mode 100644 index 0000000000..3543909617 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-change-issuer.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-remove.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-remove.yaml new file mode 100644 index 0000000000..06245506d9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + remove: true diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-rotate.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-rotate.yaml new file mode 100644 index 0000000000..7530796118 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/mops-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + rotateCertificates: true diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/new-issuer.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/new-issuer.yaml new file mode 100644 index 0000000000..066cbce511 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure-tls/new-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-new-issuer + namespace: demo +spec: + ca: + secretName: mongo-new-ca diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-replicaset-config.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-replicaset-config.yaml new file mode 100644 index 0000000000..2a32cbb179 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-replicaset-config.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-shard-config.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-shard-config.yaml new file mode 100644 index 0000000000..c5007872c8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-shard-config.yaml @@ -0,0 +1,31 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 3 + configSecret: + name: mg-custom-config + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + configSecret: + name: mg-custom-config + shard: + replicas: 3 + shards: 2 + configSecret: + name: mg-custom-config + storage: + 
resources: + requests: + storage: 1Gi + storageClassName: standard diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-standalone-config.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-standalone-config.yaml new file mode 100644 index 0000000000..ac2cce3615 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mg-standalone-config.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-replicaset.yaml new file mode 100644 index 0000000000..45bcafc44d --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-replicaset.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-inline-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + inlineConfig: | + net: + maxIncomingConnections: 30000 diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-shard.yaml new file mode 100644 index 0000000000..1f9bef23ac --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-shard.yaml @@ -0,0 +1,22 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-inline-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + inlineConfig: | + net: + maxIncomingConnections: 30000 + configServer: + inlineConfig: | + net: + maxIncomingConnections: 30000 + mongos: + inlineConfig: | + net: + maxIncomingConnections: 30000 diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-standalone.yaml new file mode 100644 index 0000000000..53079b69fa --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-inline-standalone.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-inline-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + inlineConfig: | + net: + maxIncomingConnections: 30000 diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-replicaset.yaml new file mode 100644 index 0000000000..0392d2feb5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-replicaset.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + configSecret: + name: new-custom-config diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-shard.yaml 
b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-shard.yaml new file mode 100644 index 0000000000..285d686820 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-shard.yaml @@ -0,0 +1,19 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + configSecret: + name: new-custom-config + configServer: + configSecret: + name: new-custom-config + mongos: + configSecret: + name: new-custom-config diff --git a/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-standalone.yaml new file mode 100644 index 0000000000..3e0435f07c --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reconfigure/mops-reconfigure-standalone.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfigure-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + configSecret: + name: new-custom-config diff --git a/content/docs/v2024.1.31/examples/mongodb/reprovision/mongo.yaml b/content/docs/v2024.1.31/examples/mongodb/reprovision/mongo.yaml new file mode 100644 index 0000000000..384f3a4317 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reprovision/mongo.yaml @@ -0,0 +1,35 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "300m" + memory: "300Mi" + replicas: 2 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + arbiter: {} + hidden: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/reprovision/ops.yaml b/content/docs/v2024.1.31/examples/mongodb/reprovision/ops.yaml new file mode 100644 index 0000000000..8512d7eb32 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/reprovision/ops.yaml @@ -0,0 +1,9 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: repro + namespace: demo +spec: + type: Reprovision + databaseRef: + name: mongo \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/restart/mongo.yaml b/content/docs/v2024.1.31/examples/mongodb/restart/mongo.yaml new file mode 100644 index 0000000000..384f3a4317 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/restart/mongo.yaml @@ -0,0 +1,35 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "300m" + memory: "300Mi" + replicas: 2 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + arbiter: {} + hidden: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/restart/ops.yaml 
b/content/docs/v2024.1.31/examples/mongodb/restart/ops.yaml new file mode 100644 index 0000000000..6f66442c4a --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/restart/ops.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: restart + namespace: demo +spec: + type: Restart + databaseRef: + name: mongo + readinessCriteria: + oplogMaxLagSeconds: 10 + objectsCountDiffPercentage: 15 + timeout: 3m + apply: Always \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-replicaset.yaml new file mode 100644 index 0000000000..42b5c1b415 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-replicaset.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-down-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 3 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-shard.yaml new file mode 100644 index 0000000000..962500f6c1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-shard.yaml @@ -0,0 +1,17 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-down-shard + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-sharding + horizontalScaling: + shard: + shards: 2 + replicas: 3 + mongos: + replicas: 2 + configServer: + replicas: 3 diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-replicaset.yaml new file mode 100644 index 0000000000..cf7805b6af --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-replicaset.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-up-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 4 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-shard.yaml new file mode 100644 index 0000000000..f2393ab7f7 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-shard.yaml @@ -0,0 +1,17 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-up-shard + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-sharding + horizontalScaling: + shard: + shards: 3 + replicas: 4 + mongos: + replicas: 3 + configServer: + replicas: 4 diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/mg-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-replicaset.yaml new file mode 100644 index 0000000000..777010ad0e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-replicaset.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB 
+metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/mg-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-shard.yaml new file mode 100644 index 0000000000..d500656b05 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-shard.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 3 + shards: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/mg-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-standalone.yaml new file mode 100644 index 0000000000..b625c062bc --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/mg-standalone.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-replicaset.yaml new file mode 100644 index 0000000000..b225adb307 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-replicaset.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-vscale-replicaset + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-replicaset + verticalScaling: + replicaSet: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-shard.yaml new file mode 100644 index 0000000000..1f87fd0af5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-shard.yaml @@ -0,0 +1,34 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-vscale-shard + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-sharding + verticalScaling: + shard: + resources: + requests: + memory: "1100Mi" + cpu: "0.55" + limits: + memory: "1100Mi" + cpu: "0.55" + configServer: + resources: + requests: + memory: "1100Mi" + cpu: "0.55" + limits: + memory: "1100Mi" + cpu: "0.55" + mongos: + resources: + requests: + memory: "1100Mi" + cpu: "0.55" + limits: + memory: "1100Mi" + cpu: "0.55" diff --git a/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-standalone.yaml new file mode 100644 index 0000000000..56561f14f3 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/scaling/vertical-scaling/mops-vscale-standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: 
MongoDBOpsRequest +metadata: + name: mops-vscale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-standalone + verticalScaling: + standalone: + resources: + requests: + memory: "2Gi" + cpu: "1" + limits: + memory: "2Gi" + cpu: "1" diff --git a/content/docs/v2024.1.31/examples/mongodb/tls/issuer.yaml b/content/docs/v2024.1.31/examples/mongodb/tls/issuer.yaml new file mode 100644 index 0000000000..9be190728e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mongo-ca-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca diff --git a/content/docs/v2024.1.31/examples/mongodb/tls/mg-replicaset-ssl.yaml b/content/docs/v2024.1.31/examples/mongodb/tls/mg-replicaset-ssl.yaml new file mode 100644 index 0000000000..9205a98070 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/tls/mg-replicaset-ssl.yaml @@ -0,0 +1,24 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-rs-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + name: mongo-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + clusterAuthMode: x509 + replicas: 4 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/tls/mg-shard-ssl.yaml b/content/docs/v2024.1.31/examples/mongodb/tls/mg-shard-ssl.yaml new file mode 100644 index 0000000000..eae445d61a --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/tls/mg-shard-ssl.yaml @@ -0,0 +1,34 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongo-sh-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + name: mongo-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + clusterAuthMode: x509 + shardTopology: + configServer: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 2 + shards: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/tls/mg-standalone-ssl.yaml b/content/docs/v2024.1.31/examples/mongodb/tls/mg-standalone-ssl.yaml new file mode 100644 index 0000000000..24be9f30a5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/tls/mg-standalone-ssl.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + name: mongo-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mg-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-replicaset.yaml new file mode 100644 index 0000000000..c8890a60dd --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-replicaset.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: 
"standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mg-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-shard.yaml new file mode 100644 index 0000000000..76304853c1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-shard.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 2 + shards: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mg-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-standalone.yaml new file mode 100644 index 0000000000..d2de03e594 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mg-standalone.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-replicaset.yaml new file mode 100644 index 0000000000..baf372d404 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-replicaset.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-replicaset-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-replicaset + updateVersion: + targetVersion: 4.0.5-v3 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-shard.yaml new file mode 100644 index 0000000000..59151cc97b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-shard.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-shard-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-sharding + updateVersion: + targetVersion: 4.0.5-v3 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-standalone.yaml new file mode 100644 index 0000000000..2c7c4e421a --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/update-version/mops-update-standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-standalone + updateVersion: + targetVersion: 4.0.5-v3 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-replicaset.yaml new file mode 100644 index 0000000000..367336370f --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-replicaset.yaml 
@@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-shard.yaml new file mode 100644 index 0000000000..76304853c1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-shard.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 2 + shards: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-standalone.yaml new file mode 100644 index 0000000000..d2de03e594 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mg-standalone.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-replicaset.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-replicaset.yaml new file mode 100644 index 0000000000..b6f22adf7f --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-replicaset.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-replicaset + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-replicaset + volumeExpansion: + replicaSet: 2Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-shard.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-shard.yaml new file mode 100644 index 0000000000..73c1609445 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-shard.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-shard + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-sharding + volumeExpansion: + shard: 2Gi + configServer: 2Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-standalone.yaml b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-standalone.yaml new file mode 100644 index 0000000000..9c7e1188cb --- /dev/null +++ b/content/docs/v2024.1.31/examples/mongodb/volume-expansion/mops-volume-exp-standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-standalone + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-standalone + volumeExpansion: + standalone: 2Gi \ No 
newline at end of file diff --git a/content/docs/v2024.1.31/examples/monitoring/builtin-prometheus/prom-config.yaml b/content/docs/v2024.1.31/examples/monitoring/builtin-prometheus/prom-config.yaml new file mode 100644 index 0000000000..45aee6317a --- /dev/null +++ b/content/docs/v2024.1.31/examples/monitoring/builtin-prometheus/prom-config.yaml @@ -0,0 +1,68 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default, the Prometheus server selects all Kubernetes services as possible targets. + # relabel_config is used to filter only the desired endpoints + relabel_configs: + # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently, KubeDB-supported databases use only the "http" scheme to export metrics, so drop any service that uses the "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # keep only the stats services created by KubeDB for monitoring, which have the "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels. 
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) diff --git a/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-0.yaml b/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-0.yaml new file mode 100644 index 0000000000..242438057d --- /dev/null +++ b/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-0.yaml @@ -0,0 +1,108 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: prometheus-operator +rules: +- apiGroups: + - extensions + resources: + - thirdpartyresources + verbs: + - "*" +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - "*" +- apiGroups: + - monitoring.coreos.com + resources: + - alertmanagers + - prometheuses + - servicemonitors + verbs: + - "*" +- apiGroups: + - apps + resources: + - statefulsets + verbs: ["*"] +- apiGroups: [""] + resources: + - configmaps + - secrets + verbs: ["*"] +- apiGroups: [""] + resources: + - pods + verbs: ["list", "delete"] +- apiGroups: [""] + resources: + - services + - endpoints + verbs: ["get", "create", "update"] +- apiGroups: [""] + resources: + - nodes + verbs: ["list", "watch"] +- apiGroups: [""] + resources: + - namespaces + verbs: ["list"] +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: prometheus-operator + namespace: demo +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: prometheus-operator +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: prometheus-operator +subjects: +- kind: ServiceAccount + name: prometheus-operator + namespace: demo +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: prometheus-operator + namespace: demo + labels: + operator: prometheus +spec: + replicas: 1 + selector: + matchLabels: + operator: prometheus + template: + metadata: + labels: + operator: prometheus + spec: + serviceAccountName: prometheus-operator + containers: + - name: prometheus-operator + image: quay.io/coreos/prometheus-operator:v0.16.0 + resources: + requests: + cpu: 100m + memory: 1Gi + limits: + cpu: 200m + memory: 1Gi diff --git a/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-1.yaml b/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-1.yaml new file mode 100644 index 0000000000..97bb53a2b0 --- /dev/null +++ 
b/content/docs/v2024.1.31/examples/monitoring/coreos-operator/demo-1.yaml @@ -0,0 +1,68 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: prometheus +rules: +- apiGroups: [""] + resources: + - nodes + - services + - endpoints + - pods + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: + - configmaps + verbs: ["get"] +- nonResourceURLs: ["/metrics"] + verbs: ["get"] +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: prometheus + namespace: demo +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: prometheus +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: prometheus +subjects: +- kind: ServiceAccount + name: prometheus + namespace: demo +--- +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + name: prometheus + namespace: demo +spec: + serviceAccountName: prometheus + serviceMonitorSelector: + matchLabels: + app: kubedb + version: v1.7.0 + resources: + requests: + memory: 400Mi +--- +apiVersion: v1 +kind: Service +metadata: + name: prometheus + namespace: demo +spec: + type: LoadBalancer + ports: + - name: web + nodePort: 30900 + port: 9090 + protocol: TCP + targetPort: web + selector: + prometheus: prometheus diff --git a/content/docs/v2024.1.31/examples/monitoring/operator/prom-config.yaml b/content/docs/v2024.1.31/examples/monitoring/operator/prom-config.yaml new file mode 100644 index 0000000000..87ca3250a7 --- /dev/null +++ b/content/docs/v2024.1.31/examples/monitoring/operator/prom-config.yaml @@ -0,0 +1,70 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: kubedb-operator-prom-config + labels: + app: kubedb + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 30s + scrape_timeout: 10s + evaluation_interval: 30s + scrape_configs: + - job_name: kubedb-operator + kubernetes_sd_configs: + - role: endpoints + # we have to provide the certificate to establish a secure TLS connection + tls_config: + # public certificate of the extension apiserver that has been mounted in the "/etc/prometheus/secret/" directory of the Prometheus server + ca_file: /etc/prometheus/secret/kubedb-operator-apiserver-cert/tls.crt + # DNS name for which the certificate is valid + server_name: kubedb-operator.kube-system.svc + # bearer_token_file is required for authorizing the Prometheus server to the extension apiserver + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + # by default, the Prometheus server selects all Kubernetes services as possible targets. + # relabel_config is used to filter only the desired endpoints + relabel_configs: + # keep only those services that have the "prometheus.io/scrape: true" annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] + regex: true + action: keep + # keep only those services that have the "app: kubedb" label + - source_labels: [__meta_kubernetes_service_label_app] + regex: kubedb + action: keep + # keep only those services that have an endpoint named "api" + - source_labels: [__meta_kubernetes_endpoint_port_name] + regex: api + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + regex: (.+) + target_label: __metrics_path__ + action: replace + # read the scraping scheme from "prometheus.io/scheme: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: replace + target_label: __scheme__ + regex: (https?) 
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace diff --git a/content/docs/v2024.1.31/examples/monitoring/operator/prom-deploy.yaml b/content/docs/v2024.1.31/examples/monitoring/operator/prom-deploy.yaml new file mode 100644 index 0000000000..c1de88c11b --- /dev/null +++ b/content/docs/v2024.1.31/examples/monitoring/operator/prom-deploy.yaml @@ -0,0 +1,45 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: prometheus + namespace: monitoring +spec: + replicas: 1 + selector: + matchLabels: + app: prometheus + template: + metadata: + labels: + app: prometheus + spec: + serviceAccountName: prometheus + containers: + - name: prometheus + image: prom/prometheus:v2.4.3 + args: + - "--config.file=/etc/prometheus/prometheus.yml" + - "--storage.tsdb.path=/prometheus/" + ports: + - containerPort: 9090 + volumeMounts: + - name: prometheus-config-volume + mountPath: /etc/prometheus/ + - name: prometheus-storage-volume + mountPath: /prometheus/ + - name: kubedb-operator-apiserver-cert # mount the secret volume with the public certificate of the kubedb extension apiserver + mountPath: /etc/prometheus/secret/kubedb-operator-apiserver-cert + volumes: + - name: prometheus-config-volume + configMap: + defaultMode: 420 + name: kubedb-operator-prom-config + - name: prometheus-storage-volume + emptyDir: {} + - name: kubedb-operator-apiserver-cert + secret: + defaultMode: 420 + secretName: kubedb-operator-apiserver-cert + items: # avoid mounting the private key + - key: tls.crt + path: tls.crt diff --git a/content/docs/v2024.1.31/examples/monitoring/operator/prometheus.yaml b/content/docs/v2024.1.31/examples/monitoring/operator/prometheus.yaml new file mode 100644 index 0000000000..15a84cbe4c --- /dev/null +++ b/content/docs/v2024.1.31/examples/monitoring/operator/prometheus.yaml @@ -0,0 +1,18 @@ +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + name: prometheus + namespace: monitoring # use same namespace as ServiceMonitor crd + labels: + prometheus: prometheus +spec: + replicas: 1 + serviceAccountName: prometheus + serviceMonitorSelector: + matchLabels: + release: prometheus # change this according to your setup + secrets: + - kubedb-operator-apiserver-cert + resources: + requests: + memory: 400Mi diff --git a/content/docs/v2024.1.31/examples/mysql/Initialization/demo-1.yaml b/content/docs/v2024.1.31/examples/mysql/Initialization/demo-1.yaml new file mode 100644 index 0000000000..4544bb9804 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/Initialization/demo-1.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-init-script + namespace: demo +spec: + version: "8.0.35" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: my-init-script diff --git a/content/docs/v2024.1.31/examples/mysql/cli/mysql-demo.yaml
b/content/docs/v2024.1.31/examples/mysql/cli/mysql-demo.yaml new file mode 100644 index 0000000000..addd02e541 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/cli/mysql-demo.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-demo + namespace: demo +spec: + version: "8.0.35" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mysql/clustering/demo-1.yaml b/content/docs/v2024.1.31/examples/mysql/clustering/demo-1.yaml new file mode 100644 index 0000000000..25d93bcfb2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/clustering/demo-1.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/mysql/configuration/my-config.cnf b/content/docs/v2024.1.31/examples/mysql/configuration/my-config.cnf new file mode 100644 index 0000000000..ccd87f160c --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/configuration/my-config.cnf @@ -0,0 +1,3 @@ +[mysqld] +max_connections = 200 +read_buffer_size = 1048576 diff --git a/content/docs/v2024.1.31/examples/mysql/configuration/mysql-custom.yaml b/content/docs/v2024.1.31/examples/mysql/configuration/mysql-custom.yaml new file mode 100644 index 0000000000..45016cadcf --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/configuration/mysql-custom.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: custom-mysql + namespace: demo +spec: + version: "8.0.35" + configSecret: + name: my-custom-config + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/mysql/configuration/mysql-misc-config.yaml b/content/docs/v2024.1.31/examples/mysql/configuration/mysql-misc-config.yaml new file mode 100644 index 0000000000..aa83e77999 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/configuration/mysql-misc-config.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-misc-config + namespace: demo +spec: + version: "5.7.44" + storageType: "Durable" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + env: + - name: MYSQL_DATABASE + value: myDB + args: + - --character-set-server=utf8mb4 + resources: + requests: + memory: "1Gi" + cpu: "250m" + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db-two.yaml b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db-two.yaml new file mode 100644 index 0000000000..029a81ffae --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db-two.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: minute-mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: 
DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db.yaml b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db.yaml new file mode 100644 index 0000000000..11bddb8c97 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: quick-mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-role.yaml b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-role.yaml new file mode 100644 index 0000000000..e9b576585d --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/custom-rbac/my-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - mysql-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/mysql/demo-0.yaml b/content/docs/v2024.1.31/examples/mysql/demo-0.yaml new file mode 100644 index 0000000000..1d9f59eb36 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/demo-0.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes diff --git a/content/docs/v2024.1.31/examples/mysql/horizontalscaling/group_replication.yaml b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/group_replication.yaml new file mode 100644 index 0000000000..a18eb3cfef --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_down.yaml b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_down.yaml new file mode 100644 index 0000000000..0bf32e33ae --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_down.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 4 + + diff --git a/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_up.yaml b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_up.yaml new file mode 100644 index 0000000000..8823466bfd --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/horizontalscaling/scale_up.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 5 + + diff --git a/content/docs/v2024.1.31/examples/mysql/monitoring/builtin-prom-mysql.yaml
b/content/docs/v2024.1.31/examples/mysql/monitoring/builtin-prom-mysql.yaml new file mode 100644 index 0000000000..c107c98c9b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/monitoring/builtin-prom-mysql.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: builtin-prom-mysql + namespace: demo +spec: + version: "8.0.35" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/mysql/monitoring/coreos-prom-mysql.yaml b/content/docs/v2024.1.31/examples/mysql/monitoring/coreos-prom-mysql.yaml new file mode 100644 index 0000000000..2243c6b4ac --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/monitoring/coreos-prom-mysql.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: coreos-prom-mysql + namespace: demo +spec: + version: "8.0.35" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/mysql/private-registry/demo-1.yaml b/content/docs/v2024.1.31/examples/mysql/private-registry/demo-1.yaml new file mode 100644 index 0000000000..b1d8a6568c --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/private-registry/demo-1.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: myregistrykey + namespace: demo +data: + .dockerconfigjson: PGJhc2UtNjQtZW5jb2RlZC1qc29uLWhlcmU+ +type: kubernetes.io/dockerconfigjson diff --git a/content/docs/v2024.1.31/examples/mysql/private-registry/demo-2.yaml b/content/docs/v2024.1.31/examples/mysql/private-registry/demo-2.yaml new file mode 100644 index 0000000000..0dcddb779b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/private-registry/demo-2.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-pvt-reg + namespace: demo +spec: + version: "8.0.35" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/mysql/quickstart/demo-1.yaml b/content/docs/v2024.1.31/examples/mysql/quickstart/demo-1.yaml new file mode 100644 index 0000000000..8674cb45c8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/quickstart/demo-1.yaml @@ -0,0 +1,46 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + replicas: 3 + selector: + matchLabels: + app: myadmin + template: + metadata: + labels: + app: myadmin + spec: + containers: + - image: phpmyadmin/phpmyadmin + imagePullPolicy: Always + name: phpmyadmin + ports: + - containerPort: 80 + name: http + protocol: TCP + env: + - name: PMA_ARBITRARY + value: '1' + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: myadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/examples/mysql/quickstart/demo-2.yaml b/content/docs/v2024.1.31/examples/mysql/quickstart/demo-2.yaml new file mode 100644 index 0000000000..770912e63a --- /dev/null +++ 
b/content/docs/v2024.1.31/examples/mysql/quickstart/demo-2.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-quickstart + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/mysql/tls/issuer.yaml b/content/docs/v2024.1.31/examples/mysql/tls/issuer.yaml new file mode 100644 index 0000000000..9ec9f3bbd8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mysql-issuer + namespace: demo +spec: + ca: + secretName: my-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/tls/tls-group.yaml b/content/docs/v2024.1.31/examples/mysql/tls/tls-group.yaml new file mode 100644 index 0000000000..4fd8925e09 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/tls/tls-group.yaml @@ -0,0 +1,36 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group-tls + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/tls/tls-standalone.yaml b/content/docs/v2024.1.31/examples/mysql/tls/tls-standalone.yaml new file mode 100644 index 0000000000..6a23dea7d9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/tls/tls-standalone.yaml @@ -0,0 +1,31 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone-tls + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/group_replication.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/group_replication.yaml new file mode 100644 index 0000000000..e90a9c9b4e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/standalone.yaml 
b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/standalone.yaml new file mode 100644 index 0000000000..b06ee2cd5b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/standalone.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "5.7.44" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_group.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_group.yaml new file mode 100644 index 0000000000..a9b98708d2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_group.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "8.0.35" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_standalone.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_standalone.yaml new file mode 100644 index 0000000000..8e1477480f --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/majorversion/update_major_version_standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-standalone + namespace: demo +spec: + databaseRef: + name: my-standalone + type: UpdateVersion + updateVersion: + targetVersion: "8.0.35" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/group_replication.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/group_replication.yaml new file mode 100644 index 0000000000..e90a9c9b4e --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/standalone.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/standalone.yaml new file mode 100644 index 0000000000..b06ee2cd5b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/standalone.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "5.7.44" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_group.yaml 
b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_group.yaml new file mode 100644 index 0000000000..618a58eb7b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_group.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-minor-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "5.7.44" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_standalone.yaml b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_standalone.yaml new file mode 100644 index 0000000000..a37ead33ae --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/update-version/minorversion/update_minor_version_standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-minor-standalone + namespace: demo +spec: + databaseRef: + name: my-standalone + type: UpdateVersion + updateVersion: + targetVersion: "5.7.44" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/verticalscaling/group_replication.yaml b/content/docs/v2024.1.31/examples/mysql/verticalscaling/group_replication.yaml new file mode 100644 index 0000000000..a18eb3cfef --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/verticalscaling/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/verticalscaling/standalone.yaml b/content/docs/v2024.1.31/examples/mysql/verticalscaling/standalone.yaml new file mode 100644 index 0000000000..56ef5f11e0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/verticalscaling/standalone.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_group.yaml b/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_group.yaml new file mode 100644 index 0000000000..679506aa9b --- /dev/null +++ b/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_group.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-group + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-group + verticalScaling: + mysql: + resources: + requests: + memory: "200Mi" + cpu: "0.1" + limits: + memory: "300Mi" + cpu: "0.2" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_standalone.yaml b/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_standalone.yaml new file mode 100644 index 0000000000..8f03d1294c --- /dev/null +++ 
b/content/docs/v2024.1.31/examples/mysql/verticalscaling/vertical_scale_standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-standalone + verticalScaling: + mysql: + resources: + requests: + memory: "200Mi" + cpu: "0.1" + limits: + memory: "300Mi" + cpu: "0.2" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/pgbouncer/custom-version/Dockerfile b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/Dockerfile new file mode 100644 index 0000000000..d9836c27b2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/Dockerfile @@ -0,0 +1,9 @@ +FROM kubedb/pgbouncer:latest + +ENV SOME_VERSION_VAR 0.9.1 + +RUN set -ex \ + && apk add --no-cache --virtual .fetch-deps \ + ca-certificates \ + curl \ + bash diff --git a/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncer.yaml b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncer.yaml new file mode 100644 index 0000000000..2535f4aa29 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncer.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + port: 2580 + reservePoolSize: 5 diff --git a/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncerversion.yaml b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncerversion.yaml new file mode 100644 index 0000000000..dbdc07eaa2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/custom-version/pgbouncerversion.yaml @@ -0,0 +1,10 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PgBouncerVersion +metadata: + name: "1.17.0" +spec: + exporter: + image: kubedb/pgbouncer_exporter:v0.1.1 + pgBouncer: + image: kubedb/pgbouncer:1.17.0 + version: 1.17.0 diff --git a/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-pgbouncer.yaml b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-pgbouncer.yaml new file mode 100644 index 0000000000..65708dc530 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-pgbouncer.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + maxClientConnections: 20 + reservePoolSize: 5 + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-service.yaml b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-service.yaml new file mode 100644 index 0000000000..c3ceffb18f --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/builtin-prom-service.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: prometheus-operated + namespace: monitoring +spec: + selector: + app: prometheus + ports: + - protocol: TCP + port: 9090 diff --git a/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-pgbouncer.yaml b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-pgbouncer.yaml new file mode 100644 index 
0000000000..f30d92faf0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-pgbouncer.yaml @@ -0,0 +1,24 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + maxClientConnections: 20 + reservePoolSize: 5 + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-server.yaml b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-server.yaml new file mode 100644 index 0000000000..283c87aca0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/coreos-prom-server.yaml @@ -0,0 +1,54 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: prometheus +rules: + - apiGroups: [""] + resources: + - nodes + - services + - endpoints + - pods + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: + - configmaps + verbs: ["get"] + - nonResourceURLs: ["/metrics"] + verbs: ["get"] +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: prometheus + namespace: monitoring +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: prometheus +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: prometheus +subjects: + - kind: ServiceAccount + name: prometheus + namespace: monitoring +--- +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + name: prometheus + namespace: monitoring # use same namespace as ServiceMonitor crd + labels: + prometheus: prometheus +spec: + replicas: 1 + serviceAccountName: prometheus + serviceMonitorSelector: + matchLabels: + release: prometheus # change this according to your setup + resources: + requests: + memory: 400Mi diff --git a/content/docs/v2024.1.31/examples/pgbouncer/monitoring/grafana.yaml b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/grafana.yaml new file mode 100644 index 0000000000..43007e11de --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/monitoring/grafana.yaml @@ -0,0 +1,20 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: grafana + namespace: monitoring + labels: + app: grafana +spec: + replicas: 1 + selector: + matchLabels: + app: grafana + template: + metadata: + labels: + app: grafana + spec: + containers: + - name: grafana + image: grafana/grafana:6.2.14 diff --git a/content/docs/v2024.1.31/examples/pgbouncer/pb-overview.yaml b/content/docs/v2024.1.31/examples/pgbouncer/pb-overview.yaml new file mode 100644 index 0000000000..9a25c1bc40 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/pb-overview.yaml @@ -0,0 +1,29 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + - alias: "mydb" + databaseName: "tmpdb" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + maxClientConnections: 20 + reservePoolSize: 5 + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s \ No newline at end of file diff --git 
a/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-pgbouncerversion.yaml b/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-pgbouncerversion.yaml new file mode 100644 index 0000000000..a09fa0cfaa --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-pgbouncerversion.yaml @@ -0,0 +1,10 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PgBouncerVersion +metadata: + name: "1.17.0" +spec: + exporter: + image: PRIVATE_REGISTRY/pgbouncer_exporter:v0.1.1 + pgBouncer: + image: PRIVATE_REGISTRY/pgbouncer:1.17.0 + version: 1.17.0 diff --git a/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-reg-pgbouncer.yaml b/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-reg-pgbouncer.yaml new file mode 100644 index 0000000000..c064474fa0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/private-registry/pvt-reg-pgbouncer.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + maxClientConnections: 20 + reservePoolSize: 5 + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server-mod.yaml b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server-mod.yaml new file mode 100644 index 0000000000..59a897fccc --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server-mod.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.17.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + - alias: "tmpdb" + databaseName: "mydb" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + maxClientConnections: 20 + reservePoolSize: 5 + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server.yaml b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server.yaml new file mode 100644 index 0000000000..92e4097aef --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/pgbouncer-server.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PgBouncer +metadata: + name: pgbouncer-server + namespace: demo +spec: + version: "1.18.0" + replicas: 1 + databases: + - alias: "postgres" + databaseName: "postgres" + databaseRef: + name: "quick-postgres" + namespace: demo + connectionPool: + port: 5432 + maxClientConnections: 20 + reservePoolSize: 5 + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/examples/pgbouncer/quickstart/quick-postgres.yaml b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/quick-postgres.yaml new file mode 100644 index 0000000000..7baa3aed63 --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/quick-postgres.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres +
namespace: demo +spec: + replicas: 1 + version: "13.13" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/pgbouncer/quickstart/userlist b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/userlist new file mode 100644 index 0000000000..dcb7a1001d --- /dev/null +++ b/content/docs/v2024.1.31/examples/pgbouncer/quickstart/userlist @@ -0,0 +1,2 @@ +"postgres" "ZFopeLnwkSf_f5Ys" +"myuser" "mypass" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/postgres/clustering/ha-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/clustering/ha-postgres.yaml new file mode 100644 index 0000000000..3a6a74a85f --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/clustering/ha-postgres.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: ha-postgres + namespace: demo +spec: + version: "13.13" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/postgres/clustering/hot-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/clustering/hot-postgres.yaml new file mode 100644 index 0000000000..89fc329d47 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/clustering/hot-postgres.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: hot-postgres + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/postgres/custom-config/pg-custom-config.yaml b/content/docs/v2024.1.31/examples/postgres/custom-config/pg-custom-config.yaml new file mode 100644 index 0000000000..45b0ff71eb --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-config/pg-custom-config.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: custom-postgres + namespace: demo +spec: + version: "13.13" + configSecret: + name: pg-custom-config + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/postgres/custom-config/user.conf b/content/docs/v2024.1.31/examples/postgres/custom-config/user.conf new file mode 100644 index 0000000000..7d5cc2d6fa --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-config/user.conf @@ -0,0 +1,2 @@ +max_connections=300 +shared_buffers=256MB \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db-two.yaml b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db-two.yaml new file mode 100644 index 0000000000..386d5b090d --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db-two.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: minute-postgres + namespace: demo + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres +spec: + version: "13.13" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi diff --git 
a/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db.yaml b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db.yaml new file mode 100644 index 0000000000..ab5222e02c --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-db.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres + namespace: demo + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres +spec: + version: "13.13" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi diff --git a/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role-two.yaml b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role-two.yaml new file mode 100644 index 0000000000..26217dc1bc --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role-two.yaml @@ -0,0 +1,23 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role-two + namespace: demo +rules: + - apiGroups: + - apps + resourceNames: + - minute-postgres + resources: + - statefulsets + verbs: + - get + - apiGroups: + - "" + resourceNames: + - minute-postgres-leader-lock + resources: + - configmaps + verbs: + - get + - update diff --git a/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role.yaml b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role.yaml new file mode 100644 index 0000000000..7c106739c6 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-rbac/pg-custom-role.yaml @@ -0,0 +1,44 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - apps + resourceNames: + - quick-postgres + resources: + - statefulsets + verbs: + - get + - apiGroups: + - "" + resources: + - pods + verbs: + - list + - patch + - apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - apiGroups: + - "" + resourceNames: + - quick-postgres-leader-lock + resources: + - configmaps + verbs: + - get + - update + - apiGroups: + - policy + resourceNames: + - postgres-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/postgres/custom-version/Dockerfile b/content/docs/v2024.1.31/examples/postgres/custom-version/Dockerfile new file mode 100644 index 0000000000..2a002fa22d --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-version/Dockerfile @@ -0,0 +1,32 @@ +FROM kubedb/postgres:10.2-v5 + +ENV TIMESCALEDB_VERSION 0.9.1 + +RUN set -ex \ + && apk add --no-cache --virtual .fetch-deps \ + ca-certificates \ + openssl \ + tar \ + && mkdir -p /build/timescaledb \ + && wget -O /timescaledb.tar.gz https://github.com/timescale/timescaledb/archive/$TIMESCALEDB_VERSION.tar.gz \ + && tar -C /build/timescaledb --strip-components 1 -zxf /timescaledb.tar.gz \ + && rm -f /timescaledb.tar.gz \ + \ + && apk add --no-cache --virtual .build-deps \ + coreutils \ + dpkg-dev dpkg \ + gcc \ + libc-dev \ + make \ + cmake \ + util-linux-dev \ + \ + && cd /build/timescaledb \ + && ./bootstrap \ + && cd build && make install \ + && cd ~ \ + \ + && apk del .fetch-deps .build-deps \ + && rm -rf /build + +RUN sed -r -i "s/[#]*\s*(shared_preload_libraries)\s*=\s*'(.*)'/\1 = 'timescaledb,\2'/;s/,'/'/" /scripts/primary/postgresql.conf diff --git
a/content/docs/v2024.1.31/examples/postgres/custom-version/postgresversion.yaml b/content/docs/v2024.1.31/examples/postgres/custom-version/postgresversion.yaml new file mode 100644 index 0000000000..11fb13e1a0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-version/postgresversion.yaml @@ -0,0 +1,23 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PostgresVersion +metadata: + name: timescaledb-2.1.0-pg13 +spec: + coordinator: + image: kubedb/pg-coordinator:v0.1.0 + db: + image: timescale/timescaledb:2.1.0-pg13-oss + distribution: TimescaleDB + exporter: + image: prometheuscommunity/postgres-exporter:v0.9.0 + initContainer: + image: kubedb/postgres-init:0.1.0 + podSecurityPolicies: + databasePolicyName: postgres-db + stash: + addon: + backupTask: + name: postgres-backup-13.1 + restoreTask: + name: postgres-restore-13.1 + version: "13.13" diff --git a/content/docs/v2024.1.31/examples/postgres/custom-version/timescale-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/custom-version/timescale-postgres.yaml new file mode 100644 index 0000000000..db52aca019 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/custom-version/timescale-postgres.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: timescale-postgres + namespace: demo +spec: + version: "timescaledb-2.1.0-pg13" # points to the name of our custom PostgresVersion + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/postgres/initialization/script-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/initialization/script-postgres.yaml new file mode 100644 index 0000000000..696c9ab28d --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/initialization/script-postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: script-postgres + namespace: demo +spec: + version: "13.13" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: pg-init-script diff --git a/content/docs/v2024.1.31/examples/postgres/monitoring/builtin-prom-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/monitoring/builtin-prom-postgres.yaml new file mode 100644 index 0000000000..148a3c5982 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/monitoring/builtin-prom-postgres.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: builtin-prom-postgres + namespace: demo +spec: + version: "13.13" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/postgres/monitoring/coreos-prom-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/monitoring/coreos-prom-postgres.yaml new file mode 100644 index 0000000000..a0e62a57b1 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/monitoring/coreos-prom-postgres.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: coreos-prom-postgres + namespace: demo +spec: + version: "13.13" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + 
interval: 10s diff --git a/content/docs/v2024.1.31/examples/postgres/pg-overview.yaml b/content/docs/v2024.1.31/examples/postgres/pg-overview.yaml new file mode 100644 index 0000000000..1d27823f32 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/pg-overview.yaml @@ -0,0 +1,67 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: p1 + namespace: demo +spec: + version: "13.13" + replicas: 2 + standbyMode: Hot + streamingMode: Asynchronous + authSecret: + name: p1-auth + storageType: "Durable" + storage: + storageClassName: standard + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: pg-init-script + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + app: kubedb + interval: 10s + configSecret: + name: pg-custom-config + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + controller: + annotations: + passMe: ToStatefulSet + spec: + schedulerName: my-scheduler + nodeSelector: + disktype: ssd + imagePullSecrets: + - name: myregistrykey + env: + - name: POSTGRES_DB + value: pgdb + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + serviceTemplates: + - alias: primary + metadata: + annotations: + passMe: ToService + spec: + type: NodePort + ports: + - name: http + port: 5432 + terminationPolicy: "DoNotTerminate" diff --git a/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-postgresversion.yaml b/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-postgresversion.yaml new file mode 100644 index 0000000000..a09fa0cfaa --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-postgresversion.yaml @@ -0,0 +1,23 @@ +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PostgresVersion +metadata: + name: "13.13" +spec: + coordinator: + image: PRIVATE_REGISTRY/pg-coordinator:v0.1.0 + db: + image: PRIVATE_REGISTRY/postgres:13.2-alpine + distribution: PostgreSQL + exporter: + image: PRIVATE_REGISTRY/postgres-exporter:v0.9.0 + initContainer: + image: PRIVATE_REGISTRY/postgres-init:0.1.0 + podSecurityPolicies: + databasePolicyName: postgres-db + stash: + addon: + backupTask: + name: postgres-backup-13.1 + restoreTask: + name: postgres-restore-13.1 + version: "13.13" diff --git a/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-reg-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-reg-postgres.yaml new file mode 100644 index 0000000000..9e47cfa919 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/private-registry/pvt-reg-postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pvt-reg-postgres + namespace: demo +spec: + version: "13.13" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/postgres/quickstart/instant-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/quickstart/instant-postgres.yaml new file mode 100644 index 0000000000..e59bec341f --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/quickstart/instant-postgres.yaml @@ -0,0 +1,8 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: instant-postgres + namespace: demo +spec: + version: "13.13" + storageType: Ephemeral diff --git a/content/docs/v2024.1.31/examples/postgres/quickstart/pgadmin.yaml 
b/content/docs/v2024.1.31/examples/postgres/quickstart/pgadmin.yaml new file mode 100644 index 0000000000..1b1f3b05af --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/quickstart/pgadmin.yaml @@ -0,0 +1,49 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: pgadmin + name: pgadmin + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: pgadmin + template: + metadata: + labels: + app: pgadmin + spec: + containers: + - image: dpage/pgadmin4:latest + imagePullPolicy: Always + name: pgadmin + env: + - name: PGADMIN_DEFAULT_EMAIL + value: "admin" + - name: PGADMIN_DEFAULT_PASSWORD + value: "admin" + - name: PGADMIN_PORT + value: "80" + ports: + - containerPort: 80 + name: http + protocol: TCP +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: pgadmin + name: pgadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: pgadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/examples/postgres/quickstart/quick-postgres.yaml b/content/docs/v2024.1.31/examples/postgres/quickstart/quick-postgres.yaml new file mode 100644 index 0000000000..04f711ee59 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/quickstart/quick-postgres.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres + namespace: demo +spec: + version: "13.13" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/postgres/synchronous/postgres.yaml b/content/docs/v2024.1.31/examples/postgres/synchronous/postgres.yaml new file mode 100644 index 0000000000..14435a16f8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/postgres/synchronous/postgres.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: demo-pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + streamingMode: Synchronous + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/proxysql/builtin-prom-proxysql.yaml b/content/docs/v2024.1.31/examples/proxysql/builtin-prom-proxysql.yaml new file mode 100644 index 0000000000..9ac8ef6382 --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/builtin-prom-proxysql.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: builtin-prom-proxysql + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + backend: + name: my-group + monitor: + agent: prometheus.io/builtin + prometheus: + exporter: + port: 42004 diff --git a/content/docs/v2024.1.31/examples/proxysql/coreos-prom-proxysql.yaml b/content/docs/v2024.1.31/examples/proxysql/coreos-prom-proxysql.yaml new file mode 100644 index 0000000000..58357c4830 --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/coreos-prom-proxysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: coreos-prom-proxysql + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + backend: + name: my-group + monitor: + agent: prometheus.io/operator + prometheus: + exporter: + port: 42004 + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git 
a/content/docs/v2024.1.31/examples/proxysql/custom-proxysql.yaml b/content/docs/v2024.1.31/examples/proxysql/custom-proxysql.yaml new file mode 100644 index 0000000000..e33e51e0f9 --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/custom-proxysql.yaml @@ -0,0 +1,12 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: custom-proxysql + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + backend: + name: my-group + configSecret: + name: my-custom-config diff --git a/content/docs/v2024.1.31/examples/proxysql/demo-my-group.yaml b/content/docs/v2024.1.31/examples/proxysql/demo-my-group.yaml new file mode 100644 index 0000000000..7c4c4ab04f --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/demo-my-group.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/proxysql/demo-proxy-my-group.yaml b/content/docs/v2024.1.31/examples/proxysql/demo-proxy-my-group.yaml new file mode 100644 index 0000000000..4d7a51dfe5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/demo-proxy-my-group.yaml @@ -0,0 +1,11 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-my-group + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + backend: + name: my-group + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/proxysql/proxysql-private-registry.yaml b/content/docs/v2024.1.31/examples/proxysql/proxysql-private-registry.yaml new file mode 100644 index 0000000000..41b7ad0858 --- /dev/null +++ b/content/docs/v2024.1.31/examples/proxysql/proxysql-private-registry.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxysql-pvt-reg + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + backend: + name: my-group + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/rbac/demo-0.yaml b/content/docs/v2024.1.31/examples/rbac/demo-0.yaml new file mode 100644 index 0000000000..85d656920f --- /dev/null +++ b/content/docs/v2024.1.31/examples/rbac/demo-0.yaml @@ -0,0 +1,57 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: pgadmin + name: pgadmin + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: pgadmin + template: + metadata: + labels: + app: pgadmin + spec: + containers: + - image: dpage/pgadmin4:3 + imagePullPolicy: Always + name: pgadmin + env: + - name: PGADMIN_DEFAULT_EMAIL + value: "admin" + - name: PGADMIN_DEFAULT_PASSWORD + value: "admin" + - name: PGADMIN_PORT + value: "80" + ports: + - containerPort: 80 + name: http + protocol: TCP +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: pgadmin + name: pgadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: pgadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/examples/rbac/demo-1.yaml b/content/docs/v2024.1.31/examples/rbac/demo-1.yaml new file mode 100644 index 0000000000..6fdf89a8df ---
/dev/null +++ b/content/docs/v2024.1.31/examples/rbac/demo-1.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: p1 + namespace: demo +spec: + version: "10.2-v5" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-as-standalone.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-as-standalone.yaml new file mode 100644 index 0000000000..bdf2a62f1f --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-as-standalone.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RedisAutoscaler +metadata: + name: rd-as + namespace: demo +spec: + databaseRef: + name: rd-standalone + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + standalone: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-standalone.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-standalone.yaml new file mode 100644 index 0000000000..d7105b6f07 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/rd-standalone.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-standalone + namespace: demo +spec: + version: "6.2.14" + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sen-as.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sen-as.yaml new file mode 100644 index 0000000000..d4f9cda427 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sen-as.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RedisSentinelAutoscaler +metadata: + name: sen-as + namespace: demo +spec: + databaseRef: + name: sen-demo + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + sentinel: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sentinel.yaml new file mode 100644 index 0000000000..ac4ac6000d --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/autoscaling/compute/sentinel.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-demo + namespace: demo +spec: + version: "6.2.14" + storageType: Durable + replicas: 3 + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-as.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-as.yaml new file mode 100644 index 0000000000..debe1149e2 --- /dev/null +++
b/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-as.yaml @@ -0,0 +1,13 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RedisAutoscaler +metadata: + name: rd-as + namespace: demo +spec: + databaseRef: + name: rd-standalone + storage: + standalone: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 diff --git a/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-standalone.yaml b/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-standalone.yaml new file mode 100644 index 0000000000..8406fdd5cd --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/autoscaling/storage/rd-standalone.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-standalone + namespace: demo +spec: + version: "6.2.14" + storageType: Durable + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/cli/redis-demo.yaml b/content/docs/v2024.1.31/examples/redis/cli/redis-demo.yaml new file mode 100644 index 0000000000..87c5df341c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/cli/redis-demo.yaml @@ -0,0 +1,14 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-demo + namespace: demo +spec: + version: 6.0.20 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/clustering/demo-1.yaml b/content/docs/v2024.1.31/examples/redis/clustering/demo-1.yaml new file mode 100644 index 0000000000..4257afb75d --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/clustering/demo-1.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-cluster + namespace: demo +spec: + version: 6.2.14 + mode: Cluster + cluster: + master: 3 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce diff --git a/content/docs/v2024.1.31/examples/redis/custom-config/redis-custom.yaml b/content/docs/v2024.1.31/examples/redis/custom-config/redis-custom.yaml new file mode 100644 index 0000000000..6e22ae4815 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/custom-config/redis-custom.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: custom-redis + namespace: demo +spec: + version: 6.2.14 + configSecret: + name: rd-configuration + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/custom-config/redis.conf b/content/docs/v2024.1.31/examples/redis/custom-config/redis.conf new file mode 100644 index 0000000000..5f830b9e4d --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/custom-config/redis.conf @@ -0,0 +1,2 @@ +databases 10 +maxclients 425 diff --git a/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db-two.yaml b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db-two.yaml new file mode 100644 index 0000000000..1acc12bd0b --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db-two.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: minute-redis + namespace: demo +spec: + version: 6.2.14 + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storageType: Durable + storage: + storageClassName: 
"standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db.yaml b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db.yaml new file mode 100644 index 0000000000..2990a47d24 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: quick-redis + namespace: demo +spec: + version: 6.2.14 + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-role.yaml b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-role.yaml new file mode 100644 index 0000000000..7ca000becb --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/custom-rbac/rd-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - redis-db + resources: + - podsecuritypolicies + verbs: + - use diff --git a/content/docs/v2024.1.31/examples/redis/demo-0.yaml b/content/docs/v2024.1.31/examples/redis/demo-0.yaml new file mode 100644 index 0000000000..1d9f59eb36 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/demo-0.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demo +spec: + finalizers: + - kubernetes diff --git a/content/docs/v2024.1.31/examples/redis/monitoring/builtin-prom-redis.yaml b/content/docs/v2024.1.31/examples/redis/monitoring/builtin-prom-redis.yaml new file mode 100644 index 0000000000..35cb60b7a0 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/monitoring/builtin-prom-redis.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: builtin-prom-redis + namespace: demo +spec: + version: 6.0.20 + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin diff --git a/content/docs/v2024.1.31/examples/redis/monitoring/coreos-prom-redis.yaml b/content/docs/v2024.1.31/examples/redis/monitoring/coreos-prom-redis.yaml new file mode 100644 index 0000000000..a999e9d709 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/monitoring/coreos-prom-redis.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: coreos-prom-redis + namespace: demo +spec: + version: 6.0.20 + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/examples/redis/private-registry/demo-1.yaml b/content/docs/v2024.1.31/examples/redis/private-registry/demo-1.yaml new file mode 100644 index 0000000000..b1d8a6568c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/private-registry/demo-1.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: myregistrykey + namespace: demo +data: + .dockerconfigjson: PGJhc2UtNjQtZW5jb2RlZC1qc29uLWhlcmU+ +type: kubernetes.io/dockerconfigjson diff --git 
a/content/docs/v2024.1.31/examples/redis/private-registry/demo-2.yaml b/content/docs/v2024.1.31/examples/redis/private-registry/demo-2.yaml new file mode 100644 index 0000000000..fce69733ea --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/private-registry/demo-2.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-pvt-reg + namespace: demo +spec: + version: 6.2.14 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/examples/redis/quickstart/demo-1.yaml b/content/docs/v2024.1.31/examples/redis/quickstart/demo-1.yaml new file mode 100644 index 0000000000..419b601d28 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/quickstart/demo-1.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-quickstart + namespace: demo +spec: + version: 6.2.14 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/clusterissuer.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/clusterissuer.yaml new file mode 100644 index 0000000000..f18ef6ca85 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/clusterissuer.yaml @@ -0,0 +1,7 @@ +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: redis-ca-issuer +spec: + ca: + secretName: redis-ca diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/issuer.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/issuer.yaml new file mode 100644 index 0000000000..c812454645 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: redis-ca-issuer + namespace: demo +spec: + ca: + secretName: redis-ca diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/new-issuer.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/new-issuer.yaml new file mode 100644 index 0000000000..594e62145c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/new-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: rd-new-issuer + namespace: demo +spec: + ca: + secretName: redis-new-ca diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-add-tls.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-add-tls.yaml new file mode 100644 index 0000000000..e863171351 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-add-tls.yaml @@ -0,0 +1,21 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + issuerRef: + name: redis-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - redis + organizationalUnits: + - client diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-change-issuer.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-change-issuer.yaml new file mode 100644 index 0000000000..1781e5ee5c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-change-issuer.yaml @@ -0,0 +1,14 @@ +apiVersion: 
ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + issuerRef: + name: rd-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-remove.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-remove.yaml new file mode 100644 index 0000000000..f12ad7212c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + remove: true diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-rotate.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-rotate.yaml new file mode 100644 index 0000000000..3a1d34a34d --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-ops-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + rotateCertificates: true diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-sentinel.yaml new file mode 100644 index 0000000000..4ce5e6e7fa --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/rd-sentinel.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + sentinelRef: + name: sen-sample + namespace: demo + mode: Sentinel + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/redis-standalone.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/redis-standalone.yaml new file mode 100644 index 0000000000..b6f183c233 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/redis-standalone.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-sample + namespace: demo +spec: + version: "6.2.14" + mode: Standalone + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-add-tls.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-add-tls.yaml new file mode 100644 index 0000000000..0a92ea796b --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-add-tls.yaml @@ -0,0 +1,26 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + sentinel: + ref: + name: sen-demo-tls + namespace: demo + removeUnusedSentinel: true + issuerRef: + apiGroup: cert-manager.io + name: redis-ca-issuer + kind: ClusterIssuer + certificates: + - alias: client + subject: + organizations: + - redis + organizationalUnits: + - client diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-remove.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-remove.yaml new file mode 100644 index 
0000000000..3b532da57b --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-remove.yaml @@ -0,0 +1,16 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + sentinel: + ref: + name: sen-sample + namespace: demo + removeUnusedSentinel: true + remove: true diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-rotate.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-rotate.yaml new file mode 100644 index 0000000000..03fde5a0c2 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sen-ops-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisSentinelOpsRequest +metadata: + name: sen-ops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sen-demo-tls + tls: + rotateCertificates: true diff --git a/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sentinel.yaml new file mode 100644 index 0000000000..40c622a189 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/reconfigure-tls/sentinel.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-cluster.yaml b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-cluster.yaml new file mode 100644 index 0000000000..85420791a6 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-cluster.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: redisops-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: redis-cluster + horizontalScaling: + master: 4 + replicas: 1 diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-redis-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-redis-sentinel.yaml new file mode 100644 index 0000000000..20a59711b3 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-redis-sentinel.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: rd-sample + horizontalScaling: + replicas: 5 diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-sentinel.yaml new file mode 100644 index 0000000000..199828a379 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/horizontal-sentinel.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisSentinelOpsRequest +metadata: + name: sen-ops-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sen-sample + horizontalScaling: + replicas: 3 diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-cluster.yaml 
b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-cluster.yaml new file mode 100644 index 0000000000..88e410cef3 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-cluster.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-cluster + namespace: demo +spec: + version: 6.2.14 + mode: Cluster + cluster: + master: 3 + replicas: 2 + storageType: Durable + storage: + resources: + requests: + storage: "1Gi" + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-sentinel.yaml new file mode 100644 index 0000000000..4ce5e6e7fa --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/rd-sentinel.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + sentinelRef: + name: sen-sample + namespace: demo + mode: Sentinel + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/sentinel.yaml new file mode 100644 index 0000000000..b948ee8063 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/horizontal-scaling/sentinel.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 5 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-cluster.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-cluster.yaml new file mode 100644 index 0000000000..bd782c2cae --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-cluster.yaml @@ -0,0 +1,26 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-cluster + namespace: demo +spec: + version: 7.0.14 + mode: Cluster + cluster: + master: 3 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: "1Gi" + storageClassName: "standard" + accessModes: + - ReadWriteOnce + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-sentinel.yaml new file mode 100644 index 0000000000..1e611092a8 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-sentinel.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + sentinelRef: + name: sen-sample + namespace: demo + mode: Sentinel + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + terminationPolicy: DoNotTerminate diff --git 
a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-standalone.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-standalone.yaml new file mode 100644 index 0000000000..a225d1bf86 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/rd-standalone.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-quickstart + namespace: demo +spec: + version: 6.2.14 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/sentinel.yaml new file mode 100644 index 0000000000..95b2698a98 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/sentinel.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-cluster.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-cluster.yaml new file mode 100644 index 0000000000..0b4deca4b7 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-cluster.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: redisops-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: redis-cluster + verticalScaling: + redis: + resources: + requests: + memory: "300Mi" + cpu: "200m" + limits: + memory: "800Mi" + cpu: "500m" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-redis-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-redis-sentinel.yaml new file mode 100644 index 0000000000..20c6219c94 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-redis-sentinel.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: rd-sample + verticalScaling: + redis: + resources: + requests: + memory: "300Mi" + cpu: "200m" + limits: + memory: "800Mi" + cpu: "500m" \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-sentinel.yaml new file mode 100644 index 0000000000..ae0e70816f --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-sentinel.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisSentinelOpsRequest +metadata: + name: sen-ops-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: sen-sample + verticalScaling: + redissentinel: + resources: + requests: + memory: "300Mi" + cpu: "200m" + limits: + memory: "800Mi" + cpu: "500m" diff --git 
a/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-standalone.yaml b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-standalone.yaml new file mode 100644 index 0000000000..c48afc6aca --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/scaling/vertical-scaling/vertical-standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: redisopsstandalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: redis-quickstart + verticalScaling: + redis: + resources: + requests: + memory: "300Mi" + cpu: "200m" + limits: + memory: "800Mi" + cpu: "500m" diff --git a/content/docs/v2024.1.31/examples/redis/sentinel/new-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/sentinel/new-sentinel.yaml new file mode 100644 index 0000000000..959f23fc96 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/sentinel/new-sentinel.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: new-sentinel + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/redis/sentinel/redis.yaml b/content/docs/v2024.1.31/examples/redis/sentinel/redis.yaml new file mode 100644 index 0000000000..a11087771f --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/sentinel/redis.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-demo + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + sentinelRef: + name: sen-demo + namespace: demo + mode: Sentinel + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/redis/sentinel/replace-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/sentinel/replace-sentinel.yaml new file mode 100644 index 0000000000..17a87ff62e --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/sentinel/replace-sentinel.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: replace-sentinel + namespace: demo +spec: + type: ReplaceSentinel + databaseRef: + name: rd-demo + sentinel: + ref: + name: new-sentinel + namespace: demo + removeUnusedSentinel: true diff --git a/content/docs/v2024.1.31/examples/redis/sentinel/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/sentinel/sentinel.yaml new file mode 100644 index 0000000000..e377e63d02 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/sentinel/sentinel.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-demo + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/examples/redis/tls/clusterissuer.yaml b/content/docs/v2024.1.31/examples/redis/tls/clusterissuer.yaml new file mode 100644 index 0000000000..f18ef6ca85 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/clusterissuer.yaml @@ -0,0 +1,7 @@ +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: redis-ca-issuer +spec: + ca: + secretName: redis-ca diff --git a/content/docs/v2024.1.31/examples/redis/tls/issuer.yaml 
b/content/docs/v2024.1.31/examples/redis/tls/issuer.yaml new file mode 100644 index 0000000000..c812454645 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: redis-ca-issuer + namespace: demo +spec: + ca: + secretName: redis-ca diff --git a/content/docs/v2024.1.31/examples/redis/tls/rd-cluster-ssl.yaml b/content/docs/v2024.1.31/examples/redis/tls/rd-cluster-ssl.yaml new file mode 100644 index 0000000000..9c109372ec --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/rd-cluster-ssl.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-tls + namespace: demo +spec: + version: "6.2.14" + mode: Cluster + cluster: + master: 3 + replicas: 1 + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: redis-ca-issuer + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/tls/rd-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/tls/rd-sentinel.yaml new file mode 100644 index 0000000000..d07fe48584 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/rd-sentinel.yaml @@ -0,0 +1,24 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-tls + namespace: demo +spec: + version: "6.2.14" + mode: Sentinel + replicas: 3 + sentinelRef: + name: sen-tls + namespace: demo + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: ClusterIssuer + name: redis-ca-issuer + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/tls/rd-standalone-ssl.yaml b/content/docs/v2024.1.31/examples/redis/tls/rd-standalone-ssl.yaml new file mode 100644 index 0000000000..ef1be0193e --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/rd-standalone-ssl.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-tls + namespace: demo +spec: + version: "6.2.14" + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: redis-ca-issuer + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/examples/redis/tls/sentinel-ssl.yaml b/content/docs/v2024.1.31/examples/redis/tls/sentinel-ssl.yaml new file mode 100644 index 0000000000..9c2cd5cbcc --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/tls/sentinel-ssl.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-tls + namespace: demo +spec: + replicas: 3 + version: "6.2.14" + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: ClusterIssuer + name: redis-ca-issuer + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/update-version/rd-cluster.yaml b/content/docs/v2024.1.31/examples/redis/update-version/rd-cluster.yaml new file mode 100644 index 0000000000..eca022edd4 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/rd-cluster.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-cluster + namespace: demo +spec: + version: 6.0.20 + mode: Cluster + cluster: + master: 3 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: "100Mi" + 
storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/examples/redis/update-version/rd-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/update-version/rd-sentinel.yaml new file mode 100644 index 0000000000..4ce5e6e7fa --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/rd-sentinel.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: rd-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + sentinelRef: + name: sen-sample + namespace: demo + mode: Sentinel + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/update-version/rd-standalone.yaml b/content/docs/v2024.1.31/examples/redis/update-version/rd-standalone.yaml new file mode 100644 index 0000000000..2a8256c941 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/rd-standalone.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis-quickstart + namespace: demo +spec: + version: 6.2.14 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/update-version/sentinel.yaml b/content/docs/v2024.1.31/examples/redis/update-version/sentinel.yaml new file mode 100644 index 0000000000..40c622a189 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/sentinel.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: RedisSentinel +metadata: + name: sen-sample + namespace: demo +spec: + version: 6.2.14 + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: "standard" + accessModes: + - ReadWriteOnce + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/examples/redis/update-version/update-standalone.yaml b/content/docs/v2024.1.31/examples/redis/update-version/update-standalone.yaml new file mode 100644 index 0000000000..5eeae82d60 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/update-standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: update-standalone + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: redis-quickstart + updateVersion: + targetVersion: 7.0.14 diff --git a/content/docs/v2024.1.31/examples/redis/update-version/update-version.yaml b/content/docs/v2024.1.31/examples/redis/update-version/update-version.yaml new file mode 100644 index 0000000000..f2aadea09a --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/update-version.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: update-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: redis-cluster + updateVersion: + targetVersion: 7.0.14 \ No newline at end of file diff --git a/content/docs/v2024.1.31/examples/redis/update-version/upgrade-redis-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/update-version/upgrade-redis-sentinel.yaml new file mode 100644 index 0000000000..4e488ba47a --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/upgrade-redis-sentinel.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest 
+metadata: + name: update-rd-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: rd-sample + updateVersion: + targetVersion: 7.0.4 diff --git a/content/docs/v2024.1.31/examples/redis/update-version/upgrade-sentinel.yaml b/content/docs/v2024.1.31/examples/redis/update-version/upgrade-sentinel.yaml new file mode 100644 index 0000000000..fb8948a0e5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/update-version/upgrade-sentinel.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisSentinelOpsRequest +metadata: + name: update-sen-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: sen-sample + updateVersion: + targetVersion: 7.0.14 diff --git a/content/docs/v2024.1.31/examples/redis/volume-expansion/online-vol-expansion.yaml b/content/docs/v2024.1.31/examples/redis/volume-expansion/online-vol-expansion.yaml new file mode 100644 index 0000000000..a7eac12ae5 --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/volume-expansion/online-vol-expansion.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-online-volume-expansion + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: sample-redis + volumeExpansion: + mode: "Online" + redis: 2Gi diff --git a/content/docs/v2024.1.31/examples/redis/volume-expansion/sample-redis.yaml b/content/docs/v2024.1.31/examples/redis/volume-expansion/sample-redis.yaml new file mode 100644 index 0000000000..8986ea9b2c --- /dev/null +++ b/content/docs/v2024.1.31/examples/redis/volume-expansion/sample-redis.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: sample-redis + namespace: demo +spec: + version: 6.2.14 + mode: Cluster + cluster: + master: 3 + replicas: 1 + storageType: Durable + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Halt diff --git a/content/docs/v2024.1.31/guides/README.md b/content/docs/v2024.1.31/guides/README.md new file mode 100644 index 0000000000..fd0f81f51a --- /dev/null +++ b/content/docs/v2024.1.31/guides/README.md @@ -0,0 +1,41 @@ +--- +title: Guides | KubeDB +menu: + docs_v2024.1.31: + identifier: guides-readme + name: Readme + parent: guides + weight: -1 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/ +aliases: +- /docs/v2024.1.31/guides/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# Guides + +Guides to show you how to perform tasks with KubeDB: + +- [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/README). Shows how to manage Elasticsearch & OpenSearch using KubeDB. +- [MariaDB](/docs/v2024.1.31/guides/mariadb). Shows how to manage MariaDB using KubeDB. +- [Memcached](/docs/v2024.1.31/guides/memcached/README). Shows how to manage Memcached using KubeDB. +- [MongoDB](/docs/v2024.1.31/guides/mongodb/README). Shows how to manage MongoDB using KubeDB. +- [MySQL](/docs/v2024.1.31/guides/mysql/README). Shows how to manage MySQL using KubeDB. +- [Percona XtraDB](/docs/v2024.1.31/guides/percona-xtradb/README). Shows how to manage Percona XtraDB using KubeDB. +- [PgBouncer](/docs/v2024.1.31/guides/pgbouncer/README). Shows how to manage PgBouncer using KubeDB. +- [PostgreSQL](/docs/v2024.1.31/guides/postgres/README). 
Shows how to manage PostgreSQL using KubeDB. +- [ProxySQL](/docs/v2024.1.31/guides/proxysql/README). Shows how to manage ProxySQL using KubeDB. +- [Redis](/docs/v2024.1.31/guides/redis/README). Shows how to manage Redis using KubeDB. +- [Kafka](/docs/v2024.1.31/guides/kafka/README). Shows how to manage Kafka using KubeDB. diff --git a/content/docs/v2024.1.31/guides/_index.md b/content/docs/v2024.1.31/guides/_index.md new file mode 100644 index 0000000000..d245afc6aa --- /dev/null +++ b/content/docs/v2024.1.31/guides/_index.md @@ -0,0 +1,22 @@ +--- +title: Guides | KubeDB +menu: + docs_v2024.1.31: + identifier: guides + name: Guides + weight: 40 + pre: dropdown +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/README.md b/content/docs/v2024.1.31/guides/elasticsearch/README.md new file mode 100644 index 0000000000..45ec0ea138 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/README.md @@ -0,0 +1,121 @@ +--- +title: Elasticsearch +menu: + docs_v2024.1.31: + identifier: es-readme-elasticsearch + name: Elasticsearch + parent: es-elasticsearch-guides + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/elasticsearch/ +aliases: +- /docs/v2024.1.31/guides/elasticsearch/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+ +## Elasticsearch Features + +| Features | Community | Enterprise | +|----------|:---------:|:----------:| +| Combined Cluster (n nodes with master, data, ingest: true; n >= 1) | ✓ | ✓ | +| Topology Cluster (n master, m data, x ingest nodes; n,m,x >= 1) | ✓ | ✓ | +| Hot-Warm-Cold Topology Cluster (a hot, b warm, c cold nodes; a,b,c >= 1) | ✓ | ✓ | +| TLS: Add, Remove, Update, Rotate ([Cert Manager](https://cert-manager.io/docs/)) | ✗ | ✓ | +| Automated Version Update | ✗ | ✓ | +| Automatic Vertical Scaling | ✗ | ✓ | +| Automated Horizontal Scaling | ✗ | ✓ | +| Automated Volume Expansion | ✗ | ✓ | +| Backup/Recovery: Instant, Scheduled ([Stash](https://stash.run/)) | ✓ | ✓ | +| Dashboard (Kibana, OpenSearch-Dashboards) | ✓ | ✓ | +| Grafana Dashboards | ✗ | ✓ | +| Initialization from Snapshot ([Stash](https://stash.run/)) | ✓ | ✓ | +| Authentication ([OpenSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/)) | ✓ | ✓ | +| Authorization ([OpenSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/)) | ✓ | ✓ | +| Persistent Volume | ✓ | ✓ | +| Exports Prometheus Metrics | ✓ | ✓ | +| Custom Configuration | ✓ | ✓ | +| Using Custom Docker Image | ✓ | ✓ | +| Initialization From Script | ✗ | ✗ | +| Autoscaling (vertically) | ✗ | ✓ | + +## Lifecycle of Elasticsearch Object + +*(figure: lifecycle of an Elasticsearch object)*
+ +## Available Elasticsearch Versions + +KubeDB supports `Elasticsearch` provided by Elastic with the `xpack` auth plugin. `OpenSearch` and `OpenDistro` are supported too. KubeDB also supports some versions of Elasticsearch with the `searchguard` auth plugin. Compatible `Kibana` and `OpenSearch-Dashboards` are available for most Elasticsearch versions with the `xpack` auth plugin and for OpenSearch. `Kibana` and `OpenSearch-Dashboards` can be provisioned externally, or by KubeDB using the `ElasticsearchDashboard` CRD. + +**X-Pack** + +| Version | Elasticsearch | Dashboard (Kibana) | +|:-------:|:-------------:|:------------------:| +| 6.8.x | ✓ | ✓ | +| 7.13.x | ✓ | ✓ | +| 7.14.x | ✓ | ✓ | +| 7.16.x | ✓ | ✓ | +| 7.17.x | ✓ | ✓ | +| 8.2.x | ✓ | ✓ | +| 8.5.x | ✓ | ✓ | +| 8.6.x | ✓ | ✓ | +| 8.8.x | ✓ | ✓ | +| 8.11.x | ✓ | ✓ | + +**OpenSearch** + +| Version | OpenSearch | Dashboard (OpenSearch-Dashboards) | +|:-------:|:----------:|:---------------------------------:| +| 1.1.x | ✓ | ✓ | +| 1.2.x | ✓ | ✓ | +| 1.3.x | ✓ | ✓ | +| 2.0.x | ✓ | ✓ | +| 2.5.x | ✓ | ✓ | +| 2.8.x | ✓ | ✓ | +| 2.11.x | ✓ | ✓ | +
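+To see which of these versions are actually installed in your cluster, you can list the `ElasticsearchVersion` catalog objects directly. A quick sanity check — this assumes the standard KubeDB catalog chart is installed, and the exact set returned depends on your installation: + +```bash +$ kubectl get elasticsearchversions +``` +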
+ +> The listed ElasticsearchVersions are tested and provided as a part of the installation process (i.e., the catalog chart), but you are free to create your own [ElasticsearchVersion](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/) object with your custom Elasticsearch image. + +## User Guide + +- [Quickstart Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/) with KubeDB Operator. +- [Quickstart OpenSearch](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/) with KubeDB Operator. +- [Quickstart Kibana](/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/) with KubeDB Operator. +- [Quickstart OpenSearch-Dashboards](/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/) with KubeDB Operator. +- [Elasticsearch Clustering](/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/) supported by KubeDB. +- [Backup & Restore Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) database using Stash. +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus). +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/elasticsearch/cli/cli) to manage databases like kubectl for Kubernetes. +- Detailed concepts of the [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/_index.md new file mode 100644 index 0000000000..29f3c7294c --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch +menu: + docs_v2024.1.31: + identifier: es-elasticsearch-guides + name: Elasticsearch + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/_index.md new file mode 100644 index 0000000000..3c6697ca6e --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/_index.md @@ -0,0 +1,22 @@ +--- +title: Autoscaling +menu: + docs_v2024.1.31: + identifier: es-auto-scaling + name: Autoscaling + parent: es-elasticsearch-guides + weight: 44 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/_index.md new file mode 100644 index 0000000000..969ec06f6d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/_index.md @@ -0,0 +1,22 @@ +--- +title: Compute Autoscaling +menu: + docs_v2024.1.31: + identifier: es-compute-auto-scaling + name: Compute Autoscaling + parent: es-auto-scaling + weight: 5 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/index.md new file mode 100644 index 0000000000..28905686c7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/index.md @@ -0,0 +1,527 @@ +--- +title: Elasticsearch Combined Cluster Autoscaling +menu: + docs_v2024.1.31: + identifier: es-auto-scaling-combined + name: Combined + parent: es-compute-auto-scaling + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of an Elasticsearch Combined Cluster + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. `cpu` and `memory` of an Elasticsearch combined cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+ +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) + - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/) + - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/) + - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in this [directory](/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls) of the [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of a Combined Cluster + +Here, we are going to deploy an `Elasticsearch` in combined cluster mode using a version supported by the `KubeDB` operator. Then we are going to apply an `ElasticsearchAutoscaler` to set up autoscaling. + +### Deploy Elasticsearch Combined Cluster + +In this section, we are going to deploy an Elasticsearch combined cluster with ElasticsearchVersion `xpack-8.2.3`. Then, in the next section, we will set up autoscaling for this database using the `ElasticsearchAutoscaler` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-combined + namespace: demo +spec: + enableSSL: true + version: xpack-8.2.3 + storageType: Durable + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "500m" + limits: + cpu: "500m" + memory: "1.2Gi" + terminationPolicy: WipeOut +``` + +Let's create the `Elasticsearch` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/compute/combined/yamls/es-combined.yaml +elasticsearch.kubedb.com/es-combined created +``` + +Now, wait until `es-combined` has status `Ready`, i.e., + +```bash +$ kubectl get elasticsearch -n demo -w +NAME VERSION STATUS AGE +es-combined xpack-8.2.3 Provisioning 4s +es-combined xpack-8.2.3 Provisioning 7s +.... +.... +es-combined xpack-8.2.3 Ready 60s + +``` + +Let's check the Pod containers resources, + +```json +$ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "500m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "500m", + "memory": "1288490188800m" + } +} +``` + +Let's check the Elasticsearch resources, + +```json +$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "500m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "500m", + "memory": "1288490188800m" + } +} +``` + +You can see from the above outputs that the resources are the same as the ones we assigned while deploying the Elasticsearch. + +We are now ready to apply the `ElasticsearchAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute (i.e. `cpu` and `memory`) autoscaling using an `ElasticsearchAutoscaler` object.
+ +#### Create ElasticsearchAutoscaler Object + +To set up compute resource autoscaling for this combined cluster, we have to create an `ElasticsearchAutoscaler` CRO with our desired configuration. Below is the YAML of the `ElasticsearchAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: ElasticsearchAutoscaler +metadata: + name: es-combined-as + namespace: demo +spec: + databaseRef: + name: es-combined + compute: + node: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 5 + minAllowed: + cpu: 1 + memory: "2.1Gi" + maxAllowed: + cpu: 2 + memory: 3Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on the `es-combined` database. +- `spec.compute.node.trigger` specifies that compute resource autoscaling is enabled for this cluster. +- `spec.compute.node.podLifeTimeThreshold` specifies the minimum lifetime at least one pod must have before a vertical scaling is initiated. +- `spec.compute.node.minAllowed` specifies the minimum allowed resources for the Elasticsearch node. +- `spec.compute.node.maxAllowed` specifies the maximum allowed resources for the Elasticsearch node. +- `spec.compute.node.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.node.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update. +- `spec.compute.node.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 2 fields; know more about them here: [timeout](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/#spectimeout), [apply](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/#specapply). See the sketch below for how they fit into the YAML.
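+ +The autoscaler YAML above omits `spec.opsRequestOptions`. As a minimal sketch of how those options could be added — modeled on the `opsRequestOptions` used in the Redis autoscaler examples of this release, so treat the exact values as illustrative rather than required: + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: ElasticsearchAutoscaler +metadata: + name: es-combined-as + namespace: demo +spec: + databaseRef: + name: es-combined + opsRequestOptions: + timeout: 3m # fail the generated ops request if it cannot finish within this window + apply: IfReady # only apply the ops request while the database is in Ready state + compute: + node: + trigger: "On" + # ... same compute fields as shown above ... +``` +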
+ +Let's create the `ElasticsearchAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/compute/combined/yamls/es-auto-scaler.yaml +elasticsearchautoscaler.autoscaling.kubedb.com/es-combined-as created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `elasticsearchautoscaler` resource is created successfully, + +```bash +$ kubectl get elasticsearchautoscaler -n demo +NAME AGE +es-combined-as 14s + +$ kubectl describe elasticsearchautoscaler -n demo es-combined-as +Name: es-combined-as +Namespace: demo +Labels: <none> +Annotations: <none> +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: ElasticsearchAutoscaler +Metadata: + Creation Timestamp: 2022-12-29T10:54:00Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:node: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-12-29T10:54:00Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-12-29T10:54:27Z + Resource Version: 12469 + UID: 35640903-7aaf-46c6-9bc4-bd1771313e30 +Spec: + Compute: + Node: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 2 + Memory: 3Gi + Min Allowed: + Cpu: 1 + Memory: 2254857830400m + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 5 + Trigger: On + Database Ref: + Name: es-combined + Ops Request Options: + Apply: IfReady +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 2849 + Index: 1 + Weight: 10000 + Index: 2 + Weight: 2856 + Index: 3 + Weight: 714 + Index: 5 + Weight: 714 + Index: 6 + Weight: 713 + Index: 7 + Weight: 714 + Index: 12 + Weight: 713 + Index: 21 + Weight: 713 + Index: 25 + Weight: 2138 + Reference Timestamp: 2022-12-29T00:00:00Z + Total Weight: 4.257959878725071 + First Sample Start: 2022-12-29T10:54:03Z + Last Sample Start: 2022-12-29T11:04:18Z + Last Update Time: 2022-12-29T11:04:26Z + Memory Histogram: + Reference Timestamp: 2022-12-30T00:00:00Z + Ref: + Container Name: elasticsearch + Vpa Object Name: es-combined + Total Samples Count: 31 + Version: v3 + Conditions: + Last Transition Time: 2022-12-29T10:54:27Z + Message: Successfully created elasticsearchOpsRequest demo/esops-es-combined-ujb5hy + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-12-29T10:54:26Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: elasticsearch + Lower Bound: + Cpu: 1 + Memory: 2254857830400m + Target: + Cpu: 1 + Memory: 2254857830400m + Uncapped Target: + Cpu: 442m + Memory: 1555165137 + Upper Bound: + Cpu: 2 + Memory: 3Gi + Vpa Name: es-combined +Events: <none> +``` + +So, the `elasticsearchautoscaler` resource is created successfully.
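+ +If you prefer JSON over the verbose `describe` output, the recommendation can also be read straight from the autoscaler's status. A hedged one-liner — the field path follows the `Status.Vpas.Recommendation` layout shown above and may differ across releases: + +```bash +$ kubectl get elasticsearchautoscaler -n demo es-combined-as -o json | jq '.status.vpas[0].recommendation' +``` +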
+
+You can see in the `Status.Vpas.Recommendation` section that a recommendation has been generated for our database. Our autoscaler operator continuously watches the generated recommendations and creates an `elasticsearchopsrequest` based on them if the database pods need to be scaled up or down.
+
+Let's watch the `elasticsearchopsrequest` in the demo namespace to see if any `elasticsearchopsrequest` object is created. After some time you'll see that an `elasticsearchopsrequest` will be created based on the recommendation.
+
+```bash
+$ kubectl get elasticsearchopsrequest -n demo
+NAME                       TYPE              STATUS        AGE
+esops-es-combined-ujb5hy   VerticalScaling   Progressing   1m
+```
+
+Let's wait for the opsRequest to become successful.
+
+```bash
+$ kubectl get elasticsearchopsrequest -n demo
+NAME                       TYPE              STATUS       AGE
+esops-es-combined-ujb5hy   VerticalScaling   Successful   1m
+```
+
+We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe elasticsearchopsrequest -n demo esops-es-combined-ujb5hy
+Name:         esops-es-combined-ujb5hy
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         ElasticsearchOpsRequest
+Metadata:
+  Creation Timestamp:  2022-12-29T10:54:27Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:ownerReferences:
+          .:
+          k:{"uid":"35640903-7aaf-46c6-9bc4-bd1771313e30"}:
+      f:spec:
+        .:
+        f:apply:
+        f:databaseRef:
+        f:type:
+        f:verticalScaling:
+          .:
+          f:node:
+            .:
+            f:limits:
+              .:
+              f:cpu:
+              f:memory:
+            f:requests:
+              .:
+              f:cpu:
+              f:memory:
+    Manager:      kubedb-autoscaler
+    Operation:    Update
+    Time:         2022-12-29T10:54:27Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:      kubedb-ops-manager
+    Operation:    Update
+    Subresource:  status
+    Time:         2022-12-29T10:54:27Z
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ElasticsearchAutoscaler
+    Name:                  es-combined-as
+    UID:                   35640903-7aaf-46c6-9bc4-bd1771313e30
+  Resource Version:        11992
+  UID:                     4aa5295f-0702-45ac-9ae8-3cb496b0e740
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:  es-combined
+  Type:    VerticalScaling
+  Vertical Scaling:
+    Node:
+      Limits:
+        Cpu:     1
+        Memory:  2254857830400m
+      Requests:
+        Cpu:     1
+        Memory:  2254857830400m
+Status:
+  Conditions:
+    Last Transition Time:  2022-12-29T10:54:27Z
+    Message:               Elasticsearch ops request is vertically scaling the nodes
+    Observed Generation:   1
+    Reason:                VerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2022-12-29T10:54:39Z
+    Message:               successfully reconciled the Elasticsearch resources
+    Observed Generation:   1
+    Reason:                Reconciled
+    Status:                True
+    Type:                  Reconciled
+    Last Transition Time:  2022-12-29T10:58:39Z
+    Message:               Successfully restarted all nodes
+    Observed Generation:   1
+    Reason:                RestartNodes
+    Status:                True
+    Type:                  RestartNodes
+    Last Transition Time:  2022-12-29T10:58:44Z
+    Message:               successfully updated Elasticsearch CR
+    Observed Generation:   1
+    Reason:                UpdateElasticsearchCR
+    Status:                True
+    Type:                  UpdateElasticsearchCR
+    Last Transition Time:  2022-12-29T10:58:45Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                 Age    From                         Message
+  ----    ------                 ----   ----                         -------
+  Normal  PauseDatabase          8m25s  KubeDB Ops-manager Operator  Pausing Elasticsearch demo/es-combined
+  Normal  Reconciled             8m13s  KubeDB Ops-manager Operator  successfully reconciled the Elasticsearch resources
+  Normal  RestartNodes           4m13s  KubeDB Ops-manager Operator  Successfully restarted all nodes
+  Normal  UpdateElasticsearchCR  4m7s   KubeDB Ops-manager Operator  successfully updated Elasticsearch CR
+  Normal  ResumeDatabase         4m7s   KubeDB Ops-manager Operator  Resuming Elasticsearch demo/es-combined
+  Normal  Successful             4m7s   KubeDB Ops-manager Operator  Successfully Updated Database
+```
+
+Now, we are going to verify from the Pod and the Elasticsearch YAML whether the resources of the combined cluster have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1288490188800m"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1288490188800m"
+  }
+}
+
+$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "1",
+    "memory": "2254857830400m"
+  },
+  "requests": {
+    "cpu": "1",
+    "memory": "2254857830400m"
+  }
+}
+```
+
+The above output verifies that we have successfully auto-scaled the resources of the Elasticsearch combined cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete es -n demo es-combined
+$ kubectl delete elasticsearchautoscaler -n demo es-combined-as
+$ kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-auto-scaler.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-auto-scaler.yaml
new file mode 100644
index 0000000000..a016686658
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-auto-scaler.yaml
@@ -0,0 +1,21 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-combined-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-combined
+  compute:
+    node:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 5
+      minAllowed:
+        cpu: 1
+        memory: "2.1Gi"
+      maxAllowed:
+        cpu: 2
+        memory: 3Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-combined.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-combined.yaml
new file mode 100644
index 0000000000..aa9873b8fe
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/combined/yamls/es-combined.yaml
@@ -0,0 +1,26 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-combined
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.2.3
+  storageType: Durable
+  replicas: 3
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "500m"
+        limits:
+          cpu: "500m"
+          memory: "1.2Gi"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/overview/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/overview/index.md
new file mode 100644
index 0000000000..ad1f69fe84
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/overview/index.md
@@ -0,0 +1,60 @@
+---
+title: Elasticsearch Compute Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: es-auto-scaling-overview
+    name: Overview
+    parent: es-compute-auto-scaling
+    weight: 5
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Elasticsearch Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources i.e. `cpu` and `memory` using the `elasticsearchautoscaler` crd.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/)
+  - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/)
+  - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/)
+
+## How Compute Autoscaling Works
+
+The autoscaling process consists of the following steps:
+
+1. At first, a user creates an `Elasticsearch` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `Elasticsearch` CRO.
+
+3. When the operator finds an `Elasticsearch` CRO, it creates the required number of `StatefulSets` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the various components of the `Elasticsearch` database, the user creates an `ElasticsearchAutoscaler` CRO with the desired configuration (see the minimal sketch below).
+
+5. `KubeDB` Autoscaler operator watches the `ElasticsearchAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator generates recommendations using a modified version of the Kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for the different components of the database, as specified in the `ElasticsearchAutoscaler` CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then the `KubeDB` Autoscaler operator creates an `ElasticsearchOpsRequest` CRO to scale the database to match the recommendation.
+
+8. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified in the `ElasticsearchOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling of various Elasticsearch database components using the `ElasticsearchAutoscaler` CRD.
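+
+For step 4 above, the autoscaler CRO can be as small as a database reference plus one compute section. A minimal sketch, where the object names and resource bounds are illustrative placeholders, not recommended values:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-as            # hypothetical name
+  namespace: demo
+spec:
+  databaseRef:
+    name: es             # the Elasticsearch object created in step 1
+  compute:
+    node:                # use master/data/ingest sections for topology clusters
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: 250m
+        memory: 1Gi
+      maxAllowed:
+        cpu: 1
+        memory: 2Gi
+      controlledResources: ["cpu", "memory"]
+```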
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/index.md
new file mode 100644
index 0000000000..4cc3342289
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/index.md
@@ -0,0 +1,452 @@
+---
+title: Elasticsearch Topology Cluster Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: es-auto-scaling-topology
+    name: Topology Cluster
+    parent: es-compute-auto-scaling
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Autoscaling the Compute Resource of an Elasticsearch Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale compute resources i.e. `cpu` and `memory` of an Elasticsearch topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/)
+  - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/)
+  - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in this [directory](/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls) of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Topology Cluster
+
+Here, we are going to deploy an `Elasticsearch` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `ElasticsearchAutoscaler` to set up autoscaling.
+
+#### Deploy Elasticsearch Topology Cluster
+
+In this section, we are going to deploy an Elasticsearch topology cluster with ElasticsearchVersion `opensearch-2.8.0`. Then, in the next section, we will set up autoscaling for this database using the `ElasticsearchAutoscaler` CRD.
+Below is the YAML of the `Elasticsearch` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-topology
+  namespace: demo
+spec:
+  enableSSL: true
+  version: opensearch-2.8.0
+  storageType: Durable
+  topology:
+    master:
+      suffix: master
+      replicas: 1
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    data:
+      suffix: data
+      replicas: 2
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    ingest:
+      suffix: ingest
+      replicas: 1
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Elasticsearch` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology.yaml
+elasticsearch.kubedb.com/es-topology created
+```
+
+Now, wait until `es-topology` has status `Ready`. i.e,
+
+```bash
+$ kubectl get elasticsearch -n demo -w
+NAME          VERSION            STATUS         AGE
+es-topology   opensearch-2.8.0   Provisioning   113s
+es-topology   opensearch-2.8.0   Ready          115s
+```
+
+Let's check an ingest node container's resources,
+
+```bash
+$ kubectl get pod -n demo es-topology-ingest-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+Let's check the Elasticsearch CR for the ingest node resources,
+
+```bash
+$ kubectl get elasticsearch -n demo es-topology -o json | jq '.spec.topology.ingest.resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we have assigned while deploying the Elasticsearch.
+
+We are now ready to apply the `ElasticsearchAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using an ElasticsearchAutoscaler Object.
+
+#### Create ElasticsearchAutoscaler Object
+
+In order to set up compute resource autoscaling for the ingest nodes of the cluster, we have to create an `ElasticsearchAutoscaler` CRO with our desired configuration. Below is the YAML of the `ElasticsearchAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-topology-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-topology
+  compute:
+    ingest:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: ".4"
+        memory: 500Mi
+      maxAllowed:
+        cpu: 2
+        memory: 3Gi
+      controlledResources: ["cpu", "memory"]
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a compute resource scaling operation on the `es-topology` cluster.
+- `spec.compute.ingest.trigger` specifies that compute autoscaling is enabled for the ingest nodes.
+- `spec.compute.ingest.podLifeTimeThreshold` specifies the minimum lifetime at least one pod must have before a vertical scaling is initiated.
+- `spec.compute.ingest.minAllowed` specifies the minimum allowed resources for the ingest nodes.
+- `spec.compute.ingest.maxAllowed` specifies the maximum allowed resources for the ingest nodes.
+- `spec.compute.ingest.controlledResources` specifies the resources that are controlled by the autoscaler.
+
+> Note: In this demo, we are only setting up the autoscaling for the ingest nodes, that's why we only specified the `ingest` section of the autoscaler. You can enable autoscaling for the master and the data nodes in the same YAML, by specifying the `master` and `data` sections, similar to the `ingest` section we have configured in this demo.
+
+Let's create the `ElasticsearchAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology-auto-scaler.yaml
+elasticsearchautoscaler.autoscaling.kubedb.com/es-topology-as created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `elasticsearchautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get elasticsearchautoscaler -n demo
+NAME             AGE
+es-topology-as   9s
+
+$ kubectl describe elasticsearchautoscaler -n demo es-topology-as
+Name:         es-topology-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         ElasticsearchAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-22T13:03:55Z
+  Generation:          1
+  Resource Version:    18219
+  UID:                 c1855d8e-6430-48bb-87d7-9c7bc9ce6f42
+Spec:
+  Compute:
+    Topology:
+      Ingest:
+        Controlled Resources:
+          cpu
+          memory
+        Max Allowed:
+          Cpu:     2
+          Memory:  3Gi
+        Min Allowed:
+          Cpu:                     400m
+          Memory:                  500Mi
+        Pod Life Time Threshold:   5m0s
+        Trigger:                   On
+  Database Ref:
+    Name:  es-topology
+Events:    <none>
+```
+
+So, the `elasticsearchautoscaler` resource is created successfully.
+
+Now, let's verify that the vertical pod autoscaler (vpa) resource is created successfully,
+
+```bash
+$ kubectl get vpa -n demo
+NAME                     MODE   CPU    MEM          PROVIDED   AGE
+vpa-es-topology-ingest   Off    400m   1102117711   True       30s
+
+$ kubectl describe vpa -n demo vpa-es-topology-ingest
+Name:         vpa-es-topology-ingest
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.k8s.io/v1
+Kind:         VerticalPodAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-22T13:03:55Z
+  Generation:          2
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ElasticsearchAutoscaler
+    Name:                  es-topology-as
+    UID:                   c1855d8e-6430-48bb-87d7-9c7bc9ce6f42
+  Resource Version:        18253
+  UID:                     1d32c133-7214-49bd-bf3b-aa4a99986058
+Spec:
+  Resource Policy:
+    Container Policies:
+      Container Name:  elasticsearch
+      Controlled Resources:
+        cpu
+        memory
+      Controlled Values:  RequestsAndLimits
+      Max Allowed:
+        Cpu:     2
+        Memory:  3Gi
+      Min Allowed:
+        Cpu:     400m
+        Memory:  500Mi
+  Target Ref:
+    API Version:  apps/v1
+    Kind:         StatefulSet
+    Name:         es-topology-ingest
+  Update Policy:
+    Update Mode:  Off
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-22T13:04:12Z
+    Status:                True
+    Type:                  RecommendationProvided
+  Recommendation:
+    Container Recommendations:
+      Container Name:  elasticsearch
+      Lower Bound:
+        Cpu:     400m
+        Memory:  1054147415
+      Target:
+        Cpu:     400m
+        Memory:  1102117711
+      Uncapped Target:
+        Cpu:     224m
+        Memory:  1102117711
+      Upper Bound:
+        Cpu:     2
+        Memory:  3Gi
+Events:          <none>
+```
+
+As you can see from the output, the vpa has generated a recommendation for the ingest node of the Elasticsearch cluster. Our autoscaler operator continuously watches the generated recommendations and creates an `elasticsearchopsrequest` based on them if the Elasticsearch nodes need to be scaled up or down.
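+
+If you want to track just the recommendation while you wait, you can read it straight from the vpa's status. A small sketch using the vpa name shown above (`jq` is only used for pretty-printing):
+
+```bash
+# Print only the container name and recommended target resources from the vpa status.
+$ kubectl get vpa -n demo vpa-es-topology-ingest -o json \
+    | jq '.status.recommendation.containerRecommendations[] | {containerName, target}'
+```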
+
+Let's watch the `elasticsearchopsrequest` in the demo namespace to see if any `elasticsearchopsrequest` object is created. After some time you'll see that an `elasticsearchopsrequest` will be created based on the recommendation.
+
+```bash
+$ kubectl get elasticsearchopsrequest -n demo
+NAME                                  TYPE              STATUS        AGE
+esops-vpa-es-topology-ingest-37m2wi   VerticalScaling   Progressing   44s
+```
+
+Let's wait for the opsRequest to become successful.
+
+```bash
+$ kubectl get elasticsearchopsrequest -n demo -w
+NAME                                  TYPE              STATUS        AGE
+esops-vpa-es-topology-ingest-37m2wi   VerticalScaling   Progressing   8m2s
+esops-vpa-es-topology-ingest-37m2wi   VerticalScaling   Successful    9m20s
+```
+
+We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe elasticsearchopsrequest -n demo esops-vpa-es-topology-ingest-37m2wi
+Name:         esops-vpa-es-topology-ingest-37m2wi
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=es-topology
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=elasticsearches.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         ElasticsearchOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-22T13:04:21Z
+  Generation:          1
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ElasticsearchAutoscaler
+    Name:                  es-topology-as
+    UID:                   c1855d8e-6430-48bb-87d7-9c7bc9ce6f42
+  Resource Version:        19553
+  UID:                     aed024b7-3779-416c-86c4-43120bba7bd3
+Spec:
+  Database Ref:
+    Name:  es-topology
+  Type:    VerticalScaling
+  Vertical Scaling:
+    Topology:
+      Ingest:
+        Limits:
+          Cpu:     400m
+          Memory:  1102117711
+        Requests:
+          Cpu:     400m
+          Memory:  1102117711
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-22T13:04:21Z
+    Message:               Elasticsearch ops request is vertically scaling the nodes
+    Observed Generation:   1
+    Reason:                VerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2021-03-22T13:04:21Z
+    Message:               Successfully updated statefulSet resources.
+    Observed Generation:   1
+    Reason:                UpdateStatefulSetResources
+    Status:                True
+    Type:                  UpdateStatefulSetResources
+    Last Transition Time:  2021-03-22T13:13:41Z
+    Message:               Successfully updated all node resources
+    Observed Generation:   1
+    Reason:                UpdateNodeResources
+    Status:                True
+    Type:                  UpdateNodeResources
+    Last Transition Time:  2021-03-22T13:13:41Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason               Age  From                        Message
+  ----    ------               ---- ----                        -------
+  Normal  PauseDatabase        10m  KubeDB Enterprise Operator  Pausing Elasticsearch demo/es-topology
+  Normal  Updating             10m  KubeDB Enterprise Operator  Updating StatefulSets
+  Normal  Updating             10m  KubeDB Enterprise Operator  Successfully Updated StatefulSets
+  Normal  UpdateNodeResources  56s  KubeDB Enterprise Operator  Successfully updated all node resources
+  Normal  Updating             56s  KubeDB Enterprise Operator  Updating Elasticsearch
+  Normal  Updating             56s  KubeDB Enterprise Operator  Successfully Updated Elasticsearch
+  Normal  ResumeDatabase       56s  KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-topology
+  Normal  Successful           56s  KubeDB Enterprise Operator  Successfully Updated Database
+```
+
+Now, we are going to verify from the Pod and the Elasticsearch YAML whether the resources of the ingest nodes of the cluster have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo es-topology-ingest-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "1102117711"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "1102117711"
+  }
+}
+
+$ kubectl get elasticsearch -n demo es-topology -o json | jq '.spec.topology.ingest.resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "1102117711"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "1102117711"
+  }
+}
+```
+
+The above output verifies that we have successfully auto-scaled the resources of the Elasticsearch topology cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete elasticsearch -n demo es-topology
+$ kubectl delete elasticsearchautoscaler -n demo es-topology-as
+$ kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology-auto-scaler.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology-auto-scaler.yaml
new file mode 100644
index 0000000000..e08e4732cc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology-auto-scaler.yaml
@@ -0,0 +1,19 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-topology-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-topology
+  compute:
+    ingest:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: ".4"
+        memory: 500Mi
+      maxAllowed:
+        cpu: 2
+        memory: 3Gi
+      controlledResources: ["cpu", "memory"]
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology.yaml
new file mode 100644
index 0000000000..86c6e7c6c3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/compute/topology/yamls/es-topology.yaml
@@ -0,0 +1,41 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-topology
+  namespace: demo
+spec:
+  enableSSL: true
+  version: opensearch-2.8.0
+  storageType: Durable
+  topology:
+    master:
+      suffix: master
+      replicas: 1
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    data:
+      suffix: data
+      replicas: 2
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    ingest:
+      suffix: ingest
+      replicas: 1
+      storage:
+        storageClassName: "standard"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..19f18a92ad
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/_index.md
@@ -0,0 +1,22 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: es-storage-auto-scaling
+    name: Storage Autoscaling
+    parent: es-auto-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/index.md
new file mode 100644
index 0000000000..f394fd044c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/index.md
@@ -0,0 +1,342 @@
+---
+title: Elasticsearch Combined Cluster Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: es-storage-auto-scaling-combined
+    name: Combined Cluster
+    parent: es-storage-auto-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of Elasticsearch Combined Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of an Elasticsearch combined cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion (a quick programmatic check is sketched after the note below).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/)
+  - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/)
+  - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in this [directory](/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls) of [kubedb/docs](https://github.com/kubedb/docs) repository.
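+
+You can check programmatically whether a given storage class allows volume expansion via the standard `allowVolumeExpansion` field of the StorageClass API. A quick sketch (the class name is just an example; the next section shows the same information via `kubectl get storageclass`):
+
+```bash
+# Prints "true" if volumes of this class can be expanded after creation.
+$ kubectl get storageclass topolvm-provisioner -o jsonpath='{.allowVolumeExpansion}'
+true
+```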
+
+## Storage Autoscaling of Combined cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so this storage class supports volume expansion and we can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm)
+
+Now, we are going to deploy an `Elasticsearch` combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply `ElasticsearchAutoscaler` to set up autoscaling.
+
+#### Deploy Elasticsearch Combined Cluster
+
+In this section, we are going to deploy an Elasticsearch combined cluster with version `xpack-8.11.1`. Then, in the next section, we will set up autoscaling for this database using the `ElasticsearchAutoscaler` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-combined
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.11.1
+  storageType: Durable
+  replicas: 1
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Elasticsearch` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined.yaml
+elasticsearch.kubedb.com/es-combined created
+```
+
+Now, wait until `es-combined` has status `Ready`. i.e,
+
+```bash
+$ kubectl get es -n demo -w
+NAME          VERSION        STATUS         AGE
+es-combined   xpack-8.11.1   Provisioning   5s
+es-combined   xpack-8.11.1   Ready          50s
+```
+
+Let's check the volume size from the statefulset, and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo es-combined -o json | jq '.spec.volumeClaimTemplates[].spec.resources'
+{
+  "requests": {
+    "storage": "1Gi"
+  }
+}
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS          REASON   AGE
+pvc-efe67aee-21bf-4320-9873-5d58d68182ae   1Gi        RWO            Delete           Bound    demo/data-es-combined-0   topolvm-provisioner            8m3s
+```
+
+You can see the StatefulSet has 1GB storage, and the capacity of the persistent volume is also 1GB.
+
+We are now ready to apply the `ElasticsearchAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using an ElasticsearchAutoscaler Object.
+
+#### Create ElasticsearchAutoscaler Object
+
+To set up storage autoscaling for the combined cluster nodes, we have to create an `ElasticsearchAutoscaler` CRO with our desired configuration. Below is the YAML of the `ElasticsearchAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-combined-storage-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-combined
+  storage:
+    node:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing storage autoscaling on the `es-combined` cluster.
+- `spec.storage.node.trigger` specifies that storage autoscaling is enabled for the Elasticsearch nodes.
+- `spec.storage.node.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, then storage autoscaling will be triggered.
+- `spec.storage.node.scalingThreshold` specifies the scaling threshold; the storage will be expanded by `50%` of the current amount.
+
+Let's create the `ElasticsearchAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined-storage-as.yaml
+elasticsearchautoscaler.autoscaling.kubedb.com/es-combined-storage-as created
+```
+
+#### Verify Storage Autoscaling is set up successfully
+
+Let's check that the `elasticsearchautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get elasticsearchautoscaler -n demo
+NAME                     AGE
+es-combined-storage-as   9s
+
+$ kubectl describe elasticsearchautoscaler -n demo es-combined-storage-as
+Name:         es-combined-storage-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         ElasticsearchAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-22T14:57:58Z
+  Generation:          1
+  Resource Version:    7906
+  UID:                 f4e6b550-b566-458b-af05-84e0581b93f0
+Spec:
+  Database Ref:
+    Name:  es-combined
+  Storage:
+    Node:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:                   <none>
+```
+
+So, the `elasticsearchautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the database pod and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo es-combined-0 -- bash
+[root@es-combined-0 elasticsearch]# df -h /usr/share/elasticsearch/data
+Filesystem                                         Size   Used  Avail  Use%  Mounted on
+/dev/topolvm/026b4152-c7d8-47c1-afe2-0a7c7b708857  1014M   40M   975M     4%  /usr/share/elasticsearch/data
+
+[root@es-combined-0 elasticsearch]# dd if=/dev/zero of=/usr/share/elasticsearch/data/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (629 MB) copied, 1.95767 s, 321 MB/s
+
+[root@es-combined-0 elasticsearch]# df -h /usr/share/elasticsearch/data
+Filesystem                                         Size   Used  Avail  Use%  Mounted on
+/dev/topolvm/026b4152-c7d8-47c1-afe2-0a7c7b708857  1014M  640M   375M    64%  /usr/share/elasticsearch/data
+```
+
+So, from the above output, we can see that the storage usage is 64%, which exceeded the `usageThreshold` 60%.
+
+Let's watch the `elasticsearchopsrequest` in the demo namespace to see if any `elasticsearchopsrequest` object is created. After some time you'll see that an `elasticsearchopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ kubectl get esops -n demo -w
+NAME                       TYPE              STATUS        AGE
+esops-es-combined-8ub9ca   VolumeExpansion   Progressing   30s
+```
+
+Let's wait for the opsRequest to become successful.
+
+```bash
+$ kubectl get esops -n demo
+NAME                       TYPE              STATUS       AGE
+esops-es-combined-8ub9ca   VolumeExpansion   Successful   50s
+```
+
+We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe esops -n demo esops-es-combined-8ub9ca
+Name:         esops-es-combined-8ub9ca
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=es-combined
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=elasticsearches.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         ElasticsearchOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-22T15:08:54Z
+  Generation:          1
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ElasticsearchAutoscaler
+    Name:                  es-combined-storage-as
+    UID:                   f4e6b550-b566-458b-af05-84e0581b93f0
+  Resource Version:        11064
+  UID:                     65ca8078-ae75-4b90-8e11-c09dc287c993
+Spec:
+  Database Ref:
+    Name:  es-combined
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Node:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-22T15:08:54Z
+    Message:               Elasticsearch ops request is expanding volume of the Elasticsearch nodes.
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2021-03-22T15:09:24Z
+    Message:               successfully expanded combined nodes
+    Observed Generation:   1
+    Reason:                UpdateCombinedNodePVCs
+    Status:                True
+    Type:                  UpdateCombinedNodePVCs
+    Last Transition Time:  2021-03-22T15:09:39Z
+    Message:               successfully deleted the statefulSet with orphan propagation policy
+    Observed Generation:   1
+    Reason:                OrphanStatefulSetPods
+    Status:                True
+    Type:                  OrphanStatefulSetPods
+    Last Transition Time:  2021-03-22T15:09:44Z
+    Message:               StatefulSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyStatefulSets
+    Status:                True
+    Type:                  ReadyStatefulSets
+    Last Transition Time:  2021-03-22T15:09:44Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                  Age  From                        Message
+  ----    ------                  ---- ----                        -------
+  Normal  PauseDatabase           17m  KubeDB Enterprise Operator  Pausing Elasticsearch demo/es-combined
+  Normal  UpdateCombinedNodePVCs  17m  KubeDB Enterprise Operator  successfully expanded combined nodes
+  Normal  OrphanStatefulSetPods   16m  KubeDB Enterprise Operator  successfully deleted the statefulSet with orphan propagation policy
+  Normal  ResumeDatabase          16m  KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-combined
+  Normal  ResumeDatabase          16m  KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-combined
+  Normal  ReadyStatefulSets       16m  KubeDB Enterprise Operator  StatefulSet is recreated
+  Normal  ResumeDatabase          16m  KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-combined
+  Normal  Successful              16m  KubeDB Enterprise Operator  Successfully Updated Database
+```
+
+Now, we are going to verify from the `Statefulset`, and the `Persistent Volume`, whether the volume of the combined cluster has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo es-combined -o json | jq '.spec.volumeClaimTemplates[].spec.resources'
+{
+  "requests": {
+    "storage": "1594884096"
+  }
+}
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS          REASON   AGE
+pvc-efe67aee-21bf-4320-9873-5d58d68182ae   2Gi        RWO            Delete           Bound    demo/data-es-combined-0   topolvm-provisioner            43m
+```
+
+Notice the requested size: 1594884096 bytes is roughly 1.5Gi, i.e. the previous 1Gi volume grown by the 50% `scalingThreshold`. The above output verifies that we have successfully autoscaled the volume of the Elasticsearch combined cluster.
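+
+If you only want the new capacity, you can read it directly from the PVC instead of listing every persistent volume. A small sketch using the claim name shown above:
+
+```bash
+# The PVC's status reflects the expanded size once the resize has finished.
+$ kubectl get pvc -n demo data-es-combined-0 -o jsonpath='{.status.capacity.storage}'
+```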
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete elasticsearch -n demo es-combined
+$ kubectl delete elasticsearchautoscaler -n demo es-combined-storage-as
+```
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined-storage-as.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined-storage-as.yaml
new file mode 100644
index 0000000000..c25a3dbb23
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined-storage-as.yaml
@@ -0,0 +1,13 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-combined-storage-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-combined
+  storage:
+    node:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined.yaml
new file mode 100644
index 0000000000..e6d69b29ec
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/combined/yamls/es-combined.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-combined
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.11.1
+  storageType: Durable
+  replicas: 1
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/overview/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/overview/index.md
new file mode 100644
index 0000000000..c620c321ea
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/overview/index.md
@@ -0,0 +1,60 @@
+---
+title: Elasticsearch Storage Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: es-storage-auto-scaling-overview
+    name: Overview
+    parent: es-storage-auto-scaling
+    weight: 5
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Elasticsearch Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the Elasticsearch storage using the `elasticsearchautoscaler` crd.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/)
+  - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/)
+  - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/)
+
+## How Storage Autoscaling Works
+
+The autoscaling process consists of the following steps:
+
+1. At first, a user creates an `Elasticsearch` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Elasticsearch` CR.
+
+3. When the operator finds an `Elasticsearch` CR, it creates the required number of `StatefulSets` and related necessary resources like secrets, services, etc.
+
+   - Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the StatefulSet configuration.
+
+4. Then, in order to set up storage autoscaling of the various components of the `Elasticsearch` database, the user creates an `ElasticsearchAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `ElasticsearchAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if usage exceeds the specified usage threshold.
+
+   - If the usage exceeds the specified usage threshold, then the `KubeDB` Autoscaler operator creates an `ElasticsearchOpsRequest` to expand the storage of the database.
+
+7. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CRO.
+
+8. Then the `KubeDB` Ops-manager operator will expand the storage of the database component as specified in the `ElasticsearchOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling storage of various Elasticsearch database components using the `ElasticsearchAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/index.md b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/index.md
new file mode 100644
index 0000000000..77779aefc4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/index.md
@@ -0,0 +1,383 @@
+---
+title: Elasticsearch Topology Cluster Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: es-storage-auto-scaling-topology
+    name: Topology Cluster
+    parent: es-storage-auto-scaling
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of Elasticsearch Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of an Elasticsearch topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/)
+  - [ElasticsearchAutoscaler](/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/)
+  - [ElasticsearchOpsRequest](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in this [directory](/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls) of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Topology Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so this storage class supports volume expansion and we can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm)
+
+Now, we are going to deploy an `Elasticsearch` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `ElasticsearchAutoscaler` to set up autoscaling.
+
+#### Deploy Elasticsearch Topology
+
+In this section, we are going to deploy an Elasticsearch topology cluster with version `xpack-8.11.1`. Then, in the next section, we will set up autoscaling for this database using the `ElasticsearchAutoscaler` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-topology
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.11.1
+  storageType: Durable
+  topology:
+    master:
+      suffix: master
+      replicas: 1
+      storage:
+        storageClassName: "topolvm-provisioner"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    data:
+      suffix: data
+      replicas: 2
+      storage:
+        storageClassName: "topolvm-provisioner"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    ingest:
+      suffix: ingest
+      replicas: 1
+      storage:
+        storageClassName: "topolvm-provisioner"
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Elasticsearch` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology.yaml
+elasticsearch.kubedb.com/es-topology created
+```
+
+Now, wait until `es-topology` has status `Ready`. i.e,
+
+```bash
+$ kubectl get elasticsearch -n demo -w
+NAME          VERSION        STATUS         AGE
+es-topology   xpack-8.11.1   Provisioning   12s
+es-topology   xpack-8.11.1   Ready          1m50s
+```
+
+Let's check the volume size from the data statefulset, and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo es-topology-data -o json | jq '.spec.volumeClaimTemplates[].spec.resources'
+{
+  "requests": {
+    "storage": "1Gi"
+  }
+}
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS          REASON   AGE
+pvc-1a22f743-2b03-487b-92db-e75ce14a3994   1Gi        RWO            Delete           Bound    demo/data-es-topology-ingest-0   topolvm-provisioner            2m8s
+pvc-82c60733-22a3-4dbb-bac0-2fcd386650dd   1Gi        RWO            Delete           Bound    demo/data-es-topology-data-0     topolvm-provisioner            2m7s
+pvc-a610cbb8-dece-4d2e-8870-b66a2f1fe458   1Gi        RWO            Delete           Bound    demo/data-es-topology-master-0   topolvm-provisioner            2m8s
+pvc-edb7f4f7-f8ba-4af9-a507-b707462ddc3c   1Gi        RWO            Delete           Bound    demo/data-es-topology-data-1     topolvm-provisioner            119s
+```
+
+You can see that the data StatefulSet has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `ElasticsearchAutoscaler` CRO to set up storage autoscaling for the data nodes.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using an ElasticsearchAutoscaler Object.
+
+#### Create ElasticsearchAutoscaler Object
+
+To set up storage autoscaling for this topology cluster, we have to create an `ElasticsearchAutoscaler` CRO with our desired configuration. Below is the YAML of the `ElasticsearchAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ElasticsearchAutoscaler
+metadata:
+  name: es-topology-storage-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: es-topology
+  storage:
+    data:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing storage autoscaling on the `es-topology` cluster.
+- `spec.storage.data.trigger` specifies that storage autoscaling is enabled for the data nodes.
+- `spec.storage.data.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, then storage autoscaling will be triggered.
+- `spec.storage.data.scalingThreshold` specifies the scaling threshold; the storage will be expanded by `50%` of the current amount.
+
+> Note: In this demo we are only setting up the storage autoscaling for the data nodes, that's why we only specified the `data` section of the autoscaler. You can enable autoscaling for the master nodes and ingest nodes in the same YAML, by specifying the `master` and `ingest` sections respectively.
+
+Let's create the `ElasticsearchAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology-storage-as.yaml
+elasticsearchautoscaler.autoscaling.kubedb.com/es-topology-storage-as created
+```
+
+#### Verify Storage Autoscaling is set up successfully
+
+Let's check that the `elasticsearchautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get elasticsearchautoscaler -n demo
+NAME                     AGE
+es-topology-storage-as   4m16s
+
+$ kubectl describe elasticsearchautoscaler -n demo es-topology-storage-as
+Name:         es-topology-storage-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         ElasticsearchAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-22T15:47:18Z
+  Generation:          1
+  Resource Version:    19096
+  UID:                 3ea0516f-e272-463e-be7f-903c86a8e084
+Spec:
+  Database Ref:
+    Name:  es-topology
+  Storage:
+    Topology:
+      Data:
+        Scaling Threshold:  50
+        Trigger:            On
+        Usage Threshold:    60
+Events:                     <none>
+```
+
+So, the `elasticsearchautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up one of the persistent volumes to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the data nodes and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo es-topology-data-0 -- bash
+[root@es-topology-data-0 elasticsearch]# df -h /usr/share/elasticsearch/data
+Filesystem                                         Size   Used  Avail  Use%  Mounted on
+/dev/topolvm/fb6d30c8-8bf7-4c19-884e-937f150f4763  1014M   40M   975M     4%  /usr/share/elasticsearch/data
+[root@es-topology-data-0 elasticsearch]# dd if=/dev/zero of=/usr/share/elasticsearch/data/file.img bs=650M count=1
+1+0 records in
+1+0 records out
+681574400 bytes (682 MB) copied, 2.25556 s, 302 MB/s
+[root@es-topology-data-0 elasticsearch]# df -h /usr/share/elasticsearch/data
+Filesystem                                         Size   Used  Avail  Use%  Mounted on
+/dev/topolvm/fb6d30c8-8bf7-4c19-884e-937f150f4763  1014M  690M   325M    69%  /usr/share/elasticsearch/data
+```
+
+So, from the above output we can see that the storage usage is 69%, which exceeded the `usageThreshold` 60%.
+
+Let's watch the `elasticsearchopsrequest` in the demo namespace to see if any `elasticsearchopsrequest` object is created. After some time you'll see that an `elasticsearchopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ kubectl get esops -n demo -w
+NAME                       TYPE              STATUS        AGE
+esops-es-topology-79zpaf   VolumeExpansion                 0s
+esops-es-topology-79zpaf   VolumeExpansion   Progressing   0s
+```
+
+Let's wait for the opsRequest to become successful.
+
+```bash
+$ kubectl get esops -n demo
+NAME                       TYPE              STATUS       AGE
+esops-es-topology-79zpaf   VolumeExpansion   Successful   110s
+```
+
+We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe elasticsearchopsrequest -n demo esops-es-topology-79zpaf
+Name:         esops-es-topology-79zpaf
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=es-topology
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=elasticsearches.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         ElasticsearchOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-22T16:03:54Z
+  Generation:          1
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ElasticsearchAutoscaler
+    Name:                  es-topology-storage-as
+    UID:                   aae135af-b203-47db-baeb-f51ffeb66e57
+  Resource Version:        23727
+  UID:                     378b28e8-9a7f-49c2-9e4d-49ee6ecad4d0
+Spec:
+  Database Ref:
+    Name:  es-topology
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Topology:
+      Data:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-22T16:03:54Z
+    Message:               Elasticsearch ops request is expanding volume of the Elasticsearch nodes.
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2021-03-22T16:05:24Z
+    Message:               successfully expanded data nodes
+    Observed Generation:   1
+    Reason:                UpdateDataNodePVCs
+    Status:                True
+    Type:                  UpdateDataNodePVCs
+    Last Transition Time:  2021-03-22T16:05:39Z
+    Message:               successfully deleted the statefulSets with orphan propagation policy
+    Observed Generation:   1
+    Reason:                OrphanStatefulSetPods
+    Status:                True
+    Type:                  OrphanStatefulSetPods
+    Last Transition Time:  2021-03-22T16:05:44Z
+    Message:               StatefulSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyStatefulSets
+    Status:                True
+    Type:                  ReadyStatefulSets
+    Last Transition Time:  2021-03-22T16:05:44Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                 Age    From                        Message
+  ----    ------                 ----   ----                        -------
+  Normal  PauseDatabase          3m18s  KubeDB Enterprise Operator  Pausing Elasticsearch demo/es-topology
+  Normal  UpdateDataNodePVCs     108s   KubeDB Enterprise Operator  successfully expanded data nodes
+  Normal  OrphanStatefulSetPods  93s    KubeDB Enterprise Operator  successfully deleted the statefulSets with orphan propagation policy
+  Normal  ResumeDatabase         93s    KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-topology
+  Normal  ResumeDatabase         93s    KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-topology
+  Normal  ReadyStatefulSets      88s    KubeDB Enterprise Operator  StatefulSet is recreated
+  Normal  ResumeDatabase         88s    KubeDB Enterprise Operator  Resuming Elasticsearch demo/es-topology
+  Normal  Successful             88s    KubeDB Enterprise Operator  Successfully Updated Database
+```
+
+Now, we are going to verify from the `Statefulset`, and the `Persistent Volume`, whether the volume of the data nodes of the cluster has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo es-topology-data -o json | jq '.spec.volumeClaimTemplates[].spec.resources'
+{
+  "requests": {
+    "storage": "1594884096"
+  }
+}
+
+$ kubectl get pvc -n demo
+NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
+data-es-topology-data-0     Bound    pvc-82c60733-22a3-4dbb-bac0-2fcd386650dd   2Gi        RWO            topolvm-provisioner   11m
+data-es-topology-data-1     Bound    pvc-edb7f4f7-f8ba-4af9-a507-b707462ddc3c   2Gi        RWO            topolvm-provisioner   11m
+data-es-topology-ingest-0   Bound    pvc-1a22f743-2b03-487b-92db-e75ce14a3994   1Gi        RWO            topolvm-provisioner   11m
+data-es-topology-master-0   Bound    pvc-a610cbb8-dece-4d2e-8870-b66a2f1fe458   1Gi        RWO            topolvm-provisioner   11m
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS          REASON   AGE
+pvc-1a22f743-2b03-487b-92db-e75ce14a3994   1Gi        RWO            Delete           Bound    demo/data-es-topology-ingest-0   topolvm-provisioner            10m
+pvc-82c60733-22a3-4dbb-bac0-2fcd386650dd   2Gi        RWO            Delete           Bound    demo/data-es-topology-data-0     topolvm-provisioner            10m
+pvc-a610cbb8-dece-4d2e-8870-b66a2f1fe458   1Gi        RWO            Delete           Bound    demo/data-es-topology-master-0   topolvm-provisioner            10m
+pvc-edb7f4f7-f8ba-4af9-a507-b707462ddc3c   2Gi        RWO            Delete           Bound    demo/data-es-topology-data-1     topolvm-provisioner            10m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the data nodes of this Elasticsearch topology cluster.
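+
+As a shortcut, you can confirm the expanded capacity of just the data-node claims by name; a small sketch using the claim names listed above:
+
+```bash
+# Each command prints the claim's current capacity, e.g. 2Gi after expansion.
+$ kubectl get pvc -n demo data-es-topology-data-0 -o jsonpath='{.status.capacity.storage}'
+$ kubectl get pvc -n demo data-es-topology-data-1 -o jsonpath='{.status.capacity.storage}'
+```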
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete elasticsearch -n demo es-topology +$ kubectl delete elasticsearchautoscaler -n demo es-topology-storage-as +``` diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology-storage-as.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology-storage-as.yaml new file mode 100644 index 0000000000..e0142a927e --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology-storage-as.yaml @@ -0,0 +1,13 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: ElasticsearchAutoscaler +metadata: + name: es-topology-storage-as + namespace: demo +spec: + databaseRef: + name: es-topology + storage: + data: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology.yaml b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology.yaml new file mode 100644 index 0000000000..b935e79666 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/autoscaler/storage/topology/yamls/es-topology.yaml @@ -0,0 +1,41 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-topology + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + storageType: Durable + topology: + master: + suffix: master + replicas: 1 + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + suffix: ingest + replicas: 1 + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/backup/_index.md new file mode 100755 index 0000000000..6768420eff --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/_index.md @@ -0,0 +1,22 @@ +--- +title: Backup & Restore Elasticsearch +menu: + docs_v2024.1.31: + identifier: guides-es-backup + name: Backup & Restore (Stash) + parent: es-elasticsearch-guides + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/backupblueprint.yaml new file mode 100644 index 0000000000..81f1739786 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/backupblueprint.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: elasticsearch-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== 
Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + interimVolumeTemplate: + metadata: + name: ${TARGET_APP_RESOURCE}-${TARGET_NAME} # To ensure that the PVC names are unique for different database + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-2.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-2.yaml new file mode 100644 index 0000000000..1e8ae8fb1d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-2.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-demo-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: elasticsearch-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-3.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-3.yaml new file mode 100644 index 0000000000..164d7e9368 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo-3.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-demo-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: elasticsearch-backup-template + params.stash.appscode.com/args: --ignoreType=settings,template +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo.yaml new file mode 100644 index 0000000000..034a6a9861 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/examples/es-demo.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-demo + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: elasticsearch-backup-template +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-2.png b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-2.png new file mode 100644 index 0000000000..b4d3b419a6 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-2.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-3.png b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-3.png new file mode 100644 index 0000000000..53b7200d5d Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo-3.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo.png 
b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo.png
new file mode 100644
index 0000000000..86d770d37c
Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/images/es-demo.png differ
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/index.md
new file mode 100644
index 0000000000..a851f88b67
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/index.md
@@ -0,0 +1,697 @@
+---
+title: Elasticsearch Auto-Backup | Stash
+description: Backup Elasticsearch using Stash Auto-Backup
+menu:
+  docs_v2024.1.31:
+    identifier: guides-es-backup-auto-backup
+    name: Auto-Backup
+    parent: guides-es-backup
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup Elasticsearch using Stash Auto-Backup
+
+Stash can be configured to automatically backup any Elasticsearch database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that database owners can easily back up their databases with just a few annotations.
+
+In this tutorial, we are going to show how you can configure a backup blueprint for Elasticsearch databases in your cluster and back them up with a few annotations.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- If you are not familiar with how Stash backs up and restores Elasticsearch databases, please check the following guide [here](/docs/v2024.1.31/guides/elasticsearch/backup/overview/).
+- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/).
+- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/).
+
+You should be familiar with the following `Stash` concepts:
+
+- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [Repository](https://stash.run/docs/latest/concepts/crds/repository/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+
+In this tutorial, we are going to show the backup of three different Elasticsearch databases in three different namespaces named `demo`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done it already.
+
+```bash
+❯ kubectl create ns demo
+namespace/demo created
+
+❯ kubectl create ns demo-2
+namespace/demo-2 created
+
+❯ kubectl create ns demo-3
+namespace/demo-3 created
+```
+
+When you install Stash, it automatically installs all the official database addons. 
Verify that it has installed the Elasticsearch addons using the following command.
+
+```bash
+❯ kubectl get tasks.stash.appscode.com | grep elasticsearch
+elasticsearch-backup-5.6.4    4d4h
+elasticsearch-backup-6.2.4    4d4h
+elasticsearch-backup-6.3.0    4d4h
+elasticsearch-backup-6.4.0    4d4h
+elasticsearch-backup-6.5.3    4d4h
+elasticsearch-backup-6.8.0    4d4h
+elasticsearch-backup-7.2.0    4d4h
+elasticsearch-backup-7.3.2    4d4h
+elasticsearch-restore-5.6.4   4d4h
+elasticsearch-restore-6.2.4   4d4h
+elasticsearch-restore-6.3.0   4d4h
+elasticsearch-restore-6.4.0   4d4h
+elasticsearch-restore-6.5.3   4d4h
+elasticsearch-restore-6.8.0   4d4h
+elasticsearch-restore-7.2.0   4d4h
+elasticsearch-restore-7.3.2   4d4h
+```
+
+## Prepare Backup Blueprint
+
+To backup an Elasticsearch database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`.
+
+The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to backup any Elasticsearch database of any namespace just by creating the storage `Secret` in that namespace and adding a few annotations to your Elasticsearch CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to backup the database.
+
+Below is the `BackupBlueprint` object that we are going to use in this tutorial,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupBlueprint
+metadata:
+  name: elasticsearch-backup-template
+spec:
+  # ============== Blueprint for Repository ==========================
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
+    storageSecretName: gcs-secret
+  # ============== Blueprint for BackupConfiguration =================
+# task: # Uncomment if you are not using KubeDB to deploy your database.
+#   name: elasticsearch-backup-7.3.2
+  schedule: "*/5 * * * *"
+  interimVolumeTemplate:
+    metadata:
+      name: ${TARGET_APP_RESOURCE}-${TARGET_NAME} # To ensure that the PVC names are unique for different databases
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: "standard"
+      resources:
+        requests:
+          storage: 1Gi
+  retentionPolicy:
+    name: 'keep-last-5'
+    keepLast: 5
+    prune: true
+```
+
+Here, we are using a GCS bucket as our backend. We are providing `gcs-secret` at the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint.
+
+Notice the `prefix` field of the `backend` section. We have used some variables in the form of `${VARIABLE_NAME}`. Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance.
+
+We have also used some variables in the `name` field of the `interimVolumeTemplate` section. This is to ensure that the generated PVC name becomes unique for each database instance.
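+
+For instance, for an Elasticsearch object named `es-demo` in the `demo` namespace (which we create below), the variables resolve roughly as follows. This is just a sketch; the actual values are verified from the generated `Repository` and `BackupConfiguration` in the next sections.
+
+```yaml
+# ${TARGET_NAMESPACE}    -> demo
+# ${TARGET_APP_RESOURCE} -> elasticsearch
+# ${TARGET_NAME}         -> es-demo
+backend:
+  gcs:
+    bucket: stash-testing
+    prefix: stash-backup/demo/elasticsearch/es-demo
+interimVolumeTemplate:
+  metadata:
+    name: elasticsearch-es-demo
+```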
+
+Let's create the `BackupBlueprint` we have shown above,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/auto-backup/examples/backupblueprint.yaml
+backupblueprint.stash.appscode.com/elasticsearch-backup-template created
+```
+
+Now, we are ready to backup our Elasticsearch databases using just a few annotations. You can check the available auto-backup annotations for a database from [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database).
+
+## Auto-backup with default configurations
+
+In this section, we are going to backup an Elasticsearch database of the `demo` namespace. We are going to use the default configurations specified in the `BackupBlueprint`.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ echo -n 'changeit' > RESTIC_PASSWORD
+❯ echo -n '' > GOOGLE_PROJECT_ID
+❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+❯ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create an Elasticsearch CRO in the `demo` namespace. Below is the YAML of the Elasticsearch object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-demo
+  namespace: demo
+  annotations:
+    stash.appscode.com/backup-blueprint: elasticsearch-backup-template
+spec:
+  version: xpack-8.11.1
+  replicas: 1
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier through the `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`.
+
+Let's create the above Elasticsearch CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/auto-backup/examples/es-demo.yaml
+elasticsearch.kubedb.com/es-demo created
+```
+
+### Verify Auto-backup configured
+
+In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for the Elasticsearch database we have just deployed.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our Elasticsearch or not.
+
+```bash
+❯ kubectl get repository -n demo
+NAME          INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-es-demo                                                                5s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo app-es-demo -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+...
+  name: app-es-demo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: stash-backup/demo/elasticsearch/es-demo
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this database.
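+
+If the `Repository` doesn't appear, a quick thing to check is whether the blueprint annotation is actually present on the database object. A small troubleshooting sketch:
+
+```bash
+# Print the annotations Stash watches on the Elasticsearch object.
+❯ kubectl get elasticsearch -n demo es-demo -o jsonpath='{.metadata.annotations}'
+```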
+ +#### Verify BackupConfiguration +If everything goes well, Stash should create a `BackupConfiguration` for our Elasticsearch in `demo` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo +NAME TASK SCHEDULE PAUSED PHASE AGE +app-es-demo elasticsearch-backup-7.3.2 */5 * * * * Ready 12s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo app-es-demo -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-es-demo + namespace: demo + ... +spec: + driver: Restic + interimVolumeTemplate: + metadata: + name: elasticsearch-es-demo + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + status: {} + repository: + name: app-es-demo + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: es-demo + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-12T11:46:53Z" + message: Repository demo/app-es-demo exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-12T11:46:53Z" + message: Backend Secret demo/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-12T11:46:53Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/es-demo found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-12T11:46:53Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `interimVolumeTemplate` section. The variables of `name` field have been substituted by the equivalent information from the database. + +Also, notice the `target` section. Stash has automatically added the Elasticsearch as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-es-demo-1613130605 BackupConfiguration app-es-demo 0s +app-es-demo-1613130605 BackupConfiguration app-es-demo Running 10s +app-es-demo-1613130605 BackupConfiguration app-es-demo Succeeded 46s +``` + +Once the backup has been completed successfully, you should see the backed up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ +## Auto-backup with a custom schedule + +In this section, we are going to backup an Elasticsearch database of `demo-2` namespace. This time, we are going to overwrite the default schedule used in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-2` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-2 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an Elasticsearch CRO in `demo-2` namespace. Below is the YAML of the Elasticsearch object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-demo-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: elasticsearch-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: WipeOut +``` + +Notice the `annotations` section. This time, we have passed a schedule via `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above Elasticsearch CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/auto-backup/examples/es-demo-2.yaml +elasticsearch.kubedb.com/es-demo-2 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup has been configured properly or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our Elasticsearch or not. + +```bash +❯ kubectl get repository -n demo-2 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-es-demo-2 25s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-2 app-es-demo-2 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-es-demo-2 + namespace: demo-2 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/demo-2/elasticsearch/es-demo-2 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our Elasticsearch in `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-2 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-es-demo-2 elasticsearch-backup-7.3.2 */3 * * * * Ready 77s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-es-demo-2 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-es-demo-2 + namespace: demo-2 + ... 
+spec: + driver: Restic + interimVolumeTemplate: + metadata: + name: elasticsearch-es-demo-2 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + status: {} + repository: + name: app-es-demo-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: es-demo-2 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-12T12:24:07Z" + message: Repository demo-2/app-es-demo-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-12T12:24:07Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-12T12:24:07Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/es-demo-2 found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-12T12:24:07Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new Elasticsearch as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-es-demo-2-1613132831 BackupConfiguration app-es-demo-2 0s +app-es-demo-2-1613132831 BackupConfiguration app-es-demo-2 Running 17s +app-es-demo-2-1613132831 BackupConfiguration app-es-demo-2 Succeeded 41s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ +## Auto-backup with custom parameters + +In this section, we are going to backup an Elasticsearch database of `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-3` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-3 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an Elasticsearch CRO in `demo-3` namespace. Below is the YAML of the Elasticsearch object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-demo-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: elasticsearch-backup-template + params.stash.appscode.com/args: --ignoreType=settings,template +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: WipeOut +``` + +Notice the `annotations` section. This time, we have passed an argument via `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above Elasticsearch CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/auto-backup/examples/es-demo-3.yaml +elasticsearch.kubedb.com/es-demo-3 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup resources has been created or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our Elasticsearch or not. + +```bash +❯ kubectl get repository -n demo-3 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-es-demo-3 23s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-3 app-es-demo-3 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-es-demo-3 + namespace: demo-3 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/demo-3/elasticsearch/es-demo-3 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our Elasticsearch in `demo-3` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-3 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-es-demo-3 elasticsearch-backup-7.3.2 */5 * * * * Ready 84s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo-3 app-es-demo-3 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-es-demo-3 + namespace: demo-3 + ... 
+spec: + driver: Restic + interimVolumeTemplate: + metadata: + name: elasticsearch-es-demo-3 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + status: {} + repository: + name: app-es-demo-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: es-demo-3 + task: + params: + - name: args + value: --ignoreType=settings,template + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-12T12:39:14Z" + message: Repository demo-3/app-es-demo-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-12T12:39:14Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-12T12:39:14Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/es-demo-3 found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-12T12:39:14Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new Elasticsearch as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-es-demo-3-1613133604 BackupConfiguration app-es-demo-3 0s +app-es-demo-3-1613133604 BackupConfiguration app-es-demo-3 Running 5s +app-es-demo-3-1613133604 BackupConfiguration app-es-demo-3 Succeeded 48s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ +## Cleanup + +To cleanup the resources crated by this tutorial, run the following commands, + +```bash +❯ kubectl delete -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/auto-backup/examples/ +backupblueprint.stash.appscode.com "elasticsearch-backup-template" deleted +elasticsearch.kubedb.com "es-demo-2" deleted +elasticsearch.kubedb.com "es-demo-3" deleted +elasticsearch.kubedb.com "es-demo" deleted + +❯ kubectl delete repository -n demo --all +repository.stash.appscode.com "app-es-demo" deleted +❯ kubectl delete repository -n demo-2 --all +repository.stash.appscode.com "app-es-demo-2" deleted +❯ kubectl delete repository -n demo-3 --all +repository.stash.appscode.com "app-es-demo-3" deleted +``` diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/ignore-sg-index.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/ignore-sg-index.yaml new file mode 100644 index 0000000000..c7af935295 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/ignore-sg-index.yaml @@ -0,0 +1,31 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --match=^(?![.])(?!searchguard).+ + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/multi-retention-policy.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/multi-retention-policy.yaml new file mode 100644 index 0000000000..3668d9d8e2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/multi-retention-policy.yaml @@ -0,0 +1,31 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: sample-es-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/passing-args.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/passing-args.yaml new file mode 100644 index 0000000000..c61f293f10 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/passing-args.yaml @@ -0,0 +1,31 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --ignoreType=template,settings + repository: + name: gcs-repo + 
target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/resource-limit.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/resource-limit.yaml new file mode 100644 index 0000000000..cd06204f81 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/resource-limit.yaml @@ -0,0 +1,36 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/specific-user.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/specific-user.yaml new file mode 100644 index 0000000000..e4f3716f7d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/backup/specific-user.yaml @@ -0,0 +1,32 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/elasticsearch.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/elasticsearch.yaml new file mode 100644 index 0000000000..d0074e8664 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/elasticsearch.yaml @@ -0,0 +1,15 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: sample-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + replicas: 1 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/repository.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/repository.yaml new file mode 100644 index 0000000000..8a6aaab13b --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: 
stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/customizing + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/passing-args.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/passing-args.yaml new file mode 100644 index 0000000000..1298ce7542 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/passing-args.yaml @@ -0,0 +1,28 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + task: + params: + - name: args + value: --ignoreType=template,settings + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/resource-limit.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/resource-limit.yaml new file mode 100644 index 0000000000..21bccedb25 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,33 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-snapshot.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-snapshot.yaml new file mode 100644 index 0000000000..7791dde9d6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-snapshot.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [4bc21d6f] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-user.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-user.yaml new file mode 100644 index 0000000000..09db3534c1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/examples/restore/specific-user.yaml @@ -0,0 +1,29 @@ +apiVersion: 
stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/index.md b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/index.md new file mode 100644 index 0000000000..506c680166 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/customization/index.md @@ -0,0 +1,395 @@ +--- +title: Elasticsearch Backup Customization | Stash +description: Customizing Elasticsearch Backup and Restore process with Stash +menu: + docs_v2024.1.31: + identifier: guides-es-backup-customization + name: Customizing Backup & Restore Process + parent: guides-es-backup + weight: 40 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# Customizing Backup and Restore Process + +Stash provides rich customization supports for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations. + +## Customizing Backup Process + +In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup process as a specific user, ignoring some indexes during the backup process, etc. + +### Passing arguments to the backup process + +Stash Elasticsearch addon uses [multielasticdump](https://github.com/elasticsearch-dump/elasticsearch-dump#multielasticdump) for backup. You can pass arguments to the `multielasticdump` through `args` param under `task.params` section. + +The below example shows how you can pass the `--ignoreType` argument to ignore `template` and `settings` during backup. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --ignoreType=template,settings + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Ignoring Search Guard Indexes + +If you are using the Search Guard variant for your Elasticsearch, you can pass a regex through the `--match` argument to ignore the Search Guard specific indexes during backup. 
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --match=^(?![.])(?!searchguard).+ --ignoreType=template + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Running backup job as a specific user + +If your cluster requires running the backup job as a specific user, you can provide `securityContext` under `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Specifying Memory/CPU limit/request for the backup job + +If you want to specify the Memory/CPU limit/request for your backup job, you can specify `resources` field under `runtimeSettings.container` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: ["ReadWriteOnce"] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Using multiple retention policies + +You can also specify multiple retention policies for your backed up data. For example, you may want to keep few daily snapshots, few weekly snapshots, and few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section. 
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-elasticsearch-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-backup-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: sample-es-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true +``` + +To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy). + +## Customizing Restore Process + +Stash also uses `multielasticdump` during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run restore job as a specific user, etc. + +### Passing arguments to the restore process + +Similar to the backup process, you can pass arguments to the restore process through the `args` params under `task.params` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + task: + params: + - name: args + value: --ignoreType=template,settings + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +### Restore specific snapshot + +You can also restore a specific snapshot. At first, list the available snapshots as below, + +```bash +❯ kubectl get snapshots -n demo +NAME ID REPOSITORY HOSTNAME CREATED AT +gcs-repo-4bc21d6f 4bc21d6f gcs-repo host-0 2022-01-12T14:54:27Z +gcs-repo-f0ac7cbd f0ac7cbd gcs-repo host-0 2022-01-12T14:56:26Z +gcs-repo-9210ebb6 9210ebb6 gcs-repo host-0 2022-01-12T14:58:27Z +gcs-repo-0aff8890 0aff8890 gcs-repo host-0 2022-01-12T15:00:28Z +``` + + +>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot). + +You can use the respective ID of the snapshot to restore that snapshot. + +The below example shows how you can pass a specific snapshot ID through the `snapshots` field of `rules` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [4bc21d6f] +``` + +>Please, do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore. + +### Running restore job as a specific user + +You can provide `securityContext` under `runtimeSettings.pod` section to run the restore job as a specific user. 
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + rules: + - snapshots: [latest] +``` + +### Specifying Memory/CPU limit/request for the restore job + +Similar to the backup process, you can also provide `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-elasticsearch-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-elasticsearch + interimVolumeTemplate: + metadata: + name: stash-tmp-restore-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] +``` diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/backupconfiguration.yaml new file mode 100644 index 0000000000..4c54b80ae4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/backupconfiguration.yaml @@ -0,0 +1,27 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-es-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-es + interimVolumeTemplate: + metadata: + name: sample-es-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/repository.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/repository.yaml new file mode 100644 index 0000000000..73bc3914a8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/backup/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/sample-es + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/init_sample.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/init_sample.yaml new file mode 100644 index 0000000000..a35d96663b --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/init_sample.yaml @@ -0,0 +1,41 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: init-sample + namespace: restored +spec: + version: opensearch-2.8.0 + storageType: Durable + 
init: + waitForInitialRestore: true + topology: + master: + suffix: master + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + suffix: client + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/sample_es.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/sample_es.yaml new file mode 100644 index 0000000000..792ea029a5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/elasticsearch/sample_es.yaml @@ -0,0 +1,39 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: sample-es + namespace: demo +spec: + version: xpack-8.11.1 + storageType: Durable + topology: + master: + suffix: master + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + suffix: client + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/init_sample_restore.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/init_sample_restore.yaml new file mode 100644 index 0000000000..accfea790a --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/init_sample_restore.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: init-sample-restore + namespace: restored +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: init-sample + interimVolumeTemplate: + metadata: + name: init-sample-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/restoresession.yaml b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/restoresession.yaml new file mode 100644 index 0000000000..61c814d17d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/examples/restore/restoresession.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-es-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-es + interimVolumeTemplate: + metadata: + name: sample-es-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/images/sample-es-backup.png b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/images/sample-es-backup.png new file mode 100644 index 0000000000..f902c53009 Binary files 
/dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/images/sample-es-backup.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/index.md b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/index.md new file mode 100644 index 0000000000..eafba5240d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/index.md @@ -0,0 +1,1158 @@

---
title: Elasticsearch | Stash
description: Backup and restore Elasticsearch deployed with KubeDB
menu:
  docs_v2024.1.31:
    identifier: guides-es-backup-kubedb
    name: Scheduled Backup
    parent: guides-es-backup
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

# Backup and restore Elasticsearch database deployed with KubeDB

Stash 0.9.0+ supports backup and restoration of Elasticsearch clusters. This guide will show you how you can back up and restore your KubeDB-deployed Elasticsearch database using Stash.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how Stash backs up and restores Elasticsearch databases, please check the overview guide [here](/docs/v2024.1.31/guides/elasticsearch/backup/overview/).

You have to be familiar with the following custom resources:

- [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/)
- [Function](https://stash.run/docs/latest/concepts/crds/function/)
- [Task](https://stash.run/docs/latest/concepts/crds/task/)
- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)

To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.

```console
$ kubectl create ns demo
namespace/demo created
```

## Prepare Elasticsearch

In this section, we are going to deploy an Elasticsearch database using KubeDB. Then, we are going to insert some sample data into it.

### Deploy Elasticsearch

At first, let's deploy a sample Elasticsearch database.
Below is the YAML of a sample Elasticsearch crd that we are going to create for this tutorial:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: sample-es
  namespace: demo
spec:
  version: xpack-8.11.1
  storageType: Durable
  topology:
    master:
      suffix: master
      replicas: 1
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    data:
      suffix: data
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    ingest:
      suffix: client
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Let's create the above `Elasticsearch` object,

```console
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/elasticsearch/sample_es.yaml
elasticsearch.kubedb.com/sample-es created
```

KubeDB will create the necessary resources to deploy the Elasticsearch database according to the above specification. Let's wait for the database to become ready to use,

```console
❯ kubectl get elasticsearch -n demo -w
NAME        VERSION        STATUS         AGE
sample-es   xpack-8.11.1   Provisioning   89s
sample-es   xpack-8.11.1   Ready          5m26s
```

The database is in the `Ready` state. It means the database is ready to accept connections.

### Insert Sample Data

In this section, we are going to create a few indexes in the deployed Elasticsearch. At first, we are going to port-forward the respective Service so that we can connect with the database from our local machine. Then, we are going to insert some data into the Elasticsearch.

#### Port-forward the Service

KubeDB will create a few Services to connect with the database. Let's see the Services created by KubeDB for our Elasticsearch,

```bash
❯ kubectl get service -n demo
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
sample-es          ClusterIP   10.108.129.195   <none>        9200/TCP   10m
sample-es-master   ClusterIP   None             <none>        9300/TCP   10m
sample-es-pods     ClusterIP   None             <none>        9200/TCP   10m
```

Here, we are going to use the `sample-es` Service to connect with the database. Now, let's port-forward the `sample-es` Service. Run the following command in a separate terminal.

```bash
❯ kubectl port-forward -n demo service/sample-es 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```

#### Export the Credentials

KubeDB will create some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `sample-es` Elasticsearch.

```bash
❯ kubectl get secret -n demo | grep sample-es
sample-es-ca-cert          kubernetes.io/tls                     2      21m
sample-es-config           Opaque                                1      21m
sample-es-elastic-cred     kubernetes.io/basic-auth              2      21m
sample-es-token-ctzn5      kubernetes.io/service-account-token   3      21m
sample-es-transport-cert   kubernetes.io/tls                     3      21m
```

Here, `sample-es-elastic-cred` contains the credentials required to connect with the database. Let's export the credentials as environment variables to our current shell so that we can easily use them to connect with the database.

```bash
❯ export USER=$(kubectl get secrets -n demo sample-es-elastic-cred -o jsonpath='{.data.\username}' | base64 -d)
❯ export PASSWORD=$(kubectl get secrets -n demo sample-es-elastic-cred -o jsonpath='{.data.\password}' | base64 -d)
```

#### Insert data

Now, let's create an index called `products` and insert some data into it.
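Before inserting documents, you can optionally confirm that the cluster is reachable through the port-forward by querying the standard `_cluster/health` endpoint:

```bash
# Check cluster health through the port-forwarded Service.
# A "green" (or "yellow") status means the cluster is ready to accept writes.
❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cluster/health?pretty"
```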
+ +```bash +# Elasticsearch will automatically create the index if it does not exist already. +❯ curl -XPOST --user "$USER:$PASSWORD" "http://localhost:9200/products/_doc?pretty" -H 'Content-Type: application/json' -d' +{ + "name": "KubeDB", + "vendor": "AppsCode Inc.", + "description": "Database Operator for Kubernetes" +} +' + +# Let's insert another data into the "products" index. +❯ curl -XPOST --user "$USER:$PASSWORD" "http://localhost:9200/products/_doc?pretty" -H 'Content-Type: application/json' -d' +{ + "name": "Stash", + "vendor": "AppsCode Inc.", + "description": "Backup tool for Kubernetes workloads" +} +' +``` + +Let's create another index called `companies` and insert some data into it. + +```bash +❯ curl -XPOST --user "$USER:$PASSWORD" "http://localhost:9200/companies/_doc?pretty" -H 'Content-Type: application/json' -d' +{ + "name": "AppsCode Inc.", + "mission": "Accelerate the transition to Containers by building a Kubernetes-native Data Platform", + "products": ["KubeDB", "Stash", "KubeVault", "Kubeform", "ByteBuilders"] +} +' +``` + +Now, let's verify that the indexes have been created successfully. + +```bash +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size +green open companies qs52L4xrShay14NPUExDNw 1 1 1 0 11.5kb 5.7kb +green open products 6aCd7y_kQf26sYG3QdY0ow 1 1 2 0 20.7kb 10.3kb +``` + +Also, let's verify the data in the indexes: + +```bash +# Verify the data in the "product" index. +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/products/_search?pretty" +{ + "took" : 354, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 2, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "products", + "_type" : "_doc", + "_id" : "3GyXa3cB55U52E6TvL8f", + "_score" : 1.0, + "_source" : { + "name" : "KubeDB", + "vendor" : "AppsCode Inc.", + "description" : "Database Operator for Kubernetes" + } + }, + { + "_index" : "products", + "_type" : "_doc", + "_id" : "3WyYa3cB55U52E6Tc7_G", + "_score" : 1.0, + "_source" : { + "name" : "Stash", + "vendor" : "AppsCode Inc.", + "description" : "Backup tool for Kubernetes workloads" + } + } + ] + } +} + +# Verify data in the "companies" index. +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/companies/_search?pretty" +{ + "took" : 172, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "companies", + "_type" : "_doc", + "_id" : "3myya3cB55U52E6TE78a", + "_score" : 1.0, + "_source" : { + "name" : "AppsCode Inc.", + "mission" : "Accelerate the transition to Containers by building a Kubernetes-native Data Platform", + "products" : [ + "KubeDB", + "Stash", + "KubeVault", + "Kubeform", + "ByteBuilders" + ] + } + } + ] + } +} +``` + +We now have sample data in our database. In the next section, we are going to prepare the necessary resources to backup these sample data. + +## Prepare for Backup + +In this section, we are going to prepare our cluster for backup. + +### Verify AppBinding + +KubeDB will create an `AppBinding` object with the same name as the database object which contains the necessary information requires to connect with the database. 
Let's verify that the `AppBinding` object has been created for our `sample-es` Elasticsearch,

```bash
❯ kubectl get appbindings.appcatalog.appscode.com -n demo sample-es
NAME        TYPE                       VERSION   AGE
sample-es   kubedb.com/elasticsearch   7.9.1     2d
```

Now, if you check the YAML of the `AppBinding`, you will see that it contains the service and secret information that are necessary to connect with the database.

```yaml
❯ kubectl get appbindings.appcatalog.appscode.com -n demo sample-es -o yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: sample-es
  namespace: demo
  ...
spec:
  clientConfig:
    service:
      name: sample-es
      port: 9200
      scheme: http
    secret:
      name: sample-es-elastic-cred
  parameters:
    apiVersion: appcatalog.appscode.com/v1alpha1
    kind: StashAddon
    stash:
      addon:
        backupTask:
          name: elasticsearch-backup-7.3.2
        restoreTask:
          name: elasticsearch-restore-7.3.2
  type: kubedb.com/elasticsearch
  version: 7.9.1
```

Here,

- `spec.parameters.stash` section specifies the Stash Addon that will be used to backup and restore this Elasticsearch.

### Verify Stash Elasticsearch Addons Installed

When you install Stash, it automatically installs all the official database addons. Verify that it has installed the Elasticsearch addons using the following command.

```bash
❯ kubectl get tasks.stash.appscode.com | grep elasticsearch
elasticsearch-backup-5.6.4    3d2h
elasticsearch-backup-6.2.4    3d2h
elasticsearch-backup-6.3.0    3d2h
elasticsearch-backup-6.4.0    3d2h
elasticsearch-backup-6.5.3    3d2h
elasticsearch-backup-6.8.0    3d2h
elasticsearch-backup-7.2.0    3d2h
elasticsearch-backup-7.3.2    3d2h
elasticsearch-restore-5.6.4   3d2h
elasticsearch-restore-6.2.4   3d2h
elasticsearch-restore-6.3.0   3d2h
elasticsearch-restore-6.4.0   3d2h
elasticsearch-restore-6.5.3   3d2h
elasticsearch-restore-6.8.0   3d2h
elasticsearch-restore-7.2.0   3d2h
elasticsearch-restore-7.3.2   3d2h
```

### Prepare Backend

We are going to store our backed up data into a GCS bucket. So, we need to create a `Secret` with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).

#### Create Storage Secret

At first, let's create a `Secret` called `gcs-secret` with access credentials to our desired GCS bucket,

```bash
$ echo -n 'changeit' > RESTIC_PASSWORD
$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
$ kubectl create secret generic -n demo gcs-secret \
    --from-file=./RESTIC_PASSWORD \
    --from-file=./GOOGLE_PROJECT_ID \
    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
secret/gcs-secret created
```
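Optionally, you can confirm that the Secret holds the three expected keys. This quick check assumes `jq` is installed on your workstation:

```bash
# List the keys stored in the backend Secret (values remain base64-encoded).
❯ kubectl get secret -n demo gcs-secret -o jsonpath='{.data}' | jq 'keys'
[
  "GOOGLE_PROJECT_ID",
  "GOOGLE_SERVICE_ACCOUNT_JSON_KEY",
  "RESTIC_PASSWORD"
]
```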
#### Create Repository

Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of the `Repository` object we are going to create,

```yaml
apiVersion: stash.appscode.com/v1alpha1
kind: Repository
metadata:
  name: gcs-repo
  namespace: demo
spec:
  backend:
    gcs:
      bucket: stash-testing
      prefix: /demo/sample-es
    storageSecretName: gcs-secret
```

Let's create the `Repository` we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/backup/repository.yaml
repository.stash.appscode.com/gcs-repo created
```

Now, we are ready to back up our database into our desired backend.

## Backup

To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our desired database. Then, Stash will create a CronJob to periodically trigger a backup of the database.

### Create BackupConfiguration

Below is the YAML of the `BackupConfiguration` object we are going to use to back up the `sample-es` database we have deployed earlier,

```yaml
apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  name: sample-es-backup
  namespace: demo
spec:
  schedule: "*/5 * * * *"
  repository:
    name: gcs-repo
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: sample-es
  interimVolumeTemplate:
    metadata:
      name: sample-es-backup-tmp-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
  retentionPolicy:
    name: keep-last-5
    keepLast: 5
    prune: true
```

Here,

- `.spec.schedule` specifies that we want to back up the database every five minutes.
- `.spec.target.ref` refers to the `AppBinding` object that holds the connection information of our targeted database.
- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket.

Let's create the `BackupConfiguration` object we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/backup/backupconfiguration.yaml
backupconfiguration.stash.appscode.com/sample-es-backup created
```

### Verify Backup Setup Successful

If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,

```bash
$ kubectl get backupconfiguration -n demo
NAME               TASK                         SCHEDULE      PAUSED   PHASE   AGE
sample-es-backup   elasticsearch-backup-7.3.2   */5 * * * *            Ready   11s
```

### Verify CronJob

Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` object.

Verify that the CronJob has been created using the following command,

```bash
❯ kubectl get cronjob -n demo
NAME                            SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
stash-backup-sample-es-backup   */5 * * * *   False     0        <none>          9s
```

### Wait for BackupSession

The `stash-backup-sample-es-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object.

Now, wait for a schedule to appear.
Run the following command to watch for a `BackupSession` object, + +```bash +❯ kubectl get backupsessions.stash.appscode.com -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +sample-es-backup-1612440003 BackupConfiguration sample-es-backup 0s +sample-es-backup-1612440003 BackupConfiguration sample-es-backup Running 0s +sample-es-backup-1612440003 BackupConfiguration sample-es-backup Succeeded 54s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo gcs-repo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 3.801 KiB 1 64s 3m46s +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/sample-es` directory as specified by the `.spec.backend.gcs.prefix` field of the `Repository` object. + +
![Backup data in GCS Bucket](images/sample-es-backup.png)

Fig: Backup data in GCS Bucket
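You can also read the same information directly from the `Repository` status. The field names below (`snapshotCount`, `lastBackupTime`) are an assumption based on the printed columns; inspect the full object with `-o yaml` if they differ in your Stash version:

```bash
# Print the snapshot count and the last backup time from the Repository status.
❯ kubectl get repository -n demo gcs-repo \
    -o jsonpath='{.status.snapshotCount}{"\n"}{.status.lastBackupTime}{"\n"}'
```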
+ +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your Elasticsearch database. Now, we are going to show how you can restore the database from the backed up data. + +### Restore into the same Elasticsearch + +You can restore your data into the same database you have backed up from or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same database which may be necessary when you have accidentally deleted any data from the running database. + +#### Temporarily pause backup + +At first, let's stop taking any further backup of the database so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-es-backup` BackupConfiguration, + +```bash +❯ kubectl patch backupconfiguration -n demo sample-es-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-es-backup patched +``` +Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`, +```bash +❯ kubectl stash pause backup -n demo --backupconfig=sample-es-backup +BackupConfiguration demo/sample-es-backup has been paused successfully. +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-es-backup +NAME TASK SCHEDULE PAUSED PHASE AGE +sample-es-backup elasticsearch-backup-7.3.2 */5 * * * * true Ready 12m +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-es-backup */5 * * * * True 0 5m19s 12m +``` + +#### Simulate Disaster + +Now, let's simulate an accidental deletion scenario. Here, we are going to delete the `products` and `companies` indexes that we had created earlier. + +```bash +# Delete "products" index +❯ curl -XDELETE --user "$USER:$PASSWORD" "http://localhost:9200/products?pretty" +{ + "acknowledged" : true +} + +# Delete "companies" index +❯ curl -XDELETE --user "$USER:$PASSWORD" "http://localhost:9200/companies?pretty" +{ + "acknowledged" : true +} +``` + +Now, let's verify that the indexes have been deleted from the database, + +```bash +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size +``` + +So, we can see our `sample-es` database does not have any indexes. In the next section, we are going to restore the deleted indexes from backed up data. + +#### Create RestoreSession + +To restore the database, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted database. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring our `sample-es` database. 
```yaml
apiVersion: stash.appscode.com/v1beta1
kind: RestoreSession
metadata:
  name: sample-es-restore
  namespace: demo
spec:
  repository:
    name: gcs-repo
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: sample-es
  interimVolumeTemplate:
    metadata:
      name: sample-es-restore-tmp-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
  rules:
  - snapshots: [latest]
```

Here,

- `.spec.repository.name` specifies the `Repository` object that holds the backend information where our backed up data has been stored.
- `.spec.target.ref` refers to the respective `AppBinding` of the `sample-es` database.
- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting it into the database.
- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the database.

Let's create the `RestoreSession` object we have shown above,

```bash
❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/restore/restoresession.yaml
restoresession.stash.appscode.com/sample-es-restore created
```

Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,

```bash
❯ kubectl get restoresession -n demo -w
NAME                REPOSITORY   PHASE       AGE
sample-es-restore   gcs-repo     Running     8s
sample-es-restore   gcs-repo     Running     24s
sample-es-restore   gcs-repo     Succeeded   24s
sample-es-restore   gcs-repo     Succeeded   25s
```

The `Succeeded` phase means that the restore process has been completed successfully.

#### Verify Restored Data

Now, it's time to verify whether the actual data has been restored or not. At first, let's verify whether the indexes have been restored:

```bash
❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/indices?v&s=index&pretty"
health status index     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   companies 7UgxlL4wST6ZIAImxRVvzw 1   1          1            0     11.4kb          5.7kb
green  open   products  vb19PIneSL2zMTPvNEgm-w 1   1          2            0     10.8kb          5.4kb
```

So, we can see the indexes have been restored.
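As an extra check, you can compare the document counts against what we inserted earlier (2 documents in `products` and 1 in `companies`) using Elasticsearch's standard `_cat/count` API:

```bash
# Count the documents in each restored index.
❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/count/products?v"
❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/count/companies?v"
```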
Now, let's verify the data of these indexes, + +```bash +# Verify the data of the "products" index +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/products/_search?pretty" +{ + "took" : 3, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 2, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "products", + "_type" : "_doc", + "_id" : "vKDVgXcBa1PZYKwIDBjy", + "_score" : 1.0, + "_source" : { + "name" : "Stash", + "vendor" : "AppsCode Inc.", + "description" : "Backup tool for Kubernetes workloads" + } + }, + { + "_index" : "products", + "_type" : "_doc", + "_id" : "u6DUgXcBa1PZYKwI5xic", + "_score" : 1.0, + "_source" : { + "name" : "KubeDB", + "vendor" : "AppsCode Inc.", + "description" : "Database Operator for Kubernetes" + } + } + ] + } +} + +# Verify the data of "companies" index +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/companies/_search?pretty" +{ + "took" : 2, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "companies", + "_type" : "_doc", + "_id" : "vaDVgXcBa1PZYKwIMxhm", + "_score" : 1.0, + "_source" : { + "name" : "AppsCode Inc.", + "mission" : "Accelerate the transition to Containers by building a Kubernetes-native Data Platform", + "products" : [ + "KubeDB", + "Stash", + "KubeVault", + "Kubeform", + "ByteBuilders" + ] + } + } + ] + } +} +``` + +So, we can see that the data has been restored as well. + +#### Resume Backup + +Since our data has been restored successfully we can now resume our usual backup process. Resume the `BackupConfiguration` using following command, +```bash +❯ kubectl patch backupconfiguration -n demo sample-es-backup --type="merge" --patch='{"spec": {"paused": false}}' +backupconfiguration.stash.appscode.com/sample-es-backup patched +``` + +Or you can use Stash `kubectl` plugin to resume the `BackupConfiguration`, +```bash +❯ kubectl stash resume -n demo --backupconfig=sample-es-backup +BackupConfiguration demo/sample-es-backup has been resumed successfully. +``` + +Verify that the `BackupConfiguration` has been resumed, + +```bash +❯ kubectl get backupconfiguration -n demo sample-es-backup +NAME TASK SCHEDULE PAUSED PHASE AGE +sample-es-backup elasticsearch-backup-7.3.2 */5 * * * * false Ready 30m +``` + +Here, `false` in the `PAUSED` column means the backup has been resume successfully. The CronJob also should be resumed now. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-es-backup */5 * * * * False 0 2m50s 30m +``` + +Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule. + +### Restore into a different Elasticsearch + +Now, we are going to restore the backed up data into a different Elasticsearch of a different namespace. This time, we are going to use `opendistro` variant for Elasticsearch to demonstrate migration between the variants. You can use the same variant of Elasticsearch if you are not considering to migrate from your current variant. + +We are going to restore the data into an Elasticsearch in `restored` namespace. If you already don't have the namespace, let's create it first. 
```bash
❯ kubectl create ns restored
namespace/restored created
```

#### Copy Repository and backend Secret into the new namespace

Now, let's copy the `gcs-repo` Repository into our new namespace using the `stash` kubectl plugin,

```bash
❯ kubectl stash cp repository gcs-repo -n demo --to-namespace=restored
I0208 19:51:43.950560  666626 copy_repository.go:58] Repository demo/gcs-repo uses Storage Secret demo/gcs-secret.
I0208 19:51:43.952899  666626 copy_secret.go:60] Copying Storage Secret demo to restored namespace
I0208 19:51:43.957204  666626 copy_secret.go:73] Secret demo/gcs-secret has been copied to restored namespace successfully.
I0208 19:51:43.967768  666626 copy_repository.go:75] Repository demo/gcs-repo has been copied to restored namespace successfully.
```

The above command will copy the `gcs-repo` Repository as well as the respective backend secret `gcs-secret`.

Let's verify that the `Repository` has been copied into the `restored` namespace,

```bash
❯ kubectl get repository -n restored
NAME       INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
gcs-repo                                                                2m9s
```

>The command does not copy the status of the `Repository`. As a result, you will see the `INTEGRITY`, `SIZE`, `SNAPSHOT-COUNT`, and `LAST-SUCCESSFUL-BACKUP` fields are empty. There is nothing to worry about here. Your actual data exists safely in the cloud bucket. The `Repository` just contains the connection information to that bucket.

Now, let's verify that the backend secret has been copied as well,

```bash
❯ kubectl get secret -n restored
NAME                  TYPE                                  DATA   AGE
default-token-rd2v5   kubernetes.io/service-account-token   3      15m
gcs-secret            Opaque                                3      8m36s
```

As you can see, the backend secret `gcs-secret` also has been copied to the `restored` namespace.

#### Deploy new Elasticsearch

Now, we are going to deploy an Elasticsearch into the `restored` namespace. We are going to initialize this database from the backed up data of the first Elasticsearch.

Here is the YAML of the Elasticsearch object that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: init-sample
  namespace: restored
spec:
  version: opensearch-2.8.0
  storageType: Durable
  init:
    waitForInitialRestore: true
  topology:
    master:
      suffix: master
      replicas: 1
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    data:
      suffix: data
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    ingest:
      suffix: client
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Notice that this time, we are using the `opensearch-2.8.0` variant for Elasticsearch. Also, notice that we have added an `init` section in the `spec`. Here, `waitForInitialRestore: true` tells KubeDB to wait for the first restore to complete before marking this database as ready to use.

Let's deploy the above Elasticsearch,

```bash
❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/elasticsearch/init_sample.yaml
elasticsearch.kubedb.com/init-sample created
```

Now, wait for KubeDB to create all the nodes for this Elasticsearch. This time, the Elasticsearch will get stuck in the `Provisioning` state because we haven't completed the first restore yet.
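You can watch the phase from another terminal; the output below is illustrative of what you should see while the first restore is still pending:

```bash
❯ kubectl get elasticsearch -n restored -w
NAME          VERSION            STATUS         AGE
init-sample   opensearch-2.8.0   Provisioning   2m
```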
+ +You can check the condition of the Elasticsearch to verify whether we are ready to restore the database. + +```bash +❯ kubectl get elasticsearch -n restored init-sample -o jsonpath='{.status.conditions}' | jq +[ + { + "lastTransitionTime": "2021-02-08T14:13:22Z", + "message": "The KubeDB operator has started the provisioning of Elasticsearch: restored/init-sample", + "reason": "DatabaseProvisioningStartedSuccessfully", + "status": "True", + "type": "ProvisioningStarted" + }, + { + "lastTransitionTime": "2021-02-08T14:18:15Z", + "message": "All desired replicas are ready.", + "reason": "AllReplicasReady", + "status": "True", + "type": "ReplicaReady" + }, + { + "lastTransitionTime": "2021-02-08T14:19:22Z", + "message": "The Elasticsearch: restored/init-sample is accepting client requests.", + "observedGeneration": 3, + "reason": "DatabaseAcceptingConnectionRequest", + "status": "True", + "type": "AcceptingConnection" + }, + { + "lastTransitionTime": "2021-02-08T14:19:33Z", + "message": "The Elasticsearch: restored/init-sample is ready.", + "observedGeneration": 3, + "reason": "ReadinessCheckSucceeded", + "status": "True", + "type": "Ready" + } +] +``` + +Here, check the last two conditions. We can see that the database has passed the readiness check from `Ready` conditions and it is accepting connections from `AcceptingConnection` condition. So, we are good to start restoring into this database. + +KubeDB has created an AppBinding for this database. Let's verify that the AppBinding has been created, + +```bash +❯ kubectl get appbindings.appcatalog.appscode.com -n restored +NAME TYPE VERSION AGE +init-sample kubedb.com/elasticsearch 7.8.0 21m +``` + +We are going to create a `RestoreSession` targeting this AppBinding to restore into this database. + +#### Create RestoreSession for new Elasticsearch + +Now, we have to create a `RestoreSession` object targeting the `AppBinding` of our `init-sample` database. Here, is the YAML of the RestoreSession that we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: init-sample-restore + namespace: restored +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: init-sample + interimVolumeTemplate: + metadata: + name: init-sample-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Let's create the above RestoreSession, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/backup/kubedb/examples/restore/init_sample_restore.yaml +restoresession.stash.appscode.com/init-sample-restore created +``` + +Now, wait for the restore process to complete, + +```bash +❯ kubectl get restoresession -n restored -w +NAME REPOSITORY PHASE AGE +init-sample-restore gcs-repo Running 4s +init-sample-restore gcs-repo Running 21s +init-sample-restore gcs-repo Succeeded 21s +init-sample-restore gcs-repo Succeeded 21s +``` + +#### Verify Restored Data in new Elasticsearch + +Now, we are going to verify whether the data has been restored or not. 
At first let's port-forward the respective Service for this Elasticsearch, + +```bash +❯ kubectl get service -n restored +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +init-sample ClusterIP 10.109.51.219 9200/TCP 54m +init-sample-master ClusterIP None 9300/TCP 54m +init-sample-pods ClusterIP None 9200/TCP 54m +``` + +```bash +❯ kubectl port-forward -n restored service/init-sample 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 +``` + +Now, let's export the credentials of this Elasticsearch, + +```bash +❯ kubectl get secret -n restored | grep init-sample +init-sample-admin-cred kubernetes.io/basic-auth 2 55m +init-sample-ca-cert kubernetes.io/tls 2 55m +init-sample-config Opaque 3 55m +init-sample-kibanaro-cred kubernetes.io/basic-auth 2 55m +init-sample-kibanaserver-cred kubernetes.io/basic-auth 2 55m +init-sample-logstash-cred kubernetes.io/basic-auth 2 55m +init-sample-readall-cred kubernetes.io/basic-auth 2 55m +init-sample-snapshotrestore-cred kubernetes.io/basic-auth 2 55m +init-sample-token-xgnrx kubernetes.io/service-account-token 3 55m +init-sample-transport-cert kubernetes.io/tls 3 55m +stash-restore-init-sample-restore-0-token-vscdt kubernetes.io/service-account-token 3 4m40s +``` + +Here, we are going to use the `init-sample-admin-cred` for connecting with the database. Let's export the `username` and `password` keys. + +```bash +❯ export USER=$(kubectl get secrets -n restored init-sample-admin-cred -o jsonpath='{.data.\username}' | base64 -d) +❯ export PASSWORD=$(kubectl get secrets -n restored init-sample-admin-cred -o jsonpath='{.data.\password}' | base64 -d) +``` + +Now, let's verify whether the indexes have been restored or not. + +```bash +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size +green open .opendistro_security _v-_YiJUReylNbUaIEXN8A 1 1 7 0 57.1kb 37.1kb +green open companies XfSvxePuS7-lNq-gcd-bxg 1 1 1 0 11.1kb 5.5kb +green open products pZYHzOp_TWK9bLaEU-uj8Q 1 1 2 0 10.5kb 5.2kb +``` + +So, we can see that our indexes have been restored successfully. Now, let's verify the data of these indexes. 
+ +```bash +# Verify data of "products" index +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/products/_search?pretty" +{ + "took" : 634, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 2, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "products", + "_type" : "_doc", + "_id" : "u6DUgXcBa1PZYKwI5xic", + "_score" : 1.0, + "_source" : { + "name" : "KubeDB", + "vendor" : "AppsCode Inc.", + "description" : "Database Operator for Kubernetes" + } + }, + { + "_index" : "products", + "_type" : "_doc", + "_id" : "vKDVgXcBa1PZYKwIDBjy", + "_score" : 1.0, + "_source" : { + "name" : "Stash", + "vendor" : "AppsCode Inc.", + "description" : "Backup tool for Kubernetes workloads" + } + } + ] + } +} + +# Verify data of "companies" index +❯ curl -XGET --user "$USER:$PASSWORD" "http://localhost:9200/companies/_search?pretty" +{ + "took" : 5, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "companies", + "_type" : "_doc", + "_id" : "vaDVgXcBa1PZYKwIMxhm", + "_score" : 1.0, + "_source" : { + "name" : "AppsCode Inc.", + "mission" : "Accelerate the transition to Containers by building a Kubernetes-native Data Platform", + "products" : [ + "KubeDB", + "Stash", + "KubeVault", + "Kubeform", + "ByteBuilders" + ] + } + } + ] + } +} +``` + +So, we can see that the data of these indexes data has been restored too. + +### Restore into a different cluster + +If you want to restore into a different cluster, you have to install KubeDB and Stash in the desired cluster. Then, you have to install Stash Elasticsearch addon in that cluster too. Then, you have to deploy the target database there. Once, the database is ready to accept connections, create the Repository, backend Secret, in the same namespace as the database of your desired cluster. Finally, create the `RestoreSession` object in the desired cluster pointing to the AppBinding of the targeted database of that cluster. 
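As a sketch of that workflow, assuming two kubectl contexts named `source-cluster` and `target-cluster` (hypothetical names) and that you have saved the manifests from this guide locally (the file names below are also hypothetical), the steps look like this:

```bash
# Switch to the target cluster (context name is hypothetical).
❯ kubectl config use-context target-cluster

# 1. Install KubeDB, Stash, and the Stash Elasticsearch addon in this cluster first.
# 2. Deploy the target database and wait for it to become ready.
❯ kubectl apply -f target-elasticsearch.yaml   # your Elasticsearch manifest

# 3. Re-create the backend Secret and the Repository in the database's namespace.
❯ kubectl apply -f gcs-secret.yaml
❯ kubectl apply -f repository.yaml

# 4. Finally, point a RestoreSession at the AppBinding of the target database.
❯ kubectl apply -f restoresession.yaml
```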
## Cleanup

To clean up the Kubernetes resources created by this tutorial, run:

```bash
# delete all resources from "demo" namespace
kubectl delete -n demo backupconfiguration sample-es-backup
kubectl delete -n demo restoresession sample-es-restore
kubectl delete -n demo repository gcs-repo
kubectl delete -n demo secret gcs-secret
kubectl delete -n demo elasticsearch sample-es

# delete all resources from "restored" namespace
kubectl delete -n restored restoresession init-sample-restore
kubectl delete -n restored repository gcs-repo
kubectl delete -n restored secret gcs-secret
kubectl delete -n restored elasticsearch init-sample
```

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/backup_overview.svg b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/backup_overview.svg new file mode 100644 index 0000000000..5d043fc483 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/backup_overview.svg @@ -0,0 +1,1033 @@ (SVG image source omitted) diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/restore_overview.svg b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/restore_overview.svg new file mode 100644 index 0000000000..d1285c8820 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/images/restore_overview.svg @@ -0,0 +1,892 @@ (SVG image source omitted) diff --git a/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/index.md b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/index.md new file mode 100644 index 0000000000..eb8b3cca61 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/backup/overview/index.md @@ -0,0 +1,101 @@

---
title: Backup & Restore Elasticsearch Using Stash
menu:
  docs_v2024.1.31:
    identifier: guides-es-backup-overview
    name: Overview
    parent: guides-es-backup
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}}

# Backup & Restore Elasticsearch Using Stash

KubeDB uses [Stash](https://stash.run) to backup and restore databases.
Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely backup stateful applications to any cloud or on-prem storage backends (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC etc.). + +
Fig: Backup KubeDB Databases Using Stash
+ +## How Backup Works + +The following diagram shows how Stash takes a backup of an Elasticsearch database. Open the image in a new tab to see the enlarged version. + +
![Elasticsearch Backup Overview](images/backup_overview.svg)

Fig: Elasticsearch Backup Overview
+ +The backup process consists of the following steps: + +1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored. + +2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend. + +3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to backup the database. + +4. Stash operator watches for `BackupConfiguration` crd. + +5. Once Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in `BackupConfiguration` object to trigger backup periodically. + +6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd. + +7. Stash operator also watches for `BackupSession` crd. + +8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to backup. + +9. Then, it creates the Job to backup the targeted database. + +10. The backup Job reads necessary information to connect with the database from the `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively. + +11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash stores the dumped files temporarily before uploading into the backend. Hence, you should provide a PVC template using `spec.interimVolumeTemplate` field of `BackupConfiguration` crd to use to store those dumped files temporarily. + +12. Finally, when the backup is completed, the Job sends Prometheus metrics to the Pushgateway running inside Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure. + +## How Restore Process Works + +The following diagram shows how Stash restores backed up data into an Elasticsearch database. Open the image in a new tab to see the enlarged version. + +
![Database Restore Overview](images/restore_overview.svg)

Fig: Elasticsearch Restore Process
+ +The restore process consists of the following steps: + +1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target. + +2. Stash operator watches for `RestoreSession` object. + +3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore. + +4. Then, it creates the Job to restore the target. + +5. The Job reads necessary information to connect with the database from respective `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively. + +6. Then, the job downloads the backed up data from the backend and insert into the desired database. Stash stores the downloaded files temporarily before inserting into the targeted database. Hence, you should provide a PVC template using `spec.interimVolumeTemplate` field of `RestoreSession` crd to use to store those restored files temporarily. + +7. Finally, when the restore process is completed, the Job sends Prometheus metrics to the Pushgateway and update the `RestoreSession` status to reflect restore completion. + +## Next Steps + +- Backup your Elasticsearch databases using Stash following the guide from [here](/docs/v2024.1.31/guides/elasticsearch/backup/kubedb/). +- Configure a generic backup template for all the Elasticsearch databases of your cluster using Stash Auto-backup by following the guide from [here](/docs/v2024.1.31/guides/elasticsearch/backup/auto-backup/). +- Customize the backup & restore process for your cluster by following the guides from [here](/docs/v2024.1.31/guides/elasticsearch/backup/customization/). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/cli/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/cli/_index.md new file mode 100755 index 0000000000..7826be9fcb --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/cli/_index.md @@ -0,0 +1,22 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: es-cli-elasticsearch + name: CLI + parent: es-elasticsearch-guides + weight: 100 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/cli/cli.md b/content/docs/v2024.1.31/guides/elasticsearch/cli/cli.md new file mode 100644 index 0000000000..6c14888980 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/cli/cli.md @@ -0,0 +1,417 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: es-cli-cli + name: Quickstart + parent: es-cli-elasticsearch + weight: 100 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Manage KubeDB objects using CLIs + +## KubeDB CLI + +KubeDB comes with its own cli. It is called `kubedb` cli. `kubedb` can be used to manage any KubeDB object. `kubedb` cli also performs various validations to improve ux. 
To install KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README). + +### How to Create objects + +`kubectl create` creates a database CRD object in `default` namespace by default. Following command will create an Elasticsearch object as specified in `elasticsearch.yaml`. + +```bash +$ kubectl create -f elasticsearch-demo.yaml +elasticsearch.kubedb.com/elasticsearch-demo created +``` + +You can provide namespace as a flag `--namespace`. Provided namespace should match with namespace specified in input file. + +```bash +$ kubectl create -f elasticsearch-demo.yaml --namespace=kube-system +elasticsearch.kubedb.com/elasticsearch-demo created +``` + +`kubectl create` command also considers `stdin` as input. + +```bash +cat elasticsearch-demo.yaml | kubectl create -f - +``` + +### How to List Objects + +`kubectl get` command allows users to list or find any KubeDB object. To list all Elasticsearch objects in `default` namespace, run the following command: + +```bash +$ kubectl get elasticsearch +NAME VERSION STATUS AGE +elasticsearch-demo 7.3.2 Running 1m +``` + +To get YAML of an object, use `--output=yaml` flag. + +```yaml +$ kubectl get elasticsearch elasticsearch-demo --output=yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + creationTimestamp: 2018-10-08T14:22:19Z + finalizers: + - kubedb.com + generation: 3 + name: elasticsearch-demo + namespace: demo + resourceVersion: "51660" + selfLink: /apis/kubedb.com/v1alpha2/namespaces/default/elasticsearches/elasticsearch-demo + uid: 90a54c9e-cb05-11e8-8d51-9eed48c5e947 +spec: + authSecret: + name: elasticsearch-demo-auth + podTemplate: + controller: {} + metadata: {} + spec: + resources: {} + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: Halt + version: xpack-8.11.1 +status: + observedGeneration: 3$4212299729528774793 + phase: Running +``` + +To get JSON of an object, use `--output=json` flag. 
+ +```bash +$ kubectl get elasticsearch elasticsearch-demo --output=json +``` + +To list all KubeDB objects, use following command: + +```bash +$ kubectl get all -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE +pod/elasticsearch-demo-0 1/1 Running 0 2m 192.168.1.105 4gb-pool-crtbqq + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +service/elasticsearch-demo ClusterIP 10.98.224.23 9200/TCP 2m app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo,node.role.client=set +service/elasticsearch-demo-master ClusterIP 10.100.87.240 9300/TCP 2m app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo,node.role.master=set +service/kubedb ClusterIP None 2m +service/kubernetes ClusterIP 10.96.0.1 443/TCP 9h + +NAME DESIRED CURRENT AGE CONTAINERS IMAGES +statefulset.apps/elasticsearch-demo 1 1 2m elasticsearch kubedbci/elasticsearch:5.6-v1 + +NAME VERSION DB_IMAGE DEPRECATED AGE +elasticsearchversion.catalog.kubedb.com/5.6 5.6 kubedbci/elasticsearch:5.6 true 5h +elasticsearchversion.catalog.kubedb.com/5.6-v1 5.6 kubedbci/elasticsearch:5.6-v1 5h +elasticsearchversion.catalog.kubedb.com/5.6.4 5.6.4 kubedbci/elasticsearch:5.6.4 true 5h +elasticsearchversion.catalog.kubedb.com/5.6.4-v1 5.6.4 kubedbci/elasticsearch:5.6.4-v1 5h +elasticsearchversion.catalog.kubedb.com/6.2 6.2 kubedbci/elasticsearch:6.2 true 5h +elasticsearchversion.catalog.kubedb.com/6.2-v1 6.2 kubedbci/elasticsearch:6.2-v1 5h +elasticsearchversion.catalog.kubedb.com/6.2.4 6.2.4 kubedbci/elasticsearch:6.2.4 true 5h +elasticsearchversion.catalog.kubedb.com/6.2.4-v1 6.2.4 kubedbci/elasticsearch:6.2.4-v1 5h +elasticsearchversion.catalog.kubedb.com/6.3 6.3 kubedbci/elasticsearch:6.3 true 5h +elasticsearchversion.catalog.kubedb.com/6.3-v1 6.3 kubedbci/elasticsearch:6.3-v1 5h +elasticsearchversion.catalog.kubedb.com/6.3.0 6.3.0 kubedbci/elasticsearch:6.3.0 true 5h +elasticsearchversion.catalog.kubedb.com/6.3.0-v1 6.3.0 kubedbci/elasticsearch:6.3.0-v1 5h +elasticsearchversion.catalog.kubedb.com/6.4 6.4 kubedbci/elasticsearch:6.4 5h +elasticsearchversion.catalog.kubedb.com/6.4.0 6.4.0 kubedbci/elasticsearch:6.4.0 5h + +NAME VERSION STATUS AGE +elasticsearch.kubedb.com/elasticsearch-demo 5.6-v1 Running 2m +NAME DATABASE BUCKET STATUS AGE +snap/elasticsearch-demo-20170605-073557 es/elasticsearch-demo gs:bucket-name Succeeded 9m +snap/snapshot-20171212-114700 es/elasticsearch-demo gs:bucket-name Succeeded 1h +``` + +Flag `--output=wide` is used to print additional information. + +List command supports short names for each object types. You can use it like `kubectl get `. Below are the short name for KubeDB objects: + +- Elasticsearch: `es` +- Snapshot: `snap` +- DormantDatabase: `drmn` + +You can print labels with objects. The following command will list all Snapshots with their corresponding labels. + +```bash +$ kubectl get snap --show-labels +NAME DATABASE STATUS AGE LABELS +elasticsearch-demo-20170605-073557 es/elasticsearch-demo Succeeded 11m app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo +snapshot-20171212-114700 es/elasticsearch-demo Succeeded 1h app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo +``` + +You can also filter list using `--selector` flag. 
+ +```bash +$ kubectl get snap --selector='app.kubernetes.io/name=elasticsearches.kubedb.com' --show-labels +NAME DATABASE STATUS AGE LABELS +elasticsearch-demo-20171212-073557 es/elasticsearch-demo Succeeded 14m app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo +snapshot-20171212-114700 es/elasticsearch-demo Succeeded 2h app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo +``` + +To print only object name, run the following command: + +```bash +$ kubectl get all -o name +pod/elasticsearch-demo-0 +service/elasticsearch-demo +service/elasticsearch-demo-master +service/kubedb +service/kubernetes +statefulset.apps/elasticsearch-demo +elasticsearchversion.catalog.kubedb.com/5.6 +elasticsearchversion.catalog.kubedb.com/5.6-v1 +elasticsearchversion.catalog.kubedb.com/5.6.4 +elasticsearchversion.catalog.kubedb.com/5.6.4-v1 +elasticsearchversion.catalog.kubedb.com/6.2 +elasticsearchversion.catalog.kubedb.com/6.2-v1 +elasticsearchversion.catalog.kubedb.com/6.2.4 +elasticsearchversion.catalog.kubedb.com/6.2.4-v1 +elasticsearchversion.catalog.kubedb.com/6.3 +elasticsearchversion.catalog.kubedb.com/6.3-v1 +elasticsearchversion.catalog.kubedb.com/6.3.0 +elasticsearchversion.catalog.kubedb.com/6.3.0-v1 +elasticsearchversion.catalog.kubedb.com/6.4 +elasticsearchversion.catalog.kubedb.com/6.4.0 +elasticsearch.kubedb.com/elasticsearch-demo +``` + +### How to Describe Objects + +`kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe Elasticsearch database `elasticsearch-demo` with relevant information. + +```bash +$ kubectl dba describe es elasticsearch-demo +Name: elasticsearch-demo +Namespace: default +CreationTimestamp: Mon, 08 Oct 2018 20:22:19 +0600 +Labels: +Annotations: +Status: Running +Replicas: 1 total + StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO + +StatefulSet: + Name: elasticsearch-demo + CreationTimestamp: Mon, 08 Oct 2018 20:22:22 +0600 + Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=elasticsearch-demo + node.role.client=set + node.role.data=set + node.role.master=set + Annotations: + Replicas: 824642046536 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: elasticsearch-demo + Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=elasticsearch-demo + Annotations: + Type: ClusterIP + IP: 10.98.224.23 + Port: http 9200/TCP + TargetPort: http/TCP + Endpoints: 192.168.1.105:9200 + +Service: + Name: elasticsearch-demo-master + Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=elasticsearch-demo + Annotations: + Type: ClusterIP + IP: 10.100.87.240 + Port: transport 9300/TCP + TargetPort: transport/TCP + Endpoints: 192.168.1.105:9300 + +Certificate Secret: + Name: elasticsearch-demo-cert + Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=elasticsearch-demo + Annotations: + +Type: Opaque + +Data +==== + key_pass: 6 bytes + node.jks: 3015 bytes + root.jks: 864 bytes + sgadmin.jks: 3011 bytes + +Database Secret: + Name: elasticsearch-demo-auth + Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=elasticsearch-demo + Annotations: + +Type: Opaque + +Data +==== + sg_roles.yml: 312 bytes + sg_roles_mapping.yml: 73 bytes + ADMIN_PASSWORD: 8 bytes + READALL_USERNAME: 7 bytes + 
sg_action_groups.yml: 430 bytes + sg_internal_users.yml: 156 bytes + ADMIN_USERNAME: 5 bytes + READALL_PASSWORD: 8 bytes + sg_config.yml: 242 bytes + +Topology: + Type Pod StartTime Phase + ---- --- --------- ----- + data|master|client elasticsearch-demo-0 2018-10-08 20:22:23 +0600 +06 Running + +No Snapshots. + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 6m Elasticsearch operator Successfully created Service + Normal Successful 6m Elasticsearch operator Successfully created Service + Normal Successful 6m Elasticsearch operator Successfully created StatefulSet + Normal Successful 5m Elasticsearch operator Successfully created Elasticsearch + Normal Successful 5m Elasticsearch operator Successfully patched StatefulSet + Normal Successful 5m Elasticsearch operator Successfully patched Elasticsearch + Normal Successful 5m Elasticsearch operator Successfully patched StatefulSet + Normal Successful 4m Elasticsearch operator Successfully patched Elasticsearch +``` + +`kubectl dba describe` command provides following basic information about a database. + +- StatefulSet +- Storage (Persistent Volume) +- Service +- Secret (If available) +- Topology (If available) +- Snapshots (If any) +- Monitoring system (If available) + +To hide details about StatefulSet & Service, use flag `--show-workload=false` +To hide details about Secret, use flag `--show-secret=false` +To hide events on KubeDB object, use flag `--show-events=false` + +To describe all Elasticsearch objects in `default` namespace, use following command + +```bash +$ kubectl dba describe es +``` + +To describe all Elasticsearch objects from every namespace, provide `--all-namespaces` flag. + +```bash +$ kubectl dba describe es --all-namespaces +``` + +To describe all KubeDB objects from every namespace, use the following command: + +```bash +$ kubectl dba describe all --all-namespaces +``` + +You can also describe KubeDb objects with matching labels. The following command will describe all Elasticsearch objects with specified labels from every namespace. + +```bash +$ kubectl dba describe es --all-namespaces --selector='group=dev' +``` + +To learn about various options of `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe). + + +#### Edit restrictions + +Various fields of a KubeDb object can't be edited using `edit` command. The following fields are restricted from updates for all KubeDB objects: + +- apiVersion +- kind +- metadata.name +- metadata.namespace +- status + +If StatefulSets or Deployments exists for a database, following fields can't be modified as well. + +Elasticsearch: + +- spec.init +- spec.storageType +- spec.storage +- spec.podTemplate.spec.nodeSelector +- spec.podTemplate.spec.env + +For DormantDatabase, `spec.origin` can't be edited using `kubectl edit` + +### How to Delete Objects + +`kubectl delete` command will delete an object in `default` namespace by default unless namespace is provided. The following command will delete an Elasticsearch `elasticsearch-dev` in default namespace + +```bash +$ kubectl delete elasticsearch elasticsearch-demo +elasticsearch.kubedb.com "elasticsearch-demo" deleted +``` + +You can also use YAML files to delete objects. The following command will delete an Elasticsearch using the type and name specified in `elasticsearch.yaml`. + +```bash +$ kubectl delete -f elasticsearch-demo.yaml +elasticsearch.kubedb.com "elasticsearch-demo" deleted +``` + +`kubectl delete` command also takes input from `stdin`. 
+ +```bash +cat elasticsearch.yaml | kubectl delete -f - +``` + +To delete database with matching labels, use `--selector` flag. The following command will delete elasticsearch with label `elasticsearch.app.kubernetes.io/instance=elasticsearch-demo`. + +```bash +$ kubectl delete elasticsearch -l elasticsearch.app.kubernetes.io/instance=elasticsearch-demo +``` + +## Using Kubectl + +You can use Kubectl with KubeDB objects like any other CRDs. Below are some common examples of using Kubectl with KubeDB objects. + +```bash +# List objects +$ kubectl get elasticsearch +$ kubectl get elasticsearch.kubedb.com + +# Delete objects +$ kubectl delete elasticsearch +``` + +## Next Steps + +- Learn how to use KubeDB to run an Elasticsearch database [here](/docs/v2024.1.31/guides/elasticsearch/README). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/clustering/_index.md new file mode 100755 index 0000000000..85676d9698 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Clustering +menu: + docs_v2024.1.31: + identifier: es-clustering-elasticsearch + name: Clustering + parent: es-elasticsearch-guides + weight: 25 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/index.md b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/index.md new file mode 100644 index 0000000000..a25e299af9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/index.md @@ -0,0 +1,315 @@ +--- +title: Elasticsearch Combined Cluster +menu: + docs_v2024.1.31: + identifier: es-combined-cluster + name: Combined Cluster + parent: es-clustering-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Combined Cluster + +An Elasticsearch combined cluster is a group of one or more Elasticsearch nodes where each node can perform as master, data, and ingest nodes simultaneously. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME   STATUS   AGE
+demo   Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/combined-cluster/yamls) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create Standalone Elasticsearch Cluster
+
+Here, we are going to create a standalone (i.e., `replicas: 1`) Elasticsearch cluster. We will use the OpenSearch image (`opensearch-2.8.0`) for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-standalone
+  namespace: demo
+spec:
+  version: opensearch-2.8.0
+  enableSSL: true
+  replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Let's deploy the above example with the following command:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/combined-cluster/yamls/es-standalone.yaml
+elasticsearch.kubedb.com/es-standalone created
+```
+
+Watch the bootstrap progress:
+
+```bash
+$ kubectl get elasticsearch -n demo -w
+NAME            VERSION            STATUS         AGE
+es-standalone   opensearch-2.8.0   Provisioning   1m32s
+es-standalone   opensearch-2.8.0   Provisioning   2m17s
+es-standalone   opensearch-2.8.0   Provisioning   2m17s
+es-standalone   opensearch-2.8.0   Provisioning   2m20s
+es-standalone   opensearch-2.8.0   Ready          2m20s
+```
+
+The cluster is now ready to use.
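+
+If you are scripting this flow instead of watching by hand, you can block until the object reports `Ready`. A minimal sketch, assuming `kubectl` v1.23 or newer (required for `--for=jsonpath` support):
+
+```bash
+# Wait up to 10 minutes for the Elasticsearch object to reach the Ready phase
+$ kubectl wait --for=jsonpath='{.status.phase}'=Ready \
+    elasticsearch/es-standalone -n demo --timeout=10m
+elasticsearch.kubedb.com/es-standalone condition met
+```
+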
+Let's check the k8s resources created by the operator on the deployment of the Elasticsearch CRO:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-standalone'
+NAME                  READY   STATUS    RESTARTS   AGE
+pod/es-standalone-0   1/1     Running   0          33m
+
+NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+service/es-standalone          ClusterIP   10.96.46.11   <none>        9200/TCP   33m
+service/es-standalone-master   ClusterIP   None          <none>        9300/TCP   33m
+service/es-standalone-pods     ClusterIP   None          <none>        9200/TCP   33m
+
+NAME                             READY   AGE
+statefulset.apps/es-standalone   1/1     33m
+
+NAME                                               TYPE                       VERSION   AGE
+appbinding.appcatalog.appscode.com/es-standalone   kubedb.com/elasticsearch   7.10.0    33m
+
+NAME                                          TYPE                       DATA   AGE
+secret/es-standalone-admin-cert               kubernetes.io/tls          3      33m
+secret/es-standalone-admin-cred               kubernetes.io/basic-auth   2      33m
+secret/es-standalone-archiver-cert            kubernetes.io/tls          3      33m
+secret/es-standalone-ca-cert                  kubernetes.io/tls          2      33m
+secret/es-standalone-config                   Opaque                     3      33m
+secret/es-standalone-http-cert                kubernetes.io/tls          3      33m
+secret/es-standalone-kibanaro-cred            kubernetes.io/basic-auth   2      33m
+secret/es-standalone-kibanaserver-cred        kubernetes.io/basic-auth   2      33m
+secret/es-standalone-logstash-cred            kubernetes.io/basic-auth   2      33m
+secret/es-standalone-readall-cred             kubernetes.io/basic-auth   2      33m
+secret/es-standalone-snapshotrestore-cred     kubernetes.io/basic-auth   2      33m
+secret/es-standalone-transport-cert           kubernetes.io/tls          3      33m
+
+NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-es-standalone-0   Bound    pvc-a2d3e491-1d66-4b29-bb18-d5f06905336c   1Gi        RWO            standard       33m
+```
+
+Connect to the Cluster:
+
+```bash
+# Port-forward the service to the local machine
+$ kubectl port-forward -n demo svc/es-standalone 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+```bash
+# Get the admin username & password from the k8s secret
+$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.username}' | base64 -d
+admin
+$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.password}' | base64 -d
+V,YY1.qXxoAch9)B
+
+# Check cluster health
+$ curl -XGET -k -u 'admin:V,YY1.qXxoAch9)B' "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "es-standalone",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 1,
+  "number_of_data_nodes" : 1,
+  "active_primary_shards" : 1,
+  "active_shards" : 1,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+## Create Multi-Node Combined Elasticsearch Cluster
+
+Here, we are going to create a multi-node (say `replicas: 3`) combined Elasticsearch cluster. We will use the OpenSearch image (`opensearch-2.8.0`) for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-multinode
+  namespace: demo
+spec:
+  version: opensearch-2.8.0
+  enableSSL: true
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Let's deploy the above example with the following command:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/combined-cluster/yamls/es-multinode.yaml
+elasticsearch.kubedb.com/es-multinode created
+```
+
+Watch the bootstrap progress:
+
+```bash
+$ kubectl get elasticsearch -n demo -w
+NAME           VERSION            STATUS         AGE
+es-multinode   opensearch-2.8.0   Provisioning   18s
+es-multinode   opensearch-2.8.0   Provisioning   78s
+es-multinode   opensearch-2.8.0   Provisioning   78s
+es-multinode   opensearch-2.8.0   Provisioning   81s
+es-multinode   opensearch-2.8.0   Ready          81s
+```
+
+The cluster is now ready to use.
+Let's check the k8s resources created by the operator on the deployment of the Elasticsearch CRO:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-multinode'
+NAME                 READY   STATUS    RESTARTS   AGE
+pod/es-multinode-0   1/1     Running   0          6m12s
+pod/es-multinode-1   1/1     Running   0          6m7s
+pod/es-multinode-2   1/1     Running   0          6m2s
+
+NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+service/es-multinode          ClusterIP   10.96.237.120   <none>        9200/TCP   6m14s
+service/es-multinode-master   ClusterIP   None            <none>        9300/TCP   6m14s
+service/es-multinode-pods     ClusterIP   None            <none>        9200/TCP   6m15s
+
+NAME                            READY   AGE
+statefulset.apps/es-multinode   3/3     6m12s
+
+NAME                                              TYPE                       VERSION   AGE
+appbinding.appcatalog.appscode.com/es-multinode   kubedb.com/elasticsearch   7.10.0    6m12s
+
+NAME                                         TYPE                       DATA   AGE
+secret/es-multinode-admin-cert               kubernetes.io/tls          3      6m14s
+secret/es-multinode-admin-cred               kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-archiver-cert            kubernetes.io/tls          3      6m13s
+secret/es-multinode-ca-cert                  kubernetes.io/tls          2      6m14s
+secret/es-multinode-config                   Opaque                     3      6m12s
+secret/es-multinode-http-cert                kubernetes.io/tls          3      6m14s
+secret/es-multinode-kibanaro-cred            kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-kibanaserver-cred        kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-logstash-cred            kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-readall-cred             kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-snapshotrestore-cred     kubernetes.io/basic-auth   2      6m13s
+secret/es-multinode-transport-cert           kubernetes.io/tls          3      6m14s
+
+NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-es-multinode-0   Bound    pvc-c031bd37-2266-4a0b-8d9f-313281379810   1Gi        RWO            standard       6m12s
+persistentvolumeclaim/data-es-multinode-1   Bound    pvc-e75bc8a8-15ed-4522-b0b3-252ff6c841a8   1Gi        RWO            standard       6m7s
+persistentvolumeclaim/data-es-multinode-2   Bound    pvc-6452fa80-91c6-4d71-9b93-5cff973a2625   1Gi        RWO            standard       6m2s
+```
+
+Connect to the Cluster:
+
+```bash
+# Port-forward the service to the local machine
+$ kubectl port-forward -n demo svc/es-multinode 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+```bash
+# Get the admin username & password from the k8s secret
+$ kubectl get secret -n demo es-multinode-admin-cred -o jsonpath='{.data.username}' | base64 -d
+admin
+$ kubectl get secret -n demo es-multinode-admin-cred -o jsonpath='{.data.password}' | base64 -d
+9f$A8o2pBpKL~1T8
+
+# Check cluster health
+$ curl -XGET -k -u 'admin:9f$A8o2pBpKL~1T8' "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "es-multinode",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 3,
+  "number_of_data_nodes" : 3,
+  "active_primary_shards" : 1,
+  "active_shards" : 3,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+## Cleaning Up
+
+To clean up the k8s resources created by this tutorial, run:
+
+```bash
+# standalone cluster
+$ kubectl patch -n demo elasticsearch es-standalone -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete elasticsearch -n demo es-standalone
+
+# multinode cluster
+$ kubectl patch -n demo elasticsearch es-multinode -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete elasticsearch -n demo es-multinode
+
+# delete namespace
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Deploy [simple dedicated topology cluster](/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/)
+- Learn about [taking backup](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of Elasticsearch database using Stash.
+- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
+- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-multinode.yaml b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-multinode.yaml new file mode 100644 index 0000000000..ef6da275d4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-multinode.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-multinode + namespace: demo +spec: + version: opensearch-2.8.0 + enableSSL: true + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-standalone.yaml b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-standalone.yaml new file mode 100644 index 0000000000..d7bf099dd3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/combined-cluster/yamls/es-standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-standalone + namespace: demo +spec: + version: opensearch-2.8.0 + enableSSL: true + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/_index.md new file mode 100644 index 0000000000..f839f4bd6f --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Topology Cluster +menu: + docs_v2024.1.31: + identifier: es-topology-cluster + name: Topology Cluster + parent: es-clustering-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/index.md b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/index.md new file mode 100644 index 0000000000..863812ff4f --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/index.md @@ -0,0 +1,598 @@ +--- +title: Elasticsearch Hot-Warm-Cold Cluster +menu: + docs_v2024.1.31: + identifier: es-hot-warm-cold-cluster + name: Hot-Warm-Cold Cluster + parent: es-topology-cluster + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Elasticsearch Hot-Warm-Cold Cluster + +Hot-warm-cold architectures are common for time series data such as logging or metrics and it also has various use cases too. 
For example, assume Elasticsearch is being used to aggregate log files from multiple systems. Logs from today are actively being indexed, and this week's logs are the most heavily searched (hot). Last week's logs may be searched, but not as much as the current week's logs (warm). Last month's logs may or may not be searched often, but are good to keep around just in case (cold).
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME   STATUS   AGE
+demo   Active   14s
+```
+
+> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/yamls) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Available StorageClass
+
+We will have to provide a `StorageClass` in the Elasticsearch CR specification. Check the available `StorageClass` in your cluster using the following command:
+
+```bash
+$ kubectl get storageclass
+NAME                                    PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)                      rancher.io/local-path     Delete          WaitForFirstConsumer   false                  10m
+linode-block-storage                    linodebs.csi.linode.com   Delete          Immediate              true                   10m
+linode-block-storage-retain (default)   linodebs.csi.linode.com   Retain          Immediate              true                   10m
+```
+
+Here, we use `linode-block-storage` as the StorageClass in this demo.
+
+## Create Elasticsearch Hot-Warm-Cold Cluster
+
+We are going to create an Elasticsearch Hot-Warm-Cold cluster in topology mode. Our cluster will consist of 2 master nodes, 2 ingest nodes, 1 data content node, 3 data hot nodes, 2 data warm nodes, and 2 data cold nodes. Here, we are using Elasticsearch version (`xpack-8.11.1`) of the ElasticStack distribution for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-cluster
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.11.1
+  topology:
+    master:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: "linode-block-storage"
+    ingest:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: "linode-block-storage"
+    dataContent:
+      replicas: 1
+      storage:
+        resources:
+          requests:
+            storage: 5Gi
+        storageClassName: "linode-block-storage"
+    dataHot:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 3Gi
+        storageClassName: "linode-block-storage"
+    dataWarm:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 5Gi
+        storageClassName: "linode-block-storage"
+    dataCold:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 5Gi
+        storageClassName: "linode-block-storage"
+
+```
+
+Here,
+
+- `spec.version` - is the name of the ElasticsearchVersion CR. Here, we are using Elasticsearch version `xpack-8.11.1` of the ElasticStack distribution.
+- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not.
+- `spec.storageType` - specifies the type of storage that will be used for the Elasticsearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Elasticsearch database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.topology` - specifies the node-specific properties for the Elasticsearch cluster.
+  - `topology.master` - specifies the properties of [master](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#master-node) nodes.
+    - `master.replicas` - specifies the number of master nodes.
+    - `master.storage` - specifies the master node storage information that is passed to the StatefulSet.
+  - `topology.dataContent` - specifies the properties of the [data content](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#data-content-node) node.
+    - `dataContent.replicas` - specifies the number of data content nodes.
+    - `dataContent.storage` - specifies the data content node storage information that is passed to the StatefulSet.
+  - `topology.ingest` - specifies the properties of [ingest](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#node-ingest-node) nodes.
+    - `ingest.replicas` - specifies the number of ingest nodes.
+    - `ingest.storage` - specifies the ingest node storage information that is passed to the StatefulSet.
+  - `topology.dataHot` - specifies the properties of [dataHot](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#data-hot-node) nodes.
+    - `dataHot.replicas` - specifies the number of dataHot nodes.
+    - `dataHot.storage` - specifies the dataHot node storage information that is passed to the StatefulSet.
+  - `topology.dataWarm` - specifies the properties of [dataWarm](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#data-warm-node) nodes.
+    - `dataWarm.replicas` - specifies the number of dataWarm nodes.
+    - `dataWarm.storage` - specifies the dataWarm node storage information that is passed to the StatefulSet.
+  - `topology.dataCold` - specifies the properties of [dataCold](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html#data-cold-node) nodes.
+    - `dataCold.replicas` - specifies the number of dataCold nodes.
+    - `dataCold.storage` - specifies the dataCold node storage information that is passed to the StatefulSet.
+> Here, we use `linode-block-storage` as storage for every node. However, it is recommended to use faster storage for the `dataHot` nodes, then `dataWarm`, and finally `dataCold`.
+
+Let's deploy the above example with the following command:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/yamls/es-cluster.yaml
+elasticsearch.kubedb.com/es-cluster created
+```
+
+KubeDB will create the necessary resources to deploy the Elasticsearch cluster according to the above specification. Let's wait for the database to become ready to use,
+
+```bash
+$ watch kubectl get elasticsearch -n demo
+NAME         VERSION        STATUS   AGE
+es-cluster   xpack-8.11.1   Ready    2m48s
+```
+Here, Elasticsearch is in `Ready` state. It means the database is ready to accept connections.
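+
+As an optional sanity check, you can list the cluster's pods per node role before digging into the details. This is only an illustrative query; the label used below matches the operator-generated labels shown in the resource listing later in this guide:
+
+```bash
+# List the cluster's pods with their phase; pod names carry the node-role suffix
+$ kubectl get pods -n demo -l 'app.kubernetes.io/instance=es-cluster' \
+    -o custom-columns='NAME:.metadata.name,PHASE:.status.phase'
+```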
+ +Describe the Elasticsearch object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe elasticsearch -n demo es-cluster +Name: es-cluster +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Elasticsearch +Metadata: + Creation Timestamp: 2022-03-14T06:33:20Z + Finalizers: + kubedb.com + Generation: 2 + Resource Version: 20467655 + UID: 236fd414-9d94-4fce-93d3-7891fcf7f6a4 +Spec: + Auth Secret: + Name: es-cluster-elastic-cred + Enable SSL: true + Heap Size Percentage: 50 + Kernel Settings: + Privileged: true + Sysctls: + Name: vm.max_map_count + Value: 262144 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Run As User: 1000 + Resources: + Service Account Name: es-cluster + Storage Type: Durable + Termination Policy: Delete + Tls: + Certificates: + Alias: ca + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-ca-cert + Subject: + Organizations: + kubedb + Alias: transport + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-transport-cert + Subject: + Organizations: + kubedb + Alias: http + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-http-cert + Subject: + Organizations: + kubedb + Alias: archiver + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-archiver-cert + Subject: + Organizations: + kubedb + Topology: + Data Cold: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 5Gi + Storage Class Name: linode-block-storage + Suffix: data-cold + Data Content: + Replicas: 1 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 5Gi + Storage Class Name: linode-block-storage + Suffix: data-content + Data Hot: + Replicas: 3 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 3Gi + Storage Class Name: linode-block-storage + Suffix: data-hot + Data Warm: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 5Gi + Storage Class Name: linode-block-storage + Suffix: data-warm + Ingest: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 1Gi + Storage Class Name: linode-block-storage + Suffix: ingest + Master: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Resources: + Requests: + Storage: 1Gi + Storage Class Name: linode-block-storage + Suffix: master + Version: xpack-8.11.1 +Status: 
+  Conditions:
+    Last Transition Time:  2022-03-14T06:33:20Z
+    Message:               The KubeDB operator has started the provisioning of Elasticsearch: demo/es-cluster
+    Reason:                DatabaseProvisioningStartedSuccessfully
+    Status:                True
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2022-03-14T06:34:55Z
+    Message:               All desired replicas are ready.
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2022-03-14T06:35:17Z
+    Message:               The Elasticsearch: demo/es-cluster is accepting client requests.
+    Observed Generation:   2
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2022-03-14T06:35:27Z
+    Message:               The Elasticsearch: demo/es-cluster is ready.
+    Observed Generation:   2
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2022-03-14T06:35:28Z
+    Message:               The Elasticsearch: demo/es-cluster is successfully provisioned.
+    Observed Generation:   2
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Observed Generation:     2
+  Phase:                   Ready
+Events:
+  Type    Reason      Age    From             Message
+  ----    ------      ----   ----             -------
+  Normal  Successful  3m29s  KubeDB Operator  Successfully created governing service
+  Normal  Successful  3m29s  KubeDB Operator  Successfully created Service
+  Normal  Successful  3m29s  KubeDB Operator  Successfully created Service
+  Normal  Successful  3m27s  KubeDB Operator  Successfully created Elasticsearch
+  Normal  Successful  3m26s  KubeDB Operator  Successfully created appbinding
+  Normal  Successful  3m26s  KubeDB Operator  Successfully governing service
+```
+- Here, in `Status.Conditions`
+  - `Conditions.Status` is `True` for the `Condition.Type:ProvisioningStarted`, which means database provisioning has been started successfully.
+  - `Conditions.Status` is `True` for the `Condition.Type:ReplicaReady`, which specifies that all replicas are ready in the cluster.
+  - `Conditions.Status` is `True` for the `Condition.Type:AcceptingConnection`, which means the database has been accepting connection requests.
+  - `Conditions.Status` is `True` for the `Condition.Type:Ready`, which means the database is ready to use.
+  - `Conditions.Status` is `True` for the `Condition.Type:Provisioned`, which specifies that the database has been successfully provisioned.
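+
+If you prefer to check these conditions from a script instead of reading the full `describe` output, a `jsonpath` query along the following lines can extract them (one sketch among many):
+
+```bash
+# Print each condition as TYPE=STATUS, one per line
+$ kubectl get elasticsearch es-cluster -n demo \
+    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
+ProvisioningStarted=True
+ReplicaReady=True
+AcceptingConnection=True
+Ready=True
+Provisioned=True
+```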
+
+### KubeDB Operator Generated Resources
+
+Let's check the Kubernetes resources created by the operator on the deployment of the Elasticsearch CRO:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-cluster'
+NAME                            READY   STATUS    RESTARTS   AGE
+pod/es-cluster-data-cold-0      1/1     Running   0          5m46s
+pod/es-cluster-data-cold-1      1/1     Running   0          4m51s
+pod/es-cluster-data-content-0   1/1     Running   0          5m46s
+pod/es-cluster-data-hot-0       1/1     Running   0          5m46s
+pod/es-cluster-data-hot-1       1/1     Running   0          5m9s
+pod/es-cluster-data-hot-2       1/1     Running   0          4m41s
+pod/es-cluster-data-warm-0      1/1     Running   0          5m46s
+pod/es-cluster-data-warm-1      1/1     Running   0          4m52s
+pod/es-cluster-ingest-0         1/1     Running   0          5m46s
+pod/es-cluster-ingest-1         1/1     Running   0          5m14s
+pod/es-cluster-master-0         1/1     Running   0          5m46s
+pod/es-cluster-master-1         1/1     Running   0          4m50s
+
+NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+service/es-cluster          ClusterIP   10.128.132.28   <none>        9200/TCP   5m50s
+service/es-cluster-master   ClusterIP   None            <none>        9300/TCP   5m50s
+service/es-cluster-pods     ClusterIP   None            <none>        9200/TCP   5m50s
+
+NAME                                       READY   AGE
+statefulset.apps/es-cluster-data-cold      2/2     5m48s
+statefulset.apps/es-cluster-data-content   1/1     5m48s
+statefulset.apps/es-cluster-data-hot       3/3     5m48s
+statefulset.apps/es-cluster-data-warm      2/2     5m48s
+statefulset.apps/es-cluster-ingest         2/2     5m48s
+statefulset.apps/es-cluster-master         2/2     5m48s
+
+NAME                                            TYPE                       VERSION   AGE
+appbinding.appcatalog.appscode.com/es-cluster   kubedb.com/elasticsearch   7.16.2    5m49s
+
+NAME                               TYPE                       DATA   AGE
+secret/es-cluster-archiver-cert    kubernetes.io/tls          3      5m51s
+secret/es-cluster-ca-cert          kubernetes.io/tls          2      5m51s
+secret/es-cluster-config           Opaque                     1      5m50s
+secret/es-cluster-elastic-cred     kubernetes.io/basic-auth   2      5m51s
+secret/es-cluster-http-cert        kubernetes.io/tls          3      5m51s
+secret/es-cluster-transport-cert   kubernetes.io/tls          3      5m51s
+
+NAME                                               STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS           AGE
+persistentvolumeclaim/data-es-cluster-data-cold-0      Bound    pvc-47585d52c11a4a52   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-data-cold-1      Bound    pvc-66aaa122c5774713   10Gi       RWO            linode-block-storage   4m55s
+persistentvolumeclaim/data-es-cluster-data-content-0   Bound    pvc-d51361e9352b4e9f   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-data-hot-0       Bound    pvc-3712187a3c6540da   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-data-hot-1       Bound    pvc-2318d4eacb4b453f   10Gi       RWO            linode-block-storage   5m13s
+persistentvolumeclaim/data-es-cluster-data-hot-2       Bound    pvc-c309c7058b114578   10Gi       RWO            linode-block-storage   4m45s
+persistentvolumeclaim/data-es-cluster-data-warm-0      Bound    pvc-d5950f5b075c4d3f   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-data-warm-1      Bound    pvc-3f6b99d11b1d46ea   10Gi       RWO            linode-block-storage   4m56s
+persistentvolumeclaim/data-es-cluster-ingest-0         Bound    pvc-081be753a20a45da   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-ingest-1         Bound    pvc-1bea5a3b5be24817   10Gi       RWO            linode-block-storage   5m18s
+persistentvolumeclaim/data-es-cluster-master-0         Bound    pvc-2c49a2ccb4644d6e   10Gi       RWO            linode-block-storage   5m50s
+persistentvolumeclaim/data-es-cluster-master-1         Bound    pvc-cb1d970febff498f   10Gi       RWO            linode-block-storage   4m54s
+
+```
+
+- `StatefulSet` - 6 StatefulSets are created for the 6 types of Elasticsearch nodes. The StatefulSets are named after the Elasticsearch instance with a given suffix: `{Elasticsearch-Name}-{Suffix}`.
+- `Services` - 3 services are generated for each Elasticsearch database.
+  - `{Elasticsearch-Name}` - the client service which is used to connect to the database. It points to the `ingest` nodes.
+  - `{Elasticsearch-Name}-master` - the master service which is used to connect to the master nodes. It is a headless service.
+  - `{Elasticsearch-Name}-pods` - the node discovery service which is used by the Elasticsearch nodes to communicate with each other. It is a headless service.
+- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database. It is also named after the Elasticsearch instance.
+- `Secrets` - 3 types of secrets are generated for each Elasticsearch database.
+  - `{Elasticsearch-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the Elasticsearch users.
+  - `{Elasticsearch-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the Elasticsearch database.
+  - `{Elasticsearch-Name}-config` - the default configuration secret created by the operator.
+
+## Connect with Elasticsearch Database
+
+We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our Elasticsearch database. Then we will use `curl` to send `HTTP` requests to check the cluster health and verify that our Elasticsearch database is working well.
+
+#### Port-forward the Service
+
+KubeDB will create a few Services to connect with the database. Let's check the Services with the following command:
+
+```bash
+$ kubectl get service -n demo
+NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+es-cluster             ClusterIP   10.128.132.28   <none>        9200/TCP   10m
+es-cluster-dashboard   ClusterIP   10.128.99.51    <none>        5601/TCP   10m
+es-cluster-master      ClusterIP   None            <none>        9300/TCP   10m
+es-cluster-pods        ClusterIP   None            <none>        9200/TCP   10m
+```
+Here, we are going to use the `es-cluster` Service to connect with the database. Now, let's port-forward the `es-cluster` Service to port `9200` on the local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/es-cluster 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+Now, our Elasticsearch cluster is accessible at `localhost:9200`.
+
+#### Export the Credentials
+
+KubeDB also creates some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `es-cluster`.
+
+```bash
+$ kubectl get secret -n demo | grep es-cluster
+es-cluster-archiver-cert                  kubernetes.io/tls                     3      12m
+es-cluster-ca-cert                        kubernetes.io/tls                     2      12m
+es-cluster-config                         Opaque                                1      12m
+es-cluster-dashboard-ca-cert              kubernetes.io/tls                     2      12m
+es-cluster-dashboard-config               Opaque                                1      12m
+es-cluster-dashboard-kibana-server-cert   kubernetes.io/tls                     3      12m
+es-cluster-elastic-cred                   kubernetes.io/basic-auth              2      12m
+es-cluster-http-cert                      kubernetes.io/tls                     3      12m
+es-cluster-token-v97c7                    kubernetes.io/service-account-token   3      12m
+es-cluster-transport-cert                 kubernetes.io/tls                     3      12m
+```
+Now, we can connect to the database with the `es-cluster-elastic-cred` Secret, which contains the admin-level credentials.
+
+### Accessing Database Through CLI
+
+To access the database through the CLI, we first have to get the credentials. Let's export the credentials as environment variables to our current shell:
+
+```bash
+$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.username}' | base64 -d
+elastic
+$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.password}' | base64 -d
+YQB)~K6M9U)d_yVu
+```
+
+Now, let's check the health of our Elasticsearch cluster:
+
+```bash
+# curl -XGET -k -u 'username:password' "https://localhost:9200/_cluster/health?pretty"
+$ curl -XGET -k -u 'elastic:YQB)~K6M9U)d_yVu' "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "es-cluster",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 12,
+  "number_of_data_nodes" : 8,
+  "active_primary_shards" : 9,
+  "active_shards" : 10,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+
+```
+
+### Verify Node Role
+
+As we have assigned a dedicated role to each type of node, let's verify them with the following command:
+
+```bash
+$ curl -XGET -k -u 'elastic:YQB)~K6M9U)d_yVu' "https://localhost:9200/_cat/nodes?v"
+ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
+10.2.2.30           41          90   3    0.22    0.31     0.34 s         -      es-cluster-data-content-0
+10.2.1.28           70          76   3    0.00    0.03     0.07 h         -      es-cluster-data-hot-0
+10.2.0.28           45          87   4    0.09    0.20     0.26 i         -      es-cluster-ingest-0
+10.2.2.29           33          75   3    0.22    0.31     0.34 w         -      es-cluster-data-warm-0
+10.2.0.29           65          76   3    0.09    0.20     0.26 h         -      es-cluster-data-hot-1
+10.2.0.30           46          75   3    0.09    0.20     0.26 c         -      es-cluster-data-cold-1
+10.2.1.29           56          77   3    0.00    0.03     0.07 m         *      es-cluster-master-0
+10.2.3.50           52          74   3    0.02    0.06     0.11 c         -      es-cluster-data-cold-0
+10.2.2.31           34          75   3    0.22    0.31     0.34 m         -      es-cluster-master-1
+10.2.1.30           21          74   3    0.00    0.03     0.07 w         -      es-cluster-data-warm-1
+10.2.3.49           23          85   3    0.02    0.06     0.11 i         -      es-cluster-ingest-1
+10.2.3.51           72          75   3    0.02    0.06     0.11 h         -      es-cluster-data-hot-2
+
+```
+
+- The `node.role` field specifies the dedicated role that we have assigned to each type of node, where `h` refers to a hot node, `w` refers to a warm node, `c` refers to a cold node, `i` refers to an ingest node, `m` refers to a master node, and `s` refers to a content node.
+- The `master` field specifies the active master node. Here, we can see a `*` for `es-cluster-master-0`, which shows that it is the currently active master node.
+
+
+
+## Cleaning Up
+
+To clean up the k8s resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo elasticsearch es-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+
+$ kubectl delete elasticsearch -n demo es-cluster
+
+# Delete namespace
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Learn about [taking backup](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of Elasticsearch database using Stash.
+- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
+- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/yamls/es-cluster.yaml b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/yamls/es-cluster.yaml new file mode 100644 index 0000000000..0df7555c80 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/hot-warm-cold-cluster/yamls/es-cluster.yaml @@ -0,0 +1,51 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-cluster + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + topology: + master: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: "linode-block-storage" + ingest: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: "linode-block-storage" + dataContent: + replicas: 1 + storage: + resources: + requests: + storage: 5Gi + storageClassName: "linode-block-storage" + dataHot: + replicas: 3 + storage: + resources: + requests: + storage: 3Gi + storageClassName: "linode-block-storage" + dataWarm: + replicas: 2 + storage: + resources: + requests: + storage: 5Gi + storageClassName: "linode-block-storage" + dataCold: + replicas: 2 + storage: + resources: + requests: + storage: 5Gi + storageClassName: "linode-block-storage" diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/index.md b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/index.md new file mode 100644 index 0000000000..1d13090070 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/index.md @@ -0,0 +1,546 @@ +--- +title: Elasticsearch Simple Dedicated Cluster +menu: + docs_v2024.1.31: + identifier: es-simple-dedicated-cluster + name: Simple Dedicated Cluster + parent: es-topology-cluster + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Elasticsearch Simple Dedicated Cluster + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create namespace demo +namespace/demo created + +$ kubectl get namespace +NAME STATUS AGE +demo Active 7s +``` + +> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/yamls) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). 
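+
+Before moving on, you can also confirm that the Elasticsearch version used in this guide is available in your installation:
+
+```bash
+# List the ElasticsearchVersion objects provided by the installed catalog;
+# the version used below, xpack-8.11.1, must appear in this list
+$ kubectl get elasticsearchversions
+```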
+
+## Find Available StorageClass
+
+We will have to provide a `StorageClass` in the Elasticsearch CR specification. Check the available `StorageClass` in your cluster using the following command:
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  1h
+```
+
+Here, we have the `standard` StorageClass in our cluster from the [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+## Create Elasticsearch Simple Dedicated Cluster
+
+We are going to create an Elasticsearch Simple Dedicated Cluster in topology mode. Our cluster will consist of 2 master nodes, 3 data nodes, and 2 ingest nodes. Here, we are using Elasticsearch version (`xpack-8.11.1`) of the ElasticStack distribution for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-cluster
+  namespace: demo
+spec:
+  enableSSL: true
+  version: xpack-8.11.1
+  storageType: Durable
+  topology:
+    master:
+      replicas: 2
+      storage:
+        storageClassName: "standard"
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    data:
+      replicas: 3
+      storage:
+        storageClassName: "standard"
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    ingest:
+      replicas: 2
+      storage:
+        storageClassName: "standard"
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+```
+
+Here,
+
+- `spec.version` - is the name of the ElasticsearchVersion CR. Here, we are using Elasticsearch version `xpack-8.11.1` of the ElasticStack distribution.
+- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not.
+- `spec.storageType` - specifies the type of storage that will be used for the Elasticsearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Elasticsearch database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.topology` - specifies the node-specific properties for the Elasticsearch cluster.
+  - `topology.master` - specifies the properties of master nodes.
+    - `master.replicas` - specifies the number of master nodes.
+    - `master.storage` - specifies the master node storage information that is passed to the StatefulSet.
+  - `topology.data` - specifies the properties of data nodes.
+    - `data.replicas` - specifies the number of data nodes.
+    - `data.storage` - specifies the data node storage information that is passed to the StatefulSet.
+  - `topology.ingest` - specifies the properties of ingest nodes.
+    - `ingest.replicas` - specifies the number of ingest nodes.
+    - `ingest.storage` - specifies the ingest node storage information that is passed to the StatefulSet.
+
+Let's deploy the above example with the following command:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/yamls/es-cluster.yaml
+elasticsearch.kubedb.com/es-cluster created
+```
+KubeDB will create the necessary resources to deploy the Elasticsearch cluster according to the above specification.
Let’s wait until the database to be ready to use, + +```bash +$ watch kubectl get elasticsearch -n demo +NAME VERSION STATUS AGE +es-cluster xpack-8.11.1 Ready 3m32s +``` +Here, Elasticsearch is in `Ready` state. It means the database is ready to accept connections. + +Describe the Elasticsearch object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe elasticsearch -n demo es-cluster +Name: es-cluster +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Elasticsearch +Metadata: + Creation Timestamp: 2022-04-07T09:48:51Z + Finalizers: + kubedb.com + Generation: 2 + Resource Version: 406999 + UID: 1dff00c8-5a90-4916-bf8a-ed28f19dd433 +Spec: + Auth Secret: + Name: es-cluster-elastic-cred + Enable SSL: true + Heap Size Percentage: 50 + Kernel Settings: + Privileged: true + Sysctls: + Name: vm.max_map_count + Value: 262144 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Run As User: 1000 + Resources: + Service Account Name: es-cluster + Storage Type: Durable + Termination Policy: Delete + Tls: + Certificates: + Alias: ca + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-ca-cert + Subject: + Organizations: + kubedb + Alias: transport + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-transport-cert + Subject: + Organizations: + kubedb + Alias: http + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-http-cert + Subject: + Organizations: + kubedb + Alias: archiver + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-archiver-cert + Subject: + Organizations: + kubedb + Topology: + Data: + Replicas: 3 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: data + Ingest: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: ingest + Master: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: master + Version: xpack-8.11.1 +Status: + Conditions: + Last Transition Time: 2022-04-07T09:48:51Z + Message: The KubeDB operator has started the provisioning of Elasticsearch: demo/es-cluster + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2022-04-07T09:51:28Z + Message: All desired replicas are ready. 
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2022-04-07T09:53:29Z
+    Message:               The Elasticsearch: demo/es-cluster is accepting client requests.
+    Observed Generation:   2
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2022-04-07T09:54:02Z
+    Message:               The Elasticsearch: demo/es-cluster is ready.
+    Observed Generation:   2
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2022-04-07T09:53:41Z
+    Message:               The Elasticsearch: demo/es-cluster is successfully provisioned.
+    Observed Generation:   2
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Observed Generation:     2
+  Phase:                   Ready
+Events:
+  Type    Reason      Age   From             Message
+  ----    ------      ----  ----             -------
+  Normal  Successful  30m   KubeDB Operator  Successfully created governing service
+  Normal  Successful  30m   KubeDB Operator  Successfully created Service
+  Normal  Successful  30m   KubeDB Operator  Successfully created Service
+  Normal  Successful  30m   KubeDB Operator  Successfully created Elasticsearch
+  Normal  Successful  30m   KubeDB Operator  Successfully created appbinding
+  Normal  Successful  30m   KubeDB Operator  Successfully governing service
+```
+- Here, in `Status.Conditions`
+  - `Conditions.Status` is `True` for the `Condition.Type:ProvisioningStarted`, which means database provisioning has been started successfully.
+  - `Conditions.Status` is `True` for the `Condition.Type:ReplicaReady`, which specifies that all replicas are ready in the cluster.
+  - `Conditions.Status` is `True` for the `Condition.Type:AcceptingConnection`, which means the database has been accepting connection requests.
+  - `Conditions.Status` is `True` for the `Condition.Type:Ready`, which means the database is ready to use.
+  - `Conditions.Status` is `True` for the `Condition.Type:Provisioned`, which specifies that the database has been successfully provisioned.
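+
+A quicker, script-friendly way to confirm the overall state is to read the reported phase directly:
+
+```bash
+# Read just the phase of the Elasticsearch object
+$ kubectl get elasticsearch es-cluster -n demo -o jsonpath='{.status.phase}'
+Ready
+```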
+
+### KubeDB Operator Generated Resources
+
+Let's check the Kubernetes resources created by the operator on the deployment of the Elasticsearch CRO:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-cluster'
+NAME                      READY   STATUS    RESTARTS   AGE
+pod/es-cluster-data-0     1/1     Running   0          31m
+pod/es-cluster-data-1     1/1     Running   0          29m
+pod/es-cluster-data-2     1/1     Running   0          29m
+pod/es-cluster-ingest-0   1/1     Running   0          31m
+pod/es-cluster-ingest-1   1/1     Running   0          29m
+pod/es-cluster-master-0   1/1     Running   0          31m
+pod/es-cluster-master-1   1/1     Running   0          29m
+
+NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+service/es-cluster          ClusterIP   10.96.67.225   <none>        9200/TCP   31m
+service/es-cluster-master   ClusterIP   None           <none>        9300/TCP   31m
+service/es-cluster-pods     ClusterIP   None           <none>        9200/TCP   31m
+
+NAME                                 READY   AGE
+statefulset.apps/es-cluster-data     3/3     31m
+statefulset.apps/es-cluster-ingest   2/2     31m
+statefulset.apps/es-cluster-master   2/2     31m
+
+NAME                                            TYPE                       VERSION   AGE
+appbinding.appcatalog.appscode.com/es-cluster   kubedb.com/elasticsearch   7.14.2    31m
+
+NAME                               TYPE                       DATA   AGE
+secret/es-cluster-archiver-cert    kubernetes.io/tls          3      31m
+secret/es-cluster-ca-cert          kubernetes.io/tls          2      31m
+secret/es-cluster-config           Opaque                     1      31m
+secret/es-cluster-elastic-cred     kubernetes.io/basic-auth   2      31m
+secret/es-cluster-http-cert        kubernetes.io/tls          3      31m
+secret/es-cluster-transport-cert   kubernetes.io/tls          3      31m
+
+NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-es-cluster-data-0     Bound    pvc-b55f67b3-7c2a-4b16-8cf0-77bbaafef3f7   1Gi        RWO            standard       31m
+persistentvolumeclaim/data-es-cluster-data-1     Bound    pvc-62176b5a-5136-450b-afec-f483f041506f   1Gi        RWO            standard       29m
+persistentvolumeclaim/data-es-cluster-data-2     Bound    pvc-c36f2ca3-466f-4314-81d7-7c1c0b4acf4f   1Gi        RWO            standard       29m
+persistentvolumeclaim/data-es-cluster-ingest-0   Bound    pvc-96a081a1-90ff-4b82-bbf5-3bdf349b7de4   1Gi        RWO            standard       31m
+persistentvolumeclaim/data-es-cluster-ingest-1   Bound    pvc-18420ed8-8455-4b18-864b-f13637dade38   1Gi        RWO            standard       29m
+persistentvolumeclaim/data-es-cluster-master-0   Bound    pvc-6892422b-e399-44e1-9fdb-884b68fc66b5   1Gi        RWO            standard       31m
+persistentvolumeclaim/data-es-cluster-master-1   Bound    pvc-ed4a704c-7b13-421e-85e1-d710e556ca4e   1Gi        RWO            standard       29m
+
+```
+
+- `StatefulSet` - 3 StatefulSets are created for the 3 types of Elasticsearch nodes. The StatefulSets are named after the Elasticsearch instance with a given suffix: `{Elasticsearch-Name}-{Suffix}`.
+- `Services` - 3 services are generated for each Elasticsearch database.
+  - `{Elasticsearch-Name}` - the client service which is used to connect to the database. It points to the `ingest` nodes.
+  - `{Elasticsearch-Name}-master` - the master service which is used to connect to the master nodes. It is a headless service.
+  - `{Elasticsearch-Name}-pods` - the node discovery service which is used by the Elasticsearch nodes to communicate with each other. It is a headless service.
+- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database. It is also named after the Elasticsearch instance.
+- `Secrets` - 3 types of secrets are generated for each Elasticsearch database.
+  - `{Elasticsearch-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the Elasticsearch users.
+  - `{Elasticsearch-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the Elasticsearch database.
+  - `{Elasticsearch-Name}-config` - the default configuration secret created by the operator.
+
+## Connect with Elasticsearch Database
+
+We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our Elasticsearch database. Then we will use `curl` to send `HTTP` requests to check the cluster health and verify that our Elasticsearch database is working well.
+
+#### Port-forward the Service
+
+KubeDB will create a few Services to connect with the database. Let's check the Services with the following command:
+
+```bash
+$ kubectl get service -n demo
+NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+es-cluster          ClusterIP   10.96.67.225   <none>        9200/TCP   11m
+es-cluster-master   ClusterIP   None           <none>        9300/TCP   11m
+es-cluster-pods     ClusterIP   None           <none>        9200/TCP   11m
+```
+Here, we are going to use the `es-cluster` Service to connect with the database. Now, let's port-forward the `es-cluster` Service to port `9200` on the local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/es-cluster 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+Now, our Elasticsearch cluster is accessible at `localhost:9200`.
+
+#### Export the Credentials
+
+KubeDB also creates some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `es-cluster`.
+
+```bash
+$ kubectl get secret -n demo | grep es-cluster
+es-cluster-archiver-cert    kubernetes.io/tls                     3      12m
+es-cluster-ca-cert          kubernetes.io/tls                     2      12m
+es-cluster-config           Opaque                                1      12m
+es-cluster-elastic-cred     kubernetes.io/basic-auth              2      12m
+es-cluster-http-cert        kubernetes.io/tls                     3      12m
+es-cluster-token-hx5mn      kubernetes.io/service-account-token   3      12m
+es-cluster-transport-cert   kubernetes.io/tls                     3      12m
+```
+Now, we can connect to the database with the `es-cluster-elastic-cred` Secret, which contains the admin-level credentials.
+
+### Accessing Database Through CLI
+
+To access the database through the CLI, we first have to get the credentials. Let's export the credentials as environment variables to our current shell:
+
+```bash
+$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.username}' | base64 -d
+elastic
+$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.password}' | base64 -d
+tS$k!2IBI.ASI7FJ
+```
+
+Now, let's check the health of our Elasticsearch cluster:
+
+```bash
+# curl -XGET -k --user 'username:password' "https://localhost:9200/_cluster/health?pretty"
+$ curl -XGET -k --user 'elastic:tS$k!2IBI.ASI7FJ' "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "es-cluster",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 7,
+  "number_of_data_nodes" : 3,
+  "active_primary_shards" : 1,
+  "active_shards" : 2,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+
+```
+
+## Insert Sample Data
+
+Now, we are going to insert some data into Elasticsearch.
+
+```bash
+$ curl -XPOST -k --user 'elastic:tS$k!2IBI.ASI7FJ' "https://localhost:9200/info/_doc?pretty" -H 'Content-Type: application/json' -d'
+  {
+    "Company": "AppsCode Inc",
+    "Product": "KubeDB"
+  }
+  '
+
+```
+Now, let's verify that the index has been created successfully.
+
```bash
$ curl -XGET -k --user 'elastic:tS$k!2IBI.ASI7FJ' "https://localhost:9200/_cat/indices?v&s=index&pretty"
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases FsJlvTyRSsuRWTpX8OpkOA   1   1         40            0       76mb           38mb
green  open   info             9Z2Cl5fjQWGBAfjtF9LqBw   1   1          1            0      8.9kb          4.4kb
```

Also, let's verify the data in the index:

```bash
$ curl -XGET -k --user 'elastic:tS$k!2IBI.ASI7FJ' "https://localhost:9200/info/_search?pretty"
{
  "took" : 79,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "info",
        "_type" : "_doc",
        "_id" : "mQCvA4ABs70-lBxlFWZD",
        "_score" : 1.0,
        "_source" : {
          "Company" : "AppsCode Inc",
          "Product" : "KubeDB"
        }
      }
    ]
  }
}
```

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl patch -n demo elasticsearch es-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"

$ kubectl delete elasticsearch -n demo es-cluster

# Delete namespace
$ kubectl delete namespace demo
```

## Next Steps

- Learn about [taking backup](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of Elasticsearch database using Stash.
- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/yamls/es-cluster.yaml b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/yamls/es-cluster.yaml new file mode 100644 index 0000000000..602f053d5c --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/yamls/es-cluster.yaml @@ -0,0 +1,37 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-cluster + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + storageType: Durable + topology: + master: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/_index.md new file mode 100755 index 0000000000..ce0d23dbbc --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Concepts +menu: + docs_v2024.1.31: + identifier: es-concepts-elasticsearch + name: Concepts + parent: es-elasticsearch-guides + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/index.md new file mode 100644 index 0000000000..2c5b73f71f --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/index.md @@ -0,0 +1,170 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: es-appbinding-catalog + name: AppBinding + parent: es-concepts-elasticsearch + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), the `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. 
Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.

## AppBinding CRD Specification

Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.

An `AppBinding` object created by `KubeDB` for an Elasticsearch database is shown below,

```yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubedb.com/v1alpha2","kind":"Elasticsearch","metadata":{"annotations":{},"name":"es-quickstart","namespace":"demo"},"spec":{"enableSSL":true,"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"Delete","version":"xpack-8.2.3"}}
  creationTimestamp: "2022-12-29T05:03:33Z"
  generation: 1
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: es-quickstart
    app.kubernetes.io/managed-by: kubedb.com
    app.kubernetes.io/name: elasticsearches.kubedb.com
  name: es-quickstart
  namespace: demo
  ownerReferences:
    - apiVersion: kubedb.com/v1alpha2
      blockOwnerDeletion: true
      controller: true
      kind: Elasticsearch
      name: es-quickstart
      uid: 4ae0339e-e86a-4032-9146-a7eac1a780db
  resourceVersion: "425575"
  uid: aef62276-dc61-4ec3-9883-9f4093bb1186
spec:
  appRef:
    apiGroup: kubedb.com
    kind: Elasticsearch
    name: es-quickstart
    namespace: demo
  clientConfig:
    service:
      name: es-quickstart
      port: 9200
      scheme: https
  parameters:
    apiVersion: appcatalog.appscode.com/v1alpha1
    kind: StashAddon
    stash:
      addon:
        backupTask:
          name: elasticsearch-backup-8.2.0
          params:
            - name: args
              value: --match=^(?![.])(?!apm-agent-configuration)(?!kubedb-system).+
        restoreTask:
          name: elasticsearch-restore-8.2.0
          params:
            - name: args
              value: --match=^(?![.])(?!apm-agent-configuration)(?!kubedb-system).+
  secret:
    name: es-quickstart-elastic-cred
  tlsSecret:
    name: es-quickstart-client-cert
  type: kubedb.com/elasticsearch
  version: 8.2.0

```

Here, we are going to describe the sections of an `AppBinding` crd.

### AppBinding `Spec`

An `AppBinding` object has the following fields in the `spec` section:

#### spec.type

`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of the `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of a [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.

This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to an `elasticsearch` resource under the `kubedb.com` group.

Here, the variables are parsed as follows:

| Variable              | Usage                                                                                                                                   |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                                  |
| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this AppBinding represents (i.e: `elasticsearch`).                            |
| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/elasticsearch`).  |
#### spec.secret

`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.

This secret must contain the following keys:

| Key        | Usage                       |
| :--------: | --------------------------- |
| `username` | Admin/Elastic user name     |
| `password` | Admin/Elastic user password |

#### spec.clientConfig

`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.

You can configure the following fields in the `spec.clientConfig` section:

- **spec.clientConfig.url**

  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.

> Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.

- **spec.clientConfig.service**

  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.

  - **name :** `name` indicates the name of the service that connects with the target database.
  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
  - **port :** `port` specifies the port where the target database is running.

- **spec.clientConfig.insecureSkipTLSVerify**

  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.

- **spec.clientConfig.caBundle**

  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.

#### spec.appRef

`spec.appRef` refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`.

#### spec.tlsSecret

`spec.tlsSecret` specifies the name of the secret which contains the TLS configuration files that are required to access the database. This secret must be in the same namespace as the `AppBinding`.

## Next Steps

- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
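
As a quick reference, the following is a minimal, hypothetical `AppBinding` written by hand for an Elasticsearch cluster running outside Kubernetes. It only uses the `spec.type`, `spec.clientConfig.url`, and `spec.secret` fields described above; the host name and secret name are placeholders, not values KubeDB generates:

```yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: external-es          # hypothetical name
  namespace: demo
spec:
  type: kubedb.com/elasticsearch
  clientConfig:
    # Location of the externally hosted database. Basic auth, fragments,
    # and query parameters are not allowed in this URL.
    url: https://es.example.com:9200
  secret:
    # Must contain the `username` and `password` keys and live in the
    # same namespace as this AppBinding.
    name: external-es-cred
```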
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/index.md new file mode 100644 index 0000000000..d3dd4a7fd2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/autoscaler/index.md
---
title: ElasticsearchAutoscaler CRD
menu:
  docs_v2024.1.31:
    identifier: es-autoscaler-concepts
    name: ElasticsearchAutoscaler
    parent: es-concepts-elasticsearch
    weight: 25
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# ElasticsearchAutoscaler

## What is ElasticsearchAutoscaler

`ElasticsearchAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [OpenSearch](https://opensearch.org/) compute resources and storage of database components in a Kubernetes native way.

## ElasticsearchAutoscaler CRD Specifications

Like any official Kubernetes resource, an `ElasticsearchAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.

Here, some sample `ElasticsearchAutoscaler` CROs for autoscaling different components of the database are given below:

**Sample `ElasticsearchAutoscaler` YAML for an Elasticsearch combined cluster:**

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: es-as
  namespace: demo
spec:
  databaseRef:
    name: es-combined
  opsRequestOptions:
    timeout: 3m
    apply: IfReady
  compute:
    node:
      trigger: "On"
      podLifeTimeThreshold: 24h
      minAllowed:
        cpu: 250m
        memory: 350Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
  storage:
    node:
      trigger: "On"
      usageThreshold: 60
      scalingThreshold: 50
```

**Sample `ElasticsearchAutoscaler` YAML for the Elasticsearch topology cluster:**

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: es-as-topology
  namespace: demo
spec:
  databaseRef:
    name: es-topology
  compute:
    master:
      trigger: "On"
      podLifeTimeThreshold: 24h
      minAllowed:
        cpu: 250m
        memory: 350Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
    data:
      trigger: "On"
      podLifeTimeThreshold: 24h
      minAllowed:
        cpu: 250m
        memory: 350Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
    ingest:
      trigger: "On"
      podLifeTimeThreshold: 24h
      minAllowed:
        cpu: 250m
        memory: 350Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
  storage:
    data:
      trigger: "On"
      usageThreshold: 60
      scalingThreshold: 50
```

Here, we are going to describe the various sections of an `ElasticsearchAutoscaler` crd.

An `ElasticsearchAutoscaler` object has the following fields in the `spec` section.
+

### spec.databaseRef

`spec.databaseRef` is a `required` field that points to the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object for which the autoscaling will be performed. This field consists of the following sub-field:

- **spec.databaseRef.name :** specifies the name of the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object.

### spec.compute

`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the database components. This field consists of the following sub-fields:

- `spec.compute.node` indicates the desired compute autoscaling configuration for a combined Elasticsearch cluster.
- `spec.compute.topology` indicates the desired compute autoscaling configuration for the different types of nodes running in the Elasticsearch topology cluster mode.
  - `topology.master` indicates the desired compute autoscaling configuration for master nodes.
  - `topology.data` indicates the desired compute autoscaling configuration for data nodes.
  - `topology.ingest` indicates the desired compute autoscaling configuration for ingest nodes.

All of them have the following sub-fields:

- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On", then compute autoscaling is enabled. If "Off", then compute autoscaling is disabled.
- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
- `controlledResources` specifies which types of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value, in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.

### spec.storage

`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:

- `spec.storage.node` indicates the desired storage autoscaling configuration for a combined Elasticsearch cluster.
- `spec.storage.topology` indicates the desired storage autoscaling configuration for the different types of nodes running in the Elasticsearch topology cluster mode.
  - `topology.master` indicates the desired storage autoscaling configuration for the master nodes.
  - `topology.data` indicates the desired storage autoscaling configuration for the data nodes.
  - `topology.ingest` indicates the desired storage autoscaling configuration for the ingest nodes.

All of them have the following sub-fields:

- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On", then storage autoscaling is enabled. If "Off", then storage autoscaling is disabled.
- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds this threshold, storage autoscaling will be triggered.
- `scalingThreshold` indicates the percentage of the current storage by which the volume will be scaled.
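
As a concrete illustration of how the two storage thresholds interact (the numbers here are hypothetical, and the exact rounding is up to the operator and the StorageClass), consider a combined cluster with 10Gi volumes:

```yaml
storage:
  node:
    trigger: "On"
    usageThreshold: 60   # trigger expansion once usage crosses 60%, e.g. 6Gi of a 10Gi volume
    scalingThreshold: 50 # expand by 50% of the current storage, e.g. 10Gi -> 15Gi
```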
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/index.md new file mode 100644 index 0000000000..47e87faafc --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/index.md
---
title: ElasticsearchVersion CRD
menu:
  docs_v2024.1.31:
    identifier: es-catalog-concepts
    name: ElasticsearchVersion
    parent: es-concepts-elasticsearch
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# ElasticsearchVersion

## What is ElasticsearchVersion

`ElasticsearchVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Kibana](https://www.elastic.co/products/kibana), [OpenSearch](https://opensearch.org/), and [OpenSearch-Dashboards](https://opensearch.org/docs/latest/dashboards/index/) deployed with KubeDB in a Kubernetes native way.

When you install KubeDB, an `ElasticsearchVersion` custom resource will be created automatically for every supported Elasticsearch and OpenSearch version. You have to specify the name of the `ElasticsearchVersion` CRD in the `spec.version` field of the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) CRD. Then, KubeDB will use the docker images specified in the `ElasticsearchVersion` CRD to create your expected database. If you want to provision `Kibana` or `Opensearch-Dashboards`, you have to specify the name of the `Elasticsearch` CRD in the `spec.databaseRef.name` field of the [ElasticsearchDashboard](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-dashboard/) CRD. Then, KubeDB will use the compatible docker image specified in the `.spec.dashboard.image` field of the `ElasticsearchVersion` CRD that the Elasticsearch is using to create your expected dashboard.

Using a separate CRD for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows users to use a custom image for the database.

## ElasticsearchVersion Specification

As with all other Kubernetes objects, an ElasticsearchVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: ElasticsearchVersion
metadata:
  annotations:
    meta.helm.sh/release-name: kubedb
    meta.helm.sh/release-namespace: kubedb
  creationTimestamp: "2022-12-29T09:23:41Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kubedb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubedb-catalog
    app.kubernetes.io/version: v2022.12.28
    helm.sh/chart: kubedb-catalog-v2022.12.28
  name: xpack-8.11.1
  resourceVersion: "1844"
  uid: db8b5122-bce8-4e80-b608-e314954f2980
spec:
  authPlugin: X-Pack
  dashboard:
    image: kibana:7.14.0
  dashboardInitContainer:
    yqImage: kubedb/elasticsearch-dashboard-init:7.14.0-xpack-v2022.02.22
  db:
    image: elasticsearch:7.14.0
  distribution: ElasticStack
  exporter:
    image: prometheuscommunity/elasticsearch-exporter:v1.3.0
  initContainer:
    image: tianon/toybox:0.8.4
    yqImage: kubedb/elasticsearch-init:7.14.0-xpack-v2021.08.23
  podSecurityPolicies:
    databasePolicyName: elasticsearch-db
  securityContext:
    runAsAnyNonRoot: true
    runAsUser: 1000
  stash:
    addon:
      backupTask:
        name: elasticsearch-backup-7.3.2
        params:
          - name: args
            value: --match=^(?![.])(?!kubedb-system).+
      restoreTask:
        name: elasticsearch-restore-7.3.2
        params:
          - name: args
            value: --match=^(?![.])(?!kubedb-system).+
  updateConstraints:
    allowlist:
      - < 7.18.0
  version: 7.14.0

```

### metadata.name

`metadata.name` is a required field that specifies the name of the `ElasticsearchVersion` CRD. You have to specify this name in the `spec.version` field of the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) CRD.

We follow this convention for naming ElasticsearchVersion CRDs:

- Name format: `{Security Plugin Name}-{Application Version}-{Modification Tag}`

- Samples: `xpack-8.2.3`, `xpack-8.11.1`, `opensearch-2.8.0`, etc.

We use the original Elasticsearch docker image provided by the distributors. Then we bundle the image with the necessary sidecar and init container images which facilitate features like sysctl kernel settings, custom configuration, monitoring metrics, etc. An image with a higher modification tag will have more features and fixes than an image with a lower modification tag. Hence, it is recommended to use the ElasticsearchVersion CRD with the highest modification tag to take advantage of the latest features.

### spec.version

`spec.version` is a required field that specifies the original version of the Elasticsearch database that has been used to build the docker image specified in the `spec.db.image` field.

### spec.deprecated

`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator. For example, we have modified `kubedb/elasticsearch:7.x.x-xpack` docker images to support custom configuration and re-tagged them as `kubedb/elasticsearch:7.x.x-xpack-v1`. Now, KubeDB operator `version:x.y.z` supports providing custom configuration, which requires the `kubedb/elasticsearch:7.x.x-xpack-v1` docker images. So, we have marked `kubedb/elasticsearch:7.x.x-xpack` as deprecated in KubeDB `version:x.y.z`.

The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the database and other respective resources for this version.
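
To see which versions your installed catalog provides, and whether any of them are deprecated, you can list the `ElasticsearchVersion` objects. The printed columns vary by KubeDB release, so treat the output as illustrative:

```bash
# List all ElasticsearchVersion objects installed by the catalog chart
$ kubectl get elasticsearchversions

# Inspect one version in full, e.g. to check spec.deprecated or spec.db.image
$ kubectl get elasticsearchversion xpack-8.11.1 -o yaml
```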
+

### spec.db.image

`spec.db.image` is a `required` field that specifies the docker image which will be used by the KubeDB provisioner operator to create the StatefulSet for the expected Elasticsearch/OpenSearch database.

### spec.dashboard.image

`spec.dashboard.image` is an `optional` field that specifies the docker image which will be used by the KubeDB dashboard operator to create the Deployment for the expected Kibana/Opensearch-dashboards.

### spec.exporter.image

`spec.exporter.image` is a `required` field that specifies the image which will be used to export Prometheus metrics if monitoring is enabled.

### spec.updateConstraints

`updateConstraints` specifies the constraints that need to be considered during a version update. Here, `allowlist` contains the versions that are allowed for updating from the current version. An empty `allowlist` indicates that all versions are accepted except those in the `denylist`. On the other hand, `denylist` contains all the rejected versions for the update request. An empty list indicates that no version is rejected.

### spec.podSecurityPolicies.databasePolicyName

`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you want to use custom or additional pod security policies, you can pass them to the KubeDB installer chart, for example:

```bash
helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
  --namespace kubedb --create-namespace \
  --set additionalPodSecurityPolicies[0]=custom-db-policy \
  --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
  --set-file global.license=/path/to/the/license.txt \
  --wait --burst-limit=10000 --debug
```

### spec.stash

This holds the Backup & Restore task definitions, where a `TaskRef` has a `Name` & `Params` section. `Params` specifies a list of parameters to pass to the task.

## Next Steps

- Learn about Elasticsearch CRD [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
- Deploy your first Elasticsearch database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/).
- Deploy your first OpenSearch database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/).

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-dashboard/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-dashboard/index.md new file mode 100644 index 0000000000..518d7ae090 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-dashboard/index.md
---
title: ElasticsearchDashboard
menu:
  docs_v2024.1.31:
    identifier: es-dashboard-concepts
    name: ElasticsearchDashboard
    parent: es-concepts-elasticsearch
    weight: 21
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# ElasticsearchDashboard

## What is ElasticsearchDashboard

`ElasticsearchDashboard` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for Elasticsearch Dashboard (`Kibana`, `Opensearch_Dashboards`) deployed with KubeDB in a Kubernetes native way.
When you install KubeDB, an `ElasticsearchVersion` custom resource will be created automatically for every supported Elasticsearch version.
Suppose you have a KubeDB-managed [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) provisioned in your cluster. You have to specify the name of the `Elasticsearch` CRD in the `spec.databaseRef.name` field of the `ElasticsearchDashboard` CRD. Then, KubeDB will use the docker images specified in the `ElasticsearchVersion` CRD to create your expected dashboard.


## ElasticsearchDashboard Specification

As with all other Kubernetes objects, an `ElasticsearchDashboard` needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `spec` section.

```yaml
apiVersion: dashboard.kubedb.com/v1alpha1
kind: ElasticsearchDashboard
metadata:
  name: es-cluster-dashboard
  namespace: demo
spec:
  replicas: 1
  enableSSL: true
  authSecret:
    name: es-cluster-user-cred
  configSecret:
    name: custom-configuration
  databaseRef:
    name: es-cluster
  podTemplate:
    spec:
      resources:
        limits:
          memory: 1.5Gi
        requests:
          cpu: 500m
          memory: 1.5Gi
  serviceTemplates:
    - alias: primary
      spec:
        ports:
          - port: 5601
  tls:
    certificates:
      - alias: database-client
        secretName: es-cluster-client-cert
  terminationPolicy: WipeOut
```



### spec.replicas

`spec.replicas` is an optional field that specifies the number of dashboard replicas (ie. pods). The default value of this field is 1.

### spec.enableSSL

`spec.enableSSL` is an `optional` field that specifies whether to enable TLS on the HTTP layer. The default value of this field is `false`. Enabling TLS from the `ElasticsearchDashboard` CRD ensures secure connectivity with the dashboard. In order to enable TLS on the HTTP layer, the `spec.enableSSL` field in the `Elasticsearch` CRD has to be set to `true`.

### spec.authSecret

`spec.authSecret` is an `optional` field that points to a k8s secret used to hold the Elasticsearch `elastic`/`admin` user credentials. These credentials will be required in order to access the Elasticsearch dashboard.

The k8s secret must be of type `kubernetes.io/basic-auth` with the following keys:

- `username`: Must be `elastic` for `x-pack`, and `admin` for `OpenSearch`.
- `password`: Password for the `elastic`/`admin` user.

If `spec.authSecret` is not set, the dashboard operator will use the `authSecret` from the referred database object.

### spec.configSecret

`spec.configSecret` is an optional field that allows users to provide custom configuration for `ElasticsearchDashboard`. It contains a k8s secret name that holds the configuration files for `ElasticsearchDashboard`. If not provided, operator-generated configurations will be applied to the dashboard. If `configSecret` is provided, it will be merged with the operator-generated configuration. The user-provided configuration has higher precedence over the operator-generated configuration. The configuration file names are used as secret keys.

- Kibana:
  - `kibana.yml` for configuring Kibana

- Opensearch_dashboards:
  - `opensearch_dashboards.yml` for configuring OpenSearch_Dashboards

### spec.databaseRef

`spec.databaseRef` specifies the database name to which the `ElasticsearchDashboard` is pointing. The referenced Elasticsearch instance must be deployed in the same namespace as the dashboard. The dashboard will not become ready until the database is ready and accepting connection requests.
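
Because readiness is gated on the database, a dashboard stuck in a non-ready state is often just waiting on its database. A quick way to check both (the resource names below are the ones from the sample above):

```bash
# Verify the referenced database is Ready first ...
$ kubectl get elasticsearch -n demo es-cluster

# ... then check the dashboard object itself
$ kubectl get elasticsearchdashboard -n demo es-cluster-dashboard
```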
+

### spec.podTemplate

KubeDB allows providing a template for the dashboard pod through `spec.podTemplate`. The KubeDB dashboard operator will pass the information provided in `spec.podTemplate` to the Deployment created for the dashboard.

KubeDB accepts the following fields to set in `spec.podTemplate`:

- metadata
  - annotations (pod's annotation)

- controller
  - annotations (deployment's annotation)

- spec:
  - env
  - resources
  - initContainers
  - imagePullSecrets
  - nodeSelector
  - affinity
  - serviceAccountName
  - schedulerName
  - tolerations
  - priorityClassName
  - priority
  - securityContext
  - livenessProbe
  - readinessProbe
  - lifecycle


### spec.serviceTemplates

`spec.serviceTemplates` is an optional field that contains a list of serviceTemplates. The templates are identified by the alias. For the dashboard, the only configurable service alias is `primary`.

### spec.tls

`spec.tls` specifies the TLS/SSL configurations. The user can provide custom TLS certificates using k8s secrets with the allowed certificate aliases. `ElasticsearchDashboard` supports a certificate with the alias `database-client` to securely communicate with Elasticsearch, the alias `ca` to provide CA certificates, and the alias `server` for securely communicating with the dashboard server. If `spec.tls` is not set, operator-generated self-signed certificates will be used for secure connectivity with the database and the dashboard server.


## Next Steps

- Learn about Elasticsearch CRD [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
- Deploy your first Elasticsearch database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/).
- Deploy your first OpenSearch database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/).

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md new file mode 100644 index 0000000000..719d868578 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md
---
title: ElasticsearchOpsRequests CRD
menu:
  docs_v2024.1.31:
    identifier: es-opsrequest-concepts
    name: ElasticsearchOpsRequest
    parent: es-concepts-elasticsearch
    weight: 15
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# ElasticsearchOpsRequest

## What is ElasticsearchOpsRequest

`ElasticsearchOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for the [Elasticsearch](https://www.elastic.co/guide/index.html) and [OpenSearch](https://opensearch.org/) administrative operations like database version update, horizontal scaling, vertical scaling, etc. in a Kubernetes native way.

## ElasticsearchOpsRequest Specifications

Like any official Kubernetes resource, an `ElasticsearchOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
metadata:
  name: es-update
  namespace: demo
spec:
  type: UpdateVersion
  databaseRef:
    name: es
  updateVersion:
    targetVersion: xpack-8.11.1
status:
  conditions:
    - lastTransitionTime: "2020-08-25T18:22:38Z"
      message: Successfully completed the modification process
      observedGeneration: 1
      reason: Successful
      status: "True"
      type: Successful
  observedGeneration: 1
  phase: Successful
```

Here, we are going to describe the various sections of an `ElasticsearchOpsRequest` CRD.

### spec.type

`spec.type` is a `required` field that specifies the kind of operation that will be applied to the Elasticsearch. The following types of operations are allowed in the `ElasticsearchOpsRequest`:

- `Restart` - is used to perform a smart restart of the Elasticsearch cluster.
- `UpdateVersion` - is used to update the version of the Elasticsearch in a managed way. The necessary information required for updating the version must be provided in the `spec.updateVersion` field.
- `VerticalScaling` - is used to vertically scale the Elasticsearch nodes (ie. pods). The necessary information required for vertical scaling must be provided in the `spec.verticalScaling` field.
- `HorizontalScaling` - is used to horizontally scale the Elasticsearch nodes (ie. pods). The necessary information required for horizontal scaling must be provided in the `spec.horizontalScaling` field.
- `VolumeExpansion` - is used to expand the storage of the Elasticsearch nodes (ie. pods). The necessary information required for volume expansion must be provided in the `spec.volumeExpansion` field.
- `ReconfigureTLS` - is used to configure the TLS configuration of a running Elasticsearch cluster. The necessary information required for reconfiguring the TLS must be provided in the `spec.tls` field.

> Note: You can only perform one type of operation with an `ElasticsearchOpsRequest` custom resource object. For example, if you want to update your database and scale up its replicas, then you will need to create two separate `ElasticsearchOpsRequest` objects. First, you will have to create an `ElasticsearchOpsRequest` for updating. Once the update is completed, you can create another `ElasticsearchOpsRequest` for scaling. You should not create two `ElasticsearchOpsRequest` objects simultaneously.

### spec.databaseRef

`spec.databaseRef` is a `required` field that points to the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object for which the administrative operations will be performed. This field consists of the following sub-field:

- `databaseRef.name` - specifies the name of the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object.

> Note: The `ElasticsearchOpsRequest` should be in the same namespace as the referring `Elasticsearch` object.

### spec.updateVersion

`spec.updateVersion` is an `optional` field, but it acts as a `required` field when the `spec.type` is set to `UpdateVersion`.
It specifies the desired version information required for the Elasticsearch version update. This field consists of the following sub-field:

- `updateVersion.targetVersion` refers to an [ElasticsearchVersion](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/) CR name that contains the Elasticsearch version information required to perform the update.

> KubeDB does not support downgrade for Elasticsearch.
+

**Samples:**

Let's assume we have an Elasticsearch cluster of version `xpack-8.2.3`. The Elasticsearch custom resource is named `es-quickstart` and it's provisioned in the demo namespace. Now, you want to update your Elasticsearch cluster to `xpack-8.5.2`. Apply this YAML to update to your desired version.

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
metadata:
  name: es-quickstart-update
  namespace: demo
spec:
  type: UpdateVersion
  databaseRef:
    name: es-quickstart
  updateVersion:
    targetVersion: xpack-8.5.2
```

### spec.horizontalScaling

`spec.horizontalScaling` is an `optional` field, but it acts as a `required` field when the `spec.type` is set to `HorizontalScaling`.
It specifies the necessary information required to horizontally scale the Elasticsearch nodes (ie. pods). It consists of the following sub-fields:

- `horizontalScaling.node` - specifies the desired number of nodes for the Elasticsearch cluster running in combined mode (ie. `Elasticsearch.spec.topology` is `empty`). The value should be greater than the maximum value of replication for the shard of any index. For example, if a shard has `x` replicas, `x+1` data nodes are required to allocate them.

- `horizontalScaling.topology` - specifies the desired number of the different types of nodes for the Elasticsearch cluster running in cluster topology mode (ie. `Elasticsearch.spec.topology` is `not empty`).
  - `topology.master` - specifies the desired number of master nodes. The value should be greater than zero ( >= 1 ).
  - `topology.ingest` - specifies the desired number of ingest nodes. The value should be greater than zero ( >= 1 ).
  - `topology.data` - specifies the desired number of data nodes. The value should be greater than the maximum value of replication for the shard of any index. For example, if a shard has `x` replicas, `x+1` data nodes are required to allocate them.

**Samples:**

- Horizontally scale combined nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: hscale-combined
    namespace: demo
  spec:
    type: HorizontalScaling
    databaseRef:
      name: es
    horizontalScaling:
      node: 4
  ```

- Horizontally scale cluster topology:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: hscale-topology
    namespace: demo
  spec:
    type: HorizontalScaling
    databaseRef:
      name: es
    horizontalScaling:
      topology:
        master: 2
        ingest: 2
        data: 3
  ```

- Horizontally scale only ingest nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: hscale-ingest-nodes
    namespace: demo
  spec:
    type: HorizontalScaling
    databaseRef:
      name: es
    horizontalScaling:
      topology:
        ingest: 4
  ```

### spec.verticalScaling

`spec.verticalScaling` is an `optional` field, but it acts as a `required` field when the `spec.type` is set to `VerticalScaling`. It specifies the necessary information required to vertically scale the Elasticsearch node resources (ie. `cpu`, `memory`). It consists of the following sub-fields:

- `verticalScaling.node` - specifies the desired node resources for the Elasticsearch cluster running in combined mode (ie. `Elasticsearch.spec.topology` is `empty`).
- `verticalScaling.topology` - specifies the desired node resources for the different types of nodes of an Elasticsearch cluster running in cluster topology mode (ie. `Elasticsearch.spec.topology` is `not empty`).
  - `topology.master` - specifies the desired resources for the master nodes. It takes the same input as the k8s [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-types).
  - `topology.data` - specifies the desired node resources for the data nodes. It takes the same input as the k8s [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-types).
  - `topology.ingest` - specifies the desired node resources for the ingest nodes. It takes the same input as the k8s [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-types).

> Note: It is recommended not to use resources below the defaults: `cpu: 500m, memory: 1Gi`.

**Samples:**

- Vertically scale combined nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: vscale-combined
    namespace: demo
  spec:
    type: VerticalScaling
    databaseRef:
      name: es
    verticalScaling:
      node:
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
  ```

- Vertically scale topology cluster:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: vscale-topology
    namespace: demo
  spec:
    type: VerticalScaling
    databaseRef:
      name: es
    verticalScaling:
      master:
        resources:
          limits:
            cpu: 750m
            memory: 800Mi
      data:
        resources:
          requests:
            cpu: 760m
            memory: 900Mi
      ingest:
        resources:
          limits:
            cpu: 900m
            memory: 1.2Gi
          requests:
            cpu: 800m
            memory: 1Gi
  ```

- Vertically scale only data nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: vscale-data-nodes
    namespace: demo
  spec:
    type: VerticalScaling
    databaseRef:
      name: es
    verticalScaling:
      data:
        resources:
          limits:
            cpu: 900m
            memory: 1.2Gi
          requests:
            cpu: 800m
            memory: 1Gi
  ```

### spec.volumeExpansion

> Note: To use the volume expansion feature, the StorageClass must support volume expansion.

`spec.volumeExpansion` is an `optional` field, but it acts as a `required` field when the `spec.type` is set to `VolumeExpansion`. It specifies the necessary information required to expand the storage of the Elasticsearch node. It consists of the following sub-fields:

- `volumeExpansion.node` - specifies the desired size of the persistent volume for the Elasticsearch node running in combined mode (ie. `Elasticsearch.spec.topology` is `empty`).
- `volumeExpansion.topology` - specifies the desired size of the persistent volumes for the different types of nodes of the Elasticsearch cluster running in cluster topology mode (ie. `Elasticsearch.spec.topology` is `not empty`).
  - `topology.master` - specifies the desired size of the persistent volume for the master nodes.
  - `topology.data` - specifies the desired size of the persistent volume for the data nodes.
  - `topology.ingest` - specifies the desired size of the persistent volume for the ingest nodes.

All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.

> Note: Make sure that the requested volume is greater than the current volume.
+

**Samples:**

- Expand volume for combined nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: volume-expansion-combined
    namespace: demo
  spec:
    type: VolumeExpansion
    databaseRef:
      name: es
    volumeExpansion:
      node: 4Gi
  ```

- Expand volume for cluster topology:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: volume-expansion-topology
    namespace: demo
  spec:
    type: VolumeExpansion
    databaseRef:
      name: es
    volumeExpansion:
      master: 2Gi
      data: 3Gi
      ingest: 4Gi
  ```

- Expand volume for only data nodes:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: volume-expansion-data-nodes
    namespace: demo
  spec:
    type: VolumeExpansion
    databaseRef:
      name: es
    volumeExpansion:
      data: 5Gi
  ```

### spec.tls

> `ReconfigureTLS` only works with [Cert-Manager](https://cert-manager.io/docs/concepts/)-managed certificates. [Installation guide](https://cert-manager.io/docs/installation/).

`spec.tls` is an `optional` field, but it acts as a `required` field when the `spec.type` is set to `ReconfigureTLS`. It specifies the necessary information required to add, remove, or update the TLS configuration of the Elasticsearch cluster. It consists of the following sub-fields:

- `tls.remove` ( `bool` | `false` ) - tells the operator to remove the TLS configuration for the HTTP layer. The transport layer is always secured with certificates, so the removal process does not affect the transport layer.
- `tls.rotateCertificates` ( `bool` | `false` ) - tells the operator to renew all the certificates.
- `tls.issuerRef` - is an `optional` field that references the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Elasticsearch. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates the necessary certificate secrets (valid: 365 days) using that CA.
  - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
  - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
  - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.

- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
  - `alias` - represents the identifier of the certificate. It has the following possible values:
    - `transport` - is used for the transport layer certificate configuration.
    - `http` - is used for the HTTP layer certificate configuration.
    - `admin` - is used for the admin certificate configuration. Available for the `SearchGuard` and the `OpenDistro` auth-plugins.
    - `metrics-exporter` - is used for the metrics-exporter sidecar certificate configuration.

  - `secretName` - ( `string` | `"{database-name}-{alias}-cert"` ) - specifies the k8s secret name that holds the certificates.

  - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields:
    - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
    - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
    - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. Country Codes).
    - `localities` ( `[]string` | `nil` ) - is a list of locality names.
    - `provinces` ( `[]string` | `nil` ) - is a list of province names.
    - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses.
    - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes.
    - `serialNumber` ( `string` | `""` ) - is a serial number.

    For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name).

  - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  - `renewBefore` ( `string` | `""` ) - specifies how long before expiry the certificate should be renewed.
  - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names.
  - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses.
  - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names.
  - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names.

To enable TLS on the HTTP layer, the configuration for the `http` layer certificate needs to be provided in the `tls.certificates[]` list.

**Samples:**

- Add TLS:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: add-tls
    namespace: demo
  spec:
    type: ReconfigureTLS
    databaseRef:
      name: es
    tls:
      issuerRef:
        apiGroup: "cert-manager.io"
        kind: Issuer
        name: es-issuer
      certificates:
        - alias: http
          subject:
            organizations:
              - kubedb.com
          emailAddresses:
            - abc@kubedb.com
  ```

- Remove TLS:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: remove-tls
    namespace: demo
  spec:
    type: ReconfigureTLS
    databaseRef:
      name: es
    tls:
      remove: true
  ```

- Rotate TLS:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: rotate-tls
    namespace: demo
  spec:
    type: ReconfigureTLS
    databaseRef:
      name: es
    tls:
      rotateCertificates: true
  ```

- Update transport layer certificate:

  ```yaml
  apiVersion: ops.kubedb.com/v1alpha1
  kind: ElasticsearchOpsRequest
  metadata:
    name: update-tls
    namespace: demo
  spec:
    type: ReconfigureTLS
    databaseRef:
      name: es
    tls:
      certificates:
        - alias: transport
          subject:
            organizations:
              - mydb.com # say, previously it was "kubedb.com"
  ```

### spec.configuration

If you want to reconfigure your running Elasticsearch cluster or different components of it with new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:

- `spec.configuration.configSecret`: ConfigSecret is an optional field to provide a custom configuration file for the database.
- `spec.configuration.secureConfigSecret`: SecureConfigSecret is an optional field to provide secure settings for the database.
- `spec.configuration.applyConfig`: ApplyConfig is an optional field to provide Elasticsearch configuration. The provided configuration will be applied to the config files stored in the ConfigSecret. If the ConfigSecret is missing, the operator will create a new k8s secret with the following naming convention: `{db-name}-user-config`.
+
```yaml
  applyConfig:
    file-name.yml: |
      key: value
    elasticsearch.yml: |
      thread_pool:
        write:
          size: 30
```

- `spec.configuration.removeCustomConfig`: If set to `true`, the user-provided configuration will be removed. The Elasticsearch cluster will start with the default configuration that is generated by the operator.
- `spec.configuration.removeSecureCustomConfig`: If set to `true`, the user-provided secure settings will be removed. The `elasticsearch.keystore` will start with the default password (i.e. `""`).

### spec.timeout

As we internally retry the ops request steps multiple times, this `timeout` field helps users specify a timeout for those steps of the ops request (in seconds).
If a step doesn't finish within the specified timeout, the ops request will result in failure.

### spec.apply

This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
Use `IfReady` if you want to process the opsRequest only when the database is Ready, and use `Always` if you want to execute the opsRequest irrespective of the database state.

## ElasticsearchOpsRequest `Status`

`.status` describes the current state and progress of an `ElasticsearchOpsRequest` operation. It has the following fields:

### status.phase

`status.phase` indicates the overall phase of the operation for this `ElasticsearchOpsRequest`. It can have the following three values:

| Phase       | Meaning                                                                          |
| :---------: | -------------------------------------------------------------------------------- |
| Progressing | KubeDB has started to process the Ops request                                    |
| Successful  | KubeDB has successfully performed all the operations needed for the Ops request  |
| Failed      | KubeDB has failed while performing the operations needed for the Ops request     |

### status.observedGeneration

`status.observedGeneration` shows the most recent generation observed by the `ElasticsearchOpsRequest` controller.

### status.conditions

`status.conditions` is an array that specifies the conditions of different steps of `ElasticsearchOpsRequest` processing. Each condition entry has the following fields:

- `type` specifies the type of the condition.
- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
  - `status` will be `True` if the current transition succeeded.
  - `status` will be `False` if the current transition failed.
  - `status` will be `Unknown` if the current transition was denied.
- The `message` field is a human-readable message indicating details about the condition.
- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
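
For example, these status fields can be read straight from the object with `kubectl`; the request name below is the one from the sample at the top of this page:

```bash
# Overall phase of the request (Progressing / Successful / Failed)
$ kubectl get elasticsearchopsrequest -n demo es-update -o jsonpath='{.status.phase}'

# Full condition history in human-readable form
$ kubectl describe elasticsearchopsrequest -n demo es-update
```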
+

ElasticsearchOpsRequest has the following types of conditions:

| Type                     | Meaning                                                     |
| ------------------------ | ----------------------------------------------------------- |
| `Progressing`            | The operator has started to process the Ops request         |
| `Successful`             | The Ops request has successfully executed                   |
| `Failed`                 | The operation on the database failed                        |
| `OrphanStatefulSetPods`  | The StatefulSet has been deleted, leaving the pods orphaned |
| `ReadyStatefulSets`      | The StatefulSets are ready                                  |
| `ScaleDownCombinedNode`  | Scaled down the combined nodes                              |
| `ScaleDownDataNode`      | Scaled down the data nodes                                  |
| `ScaleDownIngestNode`    | Scaled down the ingest nodes                                |
| `ScaleDownMasterNode`    | Scaled down the master nodes                                |
| `ScaleUpCombinedNode`    | Scaled up the combined nodes                                |
| `ScaleUpDataNode`        | Scaled up the data nodes                                    |
| `ScaleUpIngestNode`      | Scaled up the ingest nodes                                  |
| `ScaleUpMasterNode`      | Scaled up the master nodes                                  |
| `UpdateCombinedNodePVCs` | Updated combined node PVCs                                  |
| `UpdateDataNodePVCs`     | Updated data node PVCs                                      |
| `UpdateIngestNodePVCs`   | Updated ingest node PVCs                                    |
| `UpdateMasterNodePVCs`   | Updated master node PVCs                                    |
| `UpdateNodeResources`    | Updated node resources                                      |

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/index.md b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/index.md new file mode 100644 index 0000000000..d8a6d1a658 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/index.md
---
title: Elasticsearch CRD
menu:
  docs_v2024.1.31:
    identifier: es-elasticsearch-concepts
    name: Elasticsearch
    parent: es-concepts-elasticsearch
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Elasticsearch

## What is Elasticsearch

`Elasticsearch` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [OpenSearch](https://opensearch.org/) in a Kubernetes native way. You only need to describe the desired database configuration in an Elasticsearch object, and the KubeDB operator will create Kubernetes objects in the desired state for you.

## Elasticsearch Spec

As with all other Kubernetes objects, an Elasticsearch needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Elasticsearch object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: myes
+  namespace: demo
+spec:
+  autoOps:
+    disabled: true
+  authSecret:
+    name: es-admin-cred
+    externallyManaged: false
+  configSecret:
+    name: es-custom-config
+  enableSSL: true
+  internalUsers:
+    metrics_exporter: {}
+  rolesMapping:
+    SGS_READALL_AND_MONITOR:
+      users:
+      - metrics_exporter
+  kernelSettings:
+    privileged: true
+    sysctls:
+    - name: vm.max_map_count
+      value: "262144"
+  maxUnavailable: 1
+  monitor:
+    agent: prometheus.io
+    prometheus:
+      exporter:
+        port: 56790
+  podTemplate:
+    controller:
+      annotations:
+        passTo: statefulSets
+    metadata:
+      annotations:
+        passTo: pods
+    spec:
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: es
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: elasticsearches.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: kubernetes.io/hostname
+            weight: 100
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: es
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: elasticsearches.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: failure-domain.beta.kubernetes.io/zone
+            weight: 50
+      env:
+      - name: node.processors
+        value: "2"
+      nodeSelector:
+        kubernetes.io/os: linux
+      resources:
+        limits:
+          cpu: "1"
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 512Mi
+      serviceAccountName: es
+  replicas: 3
+  serviceTemplates:
+  - alias: primary
+    metadata:
+      annotations:
+        passTo: service
+    spec:
+      type: NodePort
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: WipeOut
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: es-issuer
+    certificates:
+    - alias: transport
+      privateKey:
+        encoding: PKCS8
+      secretName: es-transport-cert
+      subject:
+        organizations:
+        - kubedb
+    - alias: http
+      privateKey:
+        encoding: PKCS8
+      secretName: es-http-cert
+      subject:
+        organizations:
+        - kubedb
+    - alias: admin
+      privateKey:
+        encoding: PKCS8
+      secretName: es-admin-cert
+      subject:
+        organizations:
+        - kubedb
+    - alias: metrics-exporter
+      privateKey:
+        encoding: PKCS8
+      secretName: es-metrics-exporter-cert
+      subject:
+        organizations:
+        - kubedb
+  healthChecker:
+    periodSeconds: 15
+    timeoutSeconds: 10
+    failureThreshold: 2
+    disableWriteCheck: false
+  version: xpack-8.11.1
+```
+### spec.autoOps
+AutoOps is an optional field to control the generation of versionUpdate & TLS-related recommendations.
+
+### spec.version
+`spec.version` is a `required` field that specifies the name of the [ElasticsearchVersion](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/) CRD where the docker images are specified.
+
+- Name format: `{Security Plugin Name}-{Application Version}-{Modification Tag}`
+
+- Samples: `xpack-8.2.3`, `xpack-8.11.1`, `opensearch-1.3.0`, etc.
+
+```yaml
+spec:
+  version: xpack-8.11.1
+```
+
+### spec.kernelSettings
+
+`spec.kernelSettings` is an `optional` field that is used to configure the kernel settings of the Kubernetes cluster nodes. It lets users run `sysctl -w key=value` commands against the node's kernel. These commands are performed from an `initContainer`. If any of those commands require `privileged` access, you need to set `kernelSettings.privileged` to `true`, which makes the `initContainer` run in `privileged` mode.
+
+```yaml
+spec:
+  kernelSettings:
+    privileged: true
+    sysctls:
+    - name: vm.max_map_count
+      value: "262144"
+```
+
+To disable the kernelSettings `initContainer`, set the `kernelSettings` to empty (`{}`).
+
+```yaml
+spec:
+  kernelSettings: {}
+```
+
+> Note: Make sure that `vm.max_map_count` is greater than or equal to `262144`, otherwise the Elasticsearch may fail to bootstrap.
+
+
+### spec.disableSecurity
+
+`spec.disableSecurity` is an `optional` field that allows a user to run the Elasticsearch with the security plugin `disabled`. Defaults to `false`.
+
+```yaml
+spec:
+  disableSecurity: true
+```
+
+### spec.internalUsers
+
+`spec.internalUsers` provides an alternative way to configure the existing internal users or create new users without using the `internal_users.yml` file. This field expects the input to be in the `map[username]ElasticsearchUserSpec` format. The KubeDB operator creates and synchronizes secure passwords for those users and stores them in k8s secrets. The k8s secret names are formed by the following format: `{Elasticsearch Instance Name}-{Username}-cred`.
+
+The `ElasticsearchUserSpec` contains the following fields:
+- `hash` ( `string` | `""` ) - Specifies the hash of the password.
+- `full_name` ( `string` | `""` ) - Specifies the full name of the user. Only applicable for the x-pack auth plugin.
+- `metadata` ( `map[string]string` | `""` ) - Specifies arbitrary metadata that you want to associate with the user. Only applicable for the x-pack auth plugin.
+- `secretName` ( `string` | `""` ) - Specifies the k8s secret name that holds the user credentials. Defaults to `{Elasticsearch Instance Name}-{Username}-cred`.
+- `roles` ( `[]string` | `nil` ) - A set of roles the user has. The roles determine the user’s access permissions. To create a user without any roles, specify an empty list: []. Only applicable for the x-pack auth plugin.
+- `email` ( `string` | `""` ) - Specifies the email of the user. Only applicable for the x-pack auth plugin.
+- `reserved` ( `bool` | `false` ) - specifies the reserved status. The resources that have this set to `true` cannot be changed using the REST API or Kibana.
+- `hidden` ( `bool` | `false` ) - specifies the hidden status. The resources that have this set to `true` are not returned by the REST API and not visible in Kibana.
+- `backendRoles` (`[]string` | `nil`) - specifies a list of backend roles assigned to this user. The backend roles can come from the internal user database, LDAP groups, JSON web token claims, or SAML assertions.
+- `searchGuardRoles` ( `[]string` | `nil` ) - specifies a list of SearchGuard security plugin roles assigned to this user.
+- `opendistroSecurityRoles` ( `[]string` | `nil` ) - specifies a list of OpenDistro security plugin roles assigned to this user.
+- `attributes` ( `map[string]string` | `nil` ) - specifies one or more custom attributes which can be used in index names and DLS queries.
+- `description` ( `string` | `""` ) - specifies the description of the user.
+
+Here's how `.spec.internalUsers` can be configured for the `searchguard` or `opendistro` auth plugins.
+
+```yaml
+spec:
+  internalUsers:
+    # update the attributes of the default kibanaro user
+    kibanaro:
+      attributes:
+        attribute1: "value-a"
+        attribute2: "value-b"
+        attribute3: "value-c"
+    # update the description of the snapshotrestore user
+    snapshotrestore:
+      description: "This is the new description"
+    # Create a new readall user
+    custom_readall_user:
+      backendRoles:
+      - "readall"
+      description: "Custom readall user"
+```
+
+Here's how `.spec.internalUsers` can be configured for the `xpack` auth plugin.
+
+```yaml
+spec:
+  internalUsers:
+    apm_system:
+      backendRoles:
+      - apm_system
+      secretName: es-cluster-apm-system-cred
+    beats_system:
+      backendRoles:
+      - beats_system
+      secretName: es-cluster-beats-system-cred
+    elastic:
+      backendRoles:
+      - superuser
+      secretName: es-cluster-elastic-cred
+    kibana_system:
+      backendRoles:
+      - kibana_system
+      secretName: es-cluster-kibana-system-cred
+    logstash_system:
+      backendRoles:
+      - logstash_system
+      secretName: es-cluster-logstash-system-cred
+    remote_monitoring_user:
+      backendRoles:
+      - remote_monitoring_collector
+      - remote_monitoring_agent
+      secretName: es-cluster-remote-monitoring-user-cred
+```
+**ElasticStack:**
+
+Default Users: [Official Docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html)
+
+- `elastic` - Has direct read-only access to restricted indices, such as .security. This user also has the ability to manage security and create roles with unlimited privileges.
+- `kibana_system` - The user Kibana uses to connect and communicate with Elasticsearch.
+- `logstash_system` - The user Logstash uses when storing monitoring information in Elasticsearch.
+- `beats_system` - The user the Beats use when storing monitoring information in Elasticsearch.
+- `apm_system` - The user the APM server uses when storing monitoring information in Elasticsearch.
+- `remote_monitoring_user` - The user Metricbeat uses when collecting and storing monitoring information in Elasticsearch. It has the remote_monitoring_agent and remote_monitoring_collector built-in roles.
+
+**SearchGuard:**
+
+Default Users: [Official Docs](https://docs.search-guard.com/latest/demo-users-roles)
+
+- `admin` - Full access to the cluster and all indices.
+- `kibanaserver` - Has all permissions on the `.kibana` index.
+- `kibanaro` - Has `SGS_READ` access to all indices and all permissions on the `.kibana` index.
+- `logstash` - Has `SGS_CRUD` and `SGS_CREATE_INDEX` permissions on all logstash and beats indices.
+- `readall` - Has read access to all indices.
+- `snapshotrestore` - Has permissions to perform snapshot and restore operations.
+
+**OpenDistro:**
+
+Default Users: [Official Docs](https://opendistro.github.io/for-elasticsearch-docs/docs/security/access-control/users-roles/)
+
+- `admin` - Grants full access to the cluster: all cluster-wide operations, write to all indices, write to all tenants.
+- `kibanaserver` - Has all permissions on the `.kibana` index.
+- `kibanaro` - Grants permissions to use Kibana: cluster-wide searches, index monitoring, and write to various Kibana indices.
+- `logstash` - Grants permissions for Logstash to interact with the cluster: cluster-wide searches, cluster monitoring, and write to the various Logstash indices.
+- `readall` - Grants permissions for cluster-wide searches like msearch and search permissions for all indices.
+- `snapshotrestore` - Grants permissions to manage snapshot repositories, take snapshots, and restore snapshots.
+
+### spec.rolesMapping
+
+`spec.rolesMapping` provides an alternative way to map backend roles, hosts, and users to roles without using the `roles_mapping.yml` file. It only works with the `SearchGuard` and `OpenDistro` security plugins. This field expects the input to be in the `map[rolename]RoleSpec` format.
+
+The `RoleSpec` contains the following fields:
+
+- `reserved` ( `bool` | `false` ) - specifies the reserved status. The resources that have this set to `true` cannot be changed using the REST API or Kibana.
+- `hidden` ( `bool` | `false` ) - specifies the hidden status. The resources that have this field set to `true` are not returned by the REST API and not visible in Kibana.
+- `backendRoles` ( `[]string` | `nil` ) - specifies a list of backend roles assigned to this role. The backend roles can come from the internal user database, LDAP groups, JSON web token claims, or SAML assertions.
+- `hosts` ( `[]string` | `nil` ) - specifies a list of hosts assigned to this role.
+- `users` ( `[]string` | `nil` ) - specifies a list of users assigned to this role.
+
+```yaml
+spec:
+  rolesMapping:
+    # create role mapping for the custom readall user
+    readall:
+      users:
+      - custom_readall_user
+```
+
+For the default roles, visit the [SearchGuard docs](https://docs.search-guard.com/latest/roles-permissions) and the [OpenDistro docs](https://opendistro.github.io/for-elasticsearch-docs/docs/security/access-control/users-roles/#create-roles).
+
+### spec.topology
+
+`spec.topology` is an `optional` field that provides a way to configure different types of nodes for the Elasticsearch cluster. This field enables you to specify how many nodes you want to act as `master`, `data`, `ingest`, or other node roles for Elasticsearch. You can also specify how much storage and resources to allocate for each type of node independently.
+
+Currently supported node types are -
+- **data**: Data nodes hold the shards that contain the documents you have indexed. Data nodes handle data-related operations like CRUD, search, and aggregations.
+- **ingest**: Ingest nodes can execute pre-processing pipelines, composed of one or more ingest processors.
+- **master**: The master node is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. It is important for cluster health to have a stable master node.
+- **dataHot**: Hot data nodes are part of the hot tier. The hot tier is the Elasticsearch entry point for time series data and holds your most-recent, most-frequently-searched time series data.
+- **dataWarm**: Warm data nodes are part of the warm tier. Time series data can move to the warm tier once it is being queried less frequently than the recently-indexed data in the hot tier.
+- **dataCold**: Cold data nodes are part of the cold tier. When you no longer need to search time series data regularly, it can move from the warm tier to the cold tier.
+- **dataFrozen**: Frozen data nodes are part of the frozen tier. Once data is no longer being queried, or being queried rarely, it may move from the cold tier to the frozen tier where it stays for the rest of its life.
+- **dataContent**: Content data nodes are part of the content tier. Data stored in the content tier is generally a collection of items such as a product catalog or article archive. Unlike time series data, the value of the content remains relatively constant over time, so it doesn’t make sense to move it to a tier with different performance characteristics as it ages.
+- **ml**: Machine learning nodes run jobs and handle machine learning API requests.
+- **transform**: Transform nodes run transforms and handle transform API requests.
+- **coordinating**: The coordinating node forwards the request to the data nodes which hold the data.
+
+```yaml
+  topology:
+    data:
+      maxUnavailable: 1
+      replicas: 3
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      storage:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+      suffix: data
+    ingest:
+      maxUnavailable: 1
+      replicas: 3
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      storage:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+      suffix: ingest
+    master:
+      maxUnavailable: 1
+      replicas: 2
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      storage:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+      suffix: master
+```
+
+The `spec.topology` contains the following fields:
+
+- `topology.master`:
+  - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods) that act as the `master` nodes. Defaults to `1`.
+  - `suffix` (`: "master"`) - is an `optional` field that is added as the suffix of the master StatefulSet name. Defaults to `master`.
+  - `storage` is a `required` field that specifies how much storage to claim for each of the `master` nodes.
+  - `resources` (`: "cpu: 500m, memory: 1Gi"`) - is an `optional` field that specifies how much computational resource to request or to limit for each of the `master` nodes.
+  - `maxUnavailable` is an `optional` field that specifies the exact number of master nodes (ie. pods) that can be safely evicted before the pod disruption budget (PDB) kicks in. KubeDB uses Pod Disruption Budget to ensure that the desired number of replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that no data loss occurs.
+
+- `topology.data`:
+  - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods) that act as the `data` nodes. Defaults to `1`.
+  - `suffix` (`: "data"`) - is an `optional` field that is added as the suffix of the data StatefulSet name. Defaults to `data`.
+  - `storage` is a `required` field that specifies how much storage to claim for each of the `data` nodes.
+  - `resources` (`: "cpu: 500m, memory: 1Gi"`) - is an `optional` field that specifies how much computational resource to request or to limit for each of the `data` nodes.
+  - `maxUnavailable` is an `optional` field that specifies the exact number of data nodes (ie. pods) that can be safely evicted before the pod disruption budget (PDB) kicks in. KubeDB uses Pod Disruption Budget to ensure that the desired number of replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that no data loss occurs.
+
+- `topology.ingest`:
+  - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods) that act as the `ingest` nodes. Defaults to `1`.
+  - `suffix` (`: "ingest"`) - is an `optional` field that is added as the suffix of the ingest StatefulSet name. Defaults to `ingest`.
+  - `storage` is a `required` field that specifies how much storage to claim for each of the `ingest` nodes.
+  - `resources` (`: "cpu: 500m, memory: 1Gi"`) - is an `optional` field that specifies how much computational resource to request or to limit for each of the `ingest` nodes.
+  - `maxUnavailable` is an `optional` field that specifies the exact number of ingest nodes (ie. pods) that can be safely evicted before the pod disruption budget (PDB) kicks in. KubeDB uses Pod Disruption Budget to ensure that the desired number of replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that no data loss occurs.
+
+> Note: Any two types of nodes can't have the same `suffix`.
+
+If you specify the `spec.topology` field, then you **do not need** to specify the following fields in the Elasticsearch CRD.
+
+- `spec.replicas`
+- `spec.storage`
+- `spec.podTemplate.spec.resources`
+
+If you do not specify the `spec.topology` field, the Elasticsearch cluster runs in combined mode.
+
+> Combined Mode: all nodes of the Elasticsearch cluster will work as `master`, `data`, and `ingest` nodes simultaneously.
+
+### spec.replicas
+
+`spec.replicas` is an `optional` field that can be used if `spec.topology` is not specified. This field specifies the number of nodes (ie. pods) in the Elasticsearch cluster. The default value of this field is `1`.
+
+```yaml
+spec:
+  replicas: 3
+```
+
+### spec.maxUnavailable
+
+`spec.maxUnavailable` is an `optional` field that is used to specify the exact number of cluster replicas that can be safely evicted before the pod disruption budget kicks in to prevent unwanted data loss.
+
+```yaml
+spec:
+  maxUnavailable: 1
+```
+
+### spec.enableSSL
+
+`spec.enableSSL` is an `optional` field that specifies whether to enable TLS for the HTTP layer. The default value of this field is `false`.
+
+```yaml
+spec:
+  enableSSL: true
+```
+
+> Note: The `transport` layer of an Elasticsearch cluster is always secured with certificates. If you want to disable it, you need to disable the security plugin by setting `spec.disableSecurity` to `true`.
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using the [cert-manager](https://cert-manager.io/). Currently, the operator only supports the `PKCS#8` encoded certificates.
+
+```yaml
+spec:
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: es-issuer
+    certificates:
+    - alias: transport
+      privateKey:
+        encoding: PKCS8
+      secretName: es-transport-cert
+      subject:
+        organizations:
+        - kubedb
+    - alias: http
+      privateKey:
+        encoding: PKCS8
+      secretName: es-http-cert
+      subject:
+        organizations:
+        - kubedb
+```
+
+The `spec.tls` contains the following fields:
+
+- `tls.issuerRef` - is an `optional` field that refers to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Elasticsearch. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates the necessary certificate secrets (valid for 365 days) using that CA.
+  - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+  - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
+  - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.
+
+- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
+  - `alias` - represents the identifier of the certificate. It has the following possible values:
+    - `transport` - is used for the transport layer certificate configuration.
+    - `http` - is used for the HTTP layer certificate configuration.
+    - `admin` - is used for the admin certificate configuration. Available for the `SearchGuard` and the `OpenDistro` auth-plugins.
+    - `metrics-exporter` - is used for the metrics-exporter sidecar certificate configuration.
+
+  - `secretName` - ( `string` | `"<database-name>-alias-cert"` ) - specifies the k8s secret name that holds the certificates.
+
+  - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields:
+    - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
+    - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
+    - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. country codes).
+    - `localities` ( `[]string` | `nil` ) - is a list of locality names.
+    - `provinces` ( `[]string` | `nil` ) - is a list of province names.
+    - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses.
+    - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes.
+    - `serialNumber` ( `string` | `""` ) is a serial number.
+
+    For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+
+  - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+  - `renewBefore` ( `string` | `""` ) - specifies how long before expiry the certificate should be renewed.
+  - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names.
+  - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses.
+  - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names.
+  - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names.
+
+### spec.authSecret
+
+`spec.authSecret` is an `optional` field that points to a k8s secret used to hold the Elasticsearch `elastic`/`admin` user credentials.
+
+```yaml
+spec:
+  authSecret:
+    name: es-admin-cred
+```
+
+The k8s secret must be of `type: kubernetes.io/basic-auth` with the following keys:
+
+- `username`: Must be `elastic` for X-Pack, or `admin` for SearchGuard and OpenDistro.
+- `password`: Password for the `elastic`/`admin` user.
+
+If not set, the KubeDB operator creates a new Secret `{Elasticsearch name}-{UserName}-cred` with randomly generated secure credentials.
+
+We can use this field in three modes.
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name via `spec.authSecret.name` and set `spec.authSecret.externallyManaged` to `true` when creating the Elasticsearch object.
+```yaml
+authSecret:
+  name: <secret-name>
+  externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you only need to specify the secret name via `spec.authSecret.name` when creating the Elasticsearch object. `externallyManaged` is `false` by default.
+```yaml
+authSecret:
+  name: <secret-name>
+```
+
+3. Let KubeDB do everything for you. In this case, there is no work for you.
+
+The auth secret contains `username` and `password` keys that hold the credentials of the `elastic`/`admin` superuser.
+
+Example:
+
+```bash
+$ kubectl create secret generic elastic-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "elastic-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  username: amhvbi1kb2U=
+kind: Secret
+metadata:
+  name: elastic-auth
+  namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+### spec.storageType
+
+`spec.storageType` is an `optional` field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Elasticsearch database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field.
+
+```yaml
+spec:
+  storageType: Durable
+```
+
+### spec.storage
+
+If `spec.storageType` is not set to `Ephemeral` and the `spec.topology` field is not set either, then the `spec.storage` field is `required`. This field specifies the StorageClass of the PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+```yaml
+spec:
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+```
+
+- `storage.storageClassName` - is the name of the StorageClass used to provision the PVCs. The PVCs don’t necessarily have to request a class. A PVC with the storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.init
+
+`spec.init` is an `optional` section that can be used to initialize a newly created Elasticsearch cluster from prior snapshots taken by [Stash](/docs/v2024.1.31/guides/elasticsearch/backup/overview/).
+
+```yaml
+spec:
+  init:
+    waitForInitialRestore: true
+```
+
+When `waitForInitialRestore` is set to `true`, the Elasticsearch instance will be stuck in the `Provisioning` state until the initial backup is completed. On completion of the very first restore operation, the Elasticsearch instance will go to the `Ready` state.
+
+For a detailed tutorial on how to initialize Elasticsearch from a Stash backup, please visit [here](/docs/v2024.1.31/guides/elasticsearch/backup/overview/).
+
+### spec.monitor
+
+Elasticsearch managed by KubeDB can be monitored with builtin Prometheus and Prometheus operator out-of-the-box.
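+For example, the following stanza (taken from the sample object earlier on this page) enables the Prometheus exporter on port 56790:
+
+```yaml
+spec:
+  monitor:
+    agent: prometheus.io
+    prometheus:
+      exporter:
+        port: 56790
+```
+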
+To learn more, see:
+
+- [Monitor Elasticsearch with builtin Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus)
+- [Monitor Elasticsearch with Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator)
+
+### spec.configSecret
+
+`spec.configSecret` is an `optional` field that allows users to provide custom configuration for Elasticsearch. It contains a k8s secret name that holds the configuration files for both Elasticsearch and the security plugins (ie. X-Pack, SearchGuard, and OpenDistro).
+
+```yaml
+spec:
+  configSecret:
+    name: es-custom-config
+```
+
+The configuration file names are used as secret keys.
+
+**Elasticsearch:**
+
+- `elasticsearch.yml` - for configuring Elasticsearch
+- `jvm.options` - for configuring Elasticsearch JVM settings
+- `log4j2.properties` - for configuring Elasticsearch logging
+
+**X-Pack:**
+
+- `roles.yml` - define roles and the associated permissions.
+- `role_mapping.yml` - define which roles should be assigned to each user based on their username, groups, or other metadata.
+
+**SearchGuard:**
+
+- `sg_config.yml` - configure authenticators and authorization backends.
+- `sg_roles.yml` - define roles and the associated permissions.
+- `sg_roles_mapping.yml` - map backend roles, hosts, and users to roles.
+- `sg_internal_users.yml` - stores users and hashed passwords in the internal user database.
+- `sg_action_groups.yml` - define named permission groups.
+- `sg_tenants.yml` - defines tenants for configuring the Kibana access.
+- `sg_blocks.yml` - defines blocked users and IP addresses.
+
+**OpenDistro:**
+
+- `internal_users.yml` - contains any initial users that you want to add to the security plugin’s internal user database.
+- `roles.yml` - contains any initial roles that you want to add to the security plugin.
+- `roles_mapping.yml` - maps backend roles, hosts, and users to roles.
+- `action_groups.yml` - contains any initial action groups that you want to add to the security plugin.
+- `tenants.yml` - contains the tenant configurations.
+- `nodes_dn.yml` - contains nodesDN mapping name and corresponding values.
+
+**How are the resultant configuration files generated?**
+
+- `YML`: The default configuration file pre-stored in the config directory is overwritten by the operator-generated configuration file (if any). Then the resultant configuration file is overwritten by the user-provided custom configuration file (if any). The [yq](https://github.com/mikefarah/yq) tool is used to merge two YAML files.
+
+  ```bash
+  $ yq merge -i --overwrite file1.yml file2.yml
+  ```
+
+- `Non-YML`: The default configuration file is replaced by the operator-generated one (if any). Then the resultant configuration file is replaced by the user-provided custom configuration file (if any).
+
+  ```bash
+  $ cp -f file2 file1
+  ```
+
+**How to provide node-role specific configurations?**
+
+If an Elasticsearch cluster is running in the topology mode (ie. `spec.topology` is set), a user may want to provide node-role specific configurations, say configurations that will only be merged to `master` nodes. To achieve this, users need to add the node role as a prefix to the file name; a sample secret using this convention is shown after the list below.
+
+- Format: `<node-role>-<file-name>.extension`
+- Samples:
+  - `data-elasticsearch.yml`: Only applied to `data` nodes.
+  - `master-jvm.options`: Only applied to `master` nodes.
+  - `ingest-log4j2.properties`: Only applied to `ingest` nodes.
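+
+For instance, a config secret like the following (a minimal sketch; the secret name `es-node-role-config` and the settings themselves are hypothetical, for illustration only) merges one file into all nodes and another into `master` nodes only:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-node-role-config # hypothetical name, for illustration
+  namespace: demo
+stringData:
+  # merged into the configuration of every node
+  elasticsearch.yml: |-
+    logger.org.elasticsearch.discovery: DEBUG
+  # merged into the configuration of master nodes only
+  master-elasticsearch.yml: |-
+    cluster.routing.allocation.awareness.attributes: zone
+```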
+
+**How to provide additional files that are referenced from the configurations?**
+
+All the files provided via `configSecret` are stored in each Elasticsearch node (i.e. pod) at the `ES_CONFIG_DIR/custom_config/` (i.e. `/usr/share/elasticsearch/config/custom_config/`) directory. So, users can reference this path while configuring Elasticsearch.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-custom-config
+  namespace: demo
+stringData:
+  elasticsearch.yml: |-
+    logger.org.elasticsearch.discovery: DEBUG
+```
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the Elasticsearch database.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+  - labels (pod's labels)
+- controller:
+  - annotations (statefulset's annotation)
+  - labels (statefulset's labels)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some fields of `spec.podTemplate` are described below.
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an `optional` field that specifies the environment variables to pass to the Elasticsearch Docker image.
+
+You are not allowed to pass the following `env`:
+- `node.name`
+- `node.ingest`
+- `node.master`
+- `node.data`
+
+If you try to set any of these, the admission webhook will reject the request with an error like the following:
+
+```ini
+Error from server (Forbidden): error when creating "./elasticsearch.yaml": admission webhook "elasticsearch.validators.kubedb.com" denied the request: environment variable node.name is forbidden to use in Elasticsearch spec
+```
+
+#### spec.podTemplate.spec.imagePullSecrets
+
+`spec.podTemplate.spec.imagePullSecrets` is an `optional` field that points to secrets to be used for pulling docker images when you are using a private docker registry. For more details on how to use a private docker registry, please visit [here](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry).
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an `optional` field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+`serviceAccountName` is an `optional` field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.
+
+If this field is left empty, the KubeDB operator will create a service account name matching the Elasticsearch instance name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/elasticsearch/custom-rbac/using-custom-rbac) to grant necessary permissions in this scenario.
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      serviceAccountName: es
+```
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an `optional` field. If the `spec.topology` field is not set, then it can be used to request or limit computational resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: "1"
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 512Mi
+```
+
+### spec.serviceTemplates
+
+`spec.serviceTemplates` is an `optional` field that contains a list of service templates. The templates are identified by their `alias`. For Elasticsearch, the configurable services' aliases are `primary` and `stats`.
+
+You can also provide templates for the services created by the KubeDB operator for the Elasticsearch database through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+```yaml
+spec:
+  serviceTemplates:
+  - alias: primary
+    metadata:
+      annotations:
+        passTo: service
+    spec:
+      type: NodePort
+  - alias: stats
+    # stats service configurations
+```
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility to decide whether to `nullify` (reject) the delete operation of the `Elasticsearch` CRD, or which resources KubeDB should keep or delete when you delete the `Elasticsearch` CRD. The KubeDB operator provides the following termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes v1.9+ to provide safety from accidental deletion of the database. If the admission webhook is enabled, KubeDB prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the Elasticsearch CRD for different termination policies:
+
+| Behavior                            | DoNotTerminate | Halt     | Delete   | WipeOut  |
+| ----------------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation           | ✓              | ✗        | ✗        | ✗        |
+| 2. Delete StatefulSet               | ✗              | ✓        | ✓        | ✓        |
+| 3. Delete Services                  | ✗              | ✓        | ✓        | ✓        |
+| 4. Delete TLS Credential Secrets    | ✗              | ✓        | ✓        | ✓        |
+| 5. Delete PVCs                      | ✗              | ✗        | ✓        | ✓        |
+| 6. Delete User Credential Secrets   | ✗              | ✗        | ✗        | ✓        |
+
+
+If the `spec.terminationPolicy` is not specified, the KubeDB operator defaults it to `Delete`.
+
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum number of consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Learn the details of KubeDB health checking from this [blog post](https://blog.byte.builders/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Elasticsearch database [here](/docs/v2024.1.31/guides/elasticsearch/README).
+- Learn how to use ElasticsearchOpsRequest [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch-ops-request/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/configuration/_index.md
new file mode 100755
index 0000000000..489af0d157
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Elasticsearch with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: es-configuration
+    name: Custom Configuration
+    parent: es-elasticsearch-guides
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/index.md b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/index.md
new file mode 100644
index 0000000000..f938f82975
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/index.md
@@ -0,0 +1,525 @@
+---
+title: Configuring Elasticsearch Combined Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: es-configuration-combined-cluster
+    name: Combined Cluster
+    parent: es-configuration
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Configure Elasticsearch Combined Cluster
+
+In an Elasticsearch combined cluster, every node can act as a master, data, and ingest node simultaneously. In this tutorial, we will see how to configure a combined cluster.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME    STATUS   AGE
+demo    Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/combined-cluster/yamls) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Available StorageClass
+
+We will have to provide a `StorageClass` in the Elasticsearch CR specification. Check the available `StorageClass` in your cluster using the following command,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  1h
+```
+
+Here, we have the `standard` StorageClass in our cluster from the [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+## Use Custom Configuration
+
+Say we want to change the default log directory for our cluster and want to configure disk-based shard allocation. Let's create the `elasticsearch.yml` file with our desired configuration.
+
+**elasticsearch.yml:**
+
+```yaml
+path:
+  logs: "/usr/share/elasticsearch/data/new-logs-dir"
+# For 100gb node space:
+# Enable disk-based shard allocation
+cluster.routing.allocation.disk.threshold_enabled: true
+# prevent Elasticsearch from allocating shards to the node if less than 15gb of space is available
+cluster.routing.allocation.disk.watermark.low: 15gb
+# relocate shards away from the node if the node has less than 10gb of free space
+cluster.routing.allocation.disk.watermark.high: 10gb
+# enforce a read-only index block if the node has less than 5gb of free space
+cluster.routing.allocation.disk.watermark.flood_stage: 5gb
+```
+
+Let's create a k8s secret containing the above configuration, where the file name is the key and the file content is the value:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-custom-config
+  namespace: demo
+stringData:
+  elasticsearch.yml: |-
+    path:
+      logs: "/usr/share/elasticsearch/data/new-logs-dir"
+    cluster.routing.allocation.disk.threshold_enabled: true
+    cluster.routing.allocation.disk.watermark.low: 15gb
+    cluster.routing.allocation.disk.watermark.high: 10gb
+    cluster.routing.allocation.disk.watermark.flood_stage: 5gb
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/combined-cluster/yamls/config-secret.yaml
+secret/es-custom-config created
+```
+
+Now that the config secret is created, it needs to be mentioned in the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object's yaml:
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-multinode
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  enableSSL: true
+  replicas: 3
+  configSecret:
+    name: es-custom-config # mentioned here!
+ storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + terminationPolicy: WipeOut +``` + +Now, create the Elasticsearch object by the following command: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/combined-cluster/yamls/es-combined.yaml +elasticsearch.kubedb.com/es-multinode created +``` + +Now, wait for the Elasticsearch to become ready: + +```bash +$ kubectl get es -n demo -w +NAME VERSION STATUS AGE +es-multinode xpack-8.11.1 Provisioning 18s +es-multinode xpack-8.11.1 Provisioning 2m5s +es-multinode xpack-8.11.1 Ready 2m5s +``` + +## Verify Configuration + +Let's connect to the Elasticsearch cluster that we have created and check the node settings to verify whether our configurations are applied or not: + +Connect to the Cluster: + +```bash +# Port-forward the service to local machine +$ kubectl port-forward -n demo svc/es-multinode 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 +``` + +Now, our Elasticsearch cluster is accessible at `localhost:9200`. + +**Connection information:** + +- Address: `localhost:9200` +- Username: + + ```bash + $ kubectl get secret -n demo es-multinode-elastic-cred -o jsonpath='{.data.username}' | base64 -d + elastic + ``` + +- Password: + + ```bash + $ kubectl get secret -n demo es-multinode-elastic-cred -o jsonpath='{.data.password}' | base64 -d + ehG7*7SJZ0o9PA05 + ``` + +Now, we will query for settings of all nodes in an Elasticsearch cluster, + +```bash +$ curl -XGET -k -u 'elastic:ehG7*7SJZ0o9PA05' "https://localhost:9200/_nodes/_all/settings?pretty" + +``` + +This will return a large JSON with node settings. 
Here is the prettified JSON response, + +```json +{ + "_nodes" : { + "total" : 3, + "successful" : 3, + "failed" : 0 + }, + "cluster_name" : "es-multinode", + "nodes" : { + "_xWvqAU4QJeMaV4MayTgeg" : { + "name" : "es-multinode-0", + "transport_address" : "10.244.0.25:9300", + "host" : "10.244.0.25", + "ip" : "10.244.0.25", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "data", + "ingest", + "master", + "ml", + "remote_cluster_client", + "transform" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "xpack.installed" : "true", + "transform.node" : "true", + "ml.max_open_jobs" : "20" + }, + "settings" : { + "cluster" : { + "name" : "es-multinode", + "routing" : { + "allocation" : { + "disk" : { + "threshold_enabled" : "true", + "watermark" : { + "low" : "15gb", + "flood_stage" : "5gb", + "high" : "10gb" + } + } + } + }, + "election" : { + "strategy" : "supports_voting_only" + }, + "initial_master_nodes" : "es-multinode-0,es-multinode-1,es-multinode-2" + }, + "node" : { + "name" : "es-multinode-0", + "attr" : { + "transform" : { + "node" : "true" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "true", + "ingest" : "true", + "master" : "true" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/new-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-multinode-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" + } + } + }, + "0q1IcSSARwu9HrQmtvjDGA" : { + "name" : "es-multinode-1", + "transport_address" : "10.244.0.27:9300", + "host" : "10.244.0.27", + "ip" : "10.244.0.27", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "data", + "ingest", + "master", + "ml", + "remote_cluster_client", + "transform" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "ml.max_open_jobs" : "20", + "xpack.installed" : "true", + "transform.node" : "true" + }, + "settings" : { + "cluster" : { + "name" : "es-multinode", + "routing" : { + "allocation" : { + "disk" : { + "threshold_enabled" : "true", + "watermark" : { + "low" : "15gb", + "flood_stage" : "5gb", + "high" : "10gb" + } + } + } + }, + "election" : { + "strategy" : "supports_voting_only" + }, + "initial_master_nodes" : "es-multinode-0,es-multinode-1,es-multinode-2" + }, + "node" : { + "name" : "es-multinode-1", + "attr" : { + "transform" : { + "node" : "true" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "true", + "ingest" : "true", + "master" : "true" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/new-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-multinode-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + 
"transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" + } + } + }, + "ITvdnOcERwuG0qBmBJLaww" : { + "name" : "es-multinode-2", + "transport_address" : "10.244.0.29:9300", + "host" : "10.244.0.29", + "ip" : "10.244.0.29", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "data", + "ingest", + "master", + "ml", + "remote_cluster_client", + "transform" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "ml.max_open_jobs" : "20", + "xpack.installed" : "true", + "transform.node" : "true" + }, + "settings" : { + "cluster" : { + "name" : "es-multinode", + "routing" : { + "allocation" : { + "disk" : { + "threshold_enabled" : "true", + "watermark" : { + "low" : "15gb", + "flood_stage" : "5gb", + "high" : "10gb" + } + } + } + }, + "election" : { + "strategy" : "supports_voting_only" + }, + "initial_master_nodes" : "es-multinode-0,es-multinode-1,es-multinode-2" + }, + "node" : { + "name" : "es-multinode-2", + "attr" : { + "transform" : { + "node" : "true" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "true", + "ingest" : "true", + "master" : "true" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/new-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-multinode-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" + } + } + } + } +} +``` + +Here we can see that our given configuration is merged to the default configurations. 
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete elasticsearch -n demo es-multinode
+
+$ kubectl delete secret -n demo es-custom-config
+
+$ kubectl delete namespace demo
+```
+
+## Next Steps
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/config-secret.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/config-secret.yaml
new file mode 100644
index 0000000000..1a57da0708
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/config-secret.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-custom-config
+  namespace: demo
+stringData:
+  elasticsearch.yml: |-
+    path:
+      logs: "/usr/share/elasticsearch/data/new-logs-dir"
+    cluster.routing.allocation.disk.threshold_enabled: true
+    cluster.routing.allocation.disk.watermark.low: 15gb
+    cluster.routing.allocation.disk.watermark.high: 10gb
+    cluster.routing.allocation.disk.watermark.flood_stage: 5gb
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/es-combined.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/es-combined.yaml
new file mode 100644
index 0000000000..b0d4213cd8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/yamls/es-combined.yaml
@@ -0,0 +1,20 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-multinode
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  enableSSL: true
+  replicas: 3
+  configSecret:
+    name: es-custom-config
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 100Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/index.md b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/index.md
new file mode 100644
index 0000000000..b17ce3dbae
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/index.md
@@ -0,0 +1,150 @@
+---
+title: Configuring Elasticsearch JVM Options
+menu:
+  docs_v2024.1.31:
+    identifier: es-configuration-jvm-options
+    name: JVM Options
+    parent: es-configuration
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Configure Elasticsearch JVM Options
+
+Elasticsearch lets users configure the JVM settings by using the `jvm.options` file. The `jvm.options` file is located in the `$ES_HOME/config` (ie. `/usr/share/elasticsearch/config`) directory.
+
+## Deploy Elasticsearch with Custom jvm.options File
+
+Before deploying the Elasticsearch instance, you need to create a k8s secret with the custom config files (here: `jvm.options`).
+ +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: es-custom-config + namespace: demo +stringData: + jvm.options: |- + ## G1GC Configuration + + 10-:-XX:+UseG1GC + 10-13:-XX:-UseConcMarkSweepGC + 10-13:-XX:-UseCMSInitiatingOccupancyOnly + 10-:-XX:G1ReservePercent=25 + 10-:-XX:InitiatingHeapOccupancyPercent=30 + + ## JVM temporary directory + -Djava.io.tmpdir=${ES_TMPDIR} + + ## heap dumps + + # generate a heap dump when an allocation from the Java heap fails + # heap dumps are created in the working directory of the JVM + -XX:+HeapDumpOnOutOfMemoryError + + # specify an alternative path for heap dumps; ensure the directory exists and + # has sufficient space + -XX:HeapDumpPath=data + + # specify an alternative path for JVM fatal error logs + -XX:ErrorFile=logs/hs_err_pid%p.log + + # JDK 9+ GC logging + 9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m +``` + +If you want to provide node-role specific settings, say you want to configure ingest nodes with a different setting than others in a topology cluster, add node `role` as a prefix in the file name. + +```yaml +stringData: + ingest-jvm.options: |- + ... ... + master-jvm.options: |- + ... ... + ... ... +``` + +Deploy the k8s secret: + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/jvm-options/yamls/custom-config.yaml +secret/es-custom-config created +``` + +Now Deploy the Elasticsearch Cluster with the custom `jvm.options` file: + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-test + namespace: demo +spec: + # Make sure that you've mentioned the config secret name here + configSecret: + name: es-custom-config + enableSSL: false + version: opensearch-2.8.0 + storageType: Durable + terminationPolicy: WipeOut + topology: + master: + suffix: master + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi + ingest: + suffix: ingest + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Deploy Elasticsearch: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/jvm-options/yamls/elasticsearch.yaml +elasticsearch/es-test created +``` + +Wait for the Elasticsearch to become ready: + +```bash +$ kubectl get elasticsearch -n demo -w +NAME VERSION STATUS AGE +es-test opensearch-2.8.0 Provisioning 12s +es-test opensearch-2.8.0 Provisioning 2m2s +es-test opensearch-2.8.0 Ready 2m2s +``` diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/custom-config.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/custom-config.yaml new file mode 100644 index 0000000000..ab59c20ede --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/custom-config.yaml @@ -0,0 +1,33 @@ +apiVersion: v1 +kind: Secret +metadata: + name: es-custom-config + namespace: demo +stringData: + jvm.options: |- + ## G1GC Configuration + + 10-:-XX:+UseG1GC + 10-13:-XX:-UseConcMarkSweepGC + 10-13:-XX:-UseCMSInitiatingOccupancyOnly + 10-:-XX:G1ReservePercent=25 + 10-:-XX:InitiatingHeapOccupancyPercent=30 + + ## JVM temporary 
directory + -Djava.io.tmpdir=${ES_TMPDIR} + + ## heap dumps + + # generate a heap dump when an allocation from the Java heap fails + # heap dumps are created in the working directory of the JVM + -XX:+HeapDumpOnOutOfMemoryError + + # specify an alternative path for heap dumps; ensure the directory exists and + # has sufficient space + -XX:HeapDumpPath=data + + # specify an alternative path for JVM fatal error logs + -XX:ErrorFile=logs/hs_err_pid%p.log + + # JDK 9+ GC logging + 9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/elasticsearch.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/elasticsearch.yaml new file mode 100644 index 0000000000..ef78df2eaf --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/jvm-options/yamls/elasticsearch.yaml @@ -0,0 +1,43 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-test + namespace: demo +spec: + configSecret: + name: es-custom-config + enableSSL: false + version: opensearch-2.8.0 + storageType: Durable + terminationPolicy: WipeOut + topology: + master: + suffix: master + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + suffix: data + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi + ingest: + suffix: ingest + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/overview/index.md b/content/docs/v2024.1.31/guides/elasticsearch/configuration/overview/index.md new file mode 100644 index 0000000000..9d7f89c43c --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/overview/index.md @@ -0,0 +1,123 @@ +--- +title: Run Elasticsearch with Custom Configuration +menu: + docs_v2024.1.31: + identifier: es-overview-configuration + name: Overview + parent: es-configuration + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Elasticsearch with Custom Configuration Files

The KubeDB operator allows a user to deploy an Elasticsearch cluster with custom configuration files. The operator also allows the user to configure security plugins such as X-Pack, SearchGuard, and OpenDistro. If the custom configuration files are not provided, the operator will start the cluster with default configurations.

## Overview

Elasticsearch has three configuration files:

- `elasticsearch.yml`: for configuring Elasticsearch
- `jvm.options`: for configuring Elasticsearch JVM settings
- `log4j2.properties`: for configuring Elasticsearch logging

In a KubeDB managed Elasticsearch cluster, the configuration files are located in the `/usr/share/elasticsearch/config` directory of the Elasticsearch pods. To know more about configuring the Elasticsearch cluster, see [here](https://www.elastic.co/guide/en/elasticsearch/reference/7.10/settings.html).
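If you want to see what a running node actually loaded, you can list the config directory inside a pod directly; a quick sketch, assuming a pod named `es-topology-master-0` in the `demo` namespace (substitute your own pod name):

```bash
# List the configuration files inside an Elasticsearch pod
$ kubectl exec -n demo es-topology-master-0 -- ls /usr/share/elasticsearch/config

# Print the elasticsearch.yml that the node is actually using
$ kubectl exec -n demo es-topology-master-0 -- cat /usr/share/elasticsearch/config/elasticsearch.yml
```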
+

The `X-Pack` security plugin has the following configuration files:

- `roles.yml` - define roles and the associated permissions.
- `role_mapping.yml` - define which roles should be assigned to each user based on their username, groups, or other metadata.

The `SearchGuard` security plugin has the following configuration files:

- `sg_config.yml` - configure authenticators and authorization backends.
- `sg_roles.yml` - define roles and the associated permissions.
- `sg_roles_mapping.yml` - map backend roles, hosts, and users to roles.
- `sg_internal_users.yml` - stores users and hashed passwords in the internal user database.
- `sg_action_groups.yml` - define named permission groups.
- `sg_tenants.yml` - defines tenants for configuring the Kibana access.
- `sg_blocks.yml` - defines blocked users and IP addresses.

The `OpenDistro` security plugin has the following configuration files:

- `internal_users.yml` - contains any initial users that you want to add to the security plugin’s internal user database.
- `roles.yml` - contains any initial roles that you want to add to the security plugin.
- `roles_mapping.yml` - maps backend roles, hosts, and users to roles.
- `action_groups.yml` - contains any initial action groups that you want to add to the security plugin.
- `tenants.yml` - contains the tenant configurations.
- `nodes_dn.yml` - contains nodesDN mapping name and corresponding values.

## Custom Config Secret

The custom configuration files are passed via a Kubernetes secret. The **file names are the keys** of the Secret with the **file-contents as the values**. The secret name needs to be mentioned in `spec.configSecret.name` of the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: es-custom-config
  namespace: demo
spec:
  version: xpack-8.11.1
  configSecret:
    name: es-custom-config
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: es-custom-config
  namespace: demo
stringData:
  elasticsearch.yml: |-
    logger.org.elasticsearch.discovery: DEBUG
```

**How are the resultant configuration files generated?**

- `YML`: The default configuration file pre-stored in the config directory is overwritten by the operator-generated configuration file (if any). Then the resultant configuration file is overwritten by the user-provided custom configuration file (if any). The [yq](https://github.com/mikefarah/yq) tool is used to merge two YAML files.

  ```bash
  $ yq merge -i --overwrite file1.yml file2.yml
  ```

- `Non-YML`: The default configuration file is replaced by the operator-generated one (if any). Then the resultant configuration file is replaced by the user-provided custom configuration file (if any).

  ```bash
  $ cp -f file2 file1
  ```

**How to provide node-role specific configurations?**

If an Elasticsearch cluster is running in the topology mode (i.e. `spec.topology` is set), a user may want to provide node-role specific configurations, say configurations that will only be merged to `master` nodes. To achieve this, users need to add the node role as a prefix to the file name.

- Format: `<node-role>-<file-name>.extension`
- Samples:
  - `data-elasticsearch.yml`: Only applied to `data` nodes.
  - `master-jvm.options`: Only applied to `master` nodes.
  - `ingest-log4j2.properties`: Only applied to `ingest` nodes.
  - `elasticsearch.yml`: applied to all nodes.
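Putting the two conventions together, here is a minimal sketch of a config secret that mixes a common file with a role-specific one (the key names follow the format above; the values are only illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: es-custom-config # referenced from spec.configSecret.name
  namespace: demo
stringData:
  # merged into every node's elasticsearch.yml
  elasticsearch.yml: |-
    node.processors: 2
  # merged only into the elasticsearch.yml of data nodes
  data-elasticsearch.yml: |-
    path:
      logs: "/usr/share/elasticsearch/data/data-logs-dir"
```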
+

**How to provide additional files that are referenced from the configurations?**

All the files provided via `configSecret` are stored in each Elasticsearch node (i.e. pod) at the `ES_CONFIG_DIR/custom_config/` (i.e. `/usr/share/elasticsearch/config/custom_config/`) directory. So, users can reference this path while configuring Elasticsearch.

## Next Steps

- Learn how to use custom configuration in combined cluster from [here](/docs/v2024.1.31/guides/elasticsearch/configuration/combined-cluster/).
- Learn how to use custom configuration in topology cluster from [here](/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/index.md b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/index.md new file mode 100644 index 0000000000..cd32df9c24 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/index.md @@ -0,0 +1,551 @@ +--- +title: Configuring Elasticsearch Topology Cluster +menu: + docs_v2024.1.31: + identifier: es-configuration-topology-cluster + name: Topology Cluster + parent: es-configuration + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Configure Elasticsearch Topology Cluster

In an Elasticsearch topology cluster, each node is assigned a dedicated role such as master, data, or ingest. The cluster must have at least one master node, one data node, and one ingest node. In this tutorial, we will see how to configure a topology cluster.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create namespace demo
namespace/demo created

$ kubectl get namespace
NAME     STATUS   AGE
demo     Active   9s
```

> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/topology-cluster/yamls) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Find Available StorageClass

We will have to provide `StorageClass` in Elasticsearch CR specification. Check available `StorageClass` in your cluster using the following command,

```bash
$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  1h
```

Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).

## Use Custom Configuration

Say we want to change the default log directories for our cluster and want to configure disk-based shard allocation. We also want the log directory names to include the node role (i.e.
demonstrating node-role specific configurations).

A user may want to provide node-role specific configurations, say configurations that will only be merged to master nodes. To achieve this, users need to add the node role as a prefix to the file name.

- Format: `<node-role>-<file-name>.extension`
- Samples:
  - `data-elasticsearch.yml`: Only applied to data nodes.
  - `master-jvm.options`: Only applied to master nodes.
  - `ingest-log4j2.properties`: Only applied to ingest nodes.
  - `elasticsearch.yml`: Empty node-role means it will be applied to all nodes.

Let's create the `elasticsearch.yml` files with our desired configurations.

**elasticsearch.yml** is for all nodes:

```yaml
node.processors: 2
```

**master-elasticsearch.yml** is for master nodes:

```yaml
path:
  logs: "/usr/share/elasticsearch/data/master-logs-dir"
```

**data-elasticsearch.yml** is for data nodes:

```yaml
path:
  logs: "/usr/share/elasticsearch/data/data-logs-dir"
# For 100gb node space:
# Enable disk-based shard allocation
cluster.routing.allocation.disk.threshold_enabled: true
# prevent Elasticsearch from allocating shards to the node if less than 15gb of space is available
cluster.routing.allocation.disk.watermark.low: 15gb
# relocate shards away from the node if the node has less than 10gb of free space
cluster.routing.allocation.disk.watermark.high: 10gb
# enforce a read-only index block if the node has less than 5gb of free space
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
```

**ingest-elasticsearch.yml** is for ingest nodes:

```yaml
path:
  logs: "/usr/share/elasticsearch/data/ingest-logs-dir"
```

Let's create a k8s secret containing the above configurations, where the file names will be the keys and the file contents will be the values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: es-custom-config
  namespace: demo
stringData:
  elasticsearch.yml: |-
    node.processors: 2
  master-elasticsearch.yml: |-
    path:
      logs: "/usr/share/elasticsearch/data/master-logs-dir"
  ingest-elasticsearch.yml: |-
    path:
      logs: "/usr/share/elasticsearch/data/ingest-logs-dir"
  data-elasticsearch.yml: |-
    path:
      logs: "/usr/share/elasticsearch/data/data-logs-dir"
    cluster.routing.allocation.disk.threshold_enabled: true
    cluster.routing.allocation.disk.watermark.low: 15gb
    cluster.routing.allocation.disk.watermark.high: 10gb
    cluster.routing.allocation.disk.watermark.flood_stage: 5gb
```

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/topology-cluster/yamls/config-secret.yaml
secret/es-custom-config created
```

Now that the config secret is created, it needs to be mentioned in the [Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/) object's yaml:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: es-topology
  namespace: demo
spec:
  enableSSL: true
  version: xpack-8.11.1
  configSecret:
    name: es-custom-config # mentioned here!
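  # KubeDB stores every key of this secret at
  # /usr/share/elasticsearch/config/custom_config/ inside the pods and merges
  # the role-prefixed files into the matching node types.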
+ storageType: Durable + terminationPolicy: WipeOut + topology: + master: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + ingest: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Now, create the Elasticsearch object by the following command: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/configuration/topology-cluster/yamls/es-topology.yaml +elasticsearch.kubedb.com/es-topology created +``` + +Now, wait for the Elasticsearch to become ready: + +```bash +$ kubectl get elasticsearch -n demo -w +NAME VERSION STATUS AGE +es-topology xpack-8.11.1 Provisioning 12s +es-topology xpack-8.11.1 Provisioning 2m2s +es-topology xpack-8.11.1 Ready 2m2s +``` + +## Verify Configuration + +Let's connect to the Elasticsearch cluster that we have created and check the node settings to verify whether our configurations are applied or not: + +Connect to the Cluster: + +```bash +# Port-forward the service to local machine +$ kubectl port-forward -n demo svc/es-topology 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 +``` + +Now, our Elasticsearch cluster is accessible at `localhost:9200`. + +**Connection information:** + +- Address: `localhost:9200` +- Username: + + ```bash + $ kubectl get secret -n demo es-topology-elastic-cred -o jsonpath='{.data.username}' | base64 -d + elastic + ``` + +- Password: + + ```bash + $ kubectl get secret -n demo es-topology-elastic-cred -o jsonpath='{.data.password}' | base64 -d + F2sIde1TbZqOR_gF + ``` + +Now, we will query for settings of all nodes in an Elasticsearch cluster, + +```bash +$ curl -XGET -k -u 'elastic:F2sIde1TbZqOR_gF' "https://localhost:9200/_nodes/_all/settings?pretty" +``` + +This will return a large JSON with node settings. 
Here is the prettified JSON response, + +```json +{ + "_nodes" : { + "total" : 3, + "successful" : 3, + "failed" : 0 + }, + "cluster_name" : "es-topology", + "nodes" : { + "PnvWHS4tTZaNLX8yiUykEg" : { + "name" : "es-topology-data-0", + "transport_address" : "10.244.0.37:9300", + "host" : "10.244.0.37", + "ip" : "10.244.0.37", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "data", + "ml", + "remote_cluster_client", + "transform" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "ml.max_open_jobs" : "20", + "xpack.installed" : "true", + "transform.node" : "true" + }, + "settings" : { + "cluster" : { + "name" : "es-topology", + "routing" : { + "allocation" : { + "disk" : { + "threshold_enabled" : "true", + "watermark" : { + "low" : "15gb", + "flood_stage" : "5gb", + "high" : "10gb" + } + } + } + }, + "election" : { + "strategy" : "supports_voting_only" + } + }, + "node" : { + "name" : "es-topology-data-0", + "processors" : "2", + "attr" : { + "transform" : { + "node" : "true" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "true", + "ingest" : "false", + "master" : "false" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/data-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-topology-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" + } + } + }, + "5EeawayWTa6aw9D8pcYlGQ" : { + "name" : "es-topology-ingest-0", + "transport_address" : "10.244.0.36:9300", + "host" : "10.244.0.36", + "ip" : "10.244.0.36", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "ingest", + "ml", + "remote_cluster_client" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "xpack.installed" : "true", + "transform.node" : "false", + "ml.max_open_jobs" : "20" + }, + "settings" : { + "cluster" : { + "name" : "es-topology", + "election" : { + "strategy" : "supports_voting_only" + } + }, + "node" : { + "name" : "es-topology-ingest-0", + "processors" : "2", + "attr" : { + "transform" : { + "node" : "false" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "false", + "ingest" : "true", + "master" : "false" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/ingest-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-topology-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" 
+ } + } + }, + "d2YO9jGNRzuPczGpITuxNA" : { + "name" : "es-topology-master-0", + "transport_address" : "10.244.0.38:9300", + "host" : "10.244.0.38", + "ip" : "10.244.0.38", + "version" : "7.9.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91", + "roles" : [ + "master", + "ml", + "remote_cluster_client" + ], + "attributes" : { + "ml.machine_memory" : "1073741824", + "ml.max_open_jobs" : "20", + "xpack.installed" : "true", + "transform.node" : "false" + }, + "settings" : { + "cluster" : { + "initial_master_nodes" : "es-topology-master-0", + "name" : "es-topology", + "election" : { + "strategy" : "supports_voting_only" + } + }, + "node" : { + "name" : "es-topology-master-0", + "processors" : "2", + "attr" : { + "transform" : { + "node" : "false" + }, + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "1073741824", + "max_open_jobs" : "20" + } + }, + "data" : "false", + "ingest" : "false", + "master" : "true" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/data/master-logs-dir", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-topology-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "compression" : "false", + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "xpack" : { + "security" : { + "http" : { + "ssl" : { + "enabled" : "true" + } + }, + "enabled" : "true", + "transport" : { + "ssl" : { + "enabled" : "true" + } + } + } + }, + "network" : { + "host" : "0.0.0.0" + } + } + } + } +} +``` + +Here we can see that our given configuration is merged to the default configurations. The common configuration `node.processors` is merged to all types of nodes. The node role-specific log directories are also configured. The disk-based shard allocation setting merged to data nodes. 
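To spot-check these values without reading the whole response, you can narrow the output with Elasticsearch's standard `filter_path` response-filtering parameter; a sketch reusing the same credentials:

```bash
$ curl -XGET -k -u 'elastic:F2sIde1TbZqOR_gF' \
    "https://localhost:9200/_nodes/_all/settings?pretty&filter_path=nodes.*.name,nodes.*.settings.path.logs,nodes.*.settings.node.processors"
```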
+ +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete elasticsearch -n demo es-topology + +$ kubectl delete secret -n demo es-custom-config + +$ kubectl delete namespace demo +``` + +## Next Steps diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/config-secret.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/config-secret.yaml new file mode 100644 index 0000000000..6eb1dbed29 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/config-secret.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Secret +metadata: + name: es-custom-config + namespace: demo +stringData: + elasticsearch.yml: |- + node.processors: 2 + master-elasticsearch.yml: |- + path: + logs: "/usr/share/elasticsearch/data/master-logs-dir" + ingest-elasticsearch.yml: |- + path: + logs: "/usr/share/elasticsearch/data/ingest-logs-dir" + data-elasticsearch.yml: |- + path: + logs: "/usr/share/elasticsearch/data/data-logs-dir" + cluster.routing.allocation.disk.threshold_enabled: true + cluster.routing.allocation.disk.watermark.low: 15gb + cluster.routing.allocation.disk.watermark.high: 10gb + cluster.routing.allocation.disk.watermark.flood_stage: 5gb \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/es-topology.yaml b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/es-topology.yaml new file mode 100644 index 0000000000..61ddb8e43e --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/configuration/topology-cluster/yamls/es-topology.yaml @@ -0,0 +1,41 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-topology + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + configSecret: + name: es-custom-config + storageType: Durable + terminationPolicy: WipeOut + topology: + master: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + ingest: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/_index.md new file mode 100755 index 0000000000..2443b24404 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/_index.md @@ -0,0 +1,22 @@ +--- +title: Run Elasticsearch with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: es-custom-rbac + name: Custom RBAC + parent: es-elasticsearch-guides + weight: 31 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/using-custom-rbac.md b/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/using-custom-rbac.md new file mode 100644 index 0000000000..c09e33f263 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/custom-rbac/using-custom-rbac.md @@ -0,0 +1,271 @@ +--- +title: Run Elasticsearch with Custom RBAC 
resources +menu: + docs_v2024.1.31: + identifier: es-custom-rbac-quickstart + name: Custom RBAC + parent: es-custom-rbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Using Custom RBAC resources

KubeDB (version 0.13.0 and higher) supports finer user control over role-based access permissions provided to an Elasticsearch instance. This tutorial will show you how to use KubeDB to run an Elasticsearch instance with custom RBAC resources.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create ns demo
namespace/demo created
```

> Note: YAML files used in this tutorial are stored in the [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Overview

KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for Elasticsearch. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in the Elasticsearch crd. If this field is left empty, the KubeDB operator will create a service account with a name matching the Elasticsearch crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.

If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.

If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually.

This guide will show you how to create a custom `Service Account`, `Role`, and `RoleBinding` for an Elasticsearch Database named `quick-elasticsearch` to provide the bare minimum access permissions.

## Custom RBAC for Elasticsearch

At first, let's create a `Service Account` in the `demo` namespace.

```bash
$ kubectl create serviceaccount -n demo my-custom-serviceaccount
serviceaccount/my-custom-serviceaccount created
```

It should create a service account.
+ +```yaml +$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + creationTimestamp: "2019-10-02T05:18:37Z" + name: my-custom-serviceaccount + namespace: demo + resourceVersion: "15521" + selfLink: /api/v1/namespaces/demo/serviceaccounts/my-custom-serviceaccount + uid: 16cf2f6c-e4d4-11e9-b2b2-42010a940225 +secrets: +- name: my-custom-serviceaccount-token-ptt25 +``` + +Now, we need to create a role that has necessary access permissions for the Elasticsearch instance named `quick-elasticsearch`. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/custom-rbac/es-custom-role.yaml +role.rbac.authorization.k8s.io/my-custom-role created +``` + +Below is the YAML for the Role we just created. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: my-custom-role + namespace: demo +rules: +- apiGroups: + - policy + resourceNames: + - elasticsearch-db + resources: + - podsecuritypolicies + verbs: + - use +``` + +This permission is required for Elasticsearch pods running on PSP enabled clusters. + +Now create a `RoleBinding` to bind this `Role` with the already created service account. + +```bash +$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo +rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created + +``` + +It should bind `my-custom-role` and `my-custom-serviceaccount` successfully. + +```yaml +$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + creationTimestamp: "2019-10-02T05:19:37Z" + name: my-custom-rolebinding + namespace: demo + resourceVersion: "15726" + selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/rolebindings/my-custom-rolebinding + uid: 3a5e9277-e4d4-11e9-b2b2-42010a940225 +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: my-custom-role +subjects: +- kind: ServiceAccount + name: my-custom-serviceaccount + namespace: demo +``` + +Now, create an Elasticsearch crd specifying `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/custom-rbac/es-custom-db.yaml +elasticsearch.kubedb.com/quick-elasticsearch created +``` + +Below is the YAML for the Elasticsearch crd we just created. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: quick-elasticsearch + namespace: demo +spec: + version: xpack-8.11.1 + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate +``` + +```bash +$ kubectl get es -n demo +NAME VERSION STATUS AGE +quick-elasticsearch 7.3.2 Running 74s +``` + +Now, wait a few minutes. the KubeDB operator will create necessary PVC, statefulset, services, secret etc. If everything goes well, we should see that a pod with the name `quick-elasticsearch-0` has been created. 
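You can also confirm that the pod is actually running under the custom service account; a quick sketch using kubectl's JSONPath output:

```bash
$ kubectl get pod -n demo quick-elasticsearch-0 -o jsonpath='{.spec.serviceAccountName}'
my-custom-serviceaccount
```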
+

Check that the statefulset's pod is running:

```bash
$ kubectl get pod -n demo quick-elasticsearch-0
NAME                    READY   STATUS    RESTARTS   AGE
quick-elasticsearch-0   1/1     Running   0          93s
```

## Reusing Service Account

An existing service account can be reused in another Elasticsearch Database. No new access permission is required to run the new Elasticsearch Database.

Now, create the Elasticsearch crd `minute-elasticsearch` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/custom-rbac/es-custom-db-two.yaml
elasticsearch.kubedb.com/minute-elasticsearch created
```

Below is the YAML for the Elasticsearch crd we just created.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: minute-elasticsearch
  namespace: demo
spec:
  version: xpack-8.11.1
  storageType: Durable
  podTemplate:
    spec:
      serviceAccountName: my-custom-serviceaccount
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: DoNotTerminate
```

```bash
$ kubectl get es -n demo
NAME                    VERSION        STATUS    AGE
minute-elasticsearch    xpack-8.11.1   Running   59s
quick-elasticsearch     xpack-8.11.1   Running   3m17s
```

Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret etc. If everything goes well, we should see that a pod with the name `minute-elasticsearch-0` has been created.

Check that the statefulset's pod is running:

```bash
$ kubectl get pod -n demo minute-elasticsearch-0
NAME                     READY   STATUS    RESTARTS   AGE
minute-elasticsearch-0   1/1     Running   0          71s
```

## Cleaning up

To cleanup the Kubernetes resources created by this tutorial, run:

```bash
kubectl patch -n demo es/quick-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
kubectl delete -n demo es/quick-elasticsearch

kubectl patch -n demo es/minute-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
kubectl delete -n demo es/minute-elasticsearch

kubectl delete -n demo role my-custom-role
kubectl delete -n demo rolebinding my-custom-rolebinding

kubectl delete sa -n demo my-custom-serviceaccount

kubectl delete ns demo
```

If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).

## Next Steps

- [Quickstart Elasticsearch](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/) with KubeDB Operator.
- [Quickstart OpenSearch](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/) with KubeDB Operator.
- [Backup & Restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) Elasticsearch instances using Stash.
- Monitor your Elasticsearch instance with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
- Monitor your Elasticsearch instance with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
- Use [kubedb cli](/docs/v2024.1.31/guides/elasticsearch/cli/cli) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/_index.md new file mode 100644 index 0000000000..e7cee0640d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Dashboard +menu: + docs_v2024.1.31: + identifier: es-dashboard + name: Elasticsearch Dashboard + parent: es-elasticsearch-guides + weight: 32 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/DashboardUI.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/DashboardUI.png new file mode 100644 index 0000000000..baff0bc214 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/DashboardUI.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/Delete.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/Delete.png new file mode 100644 index 0000000000..efa63d97c6 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/Delete.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetData.png new file mode 100644 index 0000000000..7507d84b79 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetQuery.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetQuery.png new file mode 100644 index 0000000000..ae68a51332 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetQuery.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetUpdatedData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetUpdatedData.png new file mode 100644 index 0000000000..bfb5597516 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/GetUpdatedData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/LoginPage.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/LoginPage.png new file mode 100644 index 0000000000..3776b2e897 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/LoginPage.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/PostData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/PostData.png new file mode 
100644 index 0000000000..318b665a78 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/PostData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/SampleData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/SampleData.png new file mode 100644 index 0000000000..2a6005e5de Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/images/SampleData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/index.md b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/index.md new file mode 100644 index 0000000000..5b4c0e35ad --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/index.md @@ -0,0 +1,583 @@ +--- +title: Kibana +menu: + docs_v2024.1.31: + identifier: es-dashboard-kibana + name: Kibana + parent: es-dashboard + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Deploy Kibana With ElasticsearchDashboard

## Before You Begin

At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create namespace demo
namespace/demo created

$ kubectl get namespace
NAME     STATUS   AGE
demo     Active   11s
```

> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Find Available StorageClass

We will have to provide `StorageClass` in Elasticsearch CR specification. Check available `StorageClass` in your cluster using the following command,

```bash
$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  1h
```

Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).

## Create an Elasticsearch Cluster

We are going to create an Elasticsearch Simple Dedicated Cluster in topology mode. Our cluster will consist of 2 master nodes, 3 data nodes, and 2 ingest nodes. Here, we are using version `xpack-8.2.3` of the Elasticsearch distribution for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: es-cluster
  namespace: demo
spec:
  enableSSL: true
  version: xpack-8.2.3
  storageType: Durable
  topology:
    master:
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    data:
      replicas: 3
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    ingest:
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Here,

- `spec.version` - is the name of the ElasticsearchVersion CR. Here, we are using version `xpack-8.2.3` of the Elasticsearch distribution.
- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not.
- `spec.storageType` - specifies the type of storage that will be used for the Elasticsearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Elasticsearch database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
- `spec.topology` - specifies the node-specific properties for the Elasticsearch cluster.
  - `topology.master` - specifies the properties of master nodes.
    - `master.replicas` - specifies the number of master nodes.
    - `master.storage` - specifies the master node storage information that is passed to the StatefulSet.
  - `topology.data` - specifies the properties of data nodes.
    - `data.replicas` - specifies the number of data nodes.
    - `data.storage` - specifies the data node storage information that is passed to the StatefulSet.
  - `topology.ingest` - specifies the properties of ingest nodes.
    - `ingest.replicas` - specifies the number of ingest nodes.
    - `ingest.storage` - specifies the ingest node storage information that is passed to the StatefulSet.

Let's deploy the above yaml with the following command:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster.yaml
elasticsearch.kubedb.com/es-cluster created
```

KubeDB will create the necessary resources to deploy the Elasticsearch cluster according to the above specification. Let's wait until the database is ready to use,

```bash
$ watch kubectl get elasticsearch -n demo
NAME         VERSION       STATUS   AGE
es-cluster   xpack-8.2.3   Ready    4m32s
```

Here, Elasticsearch is in `Ready` state. It means the database is ready to accept connections.
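In scripts, you may prefer to block until the cluster is ready instead of watching; a sketch, assuming kubectl v1.23+ (which supports JSONPath-based waits):

```bash
$ kubectl wait elasticsearch/es-cluster -n demo \
    --for=jsonpath='{.status.phase}'=Ready --timeout=10m
elasticsearch.kubedb.com/es-cluster condition met
```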
+ +Describe the Elasticsearch object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe elasticsearch -n demo es-cluster +Name: es-cluster +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Elasticsearch +Metadata: + Creation Timestamp: 2022-06-08T11:03:43Z + Finalizers: + kubedb.com + Generation: 1 + Resource Version: 1047187 + UID: dd90071c-8e64-420f-a836-2be9459e728a +Spec: + Auth Secret: + Name: es-cluster-elastic-cred + Enable SSL: true + Heap Size Percentage: 50 + Internal Users: + apm_system: + Backend Roles: + apm_system + Secret Name: es-cluster-apm-system-cred + beats_system: + Backend Roles: + beats_system + Secret Name: es-cluster-beats-system-cred + Elastic: + Backend Roles: + superuser + Secret Name: es-cluster-elastic-cred + kibana_system: + Backend Roles: + kibana_system + Secret Name: es-cluster-kibana-system-cred + logstash_system: + Backend Roles: + logstash_system + Secret Name: es-cluster-logstash-system-cred + remote_monitoring_user: + Backend Roles: + remote_monitoring_collector + remote_monitoring_agent + Secret Name: es-cluster-remote-monitoring-user-cred + Kernel Settings: + Privileged: true + Sysctls: + Name: vm.max_map_count + Value: 262144 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: es-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Run As User: 1000 + Resources: + Service Account Name: es-cluster + Storage Type: Durable + Termination Policy: Delete + Tls: + Certificates: + Alias: ca + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-ca-cert + Subject: + Organizations: + kubedb + Alias: transport + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-transport-cert + Subject: + Organizations: + kubedb + Alias: http + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-http-cert + Subject: + Organizations: + kubedb + Alias: client + Private Key: + Encoding: PKCS8 + Secret Name: es-cluster-client-cert + Subject: + Organizations: + kubedb + Topology: + Data: + Replicas: 3 + Resources: + Limits: + Memory: 2Gi + Requests: + Cpu: 100m + Memory: 1.5Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: data + Ingest: + Replicas: 2 + Resources: + Limits: + Memory: 2Gi + Requests: + Cpu: 100m + Memory: 1.5Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: ingest + Master: + Replicas: 2 + Resources: + Limits: + Memory: 2Gi + Requests: + Cpu: 100m + Memory: 1.5Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: master + 
Version:         xpack-8.2.3
Status:
  Conditions:
    Last Transition Time:  2022-06-08T11:03:43Z
    Message:               The KubeDB operator has started the provisioning of Elasticsearch: demo/es-cluster
    Reason:                DatabaseProvisioningStartedSuccessfully
    Status:                True
    Type:                  ProvisioningStarted
    Last Transition Time:  2022-06-08T11:09:31Z
    Message:               Internal Users for Elasticsearch: demo/es-cluster is ready.
    Observed Generation:   1
    Reason:                InternalUsersCredentialsSyncedSuccessfully
    Status:                True
    Type:                  InternalUsersSynced
    Last Transition Time:  2022-06-08T11:04:24Z
    Message:               All desired replicas are ready.
    Reason:                AllReplicasReady
    Status:                True
    Type:                  ReplicaReady
    Last Transition Time:  2022-06-08T11:08:58Z
    Message:               The Elasticsearch: demo/es-cluster is accepting client requests.
    Observed Generation:   1
    Reason:                DatabaseAcceptingConnectionRequest
    Status:                True
    Type:                  AcceptingConnection
    Last Transition Time:  2022-06-08T11:09:31Z
    Message:               The Elasticsearch: demo/es-cluster is ready.
    Observed Generation:   1
    Reason:                ReadinessCheckSucceeded
    Status:                True
    Type:                  Ready
    Last Transition Time:  2022-06-08T11:09:44Z
    Message:               The Elasticsearch: demo/es-cluster is successfully provisioned.
    Observed Generation:   1
    Reason:                DatabaseSuccessfullyProvisioned
    Status:                True
    Type:                  Provisioned
  Observed Generation:     1
  Phase:                   Ready
Events:
  Type    Reason      Age    From             Message
  ----    ------      ----   ----             -------
  Normal  Successful  6m27s  KubeDB Operator  Successfully created governing service
  Normal  Successful  6m27s  KubeDB Operator  Successfully created Service
  Normal  Successful  6m27s  KubeDB Operator  Successfully created Service
  Normal  Successful  6m25s  KubeDB Operator  Successfully created Elasticsearch
  Normal  Successful  6m25s  KubeDB Operator  Successfully created appbinding
  Normal  Successful  6m25s  KubeDB Operator  Successfully governing service
  Normal  Successful  6m22s  KubeDB Operator  Successfully governing service

```

- Here, in `Status.Conditions`
  - `Conditions.Status` is `True` for the `Condition.Type:ProvisioningStarted` which means database provisioning has been started successfully.
  - `Conditions.Status` is `True` for the `Condition.Type:ReplicaReady` which specifies all replicas are ready in the cluster.
  - `Conditions.Status` is `True` for the `Condition.Type:AcceptingConnection` which means the database is accepting connection requests.
  - `Conditions.Status` is `True` for the `Condition.Type:Ready` which means the database is ready to use.
  - `Conditions.Status` is `True` for the `Condition.Type:Provisioned` which specifies the database has been successfully provisioned.
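To check any one of these conditions without scanning the whole describe output, JSONPath filtering works here as well; a sketch:

```bash
$ kubectl get elasticsearch -n demo es-cluster \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
True
```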
+

### KubeDB Operator Generated Resources

Let's check the Kubernetes resources created by the operator on the deployment of the Elasticsearch CRO:

```bash
$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-cluster'
NAME                      READY   STATUS    RESTARTS   AGE
pod/es-cluster-data-0     1/1     Running   0          13m
pod/es-cluster-data-1     1/1     Running   0          13m
pod/es-cluster-data-2     1/1     Running   0          13m
pod/es-cluster-ingest-0   1/1     Running   0          13m
pod/es-cluster-ingest-1   1/1     Running   0          13m
pod/es-cluster-master-0   1/1     Running   0          13m
pod/es-cluster-master-1   1/1     Running   0          13m

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/es-cluster          ClusterIP   10.96.135.31   <none>        9200/TCP   13m
service/es-cluster-master   ClusterIP   None           <none>        9300/TCP   13m
service/es-cluster-pods     ClusterIP   None           <none>        9200/TCP   13m

NAME                                 READY   AGE
statefulset.apps/es-cluster-data     3/3     13m
statefulset.apps/es-cluster-ingest   2/2     13m
statefulset.apps/es-cluster-master   2/2     13m

NAME                                            TYPE                       VERSION   AGE
appbinding.appcatalog.appscode.com/es-cluster   kubedb.com/elasticsearch   8.2.0     13m

NAME                                             TYPE                       DATA   AGE
secret/es-cluster-apm-system-cred                kubernetes.io/basic-auth   2      13m
secret/es-cluster-beats-system-cred              kubernetes.io/basic-auth   2      13m
secret/es-cluster-ca-cert                        kubernetes.io/tls          2      13m
secret/es-cluster-client-cert                    kubernetes.io/tls          3      13m
secret/es-cluster-config                         Opaque                     1      13m
secret/es-cluster-elastic-cred                   kubernetes.io/basic-auth   2      13m
secret/es-cluster-http-cert                      kubernetes.io/tls          3      13m
secret/es-cluster-kibana-system-cred             kubernetes.io/basic-auth   2      13m
secret/es-cluster-logstash-system-cred           kubernetes.io/basic-auth   2      13m
secret/es-cluster-remote-monitoring-user-cred    kubernetes.io/basic-auth   2      13m
secret/es-cluster-transport-cert                 kubernetes.io/tls          3      13m

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-es-cluster-data-0     Bound    pvc-ef297d44-6adc-4307-ac53-93d09999622a   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-data-1     Bound    pvc-9bc2ccc5-f775-49f5-9148-f1b70a1cd3b3   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-data-2     Bound    pvc-fca1f3fc-a9e6-4fd2-8531-767c4f4286ee   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-ingest-0   Bound    pvc-77f128cf-d0b5-40e2-94fd-1be506a17a4a   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-ingest-1   Bound    pvc-024a1697-7737-4a53-8f48-e89ee0530cad   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-master-0   Bound    pvc-775f89a2-4fcd-4660-b0c3-8c46dd1b0a67   1Gi        RWO            standard       13m
persistentvolumeclaim/data-es-cluster-master-1   Bound    pvc-53fd7683-96a6-4737-9c4c-eade942e6743   1Gi        RWO            standard       13m
```

- `StatefulSet` - 3 StatefulSets are created for the 3 types of Elasticsearch nodes. The StatefulSets are named after the Elasticsearch instance with the given suffix: `{Elasticsearch-Name}-{Suffix}`.
- `Services` - 3 services are generated for each Elasticsearch database.
  - `{Elasticsearch-Name}` - the client service which is used to connect to the database. It points to the `ingest` nodes.
  - `{Elasticsearch-Name}-master` - the master service which is used to connect to the master nodes. It is a headless service.
  - `{Elasticsearch-Name}-pods` - the node discovery service which is used by the Elasticsearch nodes to communicate with each other. It is a headless service.
- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database. It is also named after the Elasticsearch instance.
- `Secrets` - 3 types of secrets are generated for each Elasticsearch database.
- `{Elasticsearch-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the Elasticsearch users.
  - `{Elasticsearch-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the Elasticsearch database.
  - `{Elasticsearch-Name}-config` - the default configuration secret created by the operator.

## Deploy ElasticsearchDashboard

```yaml
apiVersion: dashboard.kubedb.com/v1alpha1
kind: ElasticsearchDashboard
metadata:
  name: es-cluster-dashboard
  namespace: demo
spec:
  enableSSL: true
  databaseRef:
    name: es-cluster
  terminationPolicy: WipeOut
```

> Note: The Elasticsearch database and the Elasticsearch dashboard must be deployed in the same namespace. In this tutorial, we use the demo namespace for both.

- `spec.enableSSL` specifies whether the HTTP layer is secured with certificates or not.
- `spec.databaseRef.name` refers to the Elasticsearch database name.
- `spec.terminationPolicy` refers to the strategy to follow during dashboard deletion. `WipeOut` means that the dashboard will be deleted without restrictions. It can also be `DoNotTerminate`, which restricts deleting the dashboard. Learn more about these [here](https://kubedb.com/docs/v2022.05.24/guides/elasticsearch/concepts/elasticsearch/#specterminationpolicy).

Let's deploy the above yaml with the following command:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster-dashboard.yaml
elasticsearchdashboard.dashboard.kubedb.com/es-cluster-dashboard created
```

KubeDB will create the necessary resources to deploy the dashboard according to the above specification. Let's wait until the dashboard is ready to use,

```bash
$ watch kubectl get elasticsearchdashboard -n demo
NAME                   TYPE                            DATABASE     STATUS   AGE
es-cluster-dashboard   dashboard.kubedb.com/v1alpha1   es-cluster   Ready    9m
```

Here, Elasticsearch Dashboard is in `Ready` state.

## Connect with Elasticsearch Dashboard

We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our Elasticsearch dashboard. Then we are going to log in to Kibana with the authentication credentials and make API requests from the dev tools to check cluster health, so that we can verify that our Elasticsearch database is working well.

#### Port-forward the Service

KubeDB will create a few Services to connect with the database. Let's check the Services with the following command,

```bash
$ kubectl get service -n demo
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
es-cluster             ClusterIP   10.96.103.250   <none>        9200/TCP   13m
es-cluster-dashboard   ClusterIP   10.96.108.252   <none>        5601/TCP   11m
es-cluster-master      ClusterIP   None            <none>        9300/TCP   13m
es-cluster-pods        ClusterIP   None            <none>        9200/TCP   13m
```

Here, we are going to use the `es-cluster-dashboard` Service to connect with the dashboard. Now, let's port-forward the `es-cluster-dashboard` Service to port `5601` on the local machine:

```bash
$ kubectl port-forward -n demo service/es-cluster-dashboard 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
```

Now, our Elasticsearch cluster dashboard is accessible at `https://localhost:5601`.

#### Export the Credentials

KubeDB also creates some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `es-cluster`.
+

```bash
$ kubectl get secret -n demo | grep es-cluster
es-cluster-apm-system-cred                kubernetes.io/basic-auth              2     14m
es-cluster-beats-system-cred              kubernetes.io/basic-auth              2     14m
es-cluster-ca-cert                        kubernetes.io/tls                     2     14m
es-cluster-client-cert                    kubernetes.io/tls                     3     14m
es-cluster-config                         Opaque                                1     14m
es-cluster-elastic-cred                   kubernetes.io/basic-auth              2     14m
es-cluster-http-cert                      kubernetes.io/tls                     3     14m
es-cluster-kibana-system-cred             kubernetes.io/basic-auth              2     14m
es-cluster-logstash-system-cred           kubernetes.io/basic-auth              2     14m
es-cluster-remote-monitoring-user-cred    kubernetes.io/basic-auth              2     14m
es-cluster-token-8tbg6                    kubernetes.io/service-account-token   3     14m
es-cluster-transport-cert                 kubernetes.io/tls                     3     14m
```
Now, we can connect to the database with `es-cluster-elastic-cred`, which contains the admin-level credentials to connect with the database.

### Accessing Database Through Dashboard

To access the database through the dashboard, we have to get the credentials. We can do that with the following commands:

```bash
$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.username}' | base64 -d
elastic
$ kubectl get secret -n demo es-cluster-elastic-cred -o jsonpath='{.data.password}' | base64 -d
5m2YFv!JO6w5_LrD
```

Now, let's go to `https://localhost:5601` from our browser and log in by using those credentials.

![Login Page](images/LoginPage.png)

After logging in successfully, we will see the Elasticsearch Dashboard UI. Now, we are going to `Dev Tools` to run some queries against our Elasticsearch database.

![Dashboard UI](images/DashboardUI.png)

Here, in `Dev Tools`, we will use the `Console` section to run queries. Let's run a `GET /` query to check the node information.

![Get Query](images/GetQuery.png)

Now, we are going to insert some sample data into the index `appscode/_doc/1` of our Elasticsearch cluster by using a `PUT` query.

![Sample Data](images/SampleData.png)

Let's check that sample data in the index `appscode/_doc/1` by using a `GET` query.

![Get Data](images/GetData.png)

Now, we are going to update the sample data in the index `appscode/_doc/1` by using a `POST` query.

![Post Data](images/PostData.png)

Let's verify the index `appscode/_doc/1` again to see whether the data is updated or not.

![Get Updated Data](images/GetUpdatedData.png)

We can see that the data has been updated successfully.
Now, let's remove that index by using a `DELETE` query.

![Delete](images/Delete.png)




## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl delete elasticsearchdashboard -n demo es-cluster-dashboard

$ kubectl patch -n demo elasticsearch es-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"

$ kubectl delete elasticsearch -n demo es-cluster

# Delete namespace
$ kubectl delete namespace demo
```

## Next Steps

- Learn about [taking backup](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of an Elasticsearch database using Stash.
- Detailed concepts of the [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
- Use a [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster-dashboard.yaml b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster-dashboard.yaml new file mode 100644 index 0000000000..e4f6f8c81c --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster-dashboard.yaml @@ -0,0 +1,10 @@ +apiVersion: dashboard.kubedb.com/v1alpha1 +kind: ElasticsearchDashboard +metadata: + name: es-cluster-dashboard + namespace: demo +spec: + enableSSL: true + databaseRef: + name: es-cluster + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster.yaml b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster.yaml new file mode 100644 index 0000000000..4b849aef86 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/yamls/es-cluster.yaml @@ -0,0 +1,38 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-cluster + namespace: demo +spec: + enableSSL: true + version: xpack-8.2.3 + storageType: Durable + topology: + master: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/DashboardUI.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/DashboardUI.png new file mode 100644 index 0000000000..65d6fe4bc3 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/DashboardUI.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/Delete.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/Delete.png new file mode 100644 index 0000000000..5c17ac73c4 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/Delete.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetData.png new file mode 100644 index 0000000000..a3c886dc91 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetQuery.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetQuery.png new file mode 100644 index 0000000000..bcd2b2f091 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetQuery.png differ diff --git 
a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetUpdatedData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetUpdatedData.png new file mode 100644 index 0000000000..682b309466 Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/GetUpdatedData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/LoginPage.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/LoginPage.png new file mode 100644 index 0000000000..fdf46ec7ac Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/LoginPage.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/PostData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/PostData.png new file mode 100644 index 0000000000..cc684423fc Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/PostData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/SampleData.png b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/SampleData.png new file mode 100644 index 0000000000..214ffba2be Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/images/SampleData.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/index.md b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/index.md new file mode 100644 index 0000000000..76db420e5d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/index.md @@ -0,0 +1,574 @@ +--- +title: OpenSearch +menu: + docs_v2024.1.31: + identifier: es-dashboard-opensearch + name: OpenSearch-Dashboards + parent: es-dashboard + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Deploy OpenSearch-Dashboards With ElasticsearchDashboard + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +Elasticsearch has many distributions like `ElasticStack`, `OpenSearch`, `SearchGuard`, `OpenDistro` etc. KubeDB provides all of these distribution’s support under the Elasticsearch CR of KubeDB. So, we will deploy OpenSearch with the help of KubeDB managed Elasticsearch CR. + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+

```bash
$ kubectl create namespace demo
namespace/demo created

$ kubectl get namespace
NAME   STATUS   AGE
demo   Active   14s
```

> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Find Available StorageClass

We will have to provide a `StorageClass` in the Elasticsearch CR specification. Check the available `StorageClass` in your cluster using the following command:

```bash
$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m
```

Here, we have the `standard` StorageClass in our cluster from the [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).

## Create an OpenSearch Cluster

We are going to create an OpenSearch cluster in topology mode. Our cluster will consist of 2 master nodes, 3 data nodes, and 2 ingest nodes. Here, we are using the Elasticsearch version `opensearch-2.8.0` of the OpenSearch distribution for this demo. To learn more about the Elasticsearch CR, visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: os-cluster
  namespace: demo
spec:
  enableSSL: true
  version: opensearch-2.8.0
  storageType: Durable
  topology:
    master:
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    data:
      replicas: 3
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    ingest:
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Here,

- `spec.version` - is the name of the ElasticsearchVersion CR. Here, we are using the Elasticsearch version `opensearch-2.8.0` of the OpenSearch distribution.
- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not.
- `spec.storageType` - specifies the type of storage that will be used for the OpenSearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the OpenSearch database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
- `spec.topology` - specifies the node-specific properties for the OpenSearch cluster.
  - `topology.master` - specifies the properties of the master nodes.
    - `master.replicas` - specifies the number of master nodes.
    - `master.storage` - specifies the master node storage information that is passed to the StatefulSet.
  - `topology.data` - specifies the properties of the data nodes.
    - `data.replicas` - specifies the number of data nodes.
    - `data.storage` - specifies the data node storage information that is passed to the StatefulSet.
  - `topology.ingest` - specifies the properties of the ingest nodes.
    - `ingest.replicas` - specifies the number of ingest nodes.
    - `ingest.storage` - specifies the ingest node storage information that is passed to the StatefulSet.
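
Before applying the manifest, you can optionally confirm that the version referenced in `spec.version` exists in your cluster. This is a quick sanity check; the exact list of version names depends on your KubeDB installation:

```bash
# List the ElasticsearchVersion objects shipped with your KubeDB installation
# and filter for the OpenSearch distribution; spec.version must match one of
# the names printed here (e.g. opensearch-2.8.0).
$ kubectl get elasticsearchversions | grep opensearch
```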
+ +Let's deploy the above yaml by the following command: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch//elasticsearch-dashboard/opensearch/yamls/os-cluster.yaml +elasticsearch.kubedb.com/os-cluster created +``` +KubeDB will create the necessary resources to deploy the OpenSearch cluster according to the above specification. Let’s wait until the database to be ready to use, + +```bash +$ watch kubectl get elasticsearch -n demo +NAME VERSION STATUS AGE +os-cluster opensearch-2.8.0 Ready 3m25s +``` +Here, OpenSearch is in `Ready` state. It means the database is ready to accept connections. + +Describe the object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe elasticsearch -n demo os-cluster +Name: os-cluster +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Elasticsearch +Metadata: + Creation Timestamp: 2022-06-08T06:01:54Z + Finalizers: + kubedb.com + Generation: 1 + Resource Version: 1012763 + UID: 2aeef9b3-fcb6-47c8-9df0-54a4fa018413 +Spec: + Auth Secret: + Name: os-cluster-admin-cred + Enable SSL: true + Heap Size Percentage: 50 + Internal Users: + Admin: + Backend Roles: + admin + Reserved: true + Secret Name: os-cluster-admin-cred + Kibanaro: + Secret Name: os-cluster-kibanaro-cred + Kibanaserver: + Reserved: true + Secret Name: os-cluster-kibanaserver-cred + Logstash: + Secret Name: os-cluster-logstash-cred + Readall: + Secret Name: os-cluster-readall-cred + Snapshotrestore: + Secret Name: os-cluster-snapshotrestore-cred + Kernel Settings: + Privileged: true + Sysctls: + Name: vm.max_map_count + Value: 262144 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: os-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Expressions: + Key: ${NODE_ROLE} + Operator: Exists + Match Labels: + app.kubernetes.io/instance: os-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Resources: + Service Account Name: os-cluster + Storage Type: Durable + Termination Policy: Delete + Tls: + Certificates: + Alias: ca + Private Key: + Encoding: PKCS8 + Secret Name: os-cluster-ca-cert + Subject: + Organizations: + kubedb + Alias: transport + Private Key: + Encoding: PKCS8 + Secret Name: os-cluster-transport-cert + Subject: + Organizations: + kubedb + Alias: admin + Private Key: + Encoding: PKCS8 + Secret Name: os-cluster-admin-cert + Subject: + Organizations: + kubedb + Alias: http + Private Key: + Encoding: PKCS8 + Secret Name: os-cluster-http-cert + Subject: + Organizations: + kubedb + Alias: client + Private Key: + Encoding: PKCS8 + Secret Name: os-cluster-client-cert + Subject: + Organizations: + kubedb + Topology: + Data: + Replicas: 3 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 100m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + 
Storage: 1Gi + Storage Class Name: standard + Suffix: data + Ingest: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 100m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: ingest + Master: + Replicas: 2 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 100m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: master + Version: opensearch-2.8.0 +Status: + Conditions: + Last Transition Time: 2022-06-08T06:01:54Z + Message: The KubeDB operator has started the provisioning of Elasticsearch: demo/os-cluster + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2022-06-08T06:05:02Z + Message: All desired replicas are ready. + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2022-06-08T06:06:52Z + Message: The Elasticsearch: demo/os-cluster is accepting client requests. + Observed Generation: 1 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2022-06-08T06:11:58Z + Message: The Elasticsearch: demo/os-cluster is ready. + Observed Generation: 1 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2022-06-08T06:06:53Z + Message: The Elasticsearch: demo/os-cluster is successfully provisioned. + Observed Generation: 1 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Observed Generation: 1 + Phase: Ready +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 12m KubeDB Operator Successfully governing service + Normal Successful 12m KubeDB Operator Successfully governing service + +``` +- Here, in `Status.Conditions` + - `Conditions.Status` is `True` for the `Condition.Type:ProvisioningStarted` which means database provisioning has been started successfully. + - `Conditions.Status` is `True` for the `Condition.Type:ReplicaReady` which specifies all replicas are ready in the cluster. + - `Conditions.Status` is `True` for the `Condition.Type:AcceptingConnection` which means database has been accepting connection request. + - `Conditions.Status` is `True` for the `Condition.Type:Ready` which defines database is ready to use. + - `Conditions.Status` is `True` for the `Condition.Type:Provisioned` which specifies Database has been successfully provisioned. 
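
Instead of scanning the full `describe` output, you can also read a single condition directly. This is a small convenience check; the `jsonpath` filter below selects the condition by its type:

```bash
# Print only the status of the Ready condition from the Elasticsearch object.
$ kubectl get elasticsearch -n demo os-cluster \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
True
```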
+

### KubeDB Operator Generated Resources

After the deployment, the operator creates the following resources:

```bash
$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=os-cluster'
NAME                      READY   STATUS    RESTARTS   AGE
pod/os-cluster-data-0     1/1     Running   0          16m
pod/os-cluster-data-1     1/1     Running   0          16m
pod/os-cluster-data-2     1/1     Running   0          16m
pod/os-cluster-ingest-0   1/1     Running   0          16m
pod/os-cluster-ingest-1   1/1     Running   0          16m
pod/os-cluster-master-0   1/1     Running   0          16m
pod/os-cluster-master-1   1/1     Running   0          16m

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/os-cluster          ClusterIP   10.96.203.204   <none>        9200/TCP   16m
service/os-cluster-master   ClusterIP   None            <none>        9300/TCP   16m
service/os-cluster-pods     ClusterIP   None            <none>        9200/TCP   16m

NAME                                 READY   AGE
statefulset.apps/os-cluster-data     3/3     16m
statefulset.apps/os-cluster-ingest   2/2     16m
statefulset.apps/os-cluster-master   2/2     16m

NAME                                            TYPE                       VERSION   AGE
appbinding.appcatalog.appscode.com/os-cluster   kubedb.com/elasticsearch   1.3.2     16m

NAME                                     TYPE                       DATA   AGE
secret/os-cluster-admin-cert             kubernetes.io/tls          3      16m
secret/os-cluster-admin-cred             kubernetes.io/basic-auth   2      16m
secret/os-cluster-ca-cert                kubernetes.io/tls          2      16m
secret/os-cluster-client-cert            kubernetes.io/tls          3      16m
secret/os-cluster-config                 Opaque                     3      16m
secret/os-cluster-http-cert              kubernetes.io/tls          3      16m
secret/os-cluster-kibanaro-cred          kubernetes.io/basic-auth   2      16m
secret/os-cluster-kibanaserver-cred      kubernetes.io/basic-auth   2      16m
secret/os-cluster-logstash-cred          kubernetes.io/basic-auth   2      16m
secret/os-cluster-readall-cred           kubernetes.io/basic-auth   2      16m
secret/os-cluster-snapshotrestore-cred   kubernetes.io/basic-auth   2      16m
secret/os-cluster-transport-cert         kubernetes.io/tls          3      16m

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-os-cluster-data-0     Bound    pvc-eca004d5-b67f-4f39-95df-5fb311b75dc7   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-data-1     Bound    pvc-9319bcac-1e71-4414-a20e-4784f936bd3c   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-data-2     Bound    pvc-f1625b72-1b5c-4e4a-b1cf-231fb9e259e9   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-ingest-0   Bound    pvc-fe3b6633-bd74-465c-8732-9398417bec5a   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-ingest-1   Bound    pvc-2f60eea0-2bb8-4e42-a8e2-49232163f0a5   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-master-0   Bound    pvc-59e14a54-6311-4639-9b00-dca6304ed90c   1Gi        RWO            standard       16m
persistentvolumeclaim/data-os-cluster-master-1   Bound    pvc-37783550-3c3a-4280-b9ac-9e967ab248af   1Gi        RWO            standard       16m
```

- `StatefulSet` - 3 StatefulSets are created for the 3 types of nodes. The StatefulSets are named after the OpenSearch instance with the given suffix: `{OpenSearch-Name}-{Suffix}`.
- `Services` - 3 services are generated for each OpenSearch database.
  - `{OpenSearch-Name}` - the client service which is used to connect to the database. It points to the `ingest` nodes.
  - `{OpenSearch-Name}-master` - the master service which is used to connect to the master nodes. It is a headless service.
  - `{OpenSearch-Name}-pods` - the node discovery service which is used by the OpenSearch nodes to communicate with each other. It is a headless service.
- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database.
- `Secrets` - 3 types of secrets are generated for each OpenSearch database.
+
  - `{OpenSearch-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the OpenSearch users.
  - `{OpenSearch-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the OpenSearch database.
  - `{OpenSearch-Name}-config` - the default configuration secret created by the operator.

## Deploy ElasticsearchDashboard

```yaml
apiVersion: dashboard.kubedb.com/v1alpha1
kind: ElasticsearchDashboard
metadata:
  name: os-cluster-dashboard
  namespace: demo
spec:
  enableSSL: true
  databaseRef:
    name: os-cluster
  terminationPolicy: WipeOut
```
> Note: The OpenSearch database and the OpenSearch dashboard must be deployed in the same namespace. In this tutorial, we use the `demo` namespace for both.

- `spec.enableSSL` specifies whether the HTTP layer is secured with certificates or not.
- `spec.databaseRef.name` refers to the OpenSearch database name.
- `spec.terminationPolicy` refers to the strategy to follow during dashboard deletion. `WipeOut` means that the dashboard will be deleted without restrictions. It can also be `DoNotTerminate`, which restricts the deletion of the dashboard. Learn more about it [here](https://kubedb.com/docs/v2022.05.24/guides/elasticsearch/concepts/elasticsearch/#specterminationpolicy).

Let's deploy the above YAML with the following command:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster-dashboard.yaml
elasticsearchdashboard.dashboard.kubedb.com/os-cluster-dashboard created
```

KubeDB will create the necessary resources to deploy the OpenSearch dashboard according to the above specification. Let's wait until the dashboard is ready to use:

```bash
$ watch kubectl get elasticsearchdashboard -n demo
NAME                   TYPE                            DATABASE     STATUS   AGE
os-cluster-dashboard   dashboard.kubedb.com/v1alpha1   os-cluster   Ready    9m
```
Here, the OpenSearch Dashboard is in `Ready` state.


## Connect with OpenSearch Dashboard

We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our OpenSearch dashboard. Then we are going to log in to OpenSearch-Dashboards with the authentication credentials and make API requests from Dev Tools to check the cluster health, so that we can verify that our OpenSearch database is working well.

#### Port-forward the Service

KubeDB will create a few Services to connect with the database. Let's check the Services with the following command:

```bash
$ kubectl get service -n demo
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
os-cluster             ClusterIP   10.96.103.250   <none>        9200/TCP   19m
os-cluster-dashboard   ClusterIP   10.96.108.252   <none>        5601/TCP   19m
os-cluster-master      ClusterIP   None            <none>        9300/TCP   19m
os-cluster-pods        ClusterIP   None            <none>        9200/TCP   19m
```
Here, we are going to use the `os-cluster-dashboard` Service to connect with the dashboard. Now, let's port-forward the `os-cluster-dashboard` Service to port `5601` on the local machine:

```bash
$ kubectl port-forward -n demo service/os-cluster-dashboard 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
```
Now, our OpenSearch cluster dashboard is accessible at `https://localhost:5601`.

#### Export the Credentials

KubeDB also creates some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `os-cluster`.
+

```bash
$ kubectl get secret -n demo | grep os-cluster
os-cluster-admin-cert              kubernetes.io/tls                     3     16m
os-cluster-admin-cred              kubernetes.io/basic-auth              2     16m
os-cluster-ca-cert                 kubernetes.io/tls                     2     16m
os-cluster-client-cert             kubernetes.io/tls                     3     16m
os-cluster-config                  Opaque                                3     16m
os-cluster-dashboard-ca-cert       kubernetes.io/tls                     2     8m31s
os-cluster-dashboard-config        Opaque                                2     8m30s
os-cluster-dashboard-server-cert   kubernetes.io/tls                     3     8m30s
os-cluster-http-cert               kubernetes.io/tls                     3     16m
os-cluster-kibanaro-cred           kubernetes.io/basic-auth              2     16m
os-cluster-kibanaserver-cred       kubernetes.io/basic-auth              2     16m
os-cluster-logstash-cred           kubernetes.io/basic-auth              2     16m
os-cluster-readall-cred            kubernetes.io/basic-auth              2     16m
os-cluster-snapshotrestore-cred    kubernetes.io/basic-auth              2     16m
os-cluster-token-wq8b9             kubernetes.io/service-account-token   3     16m
os-cluster-transport-cert          kubernetes.io/tls                     3     16m
```
Now, we can connect to the database with `os-cluster-admin-cred`, which contains the admin credentials to connect with the database.

### Accessing Database Through Dashboard

To access the database through the dashboard, we have to get the credentials. We can do that with the following commands:

```bash
$ kubectl get secret -n demo os-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
admin
$ kubectl get secret -n demo os-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
Oyj8FdPzA.DZqEyS
```

Now, let's go to `https://localhost:5601` from our browser and log in by using those credentials.

![Login Page](images/LoginPage.png)

After logging in successfully, we will see the OpenSearch Dashboards UI. Now, we are going to `Dev Tools` to run some queries against our OpenSearch database.

![Dashboard UI](images/DashboardUI.png)

Here, in `Dev Tools`, we will use the `Console` section to run queries. Let's run a `GET /` query to check the node information.

![Get Query](images/GetQuery.png)

Now, we are going to insert some sample data into the index `appscode/_doc/1` of our OpenSearch cluster by using a `PUT` query.

![Sample Data](images/SampleData.png)

Let's check that sample data in the index `appscode/_doc/1` by using a `GET` query.

![Get Data](images/GetData.png)

Now, we are going to update the sample data in the index `appscode/_doc/1` by using a `POST` query.

![Post Data](images/PostData.png)

Let's verify the index `appscode/_doc/1` again to see whether the data is updated or not.

![Get Updated Data](images/GetUpdatedData.png)

We can see that the data has been updated successfully.
Now, let's remove that index by using a `DELETE` query.

![Delete](images/Delete.png)




## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl delete elasticsearchdashboard -n demo os-cluster-dashboard

$ kubectl patch -n demo elasticsearch os-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"

$ kubectl delete elasticsearch -n demo os-cluster

# Delete namespace
$ kubectl delete namespace demo
```

## Next Steps

- Learn about [taking backup](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of an Elasticsearch database using Stash.
- Detailed concepts of the [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
- Use a [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster-dashboard.yaml b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster-dashboard.yaml new file mode 100644 index 0000000000..fbe7e8f2ce --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster-dashboard.yaml @@ -0,0 +1,10 @@ +apiVersion: dashboard.kubedb.com/v1alpha1 +kind: ElasticsearchDashboard +metadata: + name: os-cluster-dashboard + namespace: demo +spec: + enableSSL: true + databaseRef: + name: os-cluster + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster.yaml b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster.yaml new file mode 100644 index 0000000000..c16bcdd228 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/yamls/os-cluster.yaml @@ -0,0 +1,38 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: os-cluster + namespace: demo +spec: + enableSSL: true + version: opensearch-2.8.0 + storageType: Durable + topology: + master: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 2 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD-complete.png b/content/docs/v2024.1.31/guides/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD-complete.png new file mode 100644 index 0000000000..dff0b432ca Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD-complete.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/monitoring/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/_index.md new file mode 100755 index 0000000000..c92245488a --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Monitoring +menu: + docs_v2024.1.31: + identifier: es-monitoring-elasticsearch + name: Monitoring + parent: es-elasticsearch-guides + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/monitoring/overview.md b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/overview.md new file mode 100644 index 0000000000..43fc423f60 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/overview.md @@ -0,0 +1,117 @@ +--- +title: Elasticsearch Monitoring Overview +description: Elasticsearch Monitoring Overview +menu: + docs_v2024.1.31: + identifier: es-monitoring-overview + name: Overview + parent: es-monitoring-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + 
installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Elasticsearch with KubeDB + +KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring. + +## Overview + +KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. Following diagram shows the logical flow of database monitoring with KubeDB. + +

+  *Figure: Database Monitoring Flow*

+

When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar into the database pod. It also creates a dedicated stats service named `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.

## Configure Monitoring

In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:

| Field                                              | Type       | Uses                                                                                                                                     |
| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                          |
| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                            |

## Sample Configuration

A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: sample-redis
  namespace: databases
spec:
  version: 6.0.20
  terminationPolicy: WipeOut
  configSecret: # configure Redis to use password for authentication
    name: redis-config
  storageType: Durable
  storage:
    storageClassName: default
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          release: prometheus
      exporter:
        args:
        - --redis.password=$(REDIS_PASSWORD)
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: _name_of_secret_with_redis_password
              key: password # key with the password
        resources:
          requests:
            memory: 512Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 250m
        securityContext:
          runAsUser: 2000
          allowPrivilegeEscalation: false
```

Assume that the above Redis instance is configured to use password authentication. So, the exporter also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.

Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace and this `ServiceMonitor` will have the `release: prometheus` label.
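
Once such a database object is provisioned, you can verify that KubeDB exposed the metrics endpoint. A quick check under the sample above; since the stats service follows the `{database-crd-name}-stats` naming convention, it would be named `sample-redis-stats` here:

```bash
# The dedicated stats service lives in the same namespace as the database.
$ kubectl get svc -n databases sample-redis-stats

# The ServiceMonitor created for it carries the labels from
# spec.monitor.prometheus.serviceMonitor.labels.
$ kubectl get servicemonitor -A -l release=prometheus
```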
+ +## Next Steps + +- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..fb235bc524 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus.md @@ -0,0 +1,371 @@ +--- +title: Monitor Elasticsearch using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: es-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: es-monitoring-elasticsearch + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Elasticsearch with builtin Prometheus + +This tutorial will show you how to monitor Elasticsearch database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/elasticsearch/monitoring/overview). 
+ +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy Elasticsearch with Monitoring Enabled + +At first, let's deploy an Elasticsearch database with monitoring enabled. Below is the Elasticsearch object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: builtin-prom-es + namespace: demo +spec: + version: xpack-8.11.1 + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper. + +Let's create the Elasticsearch crd we have shown above. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/monitoring/builtin-prom-es.yaml +elasticsearch.kubedb.com/builtin-prom-es created +``` + +Now, wait for the database to go into `Running` state. + +```bash +$ kubectl get es -n demo builtin-prom-es +NAME VERSION STATUS AGE +builtin-prom-es 7.3.2 Running 4m +``` + +KubeDB will create a separate stats service with name `{Elasticsearch crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-es" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +builtin-prom-es ClusterIP 10.0.14.79 9200/TCP 4m10s +builtin-prom-es-master ClusterIP 10.0.1.39 9300/TCP 4m10s +builtin-prom-es-stats ClusterIP 10.0.3.147 56790/TCP 3m14s +``` + +Here, `builtin-prom-es-stats` service has been created for monitoring purpose. Let's describe the service. + +```bash +$ kubectl describe svc -n demo builtin-prom-es-stats +Name: builtin-prom-es-stats +Namespace: demo +Labels: app.kubernetes.io/name=elasticsearches.kubedb.com + app.kubernetes.io/instance=builtin-prom-es + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/builtin + prometheus.io/path: /metrics + prometheus.io/port: 56790 + prometheus.io/scrape: true +Selector: app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=builtin-prom-es +Type: ClusterIP +IP: 10.0.3.147 +Port: prom-http 56790/TCP +TargetPort: prom-http/TCP +Endpoints: 10.4.0.49:56790 +Session Affinity: None +Events: +``` + +You can see that the service contains following annotations. + +```bash +prometheus.io/path: /metrics +prometheus.io/port: 56790 +prometheus.io/scrape: true +``` + +The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter. + +## Configure Prometheus Server + +Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. 
We are going to configure scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service. + +Let's configure a Prometheus scraping job to collect metrics from this service. + +```yaml +- job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +### Configure Existing Prometheus Server + +If you already have a Prometheus server running, you have to add above scraping job in the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect. + +>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart. + +### Deploy New Prometheus Server + +If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service. + +**Create ConfigMap:** + +At first, create a ConfigMap with the scraping configuration. Bellow, the YAML of ConfigMap that we are going to create in this tutorial. 
+ +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +Let's create above `ConfigMap`, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml +configmap/prometheus-config created +``` + +**Create RBAC:** + +If you are using an RBAC enabled cluster, you have to give necessary RBAC permissions for Prometheus. Let's create necessary RBAC stuffs for Prometheus, + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml +clusterrole.rbac.authorization.k8s.io/prometheus created +serviceaccount/prometheus created +clusterrolebinding.rbac.authorization.k8s.io/prometheus created +``` + +>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml). + +**Deploy Prometheus:** + +Now, we are ready to deploy Prometheus server. 
We are going to use following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy Prometheus server. + +Let's deploy the Prometheus server. + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml +deployment.apps/prometheus created +``` + +### Verify Monitoring Metrics + +Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. + +At first, let's check if the Prometheus pod is in `Running` state. + +```bash +$ kubectl get pod -n monitoring -l=app=prometheus +NAME READY STATUS RESTARTS AGE +prometheus-8568c86d86-95zhn 1/1 Running 0 77s +``` + +Now, run following command on a separate terminal to forward 9090 port of `prometheus-8568c86d86-95zhn` pod, + +```bash +$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090 +Forwarding from 127.0.0.1:9090 -> 9090 +Forwarding from [::1]:9090 -> 9090 +``` + +Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `builtin-prom-es-stats` service as one of the targets. + +

+  *Figure: Prometheus Target*

+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `Elasticsearch` database `builtin-prom-es` through stats service `builtin-prom-es-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +$ kubectl delete -n demo es/builtin-prom-es + +$ kubectl delete -n monitoring deployment.apps/prometheus + +$ kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +$ kubectl delete -n monitoring serviceaccount/prometheus +$ kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +$ kubectl delete ns demo +$ kubectl delete ns monitoring +``` + +## Next Steps + +- Learn about [backup & restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) Elasticsearch database using Stash. +- Learn how to configure [Elasticsearch Topology Cluster](/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/). +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator.md new file mode 100644 index 0000000000..7019b936f5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator.md @@ -0,0 +1,297 @@ +--- +title: Monitoring Elasticsearch using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: es-using-prometheus-operator-monitoring + name: Prometheus Operator + parent: es-monitoring-elasticsearch + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Elasticsearch Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor Elasticsearch database deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/elasticsearch/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. 
We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md). + +- If you already don't have a Prometheus server running, deploy one following tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server). + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find out required labels for ServiceMonitor + +We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in `spec.monitor.prometheus.labels` field of Elasticsearch crd so that KubeDB creates `ServiceMonitor` object accordingly. + +At first, let's find out the available Prometheus server in our cluster. + +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME AGE +monitoring prometheus 18m +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `monitoring` namespace. + +```yaml +$ kubectl get prometheus -n monitoring prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"monitoring"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}} + creationTimestamp: "2019-10-02T09:48:29Z" + generation: 1 + labels: + prometheus: prometheus + name: prometheus + namespace: monitoring + resourceVersion: "74613" + selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus + uid: ca0db414-e4f9-11e9-b2b2-42010a940225 +spec: + replicas: 1 + resources: + requests: + memory: 400Mi + serviceAccountName: prometheus + serviceMonitorSelector: + matchLabels: + release: prometheus +``` + +Notice the `spec.serviceMonitorSelector` section. Here, `release: prometheus` label is used to select `ServiceMonitor` crd. So, we are going to use this label in `spec.monitor.prometheus.labels` field of Elasticsearch crd. + +## Deploy Elasticsearch with Monitoring Enabled + +At first, let's deploy an Elasticsearch database with monitoring enabled. Below is the Elasticsearch object that we are going to create. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: coreos-prom-es
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database at a 10-second interval.
+
+Let's create the Elasticsearch object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/monitoring/coreos-prom-es.yaml
+elasticsearch.kubedb.com/coreos-prom-es created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get es -n demo coreos-prom-es
+NAME             VERSION   STATUS    AGE
+coreos-prom-es   7.3.2     Running   85s
+```
+
+KubeDB will create a separate stats service with the name `{Elasticsearch crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-es"
+NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-es          ClusterIP   10.0.1.56    <none>        9200/TCP    77s
+coreos-prom-es-master   ClusterIP   10.0.7.18    <none>        9300/TCP    77s
+coreos-prom-es-stats    ClusterIP   10.0.5.58    <none>        56790/TCP   19s
+```
+
+Here, the `coreos-prom-es-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo coreos-prom-es-stats
+Name:              coreos-prom-es-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=elasticsearches.kubedb.com
+                   app.kubernetes.io/instance=coreos-prom-es
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=coreos-prom-es
+Type:              ClusterIP
+IP:                10.0.5.58
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         10.4.0.50:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `monitoring` namespace that selects the endpoints of the `coreos-prom-es-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n monitoring
+NAME                         AGE
+kubedb-demo-coreos-prom-es   6m
+```
+
+Let's verify that the `ServiceMonitor` has the label that we specified in the `spec.monitor` section of the Elasticsearch crd.
+
+```yaml
+$ kubectl get servicemonitor -n monitoring kubedb-demo-coreos-prom-es -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2019-10-02T09:51:04Z"
+  generation: 1
+  labels:
+    release: prometheus
+    monitoring.appscode.com/service: coreos-prom-es-stats.demo
+  name: kubedb-demo-coreos-prom-es
+  namespace: monitoring
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    kind: Service
+    name: coreos-prom-es-stats
+    uid: 25f91fcc-e4fa-11e9-b2b2-42010a940225
+  resourceVersion: "75305"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubedb-demo-coreos-prom-es
+  uid: 2601a2ba-e4fa-11e9-b2b2-42010a940225
+spec:
+  endpoints:
+  - honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: elasticsearches.kubedb.com
+      app.kubernetes.io/instance: coreos-prom-es
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we specified in the Elasticsearch crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-es-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          63m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command on a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `prom-http` endpoint of the `coreos-prom-es-stats` service as one of the targets.
+
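+If you prefer the command line, the active targets are also exposed through the Prometheus HTTP API. The quick check below is a sketch; it assumes the port-forward above is still running and that `jq` is installed:
+
+```bash
+# list the service label of every active scrape target;
+# the output should include "coreos-prom-es-stats"
+$ curl -s localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels.service'
+```
+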
+*[Figure: Prometheus Target]*
+
+Check the `endpoint` and `service` labels marked by the red rectangle. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+# cleanup database
+kubectl delete -n demo es/coreos-prom-es
+
+# cleanup prometheus resources
+kubectl delete -n monitoring prometheus prometheus
+kubectl delete -n monitoring clusterrolebinding prometheus
+kubectl delete -n monitoring clusterrole prometheus
+kubectl delete -n monitoring serviceaccount prometheus
+kubectl delete -n monitoring service prometheus-operated
+
+# cleanup prometheus operator resources
+kubectl delete -n monitoring deployment prometheus-operator
+kubectl delete -n monitoring serviceaccount prometheus-operator
+kubectl delete clusterrolebinding prometheus-operator
+kubectl delete clusterrole prometheus-operator
+
+# delete namespace
+kubectl delete ns monitoring
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn about [backup & restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of Elasticsearch databases using Stash.
+- Learn how to configure an [Elasticsearch Topology Cluster](/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/).
+- Monitor your Elasticsearch database with KubeDB using the [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
+- Detailed concepts of the [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/).
+- Use a [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/_index.md
new file mode 100644
index 0000000000..a142ecdedc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/_index.md
@@ -0,0 +1,22 @@
+---
+title: Backup & Restore Elasticsearch Using Repository Plugins
+menu:
+  docs_v2024.1.31:
+    identifier: guides-es-plugins-backup
+    name: Snapshot & Restore (Repository Plugins)
+    parent: es-elasticsearch-guides
+    weight: 41
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/overview/index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/overview/index.md
new file mode 100644
index 0000000000..a25ce2ca92
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/overview/index.md
@@ -0,0 +1,57 @@
+---
+title: Backup & Restore Elasticsearch Using Snapshot Plugins
+menu:
+  docs_v2024.1.31:
+    identifier: guides-es-plugins-backup-overview
+    name: Overview
+    parent: guides-es-plugins-backup
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Snapshot and Restore Using Repository Plugins
+
+A snapshot is a backup taken from a running Elasticsearch cluster. You can take snapshots of an entire cluster, including all its data streams and indices. You can also take snapshots of only specific data streams or indices in the cluster.
+
+Snapshots can be stored in remote repositories like Amazon S3, Microsoft Azure, Google Cloud Storage, and other platforms supported by a repository plugin.
+
+Find more details at the official docs: [Snapshot and Restore](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/snapshot-restore.html#snapshot-restore)
+
+## KubeDB Managed Elasticsearch Docker Images
+
+To enable the snapshot and restore feature, users need to install the respective repository plugin. For example, to install the [S3 Repository](https://www.elastic.co/guide/en/elasticsearch/plugins/7.14/repository-s3.html) plugin, the following command needs to be run as the root user:
+
+```bash
+sudo bin/elasticsearch-plugin install repository-s3
+```
+
+While running an Elasticsearch cluster in Kubernetes, you don't always have the privilege to run as the root user. Moreover, the plugin must be installed on every node in the cluster, and each node must be restarted after installation, which brings more operational complexity. Here come the KubeDB Elasticsearch docker images (i.e. `Distribution=KubeDB`) with the pre-installed plugins repository-s3, repository-azure, repository-hdfs, and repository-gcs, as listed below.
+
+```bash
+$ kubectl get elasticsearchversions
+NAME                  VERSION   DISTRIBUTION   DB_IMAGE                                        DEPRECATED   AGE
+kubedb-xpack-7.12.0   7.12.0    KubeDB         kubedb/elasticsearch:7.12.0-xpack-v2021.08.23                4h44m
+kubedb-xpack-7.13.2   7.13.2    KubeDB         kubedb/elasticsearch:7.13.2-xpack-v2021.08.23                4h44m
+xpack-8.11.1          7.14.0    KubeDB         kubedb/elasticsearch:7.14.0-xpack-v2021.08.23                4h44m
+kubedb-xpack-7.9.1    7.9.1     KubeDB         kubedb/elasticsearch:7.9.1-xpack-v2021.08.23                 4h44m
+```
+
+In case you want to build your own custom Elasticsearch image with a custom set of Elasticsearch plugins, visit the [elasticsearch-docker](https://github.com/kubedb/elasticsearch-docker/tree/release-7.14-xpack) GitHub repository.
+
+## What's Next?
+
+- Snapshot and restore Elasticsearch cluster data using the [S3 Repository Plugin](/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/images/create-s3-bucket.png b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/images/create-s3-bucket.png
new file mode 100644
index 0000000000..712dec6207
Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/images/create-s3-bucket.png differ
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/index.md
new file mode 100644
index 0000000000..cd90efc2b2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/index.md
@@ -0,0 +1,420 @@
+---
+title: Snapshot and Restore Using S3 Repository Plugin
+description: Snapshot and Restore of Elasticsearch Cluster Using S3 Repository Plugin
+menu:
+  docs_v2024.1.31:
+    identifier: guides-es-plugins-backup-s3-repository
+    name: S3 Repository Plugin
+    parent: guides-es-plugins-backup
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Snapshot and Restore Using S3 Repository Plugin
+
+The [S3 repository](https://www.elastic.co/guide/en/elasticsearch/plugins/7.14/repository-s3.html) plugin adds support for using AWS S3 as a repository for Snapshot/Restore. It also works with other S3-compatible storage services such as [Linode Object Storage](https://www.linode.com/docs/guides/how-to-use-object-storage/).
+
+For the demo, we are going to show you how to snapshot a KubeDB-managed Elasticsearch cluster and restore data from a previously taken snapshot.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME   STATUS   AGE
+demo   Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in the [guides/elasticsearch/plugins-backup/s3-repository/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/plugins-backup/s3-repository/yamls) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs)
+
+## Create S3 Compatible Storage
+
+We are going to use [Linode Object Storage](https://www.linode.com/docs/guides/how-to-use-object-storage/), which is S3-compatible. But you can use any S3-compatible storage that suits you best. Let's [create](https://cloud.linode.com/object-storage/buckets/create) a `sample-s3-bucket` to store snapshots in and later restore from.
+
+![create sample s3 bucket](images/create-s3-bucket.png)
+
+You also need to [create](https://cloud.linode.com/object-storage/access-keys) an `access_key` and `secret_key` so that your Elasticsearch cluster can connect to the bucket.
+
+## Deploy Elasticsearch Cluster and Populate Data
+
+For the demo, we are going to use Elasticsearch docker images from the KubeDB distribution with the pre-installed S3 repository plugin.
+
+### Secure Client Settings
+
+To make the plugin work, we need to create a k8s secret with the Elasticsearch secure settings:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-secure-settings
+  namespace: demo
+stringData:
+  password: strong-password
+  s3.client.default.access_key: 6BU5GFIIUC2********
+  s3.client.default.secret_key: DD1FS5NAiPf********
+```
+
+> N.B.: Here, the `password` is the Elasticsearch `KEYSTORE_PASSWORD`; if you do not provide it, it defaults to an empty string (`""`).
+
+Let's create the k8s secret with secure settings:
+
+```bash
+$ kubectl apply -f secure-settings-secret.yaml
+secret/es-secure-settings created
+```
+
+In [S3 Client Settings](https://www.elastic.co/guide/en/elasticsearch/plugins/7.14/repository-s3-client.html), if you do not configure the `endpoint`, it defaults to `s3.amazonaws.com`. Since we are using a Linode bucket instead of AWS S3, we need to configure the endpoint too. Let's create another secret with custom client configurations:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: es-custom-config
+  namespace: demo
+stringData:
+  elasticsearch.yml: |-
+    s3.client.default.endpoint: us-east-1.linodeobjects.com
+```
+
+> N.B.: In Elasticsearch, only secure settings go into `elasticsearch.keystore`; other settings are put into the `elasticsearch.yml` config file. That's why two different k8s secrets are used.
+
+Let's create the k8s secret with custom configurations:
+
+```bash
+$ kubectl apply -f custom-configuration.yaml
+secret/es-custom-config created
+```
+
+### Deploy Elasticsearch Cluster
+
+Now that we have deployed our configuration secrets, it's time to deploy our Elasticsearch instance.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: sample-es
+  namespace: demo
+spec:
+  # Custom configuration, which will update elasticsearch.yml
+  configSecret:
+    name: es-custom-config
+  # Secure settings which will be stored in elasticsearch.keystore
+  secureConfigSecret:
+    name: es-secure-settings
+  enableSSL: true
+  # we are using ElasticsearchVersion with pre-installed s3 repository plugin
+  version: xpack-8.11.1
+  storageType: Durable
+  replicas: 3
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's deploy the Elasticsearch and wait for it to become ready to use:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/plugins-backup/s3-repository/yamls/elasticsearch.yaml
+elasticsearch.kubedb.com/sample-es created
+```
+
+```bash
+$ kubectl get es -n demo -w
+NAME        VERSION        STATUS         AGE
+sample-es   xpack-8.11.1                  0s
+sample-es   xpack-8.11.1   Provisioning   19s
+sample-es   xpack-8.11.1   Ready          41s
+```
+
+### Populate Data
+
+To connect to our Elasticsearch cluster, let's port-forward the Elasticsearch service to the local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/sample-es 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+Keep it like that and switch to another terminal window:
+
+```bash
+$ export ELASTIC_USER=$(kubectl get secret -n demo sample-es-elastic-cred -o jsonpath='{.data.username}' | base64 -d)
+
+$ export ELASTIC_PASSWORD=$(kubectl get secret -n demo sample-es-elastic-cred -o jsonpath='{.data.password}' | base64 -d)
+
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "sample-es",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 3,
+  "number_of_data_nodes" : 3,
+  "active_primary_shards" : 1,
+  "active_shards" : 2,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+So, our cluster status is green. Let's create some indices with dummy data:
+
+```bash
+$ curl -XPOST -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/products/_doc?pretty" -H 'Content-Type: application/json' -d '
+{
+    "name": "KubeDB",
+    "vendor": "AppsCode Inc.",
+    "description": "Database Operator for Kubernetes"
+}
+'
+
+$ curl -XPOST -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/companies/_doc?pretty" -H 'Content-Type: application/json' -d '
+{
+    "name": "AppsCode Inc.",
+    "mission": "Accelerate the transition to Containers by building a Kubernetes-native Data Platform",
+    "products": ["KubeDB", "Stash", "KubeVault", "Kubeform", "ByteBuilders"]
+}
+'
+```
+
+Now, let's verify that the indices have been created successfully.
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v&s=index&pretty"
+health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
+green  open   .geoip_databases oiaZfJA8Q5CihQon0oR8hA   1   1         42            0     81.6mb         40.8mb
+green  open   companies        GuGisWJ8Tkqnq8vhREQ2-A   1   1          1            0     11.5kb          5.7kb
+green  open   products         wyu-fImDRr-Hk_GXVF7cDw   1   1          1            0     10.6kb          5.3kb
+```
+
+### Repository Settings
+
+The s3 repository type supports a [number of settings](https://www.elastic.co/guide/en/elasticsearch/plugins/7.14/repository-s3-repository.html#repository-s3-repository) to customize how data is stored in S3. These can be specified when creating the repository.
+
+Let's create the `_snapshot` repository `sample_s3_repo` with our bucket name `sample-s3-bucket`:
+
+```bash
+$ curl -k -X PUT -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_snapshot/sample_s3_repo?pretty" -H 'Content-Type: application/json' -d'
+{
+  "type": "s3",
+  "settings": {
+    "bucket": "sample-s3-bucket"
+  }
+}
+'
+{
+  "acknowledged" : true
+}
+```
+
+We've successfully created our repository and are ready to take our first snapshot.
+
+## Create a Snapshot
+
+A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the cluster. For more details, visit [Create a snapshot](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/snapshots-take-snapshot.html).
+
+```bash
+$ curl -k -X PUT -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_snapshot/sample_s3_repo/snapshot_1?wait_for_completion=true&pretty"
+
+{
+  "snapshot" : {
+    "snapshot" : "snapshot_1",
+    "uuid" : "JKoF5sgtS3WPBQ8A_OvWbw",
+    "repository" : "sample_s3_repo",
+    "version_id" : 7140099,
+    "version" : "7.14.0",
+    "indices" : [
+      ".geoip_databases",
+      "companies",
+      "products"
+    ],
+    "data_streams" : [ ],
+    "include_global_state" : true,
+    "state" : "SUCCESS",
+    "start_time" : "2021-08-24T14:45:38.930Z",
+    "start_time_in_millis" : 1629816338930,
+    "end_time" : "2021-08-24T14:46:16.946Z",
+    "end_time_in_millis" : 1629816376946,
+    "duration_in_millis" : 38016,
+    "failures" : [ ],
+    "shards" : {
+      "total" : 3,
+      "failed" : 0,
+      "successful" : 3
+    },
+    "feature_states" : [
+      {
+        "feature_name" : "geoip",
+        "indices" : [
+          ".geoip_databases"
+        ]
+      }
+    ]
+  }
+}
+```
+
+We've successfully taken our first snapshot.
+
+## Delete Data and Restore a Snapshot
+
+Let's delete all the indices:
+
+```bash
+$ curl -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" -X DELETE "https://localhost:9200/_all?pretty"
+{
+  "acknowledged" : true
+}
+```
+
+List the indices and verify the deletion:
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v&s=index&pretty"
+health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
+green  open   .geoip_databases oiaZfJA8Q5CihQon0oR8hA   1   1         42            0     81.6mb         40.8mb
+```
+
+For more details about restore, visit [Restore a snapshot](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/snapshots-restore-snapshot.html#snapshots-restore-snapshot).
+
+Let's restore the data from our `snapshot_1`:
+
+```bash
+$ curl -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" -X POST "https://localhost:9200/_snapshot/sample_s3_repo/snapshot_1/_restore?pretty" -H 'Content-Type: application/json' -d'
+{
+  "indices": "companies,products"
+}
+'
+
+{
+  "accepted" : true
+}
+```
+
+We've successfully restored our indices.
+
+> N.B.: We only wanted to restore the indices we created, but if you want to overwrite everything with the snapshot data, you can do it by setting `include_global_state` to `true` while restoring.
+
+### Verify Data
+
+To verify our data, let's list the indices:
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v&s=index&pretty"
+health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
+green  open   .geoip_databases oiaZfJA8Q5CihQon0oR8hA   1   1         42            0     81.6mb         40.8mb
+green  open   companies        drsv-5tvQwCcte7bkUT0uQ   1   1          1            0     11.7kb          5.8kb
+green  open   products         7TXoXy5kRFiVgZDuyqffQA   1   1          1            0     10.6kb          5.3kb
+```
+
+Check the content inside:
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/products/_search?pretty"
+{
+  "took" : 3,
+  "timed_out" : false,
+  "_shards" : {
+    "total" : 1,
+    "successful" : 1,
+    "skipped" : 0,
+    "failed" : 0
+  },
+  "hits" : {
+    "total" : {
+      "value" : 1,
+      "relation" : "eq"
+    },
+    "max_score" : 1.0,
+    "hits" : [
+      {
+        "_index" : "products",
+        "_type" : "_doc",
+        "_id" : "36SEeHsBS6UMHADkEvJw",
+        "_score" : 1.0,
+        "_source" : {
+          "name" : "KubeDB",
+          "vendor" : "AppsCode Inc.",
+          "description" : "Database Operator for Kubernetes"
+        }
+      }
+    ]
+  }
+}
+```
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/companies/_search?pretty"
+{
+  "took" : 3,
+  "timed_out" : false,
+  "_shards" : {
+    "total" : 1,
+    "successful" : 1,
+    "skipped" : 0,
+    "failed" : 0
+  },
+  "hits" : {
+    "total" : {
+      "value" : 1,
+      "relation" : "eq"
+    },
+    "max_score" : 1.0,
+    "hits" : [
+      {
+        "_index" : "companies",
+        "_type" : "_doc",
+        "_id" : "4KSFeHsBS6UMHADkGvL5",
+        "_score" : 1.0,
+        "_source" : {
+          "name" : "AppsCode Inc.",
+          "mission" : "Accelerate the transition to Containers by building a Kubernetes-native Data Platform",
+          "products" : [
+            "KubeDB",
+            "Stash",
+            "KubeVault",
+            "Kubeform",
+            "ByteBuilders"
+          ]
+        }
+      }
+    ]
+  }
+}
+```
+
+So, we have successfully restored our data from the snapshot.
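+
+As a final step, you may want to clean up. The sketch below assumes the same resource names used in this tutorial and that the port-forward is still running; deleting a snapshot by name is the standard Elasticsearch snapshot API:
+
+```bash
+# delete the snapshot from the repository (optional; it only frees bucket storage)
+$ curl -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" -X DELETE "https://localhost:9200/_snapshot/sample_s3_repo/snapshot_1?pretty"
+
+# delete the Elasticsearch cluster and the demo namespace
+$ kubectl delete -n demo es/sample-es
+$ kubectl delete ns demo
+```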
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/custom-configuration.yaml b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/custom-configuration.yaml new file mode 100644 index 0000000000..395e67b122 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/custom-configuration.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: es-custom-config + namespace: demo +stringData: + elasticsearch.yml: |- + s3.client.default.endpoint: us-east-1.linodeobjects.com \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/elasticsearch.yaml b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/elasticsearch.yaml new file mode 100644 index 0000000000..0ec15069ef --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/elasticsearch.yaml @@ -0,0 +1,24 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: sample-es + namespace: demo +spec: + # Custom configuration, which will update elasticsearch.yml + configSecret: + name: es-custom-config + # Secure settings which will be stored in elasticsearch.keystore + secureConfigSecret: + name: es-secure-settings + enableSSL: true + # we are using ElasticsearchVersion with pre-installed s3 repository plugin + version: xpack-8.11.1 + storageType: Durable + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/secure-setting-secret.yaml b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/secure-setting-secret.yaml new file mode 100644 index 0000000000..03a319546a --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins-backup/s3-repository/yamls/secure-setting-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: es-secure-settings + namespace: demo +stringData: + password: strong-password + s3.client.default.access_key: 6BU5GF******************** + s3.client.default.secret_key: DD1FS5NAiPf488********************* \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/_index.md new file mode 100755 index 0000000000..8d0822a389 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/_index.md @@ -0,0 +1,22 @@ +--- +title: Using plugins and extensions with Elasticsearch +menu: + docs_v2024.1.31: + identifier: es-plugin-elasticsearch + name: Extensions & Plugins + parent: es-elasticsearch-guides + weight: 60 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/_index.md new file mode 100755 index 0000000000..46013b48f1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/_index.md @@ -0,0 +1,22 @@ +--- +title: Using Search Guard with Elasticsearch +menu: + docs_v2024.1.31: + identifier: 
es-search-guard-elasticsearch
+    name: Search Guard
+    parent: es-plugin-elasticsearch
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration.md
new file mode 100644
index 0000000000..f8fec7818a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration.md
@@ -0,0 +1,457 @@
+---
+title: Search Guard Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: es-configuration-search-guard
+    name: Configuration
+    parent: es-search-guard-elasticsearch
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Search Guard Configuration
+
+Search Guard configuration enables the following basic flow:
+
+- Search Guard **authenticates** the credentials against the configured authentication backend(s).
+- Search Guard authorizes the user by retrieving a list of the user's roles from the configured authorization backend.
+  - Roles retrieved from authorization backends are called backend roles.
+- Search Guard maps the user and backend roles to Search Guard roles.
+- Search Guard determines the permissions associated with the Search Guard role and decides whether the action the user wants to perform is allowed or not.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+
+$ kubectl get ns demo
+NAME   STATUS   AGE
+demo   Active   5s
+```
+
+We will use `htpasswd` to hash user passwords. Install the `apache2-utils` package for this.
+
+```bash
+$ sudo apt-get install apache2-utils
+```
+
+To keep configuration files separated, open a new terminal and create a directory `/tmp/kubedb/sg`
+
+```bash
+mkdir -p /tmp/kubedb/sg
+cd /tmp/kubedb/sg
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create Configuration
+
+The configuration consists of the following files.
+
+- `sg_config.yml` configures authenticators and authorization backends.
+- `sg_internal_users.yml` stores users, roles and hashed passwords in the internal user database.
+- `sg_action_groups.yml` defines named permission groups.
+- `sg_roles.yml` defines roles and the associated permissions.
+- `sg_roles_mapping.yml` maps backend roles, hosts and users to roles.
+
+If you do not provide a Secret for configuration, KubeDB will create one with a default setup.
+
+### sg_config.yml
+
+The main configuration file for authentication and authorization modules is `sg_config.yml`. It defines how Search Guard retrieves the user credentials, how it verifies these credentials, and how additional user roles are fetched from backend systems.
+
+It has two main parts:
+
+```yml
+searchguard:
+  dynamic:
+    authc:
+      ...
+    authz:
+      ...
+```
+
+See details about [authentication and authorization](http://docs.search-guard.com/v5/authentication-authorization) in the Search Guard documentation.
+
+We will use the following config data in this tutorial:
+
+```yml
+searchguard:
+  dynamic:
+    authc:
+      basic_internal_auth_domain:
+        enabled: true
+        order: 4
+        http_authenticator:
+          type: basic
+          challenge: true
+        authentication_backend:
+          type: internal
+```
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-config/sg_config.yml
+```
+
+### sg_internal_users.yml
+
+Internal users are configured in `sg_internal_users.yml`.
+
+Syntax:
+
+```yml
+<username>:
+  hash: <hashed password>
+  roles:
+    - <rolename>
+    - <rolename>
+```
+
+See details about [internal users](http://docs.search-guard.com/v5/internal-users-database) in the Search Guard documentation.
+
+KubeDB needs the users `admin` and `readall` for the backup and restore process.
+
+Create two hashed passwords for the users `admin` and `readall`:
+
+```bash
+export ADMIN_PASSWORD=admin-password
+export READALL_PASSWORD=readall-password
+
+export ADMIN_PASSWORD_HASHED=$(htpasswd -bnBC 12 "" $ADMIN_PASSWORD | tr -d ':\n' | sed 's/$2y/$2a/')
+export READALL_PASSWORD_HASHED=$(htpasswd -bnBC 12 "" $READALL_PASSWORD | tr -d ':\n' | sed 's/$2y/$2a/')
+```
+
+Here,
+
+- `admin` user password : `admin-password`
+- `readall` user password : `readall-password`
+
+The following template file is used to substitute the passwords for the internal users.
+
+```yaml
+admin:
+  hash: $ADMIN_PASSWORD_HASHED
+
+readall:
+  hash: $READALL_PASSWORD_HASHED
+```
+
+Run the following command to write the user information with passwords into the `sg_internal_users.yml` file.
+
+```bash
+$ curl https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-config/sg_internal_users.yml | envsubst > sg_internal_users.yml
+```
+
+> Note: If the user does not provide `spec.authSecret`, KubeDB will generate random passwords for both the admin and readall users.
+
+### sg_action_groups.yml
+
+An action group is simply a collection of permissions with a telling name. Action groups are defined in the file `sg_action_groups.yml`
+and can be referred to in `sg_roles.yml`.
+
+The file structure is very simple:
+
+```yml
+<action group name>:
+  - '<permission or action group>'
+  - '<permission or action group>'
+  - ...
+```
+
+See details about [action groups](http://docs.search-guard.com/v5/action-groups) in the Search Guard documentation.
+
+Run the following command to get the action groups we will use in this tutorial:
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-config/sg_action_groups.yml
+```
+
+```yml
+UNLIMITED:
+  - "*"
+
+READ:
+  - "indices:data/read*"
+  - "indices:admin/mappings/fields/get*"
+
+CLUSTER_COMPOSITE_OPS_RO:
+  - "indices:data/read/mget"
+  - "indices:data/read/msearch"
+  - "indices:data/read/mtv"
+  - "indices:data/read/coordinate-msearch*"
+  - "indices:admin/aliases/exists*"
+  - "indices:admin/aliases/get*"
+
+CLUSTER_KUBEDB_SNAPSHOT:
+  - "indices:data/read/scroll*"
+
+INDICES_KUBEDB_SNAPSHOT:
+  - "indices:admin/get"
+```
+
+### sg_roles.yml
+
+Search Guard roles and their associated permissions are defined in the file `sg_roles.yml`.
+
+The syntax to define a role, and associate permissions with it, is as follows:
+
+```yml
+<sg_role_name>:
+  cluster:
+    - '<permission or action group>'
+    - ...
+  indices:
+    '<indexname or alias>':
+      '<document type>':
+        - '<permission or action group>'
+        - ...
+      '<document type>':
+        - '<permission or action group>'
+        - ...
+      _dls_: '<dls query>'
+      _fls_:
+        - '<field>'
+        - ...
+  tenants:
+    <tenantname>: <RW|RO>
+    <tenantname>: <RW|RO>
+```
+
+See details about [roles and permissions](http://docs.search-guard.com/v5/roles-permissions) in the Search Guard documentation.
+
+We will use the following roles for Search Guard users:
+
+```yaml
+sg_all_access:
+  cluster:
+    - UNLIMITED
+  indices:
+    '*':
+      '*':
+        - UNLIMITED
+  tenants:
+    adm_tenant: RW
+    test_tenant_ro: RW
+
+sg_readall:
+  cluster:
+    - CLUSTER_COMPOSITE_OPS_RO
+    - CLUSTER_KUBEDB_SNAPSHOT
+  indices:
+    '*':
+      '*':
+        - READ
+        - INDICES_KUBEDB_SNAPSHOT
+```
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-config/sg_roles.yml
+```
+
+### sg_roles_mapping.yml
+
+Backend roles are roles that Search Guard retrieves during the authentication and authorization process. These roles are then mapped to the roles Search Guard uses to define which permissions a given user or host possesses.
+
+In the configuration KubeDB sets for Search Guard, backend roles come from:
+
+- Roles defined in `sg_internal_users.yml` for particular users
+- A JSON web token, if you're using JWT authentication
+- HTTP headers, if you're using Proxy authentication
+
+#### Mapping
+
+Backend users, roles and hosts are mapped to Search Guard roles in the file `sg_roles_mapping.yml`.
+
+Syntax:
+
+```yml
+<sg_role_name>:
+  users:
+    - <username>
+    - ...
+  backendroles:
+    - <backendrole>
+    - ...
+  hosts:
+    - <hostname>
+    - ...
+```
+
+See details about [backend roles mapping](http://docs.search-guard.com/v5/mapping-users-roles) in the Search Guard documentation.
+
+Get the roles mapping by running:
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-config/sg_roles_mapping.yml
+```
+
+```yml
+sg_all_access:
+  users:
+    - admin
+
+sg_readall:
+  users:
+    - readall
+```
+
+### Flow Diagram for User readall
+
+*[Figure: snapshot-console]*
+
+
+## Create Secret
+
+Now create a Secret with these files to use in your Elasticsearch object.
+
+```bash
+$ kubectl create secret generic -n demo config-elasticsearch-auth \
+    --from-file=sg_config.yml \
+    --from-file=sg_internal_users.yml \
+    --from-file=sg_action_groups.yml \
+    --from-file=sg_roles.yml \
+    --from-file=sg_roles_mapping.yml \
+    --from-literal=ADMIN_USERNAME=admin \
+    --from-literal=ADMIN_PASSWORD=$ADMIN_PASSWORD \
+    --from-literal=READALL_USERNAME=readall \
+    --from-literal=READALL_PASSWORD=$READALL_PASSWORD
+
+secret/config-elasticsearch-auth created
+```
+
+Here,
+
+- `ADMIN_USERNAME` and `ADMIN_PASSWORD` are used for initializing the database from a previous Snapshot.
+- `READALL_USERNAME` and `READALL_PASSWORD` are used for taking backups.
+
+If you do not use these two features of Snapshot, you can skip adding these keys.
+
+```bash
+--from-literal=ADMIN_USERNAME=admin
+--from-literal=ADMIN_PASSWORD=$ADMIN_PASSWORD
+--from-literal=READALL_USERNAME=readall
+--from-literal=READALL_PASSWORD=$READALL_PASSWORD
+```
+
+> Note: `ADMIN_PASSWORD` and `READALL_PASSWORD` are the same passwords you have provided as hashed values in `sg_internal_users.yml`. It is not possible for KubeDB to figure out the passwords from the hashed values. So, you have to provide these passwords as separate keys in the secret. Otherwise, KubeDB will not be able to perform backup or initialization.
+
+Use this Secret `config-elasticsearch-auth` in the `spec.authSecret` field of your Elasticsearch object.
+
+## Create an Elasticsearch database
+
+Below is the Elasticsearch object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: config-elasticsearch
+  namespace: demo
+spec:
+  version: searchguard-7.9.3
+  authSecret:
+    name: config-elasticsearch-auth
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Here,
+
+- `spec.authSecret` specifies the Secret with the Search Guard configuration and basic auth for the internal users.
+
+Create the example above with the following command:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/config-elasticsearch.yaml
+elasticsearch.kubedb.com/config-elasticsearch created
+```
+
+KubeDB operator sets the `status.phase` to `Running` once the database is successfully created.
+
+```bash
+$ kubectl get es -n demo config-elasticsearch -o wide
+NAME                   VERSION   STATUS    AGE
+config-elasticsearch   6.3-v1    Running   1m
+```
+
+## Connect to Elasticsearch Database
+
+At first, forward port 9200 of the `config-elasticsearch-0` pod. Run the following command on a separate terminal,
+
+```bash
+$ kubectl port-forward -n demo config-elasticsearch-0 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+Now, you can connect to this database at `localhost:9200`.
+
+```bash
+$ curl --user "admin:$ADMIN_PASSWORD" "localhost:9200/_cluster/health?pretty"
+```
+
+```json
+{
+  "cluster_name" : "config-elasticsearch",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 1,
+  "number_of_data_nodes" : 1,
+  "active_primary_shards" : 1,
+  "active_shards" : 1,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo es/config-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo es/config-elasticsearch
+
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate).
+- Learn how to [use TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls) to connect Elasticsearch.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/disable-searchguard.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/disable-searchguard.md
new file mode 100644
index 0000000000..9be6e17645
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/disable-searchguard.md
@@ -0,0 +1,142 @@
+---
+title: Disable Search Guard
+menu:
+  docs_v2024.1.31:
+    identifier: es-disable-search-guard
+    name: Disable Search Guard
+    parent: es-search-guard-elasticsearch
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Disable Search Guard Plugin
+
+Databases are precious. You will definitely not want to leave your production database unprotected. Hence, KubeDB ships with the Search Guard plugin integrated with it. It provides you authentication, authorization, and TLS security. However, you can disable the Search Guard plugin by setting the `spec.authPlugin` field of the Elasticsearch object to `None`.
+
+This tutorial will show you how to disable the Search Guard plugin for an Elasticsearch database in KubeDB.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+
+$ kubectl get ns demo
+NAME   STATUS   AGE
+demo   Active   5s
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create Elasticsearch
+
+In order to disable Search Guard, you have to set the `spec.authPlugin` field of the Elasticsearch object to `None`. Below is the YAML of the Elasticsearch object that will be created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-sg-disabled
+  namespace: demo
+spec:
+  version: searchguard-7.9.3
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the Elasticsearch object we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/es-sg-disabled.yaml
+elasticsearch.kubedb.com/es-sg-disabled created
+```
+
+Wait for Elasticsearch to be ready,
+
+```bash
+$ kubectl get es -n demo es-sg-disabled
+NAME             VERSION   STATUS    AGE
+es-sg-disabled   6.3-v1    Running   27m
+```
+
+## Connect to Elasticsearch Database
+
+As we have disabled the Search Guard plugin, we no longer require a *username* and *password* to connect to our Elasticsearch database.
+
+At first, forward port 9200 of the `es-sg-disabled-0` pod. Run the following command in a separate terminal,
+
+```bash
+$ kubectl port-forward -n demo es-sg-disabled-0 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+Now, we can connect with the database at `localhost:9200`.
+
+Let's check the health of our Elasticsearch database.
+
+```bash
+$ curl "localhost:9200/_cluster/health?pretty"
+```
+
+```json
+{
+  "cluster_name" : "es-sg-disabled",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 1,
+  "number_of_data_nodes" : 1,
+  "active_primary_shards" : 0,
+  "active_shards" : 0,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo es/es-sg-disabled -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo es/es-sg-disabled
+
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate).
+- Learn how to generate [search-guard configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate.md
new file mode 100644
index 0000000000..686f1f03fc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate.md
@@ -0,0 +1,411 @@
+---
+title: Search Guard Certificate
+menu:
+  docs_v2024.1.31:
+    identifier: es-issue-certificate-search-guard
+    name: Issue Certificate
+    parent: es-search-guard-elasticsearch
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Issue TLS Certificates
+
+Search Guard requires certificates to enable TLS. KubeDB creates the necessary certificates automatically. However, if you want to use your own certificates, you can provide them through the `spec.certificateSecret` field of the Elasticsearch object.
+
+This tutorial will show you how to generate certificates for Search Guard and use them with an Elasticsearch database.
+
+In KubeDB Elasticsearch, keystore and truststore files in JKS format are used instead of certificates and private keys in PEM format.
+
+KubeDB applies the same **truststore** for both transport layer TLS and REST layer TLS.
+
+But KubeDB distinguishes between the following types of keystores for security purposes.
+
+- **transport layer keystores** are used to identify and secure traffic between Elasticsearch nodes on the transport layer
+- **http layer keystores** are used to identify Elasticsearch clients on the REST and transport layer.
+- **sgadmin keystores** are used as admin clients that have elevated rights to perform administrative tasks.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+
+$ kubectl get ns demo
+NAME   STATUS   AGE
+demo   Active   5s
+```
+
+You also need to have [*OpenSSL*](https://www.openssl.org/source/) and Java *keytool* for generating all required artifacts.
+
+In order to find out if you have OpenSSL installed, open a terminal and type
+
+```bash
+$ openssl version
+OpenSSL 1.0.2g  1 Mar 2016
+```
+
+Make sure it's version 1.0.1k or higher.
+
+And check *keytool* by calling
+
+```bash
+keytool
+```
+
+If already installed, it will print a list of available commands.
+
+To keep generated files separated, open a new terminal and create a directory `/tmp/kubedb/certs`
+
+```bash
+mkdir -p /tmp/kubedb/certs
+cd /tmp/kubedb/certs
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Generate truststore
+
+First, we need a root certificate to sign the server & client certificates. This certificate is also imported as the *truststore*.
+
+You need to follow these steps:
+
+1. Get the root certificate configuration file
+
+   ```bash
+   $ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/openssl-config/openssl-ca.ini
+   ```
+
+   ```ini
+   [ ca ]
+   default_ca = CA_default
+
+   [ CA_default ]
+   private_key = root-key.pem
+   default_days = 1000       # how long to certify for
+   default_md = sha256       # use public key default MD
+   copy_extensions = copy    # Required to copy SANs from CSR to cert
+
+   [ req ]
+   prompt = no
+   default_bits = 4096
+   distinguished_name = ca_distinguished_name
+
+   [ ca_distinguished_name ]
+   O = Elasticsearch Operator
+   CN = KubeDB Com. Root CA
+   ```
+
+2. Set a password for your keystore and truststore files
+
+   ```bash
+   $ export KEY_PASS=secret
+   ```
+
+   > Note: You need to provide this KEY_PASS in your Secret as `key_pass`
+
+3. Generate the private key and certificate
+
+   ```bash
+   $ openssl req -x509 -config openssl-ca.ini -newkey rsa:4096 -sha256 -nodes -out root.pem -keyout root-key.pem -batch -passin "pass:$KEY_PASS"
+   ```
+
+   Here,
+
+   - `root-key.pem` holds the Private Key
+   - `root.pem` holds the CA Certificate
+
+4. Finally, import the certificate as a keystore
+
+   ```bash
+   $ keytool -import -file root.pem -keystore root.jks -storepass $KEY_PASS -srcstoretype pkcs12 -noprompt
+   ```
+
+   Here,
+
+   - `root.jks` is the truststore for Elasticsearch
+
+## Generate keystore
+
+Steps to generate a certificate and keystore for Elasticsearch:
+
+1. Get the certificate configuration file
+2. Generate a private key and certificate signing request (CSR)
+3. Sign the certificate using the root certificate
+4. Generate a PKCS12 file using the root certificate
+5. Import the PKCS12 file as a keystore
+
+You need to follow these steps to generate three keystores.
+
+To sign certificates, we need another configuration file.
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/openssl-config/openssl-sign.ini
+```
+
+```ini
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+base_dir = .
+certificate = $base_dir/root.pem       # The CA certificate
+private_key = $base_dir/root-key.pem   # The CA private key
+new_certs_dir = $base_dir              # Location for new certs after signing
+database = $base_dir/index.txt         # Database index file
+serial = $base_dir/serial.txt          # The current serial number
+unique_subject = no                    # Set to 'no' to allow creation of several certificates with same subject.
+
+default_days = 1000       # how long to certify for
+default_md = sha256       # use public key default MD
+email_in_dn = no
+copy_extensions = copy    # Required to copy SANs from CSR to cert
+
+[ req ]
+default_bits = 4096
+default_keyfile = root-key.pem
+distinguished_name = ca_distinguished_name
+
+[ ca_distinguished_name ]
+O = Elasticsearch Operator
+CN = KubeDB Com. Root CA
+
+[ signing_req ]
+keyUsage = digitalSignature, keyEncipherment
+
+[ signing_policy ]
+organizationName = optional
+commonName = supplied
+```
+
+Here,
+
+- `certificate` denotes the CA certificate path
+- `private_key` denotes the CA key path
+
+Also, you need to create an `index.txt` file and a `serial.txt` file with value `01`
+
+```bash
+touch index.txt
+echo '01' > serial.txt
+```
+
+### Node
+
+The following configuration is used to generate the CSR for the node certificate.
+
+```ini
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = node_distinguished_name
+req_extensions = node_req_extensions
+
+[ node_distinguished_name ]
+O = Elasticsearch Operator
+CN = sg-elasticsearch
+
+[ node_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+RID.1 = 1.2.3.4.5.5
+```
+
+Here,
+
+- `RID.1=1.2.3.4.5.5` is used in the node certificate. All certificates with registeredID `1.2.3.4.5.5` are considered valid certificates for the transport layer.
+
+Now run the following commands
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/openssl-config/openssl-node.ini
+$ openssl req -config openssl-node.ini -newkey rsa:4096 -sha256 -nodes -out node-csr.pem -keyout node-key.pem
+$ openssl ca -config openssl-sign.ini -batch -policy signing_policy -extensions signing_req -out node.pem -infiles node-csr.pem
+$ openssl pkcs12 -export -certfile root.pem -inkey node-key.pem -in node.pem -password "pass:$KEY_PASS" -out node.pkcs12
+$ keytool -importkeystore -srckeystore node.pkcs12 -storepass $KEY_PASS -srcstoretype pkcs12 -srcstorepass $KEY_PASS -destkeystore node.jks -deststoretype pkcs12
+```
+
+The generated `node.jks` will be used as the keystore for transport layer TLS.
+
+### Client
+
+The following configuration is used to generate the CSR for the client certificate.
+
+```ini
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = client_distinguished_name
+req_extensions = client_req_extensions
+
+[ client_distinguished_name ]
+O = Elasticsearch Operator
+CN = sg-elasticsearch
+
+[ client_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+DNS.2 = sg-elasticsearch.demo.svc
+```
+
+Here,
+
+- `sg-elasticsearch` is used as the Common Name so that the host `sg-elasticsearch` is verified as a valid client.
+
+Now run the following commands
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/openssl-config/openssl-client.ini
+$ openssl req -config openssl-client.ini -newkey rsa:4096 -sha256 -nodes -out client-csr.pem -keyout client-key.pem
+$ openssl ca -config openssl-sign.ini -batch -policy signing_policy -extensions signing_req -out client.pem -infiles client-csr.pem
+$ openssl pkcs12 -export -certfile root.pem -inkey client-key.pem -in client.pem -password "pass:$KEY_PASS" -out client.pkcs12
+$ keytool -importkeystore -srckeystore client.pkcs12 -storepass $KEY_PASS -srcstoretype pkcs12 -srcstorepass $KEY_PASS -destkeystore client.jks -deststoretype pkcs12
+```
+
+The generated `client.jks` will be used as the keystore for http layer TLS.
+
+### sgadmin
+
+The following configuration is used to generate the CSR for the sgadmin certificate.
+
+```ini
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = sgadmin_distinguished_name
+req_extensions = sgadmin_req_extensions
+
+[ sgadmin_distinguished_name ]
+O = Elasticsearch Operator
+CN = sgadmin
+
+[ sgadmin_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+```
+
+Here,
+
+- `sgadmin` is used as the Common Name, because in Search Guard a certificate with the `sgadmin` common name is considered an admin certificate.
+
+Now run the following commands
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/openssl-config/openssl-sgadmin.ini
+$ openssl req -config openssl-sgadmin.ini -newkey rsa:4096 -sha256 -nodes -out sgadmin-csr.pem -keyout sgadmin-key.pem
+$ openssl ca -config openssl-sign.ini -batch -policy signing_policy -extensions signing_req -out sgadmin.pem -infiles sgadmin-csr.pem
+$ openssl pkcs12 -export -certfile root.pem -inkey sgadmin-key.pem -in sgadmin.pem -password "pass:$KEY_PASS" -out sgadmin.pkcs12
+$ keytool -importkeystore -srckeystore sgadmin.pkcs12 -storepass $KEY_PASS -srcstoretype pkcs12 -srcstorepass $KEY_PASS -destkeystore sgadmin.jks -deststoretype pkcs12
+```
+
+The generated `sgadmin.jks` will be used as the keystore for admin usage.
+
+## Create Secret
+
+Now create a Secret with these certificates to use in your Elasticsearch object.
+
+```bash
+$ kubectl create secret generic -n demo sg-elasticsearch-cert \
+    --from-file=root.pem \
+    --from-file=root.jks \
+    --from-file=node.jks \
+    --from-file=client.jks \
+    --from-file=sgadmin.jks \
+    --from-literal=key_pass=$KEY_PASS
+
+secret/sg-elasticsearch-cert created
+```
+
+> Note: `root.pem` is added to the Secret so that users can use it to connect to Elasticsearch
+
+Use this Secret `sg-elasticsearch-cert` in your Elasticsearch object.
+
+## Create an Elasticsearch database
+
+Below is the Elasticsearch object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: sg-elasticsearch
+  namespace: demo
+spec:
+  version: searchguard-7.9.3
+  enableSSL: true
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Here,
+
+- `spec.certificateSecret` specifies the Secret containing the certificates that will be used by the Elasticsearch database.
+
+Create the example above with the following command
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/sg-elasticsearch.yaml
+elasticsearch.kubedb.com/sg-elasticsearch created
+```
+
+KubeDB operator sets the `status.phase` to `Running` once the database is successfully created.
+
+```bash
+$ kubectl get es -n demo sg-elasticsearch -o wide
+NAME               VERSION             STATUS    AGE
+sg-elasticsearch   searchguard-7.9.3   Running   1m
+```
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo es/sg-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo es/sg-elasticsearch
+
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn how to use TLS certificates to connect Elasticsearch from [here](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls).
+- Learn how to generate [search-guard configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/overview.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/overview.md new file mode 100644 index 0000000000..6880726eb1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/overview.md @@ -0,0 +1,86 @@ +--- +title: Search Guard +menu: + docs_v2024.1.31: + identifier: es-search-guard-search-guard + name: Overview + parent: es-search-guard-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Search Guard + +[Search Guard(®)](https://github.com/floragunncom/search-guard) is an Elasticsearch plugin that offers encryption, authentication, and authorization. +It supports fine grained role-based access control to clusters, indices, documents and fields. + +- Search Guard authenticates the credentials against the configured authentication backend(s). +- Search Guard authorizes the user by retrieving a list of the user’s roles from the configured authorization backend + +## TLS certificates + +Search Guard relies heavily on the use of TLS, both for the REST and the transport layer of Elasticsearch. TLS is configured in the `elasticsearch.yml` file of Elasticsearch installation. + +Following keys are used to configure location of keystore and truststore files. + +Transport layer TLS + +| Name | Description | +|---------------------------------------------------|:------------------------------------------------------------------------------| +| searchguard.ssl.transport.keystore_filepath | Path to the keystore file, relative to the config/ directory (mandatory) | +| searchguard.ssl.transport.keystore_password | Keystore password | +| searchguard.ssl.transport.truststore_filepath | Path to the truststore file, relative to the config/ directory (mandatory) | +| searchguard.ssl.transport.truststore_password | Truststore password | + +REST layer TLS + +| Name | Description | +|-----------------------------------------------|:--------------------------------------------------------------------------------------| +| searchguard.ssl.http.enabled | Whether to enable TLS on the REST layer or not | +| searchguard.ssl.http.keystore_filepath | Path to the keystore file, relative to the config/ directory (mandatory) | +| searchguard.ssl.http.keystore_password | Keystore password | +| searchguard.ssl.http.truststore_filepath | Path to the truststore file, relative to the config/ directory (mandatory) | +| searchguard.ssl.http.truststore_password | Truststore password | + + +> Note: KubeDB Elasticsearch is configured with keystore and truststore files in JKS format + +#### Configuring Admin certificates + +Admin certificates are regular client certificates that have elevated rights to perform administrative tasks. You need an admin certificate to +change the Search Guard configuration via the *sgadmin* command line tool. Admin certificates are configured in `elasticsearch.yml` by simply stating their DN(s). 
+ +```yaml +searchguard.authcz.admin_dn: + - CN=sgadmin, O=Elasticsearch Operator +``` + +#### Client authentication + +With TLS client authentication enabled, REST clients can send a TLS certificate with the HTTP request to provide identity information to Search Guard. + +- You can provide an admin certificate when using the REST API. +- You can provide Basic Auth with client certificates. + +> Note: Search Guard accepts TLS client certificates if they are sent, but does not enforce them. + +## Next Steps + +- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate). +- Learn how to [use TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls) to connect Elasticsearch. +- Learn how to generate [search-guard configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls.md new file mode 100644 index 0000000000..ae1ff9cdc4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/use-tls.md @@ -0,0 +1,196 @@ +--- +title: Run TLS Secured Elasticsearch +menu: + docs_v2024.1.31: + identifier: es-use-tls-search-guard + name: Use TLS + parent: es-search-guard-elasticsearch + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Run TLS Secured Elasticsearch + +Search Guard provides facility to secure your Elasticsearch cluster with TLS. By default, KubeDB does not enable TLS security. You have to enable it by setting `spec.enableSSL: true`. If TLS is enabled, only HTTPS calls are allowed to database server. + +This tutorial will show you how to connect with Elasticsearch cluster using certificate when TLS is enabled. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created + +$ kubectl get ns demo +NAME STATUS AGE +demo Active 5s +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Create Elasticsearch + +In order to enable TLS, we have to set `spec.enableSSL` field of Elasticsearch object to `true`. Below is the YAML of Elasticsearch object that will be created in this tutorial. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: ssl-elasticsearch + namespace: demo +spec: + version: searchguard-7.9.3 + replicas: 2 + enableSSL: true + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the Elasticsearch object we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/search-guard/ssl-elasticsearch.yaml +elasticsearch.kubedb.com/ssl-elasticsearch created +``` + +```bash +$ kubectl get es -n demo ssl-elasticsearch +NAME STATUS AGE +ssl-elasticsearch Running 17m +``` + +## Connect to Elasticsearch Database + +As we have enabled TLS for our Elasticsearch cluster, only HTTPS calls are allowed to Elasticsearch server. So, we need to provide certificate to connect with Elasticsearch. If you do not provide certificate manually through `spec.certificateSecret` field of Elasticsearch object, KubeDB will create a secret `{elasticsearch name}-cert` with necessary certificates. + +Let's check the certificates that has been created for Elasticsearch `ssl-elasticsearch` by KubeDB operator. + +```bash +$ kubectl get secret -n demo ssl-elasticsearch-cert -o yaml +``` + +```yaml +apiVersion: v1 +data: + client.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== + node.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== + root.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== + root.pem: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== + sgadmin.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== + key_pass: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ== +kind: Secret +metadata: + creationTimestamp: 2018-02-19T09:51:45Z + labels: + app.kubernetes.io/name: elasticsearches.kubedb.com + app.kubernetes.io/instance: ssl-elasticsearch + name: ssl-elasticsearch-cert + namespace: demo + resourceVersion: "754" + selfLink: /api/v1/namespaces/demo/secrets/ssl-elasticsearch-cert + uid: 7efdaf31-155a-11e8-a001-42010a8000d5 +type: Opaque +``` + +Here, `root.pem` file is the root CA in `.pem` format. We will require to provide this file while sending REST request to the Elasticsearch server. + +Let's forward port 9200 of `ssl-elasticsearch-0` pod. Run following command in a separate terminal, + +```bash +$ kubectl port-forward -n demo ssl-elasticsearch-0 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 +``` + +Now, we can connect with the database at `localhost:9200`. + +**Connection information:** + +- Address: `localhost:9200` +- Username: Run following command to get *username* + + ```bash + $ kubectl get secrets -n demo ssl-elasticsearch-auth -o jsonpath='{.data.\ADMIN_USERNAME}' | base64 -d + elastic + ``` + +- Password: Run following command to get *password* + + ```bash + $ kubectl get secrets -n demo ssl-elasticsearch-auth -o jsonpath='{.data.\ADMIN_PASSWORD}' | base64 -d + uv2io5au + ``` + +- Root CA: Run following command to get `root.pem` file + + ```bash + $ kubectl get secrets -n demo ssl-elasticsearch-cert -o jsonpath='{.data.\root\.pem}' | base64 --decode > root.pem + ``` + +Now, let's check health of our Elasticsearch database. 
+ +```bash +$ curl --user "elastic:uv2io5au" "https://localhost:9200/_cluster/health?pretty" --cacert root.pem +``` + +```json +{ + "cluster_name" : "ssl-elasticsearch", + "status" : "green", + "timed_out" : false, + "number_of_nodes" : 2, + "number_of_data_nodes" : 2, + "active_primary_shards" : 1, + "active_shards" : 2, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 0, + "delayed_unassigned_shards" : 0, + "number_of_pending_tasks" : 0, + "number_of_in_flight_fetch" : 0, + "task_max_waiting_in_queue_millis" : 0, + "active_shards_percent_as_number" : 100.0 +} +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl patch -n demo es/ssl-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +$ kubectl delete -n demo es/ssl-elasticsearch + +$ kubectl delete ns demo +``` + +## Next Steps + +- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/issue-certificate). +- Learn how to generate [search-guard configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/configuration). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/x-pack-monitoring.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/x-pack-monitoring.md new file mode 100644 index 0000000000..c21b9c6d0d --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/search-guard/x-pack-monitoring.md @@ -0,0 +1,515 @@ +--- +title: X-Pack Monitoring of Elasticsearch Cluster with SearchGuard Auth +menu: + docs_v2024.1.31: + identifier: es-x-pack-monitoring-with-searchguard + name: Monitoring + parent: es-search-guard-elasticsearch + weight: 50 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# X-Pack Monitoring with KubeDB Elasticsearch + +This tutorial will show you how to use X-Pack monitoring in an Elasticsearch cluster deployed with KubeDB. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +As KubeDB uses [Search Guard](https://search-guard.com/) plugin for authentication and authorization, you have to know how to configure Search Guard for both Elasticsearch cluster and Kibana. If you don't know, please visit [here](https://docs.search-guard.com/latest/main-concepts). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created + +$ kubectl get ns demo +NAME STATUS AGE +demo Active 5s +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +At first, we will create some necessary Search Guard configuration and roles to give a user permission to monitor an Elasticsearch cluster from Kibana. We will create a secret with this configuration files. Then we will provide this secret in `spec.authSecret` field of Elasticsearch crd so that our Elasticsearch cluster start with this configuration. We are going to configure Elasticsearch cluster to collect and send x-pack monitoring data over [HTTP Exporters](https://www.elastic.co/guide/en/elasticsearch/reference/current/http-exporter.html) using a [custom configuration](/docs/v2024.1.31/guides/elasticsearch/configuration/overview/) file. + +Then, we will deploy Kibana with Search Guard plugin installed. We will configure Kibana to connect with our Elasticsearch cluster and view monitoring data from it. + +For this tutorial, we will use Elasticsearch 6.3.0 with Search Guard plugin 23.1 and Kibana 6.3.0 with Search Guard plugin 14 installed. + +## Deploy Elasticsearch Cluster + +Let's create necessary Search Guard configuration files. Here, we will create two users `admin` and `monitor`. User `admin` will have all permissions on the cluster and user `monitor` will have some limited permission to view only monitoring data. Here, are the contents of Search Guard configuration files, + +**sg_action_groups.yml:** + +```yaml +###### UNLIMITED ###### +UNLIMITED: + readonly: true + permissions: + - "*" + +###### CLUSTER LEVEL ##### +CLUSTER_MONITOR: + readonly: true + permissions: + - "cluster:monitor/*" + +CLUSTER_COMPOSITE_OPS_RO: + readonly: true + permissions: + - "indices:data/read/mget" + - "indices:data/read/msearch" + - "indices:data/read/mtv" + - "indices:data/read/coordinate-msearch*" + - "indices:admin/aliases/exists*" + - "indices:admin/aliases/get*" + - "indices:data/read/scroll" + +CLUSTER_COMPOSITE_OPS: + readonly: true + permissions: + - "indices:data/write/bulk" + - "indices:admin/aliases*" + - "indices:data/write/reindex" + - CLUSTER_COMPOSITE_OPS_RO + +###### INDEX LEVEL ###### +INDICES_ALL: + readonly: true + permissions: + - "indices:*" + +READ: + readonly: true + permissions: + - "indices:data/read*" + - "indices:admin/mappings/fields/get*" + - "indices:admin/mappings/get*" +``` + +**sg_roles.yaml:** + +```yaml +### Admin +sg_all_access: + readonly: true + cluster: + - UNLIMITED + indices: + '*': + '*': + - UNLIMITED + tenants: + admin_tenant: RW + +### X-Pack Monitoring +sg_xp_monitoring: + cluster: + - cluster:admin/xpack/monitoring/* + - cluster:admin/ingest/pipeline/put + - cluster:admin/ingest/pipeline/get + - indices:admin/template/get + - indices:admin/template/put + - CLUSTER_MONITOR + - CLUSTER_COMPOSITE_OPS + indices: + '?monitor*': + '*': + - INDICES_ALL + '?marvel*': + '*': + - INDICES_ALL + '?kibana*': + '*': + - READ + '*': + '*': + - indices:data/read/field_caps +``` + +**sg_internal_users.yml:** + +```yaml +#password is: admin@secret +admin: + readonly: true + hash: $2y$12$skma87wuFFtxtGWegeAiIeTtUH1nnOfIRZzwwhBlzXjg0DdM4gLeG + roles: + - admin + +#password is: monitor@secret +monitor: + readonly: true + hash: 
$2y$12$JDTXih3AqV/1MDRYQ.KIY.u68CkzCIq.xiiqwtRJx3cjN0YmFavTe
+  roles:
+    - monitor
+```
+
+Here, we have used the password `admin@secret` for the `admin` user and `monitor@secret` for the `monitor` user. You can use `htpasswd` to generate the bcrypt encrypted password hashes (the `-b` flag takes the password as a command-line argument).
+
+```bash
+$ htpasswd -bnBC 12 "" <password> | tr -d ':\n'
+```
+
+**sg_roles_mapping.yml:**
+
+```yaml
+sg_all_access:
+  readonly: true
+  backendroles:
+    - admin
+
+sg_xp_monitoring:
+  readonly: true
+  backendroles:
+    - monitor
+```
+
+**sg_config.yml:**
+
+```yaml
+searchguard:
+  dynamic:
+    authc:
+      kibana_auth_domain:
+        enabled: true
+        order: 0
+        http_authenticator:
+          type: basic
+          challenge: false
+        authentication_backend:
+          type: internal
+      basic_internal_auth_domain:
+        http_enabled: true
+        transport_enabled: true
+        order: 1
+        http_authenticator:
+          type: basic
+          challenge: true
+        authentication_backend:
+          type: internal
+```
+
+Now, create a secret with these Search Guard configuration files.
+
+```bash
+$ kubectl create secret generic -n demo es-auth \
+    --from-literal=ADMIN_USERNAME=admin \
+    --from-literal=ADMIN_PASSWORD=admin@secret \
+    --from-file=./sg_action_groups.yml \
+    --from-file=./sg_config.yml \
+    --from-file=./sg_internal_users.yml \
+    --from-file=./sg_roles_mapping.yml \
+    --from-file=./sg_roles.yml
+secret/es-auth created
+```
+
+Verify that the secret has the desired configuration files,
+
+```yaml
+$ kubectl get secret -n demo es-auth -o yaml
+apiVersion: v1
+data:
+  sg_action_groups.yml:
+  sg_config.yml:
+  sg_internal_users.yml:
+  sg_roles.yml:
+  sg_roles_mapping.yml:
+kind: Secret
+metadata:
+  ...
+  name: es-auth
+  namespace: demo
+  ...
+type: Opaque
+```
+
+As we are using the Search Guard plugin for authentication, we need to ensure that `x-pack` security is not enabled. We will ensure that by setting `xpack.security.enabled: false` in a `common-config.yaml` file, which we will use to configure our Elasticsearch cluster. As Search Guard does not support the `local` exporter, we will use the `http` exporter and set the `host` field to `http://127.0.0.1:9200` to store monitoring data in the same cluster.
+
+Let's create `common-config.yaml` with the following configuration,
+
+```yaml
+xpack.security.enabled: false
+xpack.monitoring.enabled: true
+xpack.monitoring.collection.enabled: true
+xpack.monitoring.exporters:
+  my-http-exporter:
+    type: http
+    host: ["http://127.0.0.1:9200"]
+    auth:
+      username: monitor
+      password: monitor@secret
+```
+
+Create a ConfigMap using this file,
+
+```bash
+$ kubectl create configmap -n demo es-custom-config \
+    --from-file=./common-config.yaml
+configmap/es-custom-config created
+```
+
+Verify that the ConfigMap has the desired configuration,
+
+```yaml
+$ kubectl get configmap -n demo es-custom-config -o yaml
+apiVersion: v1
+data:
+  common-config.yaml: |-
+    xpack.security.enabled: false
+    xpack.monitoring.enabled: true
+    xpack.monitoring.collection.enabled: true
+    xpack.monitoring.exporters:
+      my-http-exporter:
+        type: http
+        host: ["http://127.0.0.1:9200"]
+        auth:
+          username: monitor
+          password: monitor@secret
+kind: ConfigMap
+metadata:
+  ...
+  name: es-custom-config
+  namespace: demo
+  ...
+```
+
+Now, create the Elasticsearch crd specifying the `spec.authSecret` and `spec.configSecret` fields.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/es-mon-demo.yaml
+elasticsearch.kubedb.com/es-mon-demo created
+```
+
+Below is the YAML for the Elasticsearch crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-mon-demo
+  namespace: demo
+spec:
+  version: searchguard-7.9.3
+  replicas: 1
+  authSecret:
+    name: es-auth
+  configSecret:
+    name: es-custom-config
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Now, wait for a few minutes. KubeDB will create the necessary secrets, services, and statefulsets.
+
+Check the resources created in the demo namespace by KubeDB,
+
+```bash
+$ kubectl get all -n demo -l=app.kubernetes.io/instance=es-mon-demo
+NAME                READY     STATUS    RESTARTS   AGE
+pod/es-mon-demo-0   1/1       Running   0          37s
+
+NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+service/es-mon-demo          ClusterIP   10.110.227.143   <none>        9200/TCP   40s
+service/es-mon-demo-master   ClusterIP   10.104.12.90     <none>        9300/TCP   40s
+
+NAME                           DESIRED   CURRENT   AGE
+statefulset.apps/es-mon-demo   1         1         39s
+```
+
+Once everything is created, Elasticsearch will go to the `Running` state. Check that Elasticsearch is in the `Running` state.
+
+```bash
+$ kubectl get es -n demo es-mon-demo
+NAME          VERSION             STATUS    AGE
+es-mon-demo   searchguard-7.9.3   Running   1m
+```
+
+Now, check the Elasticsearch log to see if the cluster is ready to accept requests,
+
+```bash
+$ kubectl logs -n demo es-mon-demo-0 -f
+...
+Starting runit...
+...
+Elasticsearch Version: 6.3.0
+Search Guard Version: 6.3.0-23.0
+Connected as CN=sgadmin,O=Elasticsearch Operator
+Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
+Clustername: es-mon-demo
+Clusterstate: GREEN
+Number of nodes: 1
+Number of data nodes: 1
+...
+Done with success
+...
+```
+
+Once you see the `Done with success` line in the log, the cluster is ready to accept requests. Now, it is time to connect with Kibana.
+
+## Deploy Kibana
+
+In order to view monitoring data from Kibana, we need to configure `kibana.yml` with the appropriate configuration.
+
+KubeDB has created a service with name `es-mon-demo` in the `demo` namespace for the Elasticsearch cluster. We will use this service in the `elasticsearch.url` field. Kibana will use this service to connect with the Elasticsearch cluster.
+
+Let's configure `kibana.yml` as below,
+
+```yaml
+xpack.security.enabled: false
+xpack.monitoring.enabled: true
+xpack.monitoring.kibana.collection.enabled: true
+xpack.monitoring.ui.enabled: true
+
+server.host: 0.0.0.0
+
+elasticsearch.url: "http://es-mon-demo.demo.svc:9200"
+elasticsearch.username: "monitor"
+elasticsearch.password: "monitor@secret"
+
+searchguard.auth.type: "basicauth"
+searchguard.cookie.secure: false
+```
+
+Notice the `elasticsearch.username` and `elasticsearch.password` fields. Kibana will connect to the Elasticsearch cluster with these credentials. They must match the credentials we provided in the `sg_internal_users.yml` file for the `monitor` user while creating the cluster.
+
+Now, create a ConfigMap with the `kibana.yml` file. We will mount this ConfigMap in the Kibana deployment so that Kibana starts with this configuration.
+
+```bash
+$ kubectl create configmap -n demo kibana-config \
+    --from-file=./kibana.yml
+configmap/kibana-config created
+```
+
+Finally, create the Kibana deployment,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/kibana/kibana-deployment.yaml
+deployment.apps/kibana created
+```
+
+Below is the YAML for the Kibana deployment we just created.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: kibana
+  namespace: demo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: kibana
+  template:
+    metadata:
+      labels:
+        app: kibana
+    spec:
+      containers:
+        - name: kibana
+          image: kubedb/kibana:6.3.0
+          volumeMounts:
+            - name: kibana-config
+              mountPath: /usr/share/kibana/config
+      volumes:
+        - name: kibana-config
+          configMap:
+            name: kibana-config
+```
+
+Now, wait for a few minutes for the Kibana pod to go into the `Running` state. Check that the pod is in the `Running` state using this command,
+
+```bash
+$ kubectl get pods -n demo -l app=kibana
+NAME                      READY     STATUS    RESTARTS   AGE
+kibana-84b8cbcf7c-mg699   1/1       Running   0          3m
+```
+
+Now, watch the Kibana pod's log to see if Kibana is ready to access,
+
+```bash
+$ kubectl logs -n demo kibana-84b8cbcf7c-mg699 -f
+...
+{"type":"log","@timestamp":"2018-08-27T09:50:47Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
+```
+
+Once you see `"message":"Server running at http://0.0.0.0:5601"` in the log, Kibana is ready. Now it is time to access the Kibana UI.
+
+Kibana is running on port `5601` of the `kibana-84b8cbcf7c-mg699` pod. In order to access the Kibana UI from outside of the cluster, we will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster).
+
+First, open a new terminal and run,
+
+```bash
+$ kubectl port-forward -n demo kibana-84b8cbcf7c-mg699 5601
+Forwarding from 127.0.0.1:5601 -> 5601
+Forwarding from [::1]:5601 -> 5601
+```
+
+Now, open `localhost:5601` in your browser. You will be greeted with the Search Guard login UI.
+
+Log in with the following credentials: `username: monitor` and `password: monitor@secret`. After login, go to the `Monitoring` tab in the Kibana UI. You will see that Kibana has connected with the Elasticsearch cluster and is showing monitoring data. Some screenshots of monitoring the `es-mon-demo` cluster are given below.
+
+![Kibana Monitoring Home](/docs/v2024.1.31/images/elasticsearch/x-pack/monitoring-home.png)
+
+![Kibana Monitoring Node](/docs/v2024.1.31/images/elasticsearch/x-pack/monitoring-node.png)
+
+![Kibana Monitoring Overview](/docs/v2024.1.31/images/elasticsearch/x-pack/monitoring-overview.png)
+
+## Monitoring Multiple Clusters
+
+Monitoring multiple clusters is a paid feature of X-Pack. If you are interested, then follow these steps,
+
+1. First, create a separate cluster to store monitoring data. Let's call it the **monitoring-cluster**.
+2. Configure the monitoring-cluster to connect with Kibana.
+3. Configure Kibana to view monitoring data from the monitoring-cluster.
+4. Configure the `http` exporter of your production clusters to export monitoring data to the monitoring-cluster. Set the `xpack.monitoring.exporters.<exporter_id>.host` field to the address of the monitoring-cluster.
+
+Now, your production clusters will send monitoring data to the monitoring-cluster and Kibana will retrieve this data from it.
+
+## Cleanup
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo es/es-mon-demo -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+
+$ kubectl delete -n demo es/es-mon-demo
+
+$ kubectl delete -n demo configmap/es-custom-config
+
+$ kubectl delete -n demo configmap/kibana-config
+
+$ kubectl delete -n demo deployment/kibana
+
+$ kubectl delete ns demo
+```
+
+To uninstall KubeDB, follow this [guide](/docs/v2024.1.31/setup/README).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/_index.md
new file mode 100644
index 0000000000..8c4c586388
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/_index.md
@@ -0,0 +1,22 @@
+---
+title: Use X-Pack with KubeDB Elasticsearch
+menu:
+  docs_v2024.1.31:
+    identifier: es-x-pack
+    name: X-Pack
+    parent: es-plugin-elasticsearch
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration.md
new file mode 100644
index 0000000000..64a457c5c3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration.md
@@ -0,0 +1,376 @@
+---
+title: X-Pack Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: es-configuration-xpack
+    name: X-Pack Configuration
+    parent: es-x-pack
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# X-Pack Configuration
+
+X-Pack is an Elastic Stack extension that provides security along with other features. In KubeDB, X-Pack authentication can be used with Elasticsearch `6.8` and `7.2+`. In this guide, we will show how to use X-Pack authentication or disable it.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+
+$ kubectl get ns demo
+NAME    STATUS  AGE
+demo    Active  5s
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## X-Pack AuthPlugin
+
+In the 0.13.0 release, a new field named `authPlugin` was introduced in the `ElasticsearchVersion` CRD. Prior to this release, `authPlugin` was part of the `Elasticsearch` CRD spec; it has been deprecated since 0.13.0-rc.1.
+
+The `spec.authPlugin` is a required field in the ElasticsearchVersion CRD, which specifies which plugin to use for authentication. Currently, this field accepts either `X-Pack` or `SearchGuard`.
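+
+As a quick way to survey the catalog, you can list every ElasticsearchVersion together with its plugin. This is a sketch using standard `kubectl` custom-columns output; the column names are arbitrary:
+
+```bash
+# Print each ElasticsearchVersion alongside its configured auth plugin
+$ kubectl get elasticsearchversions \
+    -o custom-columns=NAME:.metadata.name,AUTHPLUGIN:.spec.authPlugin
+```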
+
+To see which authPlugin is used by the target ElasticsearchVersion, run the following command:
+
+```bash
+kubectl get elasticsearchversions xpack-8.11.1 -o yaml
+```
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: ElasticsearchVersion
+metadata:
+  name: xpack-8.11.1
+spec:
+  authPlugin: X-Pack
+  db:
+    image: kubedb/elasticsearch:7.9.1-xpack
+  distribution: ElasticStack
+  exporter:
+    image: kubedb/elasticsearch_exporter:1.1.0
+  initContainer:
+    image: kubedb/toybox:0.8.4
+    yqImage: kubedb/elasticsearch-init:7.9.1-xpack-v1
+  podSecurityPolicies:
+    databasePolicyName: elasticsearch-db
+  stash:
+    addon:
+      backupTask:
+        name: elasticsearch-backup-7.3.2
+      restoreTask:
+        name: elasticsearch-restore-7.3.2
+  version: 7.9.1
+```
+
+## Changing authPlugin
+
+To change the authPlugin, it is recommended to create a new ElasticsearchVersion CRD. Then, use that ElasticsearchVersion to install an Elasticsearch server with the desired authPlugin.
+
+## Deploy with X-Pack
+
+To deploy with X-Pack, you need to use an `ElasticsearchVersion` where `authPlugin` is set to `X-Pack`.
+
+Here, we are going to use ElasticsearchVersion `xpack-8.11.1`, which is shown earlier in this guide.
+
+Now, let's create an Elasticsearch server using the following yaml.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: config-elasticsearch
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/config-elasticsearch.yaml
+elasticsearch.kubedb.com/config-elasticsearch created
+```
+
+The deployed Elasticsearch object's spec, after the mutation done by KubeDB:
+
+```yaml
+$ kubectl get elasticsearch -n demo config-elasticsearch -o yaml
+
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  creationTimestamp: "2019-09-30T08:34:10Z"
+  finalizers:
+    - kubedb.com
+  generation: 3
+  name: config-elasticsearch
+  namespace: demo
+  resourceVersion: "60830"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/elasticsearches/config-elasticsearch
+  uid: 13263dfa-e35d-11e9-85c8-42010a8c002f
+spec:
+  authSecret:
+    name: config-elasticsearch-auth
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources: {}
+      serviceAccountName: config-elasticsearch
+  replicas: 1
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Halt
+  version: xpack-8.11.1
+status:
+  observedGeneration: 1$4210395375389091791
+  phase: Running
+```
+
+As we can see, KubeDB has created a secret named `config-elasticsearch-auth`, which contains the password for the built-in user `elastic`.
+
+## Manually Generated Password
+
+If you want to provide your own password, you need to create a secret that contains the two keys `ADMIN_USERNAME` and `ADMIN_PASSWORD`.
+
+```bash
+$ export ADMIN_PASSWORD=harderPASSWORD
+$ kubectl create secret generic -n demo config-elasticsearch-auth \
+    --from-literal=ADMIN_USERNAME=elastic \
+    --from-literal=ADMIN_PASSWORD=$ADMIN_PASSWORD
+secret/config-elasticsearch-auth created
+```
+
+> Use this Secret `config-elasticsearch-auth` in the `spec.authSecret` field of your Elasticsearch object while creating the Elasticsearch for the first time. Changing the password after creation won't work at this time.
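+
+For illustration, a minimal sketch of an Elasticsearch object wired to the manually created Secret might look like the following; the spec mirrors the example above, and only the `authSecret` reference is the point of interest:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: config-elasticsearch
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  # Point KubeDB at the pre-created Secret so the operator
+  # uses these credentials instead of generating its own.
+  authSecret:
+    name: config-elasticsearch-auth
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```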
+ +## Connect to Elasticsearch Database + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. + +```bash +$ kubectl get es -n demo config-elasticsearch -o wide +NAME VERSION STATUS AGE +config-elasticsearch 7.3.2 Running 2m8s +``` + +To connect to the elasticsearch node, we are going to use port forward to the elasticsearch pod. Run following command on a separate terminal, + +```bash +$ kubectl port-forward -n demo config-elasticsearch-0 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 +``` + +**Connection information:** + +- Address: `localhost:9200` +- Username: Run following command to get *username* + + ```bash + $ kubectl get secrets -n demo config-elasticsearch-auth -o jsonpath='{.data.\ADMIN_USERNAME}' | base64 -d + elastic + ``` + +- Password: Run following command to get *password* + + ```bash + $ kubectl get secrets -n demo config-elasticsearch-auth -o jsonpath='{.data.\ADMIN_PASSWORD}' | base64 -d + ruobj2eo + ``` + +Firstly, try to connect to this database without providing any authentication. You will face the following error: + +```bash +$ curl "localhost:9200/_cluster/health?pretty" +``` + +```json +{ + "error" : { + "root_cause" : [ + { + "type" : "security_exception", + "reason" : "missing authentication credentials for REST request [/_cluster/health?pretty]", + "header" : { + "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\"" + } + } + ], + "type" : "security_exception", + "reason" : "missing authentication credentials for REST request [/_cluster/health?pretty]", + "header" : { + "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\"" + } + }, + "status" : 401 +} +``` + +Now, provide the authentication, + +```json +$ curl --user elastic:ruobj2eo "localhost:9200/_cluster/health?pretty" +{ + "cluster_name" : "config-elasticsearch", + "status" : "green", + "timed_out" : false, + "number_of_nodes" : 1, + "number_of_data_nodes" : 1, + "active_primary_shards" : 0, + "active_shards" : 0, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 0, + "delayed_unassigned_shards" : 0, + "number_of_pending_tasks" : 0, + "number_of_in_flight_fetch" : 0, + "task_max_waiting_in_queue_millis" : 0, + "active_shards_percent_as_number" : 100.0 +} +``` + +Additionally, to query the settings about xpack, + +```json +$ curl --user "elastic:ruobj2eo" "localhost:9200/_nodes/_all/settings?pretty" +{ + "_nodes": { + "total": 1, + "successful": 1, + "failed": 0 + }, + "cluster_name": "config-elasticsearch", + "nodes": { + "LxLZBdU6SLemcv6mF1p2vw": { + "name": "config-elasticsearch-0", + "transport_address": "10.8.0.112:9300", + "host": "10.8.0.112", + "ip": "10.8.0.112", + "version": "7.3.2", + "build_flavor": "default", + "build_type": "docker", + "build_hash": "508c38a", + "roles": [ + "master", + "data", + "ingest" + ], + "attributes": { + "ml.machine_memory": "7841255424", + "xpack.installed": "true", + "ml.max_open_jobs": "20" + }, + "settings": { + "cluster": { + "initial_master_nodes": "config-elasticsearch-0", + "name": "config-elasticsearch" + }, + "node": { + "name": "config-elasticsearch-0", + "attr": { + "xpack": { + "installed": "true" + }, + "ml": { + "machine_memory": "7841255424", + "max_open_jobs": "20" + } + }, + "data": "true", + "ingest": "true", + "master": "true" + }, + "path": { + "logs": "/usr/share/elasticsearch/logs", + "home": "/usr/share/elasticsearch" + }, + "discovery": { + "seed_hosts": "config-elasticsearch-master" + }, + "client": { + "type": 
"node" + }, + "http": { + "type": "security4", + "type.default": "netty4" + }, + "transport": { + "type": "security4", + "features": { + "x-pack": "true" + }, + "type.default": "netty4" + }, + "xpack": { + "security": { + "http": { + "ssl": { + "enabled": "false" + } + }, + "enabled": "true", + "transport": { + "ssl": { + "enabled": "true" + } + } + } + }, + "network": { + "host": "0.0.0.0" + } + } + } + } +} +``` + +As you can see, `xpack.security.enabled` is set to true. + +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo es/config-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo es/config-elasticsearch + +kubectl delete ns demo +``` + +## Next Steps + +- Learn how to use [ssl enabled](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/use-tls) elasticsearch cluster with xpack. diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/disable-xpack.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/disable-xpack.md new file mode 100644 index 0000000000..a3283ec624 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/disable-xpack.md @@ -0,0 +1,272 @@ +--- +title: Disable X-Pack +menu: + docs_v2024.1.31: + identifier: es-disable-x-pack + name: Disable X-Pack + parent: es-x-pack + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Disable X-Pack Plugin + +You data is precious. Definitely, you will not want to leave your production database unprotected. Hence, KubeDB automates Elasticsearch X-Pack configuration. It provides you authentication, authorization and TLS security. However, you can disable X-Pack security. You have to set `spec.disableSecurity` field of Elasticsearch object to `true`. + +This tutorial will show you how to disable X-Pack security for Elasticsearch database in KubeDB. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created + +$ kubectl get ns demo +NAME STATUS AGE +demo Active 5s +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## X-Pack enabled ElasticsearchVersion + +To deploy with X-Pack, you need to use an `ElasticsearchVersion` where `X-Pack` is used as `authPlugin`. + +Here, we are going to use ElasticsearchVersion `7.3.2`. + +> To change authPlugin, it is recommended to create another `ElasticsearchVersion` CRD. Then, use that `ElasticsearchVersion` to install an Elasticsearch without authentication, or with other authPlugin. 
+
+The X-Pack-enabled ElasticsearchVersion used in this tutorial can be inspected with:
+
+```bash
+$ kubectl get elasticsearchversions xpack-8.11.1 -o yaml
+```
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: ElasticsearchVersion
+metadata:
+  name: xpack-8.11.1
+spec:
+  authPlugin: X-Pack
+  db:
+    image: kubedb/elasticsearch:7.9.1-xpack
+  distribution: ElasticStack
+  exporter:
+    image: kubedb/elasticsearch_exporter:1.1.0
+  initContainer:
+    image: kubedb/toybox:0.8.4
+    yqImage: kubedb/elasticsearch-init:7.9.1-xpack-v1
+  podSecurityPolicies:
+    databasePolicyName: elasticsearch-db
+  stash:
+    addon:
+      backupTask:
+        name: elasticsearch-backup-7.3.2
+      restoreTask:
+        name: elasticsearch-restore-7.3.2
+  version: 7.9.1
+```
+
+## Create Elasticsearch
+
+In order to disable X-Pack, you have to set the `spec.disableSecurity` field of the `Elasticsearch` object to `true`.
+
+Below is the YAML of the `Elasticsearch` object that will be created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: es-xpack-disabled
+  namespace: demo
+spec:
+  version: xpack-8.11.1
+  disableSecurity: true
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the Elasticsearch object,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/es-xpack-disabled.yaml
+elasticsearch.kubedb.com/es-xpack-disabled created
+```
+
+Wait for Elasticsearch to be ready,
+
+```bash
+$ kubectl get es -n demo es-xpack-disabled
+NAME                VERSION        STATUS    AGE
+es-xpack-disabled   xpack-8.11.1   Running   6m14s
+```
+
+## Connect to Elasticsearch Database
+
+As we have disabled X-Pack security, we no longer require a *username* and *password* to connect with our Elasticsearch database.
+
+At first, forward port 9200 of the `es-xpack-disabled-0` pod. Run the following command in a separate terminal,
+
+```bash
+$ kubectl port-forward -n demo es-xpack-disabled-0 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+Now, we can connect with the database at `localhost:9200`.
+
+Let's check the health of our Elasticsearch database.
+ +```bash +$ curl "localhost:9200/_cluster/health?pretty" +``` + +```json +{ + "cluster_name" : "es-xpack-disabled", + "status" : "green", + "timed_out" : false, + "number_of_nodes" : 1, + "number_of_data_nodes" : 1, + "active_primary_shards" : 0, + "active_shards" : 0, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 0, + "delayed_unassigned_shards" : 0, + "number_of_pending_tasks" : 0, + "number_of_in_flight_fetch" : 0, + "task_max_waiting_in_queue_millis" : 0, + "active_shards_percent_as_number" : 100.0 +} +``` + +Additionally, to query the settings about xpack, + +```json +$ curl "localhost:9200/_nodes/_all/settings?pretty" +{ + "_nodes" : { + "total" : 1, + "successful" : 1, + "failed" : 0 + }, + "cluster_name" : "es-xpack-disabled", + "nodes" : { + "GpHq4kaERoq8_43zXup_mA" : { + "name" : "es-xpack-disabled-0", + "transport_address" : "10.244.1.7:9300", + "host" : "10.244.1.7", + "ip" : "10.244.1.7", + "version" : "7.3.2", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "1c1faf1", + "roles" : [ + "ingest", + "master", + "data" + ], + "attributes" : { + "ml.machine_memory" : "16683249664", + "xpack.installed" : "true", + "ml.max_open_jobs" : "20" + }, + "settings" : { + "cluster" : { + "initial_master_nodes" : "es-xpack-disabled-0", + "name" : "es-xpack-disabled", + "election" : { + "strategy" : "supports_voting_only" + } + }, + "node" : { + "name" : "es-xpack-disabled-0", + "attr" : { + "xpack" : { + "installed" : "true" + }, + "ml" : { + "machine_memory" : "16683249664", + "max_open_jobs" : "20" + } + }, + "data" : "true", + "ingest" : "true", + "master" : "true" + }, + "path" : { + "logs" : "/usr/share/elasticsearch/logs", + "home" : "/usr/share/elasticsearch" + }, + "discovery" : { + "seed_hosts" : "es-xpack-disabled-master" + }, + "client" : { + "type" : "node" + }, + "http" : { + "type" : "security4", + "type.default" : "netty4" + }, + "transport" : { + "type" : "security4", + "features" : { + "x-pack" : "true" + }, + "type.default" : "netty4" + }, + "network" : { + "host" : "0.0.0.0" + } + } + } + } +} +``` + +Here, `xpack.security.enabled` is set to `false`. As a result, `xpack` security configurations are missing from the node settings. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo es/es-xpack-disabled -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo es/es-xpack-disabled + +kubectl delete ns demo +``` + +## Next Steps + +- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/issue-certificate). +- Learn how to generate [x-pack configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/issue-certificate.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/issue-certificate.md new file mode 100644 index 0000000000..2aabe02b4e --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/issue-certificate.md @@ -0,0 +1,525 @@ +--- +title: X-Pack Certificate +menu: + docs_v2024.1.31: + identifier: es-issue-certificate-x-pack + name: Issue Certificate + parent: es-x-pack + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Issue TLS Certificates + +X-Pack requires certificates to enable TLS. KubeDB creates necessary certificates automatically. However, if you want to use your own certificates, you can provide them through `spec.certificateSecret` field of Elasticsearch object. + +This tutorial will show you how to generate certificates for X-Pack and use them with Elasticsearch database. + +In KubeDB Elasticsearch, keystore and truststore files in JKS format are used instead of certificates and private keys in PEM format. + +KubeDB applies same **truststore** for both transport layer TLS and REST layer TLS. + +But, KubeDB distinguishes between the following types of keystore for security purpose. + +- **transport layer keystore** are used to identify and secure traffic between Elasticsearch nodes on the transport layer +- **http layer keystore** are used to identify Elasticsearch clients on the REST and transport layer. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +You also need to have [*OpenSSL*](https://www.openssl.org/source/) and Java *keytool* for generating all required artifacts. + +In order to find out if you have OpenSSL installed, open a terminal and type + +```bash +$ openssl version +OpenSSL 1.0.2g 1 Mar 2016 +``` + +Make sure it’s version 1.0.1k or higher + +And check *keytool* by calling + +```bash +keytool +``` + +If already installed, it will print a list of available commands. + +To keep generated files separated, open a new terminal and create a directory `/tmp/kubedb/certs` + +```bash +mkdir -p /tmp/kubedb/certs +cd /tmp/kubedb/certs +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Generate truststore + +First, we need root certificate to sign other server & client certificates. And also this certificate is imported as *truststore*. + +You need to follow these steps + +1. 
Get root certificate configuration file
+
+   ```bash
+   $ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/openssl-config/openssl-ca.ini
+   ```
+
+   ```ini
+   [ ca ]
+   default_ca = CA_default
+
+   [ CA_default ]
+   private_key = root-key.pem
+   default_days = 1000 # how long to certify for
+   default_md = sha256 # use public key default MD
+   copy_extensions = copy # Required to copy SANs from CSR to cert
+
+   [ req ]
+   prompt = no
+   default_bits = 4096
+   distinguished_name = ca_distinguished_name
+
+   [ ca_distinguished_name ]
+   O = Elasticsearch Operator
+   CN = KubeDB Com. Root CA
+   ```
+
+2. Set a password for your keystore and truststore files
+
+   ```bash
+   $ export KEY_PASS=secret
+   ```
+
+   > Note: You need to provide this KEY_PASS in your Secret as `key_pass`
+
+3. Generate private key and certificate
+
+   ```bash
+   $ openssl req -x509 -config openssl-ca.ini -newkey rsa:4096 -sha256 -nodes -out root.pem -keyout root-key.pem -batch -passin "pass:$KEY_PASS"
+   ```
+
+   Here,
+
+   - `root-key.pem` holds the Private Key
+   - `root.pem` holds the CA Certificate
+
+4. Finally, import the certificate as a truststore
+
+   ```bash
+   $ keytool -import -file root.pem -keystore root.jks -storepass $KEY_PASS -srcstoretype pkcs12 -noprompt
+   ```
+
+   Here,
+
+   - `root.jks` is the truststore for Elasticsearch
+
+## Generate keystore
+
+Here are the steps for generating a certificate and keystore for Elasticsearch:
+
+1. Get certificate configuration file
+2. Generate private key and certificate signing request (CSR)
+3. Sign certificate using root certificate
+4. Generate PKCS12 file using root certificate
+5. Import PKCS12 as keystore
+
+You need to follow these steps for each of the two keystores (node and client).
+
+To sign certificates, we need another configuration file.
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/openssl-config/openssl-sign.ini
+```
+
+```ini
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+base_dir = .
+certificate = $base_dir/root.pem # The CA certificate
+private_key = $base_dir/root-key.pem # The CA private key
+new_certs_dir = $base_dir # Location for new certs after signing
+database = $base_dir/index.txt # Database index file
+serial = $base_dir/serial.txt # The current serial number
+unique_subject = no # Set to 'no' to allow creation of several certificates with same subject.
+
+default_days = 1000 # how long to certify for
+default_md = sha256 # use public key default MD
+email_in_dn = no
+copy_extensions = copy # Required to copy SANs from CSR to cert
+
+[ req ]
+default_bits = 4096
+default_keyfile = root-key.pem
+distinguished_name = ca_distinguished_name
+
+[ ca_distinguished_name ]
+O = Elasticsearch Operator
+CN = KubeDB Com. Root CA
+
+[ signing_req ]
+keyUsage = digitalSignature, keyEncipherment
+
+[ signing_policy ]
+organizationName = optional
+commonName = supplied
+```
+
+Here,
+
+- `certificate` denotes the CA certificate path
+- `private_key` denotes the CA key path
+
+Also, you need to create an empty `index.txt` file and a `serial.txt` file containing `01`
+
+```bash
+touch index.txt
+echo '01' > serial.txt
+```
+
+### Node
+
+The following configuration is used to generate a CSR for the node certificate.
+
+```ini
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = node_distinguished_name
+req_extensions = node_req_extensions
+
+[ node_distinguished_name ]
+O = Elasticsearch Operator
+CN = custom-certificate-es-ssl
+
+[ node_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+RID.1 = 1.2.3.4.5.5
+```
+
+Here,
+
+- `RID.1=1.2.3.4.5.5` is used in the node certificate. All certificates with registeredID `1.2.3.4.5.5` are considered valid certificates for the transport layer.
+
+Now run the following commands
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/openssl-config/openssl-node.ini
+$ openssl req -config openssl-node.ini -newkey rsa:4096 -sha256 -nodes -out node-csr.pem -keyout node-key.pem
+$ openssl ca -config openssl-sign.ini -batch -policy signing_policy -extensions signing_req -out node.pem -infiles node-csr.pem
+$ openssl pkcs12 -export -certfile root.pem -inkey node-key.pem -in node.pem -password "pass:$KEY_PASS" -out node.pkcs12
+$ keytool -importkeystore -srckeystore node.pkcs12 -storepass $KEY_PASS -srcstoretype pkcs12 -srcstorepass $KEY_PASS -destkeystore node.jks -deststoretype pkcs12
+```
+
+The generated `node.jks` will be used as the keystore for transport layer TLS.
+
+### Client
+
+The following configuration is used to generate a CSR for the client certificate.
+
+```ini
+[ req ]
+prompt = no
+default_bits = 4096
+distinguished_name = client_distinguished_name
+req_extensions = client_req_extensions
+
+[ client_distinguished_name ]
+O = Elasticsearch Operator
+CN = custom-certificate-es-ssl
+
+[ client_req_extensions ]
+keyUsage = digitalSignature, keyEncipherment
+extendedKeyUsage = serverAuth, clientAuth
+subjectAltName = @alternate_names
+
+[ alternate_names ]
+DNS.1 = localhost
+DNS.2 = custom-certificate-es-ssl.demo.svc
+```
+
+Here,
+
+- `custom-certificate-es-ssl` is used as the Common Name so that the host `custom-certificate-es-ssl` is verified as a valid client.
+
+Now run the following commands
+
+```bash
+$ wget https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/openssl-config/openssl-client.ini
+$ openssl req -config openssl-client.ini -newkey rsa:4096 -sha256 -nodes -out client-csr.pem -keyout client-key.pem
+$ openssl ca -config openssl-sign.ini -batch -policy signing_policy -extensions signing_req -out client.pem -infiles client-csr.pem
+$ openssl pkcs12 -export -certfile root.pem -inkey client-key.pem -in client.pem -password "pass:$KEY_PASS" -out client.pkcs12
+$ keytool -importkeystore -srckeystore client.pkcs12 -storepass $KEY_PASS -srcstoretype pkcs12 -srcstorepass $KEY_PASS -destkeystore client.jks -deststoretype pkcs12
+```
+
+The generated `client.jks` will be used as the keystore for HTTP layer TLS.
+
+## Create Secret
+
+Now create a Secret with these certificates to use in your Elasticsearch object.
+
+```bash
+$ kubectl create secret generic -n demo custom-certificate-es-ssl-cert \
+    --from-file=root.pem \
+    --from-file=root.jks \
+    --from-file=node.jks \
+    --from-file=client.jks \
+    --from-literal=key_pass=$KEY_PASS
+
+secret/custom-certificate-es-ssl-cert created
+```
+
+> Note: `root.pem` is added to the Secret so that users can use it to connect to Elasticsearch
+
+Use this Secret `custom-certificate-es-ssl-cert` in your Elasticsearch object.
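+
+Before wiring the Secret into an Elasticsearch object, you can optionally sanity-check the generated keystores with `keytool`. This is a sketch that assumes the files from the steps above are still in `/tmp/kubedb/certs`:
+
+```bash
+# Each keystore should contain one entry issued by "KubeDB Com. Root CA"
+$ keytool -list -v -keystore node.jks -storepass $KEY_PASS | grep -E 'Alias name|Owner|Issuer'
+$ keytool -list -v -keystore client.jks -storepass $KEY_PASS | grep -E 'Alias name|Owner|Issuer'
+```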
## Create an Elasticsearch database

Below is the Elasticsearch object created in this tutorial.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: custom-certificate-es-ssl
  namespace: demo
spec:
  version: xpack-8.11.1
  enableSSL: true
  certificateSecret:
    secretName: custom-certificate-es-ssl-cert
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Here,

- `spec.certificateSecret` specifies the Secret whose certificates will be used in the Elasticsearch database.

Create the example above with the following command:

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/custom-certificate-es-ssl.yaml
elasticsearch.kubedb.com/custom-certificate-es-ssl created
```

The KubeDB operator sets the `status.phase` to `Running` once the database is successfully created.

```bash
$ kubectl get es -n demo custom-certificate-es-ssl -o wide
NAME                        VERSION   STATUS    AGE
custom-certificate-es-ssl   7.3.2     Running   1m
```

## Connect to Elasticsearch Database

We need to provide `root.pem` to connect to the Elasticsearch nodes.

Let's forward port 9200 of the `custom-certificate-es-ssl-0` pod. Run the following command in a separate terminal,

```bash
$ kubectl port-forward -n demo custom-certificate-es-ssl-0 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```

Now, we can connect with the database at `localhost:9200`.

**Connection information:**

- Address: `localhost:9200`
- Username: Run the following command to get the *username*

  ```bash
  $ kubectl get secrets -n demo custom-certificate-es-ssl-auth -o jsonpath='{.data.\ADMIN_USERNAME}' | base64 -d
  elastic
  ```

- Password: Run the following command to get the *password*

  ```bash
  $ kubectl get secrets -n demo custom-certificate-es-ssl-auth -o jsonpath='{.data.\ADMIN_PASSWORD}' | base64 -d
  uft73z6j
  ```

- Root CA: Run the following command to get the `root.pem` file

  ```bash
  $ kubectl get secrets -n demo custom-certificate-es-ssl-cert -o jsonpath='{.data.\root\.pem}' | base64 --decode > root.pem
  ```

Now, let's check the health of our Elasticsearch database.
```bash
$ curl --user "elastic:uft73z6j" "https://localhost:9200/_cluster/health?pretty" --cacert root.pem
```

```json
{
  "cluster_name" : "custom-certificate-es-ssl",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```

Additionally, to query the x-pack settings,

```bash
$ curl --user "elastic:uft73z6j" "https://localhost:9200/_nodes/_all/settings?pretty" --cacert root.pem
```

```json
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "custom-certificate-es-ssl",
  "nodes" : {
    "L75i6kmaRRWqy7-IqnDbbA" : {
      "name" : "custom-certificate-es-ssl-0",
      "transport_address" : "10.4.0.166:9300",
      "host" : "10.4.0.166",
      "ip" : "10.4.0.166",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "508c38a",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "attributes" : {
        "ml.machine_memory" : "7841263616",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      },
      "settings" : {
        "cluster" : {
          "initial_master_nodes" : "custom-certificate-es-ssl-0",
          "name" : "custom-certificate-es-ssl"
        },
        "node" : {
          "name" : "custom-certificate-es-ssl-0",
          "attr" : {
            "xpack" : {
              "installed" : "true"
            },
            "ml" : {
              "machine_memory" : "7841263616",
              "max_open_jobs" : "20"
            }
          },
          "data" : "true",
          "ingest" : "true",
          "master" : "true"
        },
        "path" : {
          "logs" : "/usr/share/elasticsearch/logs",
          "home" : "/usr/share/elasticsearch"
        },
        "discovery" : {
          "seed_hosts" : "custom-certificate-es-ssl-master"
        },
        "client" : {
          "type" : "node"
        },
        "http" : {
          "compression" : "false",
          "type" : "security4",
          "type.default" : "netty4"
        },
        "transport" : {
          "type" : "security4",
          "features" : {
            "x-pack" : "true"
          },
          "type.default" : "netty4"
        },
        "xpack" : {
          "security" : {
            "http" : {
              "ssl" : {
                "enabled" : "true"
              }
            },
            "enabled" : "true",
            "transport" : {
              "ssl" : {
                "enabled" : "true"
              }
            }
          }
        },
        "network" : {
          "host" : "0.0.0.0"
        }
      }
    }
  }
}
```

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl patch -n demo es/custom-certificate-es-ssl -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
kubectl delete -n demo es/custom-certificate-es-ssl

kubectl delete ns demo
```

## Next Steps

- Learn how to use TLS certificates to connect Elasticsearch from [here](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/use-tls).
- Learn how to generate [x-pack configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/use-tls.md b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/use-tls.md
new file mode 100644
index 0000000000..2744ed6456
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/use-tls.md
@@ -0,0 +1,374 @@
---
title: Run TLS Secured Elasticsearch
menu:
  docs_v2024.1.31:
    identifier: es-use-tls-x-pack
    name: Use TLS
    parent: es-x-pack
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Run TLS Secured Elasticsearch

X-Pack provides the facility to secure your Elasticsearch cluster with TLS. By default, KubeDB does not enable TLS security. You have to enable it by setting `spec.enableSSL: true`. If TLS is enabled, only HTTPS calls are allowed to the database server.

This tutorial will show you how to connect with an Elasticsearch cluster using a certificate when TLS is enabled.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create ns demo
namespace/demo created

$ kubectl get ns demo
NAME   STATUS   AGE
demo   Active   5s
```

> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Create Elasticsearch

In order to enable TLS, we have to set the `spec.enableSSL` field of the Elasticsearch object to `true`. Below is the YAML of the Elasticsearch object that will be created in this tutorial.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: ssl-elasticsearch
  namespace: demo
spec:
  version: xpack-8.11.1
  replicas: 2
  enableSSL: true
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Let's create the Elasticsearch object we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/x-pack/ssl-elasticsearch.yaml
elasticsearch.kubedb.com/ssl-elasticsearch created
```

```bash
$ kubectl get es -n demo ssl-elasticsearch
NAME                VERSION   STATUS    AGE
ssl-elasticsearch   7.3.2     Running   5m54s
```

## Connect to Elasticsearch Database

As we have enabled TLS for our Elasticsearch cluster, only HTTPS calls are allowed to the Elasticsearch server. So, we need to provide a certificate to connect with Elasticsearch. If you do not provide a certificate manually through the `spec.certificateSecret` field of the Elasticsearch object, KubeDB will create a secret `{elasticsearch name}-cert` with the necessary certificates.
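The operator generates this Secret asynchronously, so it may take a moment to show up. A small sketch to wait for it, assuming the `ssl-elasticsearch` object created above (the Secret name follows the `{elasticsearch name}-cert` convention noted in the previous paragraph):

```bash
# Poll until the operator has generated the certificate Secret
until kubectl get secret -n demo ssl-elasticsearch-cert >/dev/null 2>&1; do
  echo "waiting for secret demo/ssl-elasticsearch-cert ..."
  sleep 2
done
echo "secret demo/ssl-elasticsearch-cert is ready"
```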
Let's check the certificates that have been created for Elasticsearch `ssl-elasticsearch` by the KubeDB operator.

```bash
$ kubectl get secret -n demo ssl-elasticsearch-cert -o yaml
```

```yaml
apiVersion: v1
data:
  client.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
  node.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
  root.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
  root.pem: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
  sgadmin.jks: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
  key_pass: TFMwdExTMUNSVWRKVGlCLi4uLi49PQ==
kind: Secret
metadata:
  creationTimestamp: 2018-02-19T09:51:45Z
  labels:
    app.kubernetes.io/name: elasticsearches.kubedb.com
    app.kubernetes.io/instance: ssl-elasticsearch
  name: ssl-elasticsearch-cert
  namespace: demo
  resourceVersion: "754"
  selfLink: /api/v1/namespaces/demo/secrets/ssl-elasticsearch-cert
  uid: 7efdaf31-155a-11e8-a001-42010a8000d5
type: Opaque
```

Here, the `root.pem` file is the root CA in `.pem` format. We will need to provide this file when sending REST requests to the Elasticsearch server.

Let's forward port 9200 of the `ssl-elasticsearch-0` pod. Run the following command in a separate terminal,

```bash
$ kubectl port-forward -n demo ssl-elasticsearch-0 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```

Now, we can connect with the database at `localhost:9200`.

**Connection information:**

- Address: `localhost:9200`
- Username: Run the following command to get the *username*

  ```bash
  $ kubectl get secrets -n demo ssl-elasticsearch-auth -o jsonpath='{.data.\ADMIN_USERNAME}' | base64 -d
  elastic
  ```

- Password: Run the following command to get the *password*

  ```bash
  $ kubectl get secrets -n demo ssl-elasticsearch-auth -o jsonpath='{.data.\ADMIN_PASSWORD}' | base64 -d
  err5ns7w
  ```

- Root CA: Run the following command to get the `root.pem` file

  ```bash
  $ kubectl get secrets -n demo ssl-elasticsearch-cert -o jsonpath='{.data.\root\.pem}' | base64 --decode > root.pem
  ```

Now, let's check the health of our Elasticsearch database.
```bash
$ curl --user "elastic:err5ns7w" "https://localhost:9200/_cluster/health?pretty" --cacert root.pem
```

```json
{
  "cluster_name" : "ssl-elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```

Additionally, to query the x-pack settings,

```bash
$ curl --user "elastic:err5ns7w" "https://localhost:9200/_nodes/_all/settings?pretty" --cacert root.pem
```

```json
{
  "_nodes" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "cluster_name" : "ssl-elasticsearch",
  "nodes" : {
    "RUZU2vafThaLJwt6AJgNUQ" : {
      "name" : "ssl-elasticsearch-0",
      "transport_address" : "10.4.1.109:9300",
      "host" : "10.4.1.109",
      "ip" : "10.4.1.109",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "508c38a",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "attributes" : {
        "ml.machine_memory" : "7841263616",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      },
      "settings" : {
        "cluster" : {
          "initial_master_nodes" : "ssl-elasticsearch-0,ssl-elasticsearch-1",
          "name" : "ssl-elasticsearch"
        },
        "node" : {
          "name" : "ssl-elasticsearch-0",
          "attr" : {
            "xpack" : {
              "installed" : "true"
            },
            "ml" : {
              "machine_memory" : "7841263616",
              "max_open_jobs" : "20"
            }
          },
          "data" : "true",
          "ingest" : "true",
          "master" : "true"
        },
        "path" : {
          "logs" : "/usr/share/elasticsearch/logs",
          "home" : "/usr/share/elasticsearch"
        },
        "discovery" : {
          "seed_hosts" : "ssl-elasticsearch-master"
        },
        "client" : {
          "type" : "node"
        },
        "http" : {
          "compression" : "false",
          "type" : "security4",
          "type.default" : "netty4"
        },
        "transport" : {
          "type" : "security4",
          "features" : {
            "x-pack" : "true"
          },
          "type.default" : "netty4"
        },
        "xpack" : {
          "security" : {
            "http" : {
              "ssl" : {
                "enabled" : "true"
              }
            },
            "enabled" : "true",
            "transport" : {
              "ssl" : {
                "enabled" : "true"
              }
            }
          }
        },
        "network" : {
          "host" : "0.0.0.0"
        }
      }
    },
    "I9aircHnRsqFqVgLkia3_A" : {
      "name" : "ssl-elasticsearch-1",
      "transport_address" : "10.4.0.174:9300",
      "host" : "10.4.0.174",
      "ip" : "10.4.0.174",
      "version" : "7.3.2",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "508c38a",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "attributes" : {
        "ml.machine_memory" : "7841263616",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "settings" : {
        "cluster" : {
          "initial_master_nodes" : "ssl-elasticsearch-0,ssl-elasticsearch-1",
          "name" : "ssl-elasticsearch"
        },
        "node" : {
          "name" : "ssl-elasticsearch-1",
          "attr" : {
            "xpack" : {
              "installed" : "true"
            },
            "ml" : {
              "machine_memory" : "7841263616",
              "max_open_jobs" : "20"
            }
          },
          "data" : "true",
          "ingest" : "true",
          "master" : "true"
        },
        "path" : {
          "logs" : "/usr/share/elasticsearch/logs",
          "home" : "/usr/share/elasticsearch"
        },
        "discovery" : {
          "seed_hosts" : "ssl-elasticsearch-master"
        },
        "client" : {
          "type" : "node"
        },
        "http" : {
          "compression" : "false",
          "type" : "security4",
          "type.default" : "netty4"
        },
        "transport" : {
          "type" : "security4",
          "features" : {
            "x-pack" : "true"
          },
          "type.default" : "netty4"
        },
        "xpack" : {
          "security" : {
            "http" : {
              "ssl" : {
                "enabled" : "true"
              }
            },
            "enabled" : "true",
            "transport" : {
              "ssl" : {
                "enabled" : "true"
              }
            }
          }
        },
        "network" : {
          "host" : "0.0.0.0"
        }
      }
    }
  }
}
```

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl patch -n demo es/ssl-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
kubectl delete -n demo es/ssl-elasticsearch

kubectl delete ns demo
```

## Next Steps

- Learn how to [create TLS certificates](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/issue-certificate).
- Learn how to generate [x-pack configuration](/docs/v2024.1.31/guides/elasticsearch/plugins/x-pack/configuration).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/private-registry/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/private-registry/_index.md
new file mode 100755
index 0000000000..9261cbc952
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/private-registry/_index.md
@@ -0,0 +1,22 @@
---
title: Run Elasticsearch from Private Registry
menu:
  docs_v2024.1.31:
    identifier: es-private-registry-elasticsearch
    name: Private Registry
    parent: es-elasticsearch-guides
    weight: 35
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry.md
new file mode 100644
index 0000000000..fd705d2399
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry.md
@@ -0,0 +1,200 @@
---
title: Run Elasticsearch using Private Registry
menu:
  docs_v2024.1.31:
    identifier: es-using-private-registry-private-registry
    name: Quickstart
    parent: es-private-registry-elasticsearch
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Using private Docker registry

The KubeDB operator supports using a private Docker registry. This tutorial will show you how to run a KubeDB managed Elasticsearch database using private Docker images.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
```bash
$ kubectl create ns demo
namespace/demo created

$ kubectl get ns demo
NAME   STATUS   AGE
demo   Active   5s
```

> Note: YAML files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Prepare Private Docker Registry

You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial, we will use a private repository on [Docker Hub](https://hub.docker.com/).

You have to push the required images from KubeDB's [Docker Hub account](https://hub.docker.com/r/kubedb/) into your private registry.

For Elasticsearch, push the following images to your private registry.

- [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
- [kubedb/elasticsearch](https://hub.docker.com/r/kubedb/elasticsearch)
- [kubedb/elasticsearch-tools](https://hub.docker.com/r/kubedb/elasticsearch-tools)
- [kubedb/elasticsearch_exporter](https://hub.docker.com/r/kubedb/elasticsearch_exporter)
- [kubedb/yq](https://hub.docker.com/r/kubedb/yq)

```bash
$ export DOCKER_REGISTRY=

$ docker pull kubedb/operator:{{< param "info.version" >}} ; docker tag kubedb/operator:{{< param "info.version" >}} $DOCKER_REGISTRY/operator:{{< param "info.version" >}} ; docker push $DOCKER_REGISTRY/operator:{{< param "info.version" >}}
$ docker pull kubedb/elasticsearch:7.3.2 ; docker tag kubedb/elasticsearch:7.3.2 $DOCKER_REGISTRY/elasticsearch:7.3.2 ; docker push $DOCKER_REGISTRY/elasticsearch:7.3.2
$ docker pull kubedb/elasticsearch-tools:7.3.2 ; docker tag kubedb/elasticsearch-tools:7.3.2 $DOCKER_REGISTRY/elasticsearch-tools:7.3.2 ; docker push $DOCKER_REGISTRY/elasticsearch-tools:7.3.2
$ docker pull kubedb/elasticsearch_exporter:1.0.2 ; docker tag kubedb/elasticsearch_exporter:1.0.2 $DOCKER_REGISTRY/elasticsearch_exporter:1.0.2 ; docker push $DOCKER_REGISTRY/elasticsearch_exporter:1.0.2
$ docker pull kubedb/yq:2.4.0 ; docker tag kubedb/yq:2.4.0 $DOCKER_REGISTRY/yq:2.4.0 ; docker push $DOCKER_REGISTRY/yq:2.4.0
```

## Create ImagePullSecret

An ImagePullSecret is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.

Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:

```bash
$ kubectl create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-email=DOCKER_EMAIL \
    --docker-password=DOCKER_PASSWORD
secret "myregistrykey" created.
```

If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.

> Note: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).

## Create ElasticsearchVersion CRD

KubeDB uses the images specified in the ElasticsearchVersion crd for the database, backup, and exporting Prometheus metrics. You have to create an ElasticsearchVersion crd specifying images from your private registry.
Then, you have to point to this ElasticsearchVersion crd in the `spec.version` field of the Elasticsearch object. For more details about the ElasticsearchVersion crd, please visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/).

Here is an example of an ElasticsearchVersion crd. Replace `PRIVATE_REGISTRY` with your private registry.

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: ElasticsearchVersion
metadata:
  name: xpack-8.11.1
spec:
  authPlugin: SearchGuard
  db:
    image: PRIVATE_REGISTRY/elasticsearch:7.9.3-searchguard
  distribution: SearchGuard
  exporter:
    image: PRIVATE_REGISTRY/elasticsearch_exporter:1.1.0
  initContainer:
    image: PRIVATE_REGISTRY/toybox:0.8.4
    yqImage: PRIVATE_REGISTRY/elasticsearch-init:7.9.3-searchguard
  podSecurityPolicies:
    databasePolicyName: elasticsearch-db
  stash:
    addon:
      backupTask:
        name: elasticsearch-backup-7.3.2
        params:
        - name: args
          value: --match=^(?![.])(?!searchguard).+
      restoreTask:
        name: elasticsearch-restore-7.3.2
  version: 7.9.3
```

Now, create the ElasticsearchVersion crd,

```bash
$ kubectl apply -f pvt-elasticsearchversion.yaml
elasticsearchversion.kubedb.com/pvt-7.3.2 created
```

## Install KubeDB operator

When installing the KubeDB operator, set the `--docker-registry` and `--image-pull-secret` flags to the appropriate values. Follow the guide for customizing the installer [here](/docs/v2024.1.31/setup/README#customizing-installer) to see how to pass those flags.

## Deploy Elasticsearch database from Private Registry

While deploying Elasticsearch from a private repository, you have to add the `myregistrykey` secret to the Elasticsearch `spec.podTemplate.spec.imagePullSecrets` field.

Below is the YAML for the Elasticsearch crd that will be created in this tutorial.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Elasticsearch
metadata:
  name: pvt-reg-elasticsearch
  namespace: demo
spec:
  version: "xpack-8.11.1"
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  podTemplate:
    spec:
      imagePullSecrets:
      - name: myregistrykey
```

Now run the command to deploy this Elasticsearch object:

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/private-registry/private-registry.yaml
elasticsearch.kubedb.com/pvt-reg-elasticsearch created
```

To check if the images were pulled successfully from the repository, see if the Elasticsearch is in the Running state:

```bash
$ kubectl get es -n demo pvt-reg-elasticsearch -o wide
NAME                    VERSION     STATUS    AGE
pvt-reg-elasticsearch   pvt-7.3.2   Running   33m
```

## Snapshot

You can specify an `imagePullSecret` for Snapshot objects in the `spec.podTemplate.spec.imagePullSecrets` field of the Snapshot object. If you are using scheduled backup, you can also provide an `imagePullSecret` in the `backupSchedule.podTemplate.spec.imagePullSecrets` field of the Elasticsearch crd. KubeDB also reuses the `imagePullSecret` for Snapshot objects from the `spec.podTemplate.spec.imagePullSecrets` field of the Elasticsearch crd.

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl patch -n demo es/pvt-reg-elasticsearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
kubectl delete -n demo es/pvt-reg-elasticsearch

kubectl delete ns demo
```

## Next Steps

- Learn about [backup & restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) of Elasticsearch database using Stash.
+- Learn how to configure [Elasticsearch Topology Cluster](/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/). +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus). +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/_index.md new file mode 100755 index 0000000000..9e83ac065f --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Quickstart +menu: + docs_v2024.1.31: + identifier: es-quickstart-elasticsearch + name: Quickstart + parent: es-elasticsearch-guides + weight: 15 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/_index.md b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/_index.md new file mode 100755 index 0000000000..4f794781c7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/_index.md @@ -0,0 +1,22 @@ +--- +title: Elasticsearch Overview +menu: + docs_v2024.1.31: + identifier: es-overview-elasticsearch + name: Overview + parent: es-quickstart-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD.png b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD.png new file mode 100644 index 0000000000..29360feffe Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/images/Lifecycle-of-an-Elasticsearch-CRD.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/index.md b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/index.md new file mode 100644 index 0000000000..a3f08a4fa7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/index.md @@ -0,0 +1,656 @@ +--- +title: Elasticsearch Quickstart +menu: + docs_v2024.1.31: + identifier: es-elasticsearch-overview-elasticsearch + name: Elasticsearch + parent: es-overview-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? 
Please start [here](/docs/v2024.1.31/README). + +# Elasticsearch QuickStart + +This tutorial will show you how to use KubeDB to run an Elasticsearch database. + +
![lifecycle](images/Lifecycle-of-an-Elasticsearch-CRD.png)
+ +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/install/_index). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create namespace demo +namespace/demo created + +$ kubectl get namespace +NAME STATUS AGE +demo Active 9s +``` + +> Note: YAML files used in this tutorial are stored in [guides/elasticsearch/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +> We have designed this tutorial to demonstrate a production setup of KubeDB managed Elasticsearch. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/#tips-for-testing). + +## Find Available StorageClass + +We will have to provide `StorageClass` in Elasticsearch CRD specification. Check available `StorageClass` in your cluster using the following command, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 14h +``` + +Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner). + +## Find Available ElasticsearchVersion + +When you install the KubeDB operator, it registers a CRD named [ElasticsearchVersion](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/). The installation process comes with a set of tested ElasticsearchVersion objects. 
Let's check available ElasticsearchVersions by, + +```bash +$ kubectl get elasticsearchversions +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +kubedb-searchguard-5.6.16 5.6.16 KubeDB kubedb/elasticsearch:5.6.16-searchguard-v2022.02.22 4h24m +kubedb-xpack-7.12.0 7.12.0 KubeDB kubedb/elasticsearch:7.12.0-xpack-v2021.08.23 4h24m +kubedb-xpack-7.13.2 7.13.2 KubeDB kubedb/elasticsearch:7.13.2-xpack-v2021.08.23 4h24m +xpack-8.11.1 7.14.0 KubeDB kubedb/elasticsearch:7.14.0-xpack-v2021.08.23 4h24m +kubedb-xpack-8.11.1 7.16.2 KubeDB kubedb/elasticsearch:7.16.2-xpack-v2021.12.24 4h24m +kubedb-xpack-7.9.1 7.9.1 KubeDB kubedb/elasticsearch:7.9.1-xpack-v2021.08.23 4h24m +kubedb-xpack-8.2.3 8.2.0 KubeDB kubedb/elasticsearch:8.2.0-xpack-v2022.05.24 4h24m +opendistro-1.0.2 7.0.1 OpenDistro amazon/opendistro-for-elasticsearch:1.0.2 4h24m +opendistro-1.0.2-v1 7.0.1 OpenDistro amazon/opendistro-for-elasticsearch:1.0.2 4h24m +opendistro-1.1.0 7.1.1 OpenDistro amazon/opendistro-for-elasticsearch:1.1.0 4h24m +opendistro-1.1.0-v1 7.1.1 OpenDistro amazon/opendistro-for-elasticsearch:1.1.0 4h24m +opendistro-1.10.1 7.9.1 OpenDistro amazon/opendistro-for-elasticsearch:1.10.1 4h24m +opensearch-2.8.0 7.9.1 OpenDistro amazon/opendistro-for-elasticsearch:1.10.1 4h24m +opensearch-2.8.0 7.10.0 OpenDistro amazon/opendistro-for-elasticsearch:1.12.0 4h24m +opendistro-1.13.2 7.10.2 OpenDistro amazon/opendistro-for-elasticsearch:1.13.2 4h24m +opendistro-1.2.1 7.2.1 OpenDistro amazon/opendistro-for-elasticsearch:1.2.1 4h24m +opendistro-1.2.1-v1 7.2.1 OpenDistro amazon/opendistro-for-elasticsearch:1.2.1 4h24m +opendistro-1.3.0 7.3.2 OpenDistro amazon/opendistro-for-elasticsearch:1.3.0 4h24m +opendistro-1.3.0-v1 7.3.2 OpenDistro amazon/opendistro-for-elasticsearch:1.3.0 4h24m +opendistro-1.4.0 7.4.2 OpenDistro amazon/opendistro-for-elasticsearch:1.4.0 4h24m +opendistro-1.4.0-v1 7.4.2 OpenDistro amazon/opendistro-for-elasticsearch:1.4.0 4h24m +opendistro-1.6.0 7.6.1 OpenDistro amazon/opendistro-for-elasticsearch:1.6.0 4h24m +opendistro-1.6.0-v1 7.6.1 OpenDistro amazon/opendistro-for-elasticsearch:1.6.0 4h24m +opendistro-1.7.0 7.6.1 OpenDistro amazon/opendistro-for-elasticsearch:1.7.0 4h24m +opendistro-1.7.0-v1 7.6.1 OpenDistro amazon/opendistro-for-elasticsearch:1.7.0 4h24m +opendistro-1.8.0 7.7.0 OpenDistro amazon/opendistro-for-elasticsearch:1.8.0 4h24m +opendistro-1.8.0-v1 7.7.0 OpenDistro amazon/opendistro-for-elasticsearch:1.8.0 4h24m +opendistro-1.9.0 7.8.0 OpenDistro amazon/opendistro-for-elasticsearch:1.9.0 4h24m +opendistro-1.9.0-v1 7.8.0 OpenDistro amazon/opendistro-for-elasticsearch:1.9.0 4h24m +opensearch-1.1.0 1.1.0 OpenSearch opensearchproject/opensearch:1.1.0 4h24m +opensearch-2.8.0 1.2.2 OpenSearch opensearchproject/opensearch:1.2.2 4h24m +opensearch-2.8.0 1.3.2 OpenSearch opensearchproject/opensearch:1.3.2 4h24m +searchguard-6.8.1 6.8.1 SearchGuard floragunncom/sg-elasticsearch:6.8.1-oss-25.1 4h24m +searchguard-6.8.1-v1 6.8.1 SearchGuard floragunncom/sg-elasticsearch:6.8.1-oss-25.1 4h24m +searchguard-7.0.1 7.0.1 SearchGuard floragunncom/sg-elasticsearch:7.0.1-oss-35.0.0 4h24m +searchguard-7.0.1-v1 7.0.1 SearchGuard floragunncom/sg-elasticsearch:7.0.1-oss-35.0.0 4h24m +searchguard-7.1.1 7.1.1 SearchGuard floragunncom/sg-elasticsearch:7.1.1-oss-35.0.0 4h24m +searchguard-7.1.1-v1 7.1.1 SearchGuard floragunncom/sg-elasticsearch:7.1.1-oss-35.0.0 4h24m +searchguard-7.10.2 7.10.2 SearchGuard floragunncom/sg-elasticsearch:7.10.2-oss-49.0.0 4h24m +xpack-8.11.1 7.14.2 SearchGuard 
floragunncom/sg-elasticsearch:7.14.2-52.3.0 4h24m +searchguard-7.3.2 7.3.2 SearchGuard floragunncom/sg-elasticsearch:7.3.2-oss-37.0.0 4h24m +searchguard-7.5.2 7.5.2 SearchGuard floragunncom/sg-elasticsearch:7.5.2-oss-40.0.0 4h24m +xpack-8.11.1 7.5.2 SearchGuard floragunncom/sg-elasticsearch:7.5.2-oss-40.0.0 4h24m +searchguard-7.8.1 7.8.1 SearchGuard floragunncom/sg-elasticsearch:7.8.1-oss-43.0.0 4h24m +xpack-8.11.1 7.9.3 SearchGuard floragunncom/sg-elasticsearch:7.9.3-oss-47.1.0 4h24m +xpack-6.8.10-v1 6.8.10 ElasticStack elasticsearch:6.8.10 4h24m +xpack-6.8.16 6.8.16 ElasticStack elasticsearch:6.8.16 4h24m +xpack-6.8.22 6.8.22 ElasticStack elasticsearch:6.8.22 4h24m +xpack-7.0.1-v1 7.0.1 ElasticStack elasticsearch:7.0.1 4h24m +xpack-7.1.1-v1 7.1.1 ElasticStack elasticsearch:7.1.1 4h24m +xpack-7.12.0 7.12.0 ElasticStack elasticsearch:7.12.0 4h24m +xpack-7.12.0-v1 7.12.0 ElasticStack elasticsearch:7.12.0 4h24m +xpack-7.13.2 7.13.2 ElasticStack elasticsearch:7.13.2 4h24m +xpack-8.11.1 7.14.0 ElasticStack elasticsearch:7.14.0 4h24m +xpack-8.11.1 7.16.2 ElasticStack elasticsearch:7.16.2 4h24m +xpack-7.17.3 7.17.3 ElasticStack elasticsearch:7.17.3 4h24m +xpack-7.2.1-v1 7.2.1 ElasticStack elasticsearch:7.2.1 4h24m +xpack-7.3.2-v1 7.3.2 ElasticStack elasticsearch:7.3.2 4h24m +xpack-7.4.2-v1 7.4.2 ElasticStack elasticsearch:7.4.2 4h24m +xpack-7.5.2-v1 7.5.2 ElasticStack elasticsearch:7.5.2 4h24m +xpack-7.6.2-v1 7.6.2 ElasticStack elasticsearch:7.6.2 4h24m +xpack-7.7.1-v1 7.7.1 ElasticStack elasticsearch:7.7.1 4h24m +xpack-7.8.0-v1 7.8.0 ElasticStack elasticsearch:7.8.0 4h24m +xpack-8.11.1 7.9.1 ElasticStack elasticsearch:7.9.1 4h24m +xpack-7.9.1-v2 7.9.1 ElasticStack elasticsearch:7.9.1 4h24m +xpack-8.2.3 8.2.0 ElasticStack elasticsearch:8.2.0 4h24m +xpack-8.5.2 8.5.2 ElasticStack elasticsearch:8.5.2 4h24m +``` + +Notice the `DEPRECATED` column. Here, `true` means that this ElasticsearchVersion is deprecated for the current KubeDB version. KubeDB will not work for deprecated ElasticsearchVersion. + +In this tutorial, we will use `xpack-8.2.3` ElasticsearchVersion CR to create an Elasticsearch cluster. + +> Note: An image with a higher modification tag will have more features and fixes than an image with a lower modification tag. Hence, it is recommended to use ElasticsearchVersion CRD with the highest modification tag to take advantage of the latest features. For example, use `xpack-8.11.1` over `7.9.1-xpack`. + +## Create an Elasticsearch Cluster + +The KubeDB operator implements an Elasticsearch CRD to define the specification of an Elasticsearch database. + +The Elasticsearch instance used for this tutorial: + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-quickstart + namespace: demo +spec: + version: xpack-8.2.3 + enableSSL: true + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Here, + +- `spec.version` - is the name of the ElasticsearchVersion CR. Here, an Elasticsearch of version `8.2.0` will be created with `x-pack` security plugin. +- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not. +- `spec.replicas` - specifies the number of Elasticsearch nodes. +- `spec.storageType` - specifies the type of storage that will be used for Elasticsearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. 
If `Ephemeral` is used then KubeDB will create the Elasticsearch database using `EmptyDir` volume. In this case, you don't have to specify `spec.storage` field. This is useful for testing purposes. +- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required. +- `spec.terminationPolicy` specifies what KubeDB should do when a user try to delete Elasticsearch CR. Termination policy `Delete` will delete the database pods, secret and PVC when the Elasticsearch CR is deleted. + +> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVC does not get resized automatically. + +Let's create the Elasticsearch CR that is shown above: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch.yaml +elasticsearch.kubedb.com/es-quickstart created +``` + +The Elasticsearch's `STATUS` will go from `Provisioning` to `Ready` state within few minutes. Once the `STATUS` is `Ready`, you are ready to use the database. + +```bash +$ kubectl get elasticsearch -n demo -w +NAME VERSION STATUS AGE +es-quickstart xpack-8.2.3 Provisioning 7s +... ... +es-quickstart xpack-8.2.3 Ready 39s +``` + +Describe the Elasticsearch object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe elasticsearch -n demo es-quickstart +Name: es-quickstart +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Elasticsearch +Metadata: + Creation Timestamp: 2022-12-27T05:25:39Z + Finalizers: + kubedb.com + Generation: 1 + Managed Fields: + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:enableSSL: + f:healthChecker: + .: + f:failureThreshold: + f:periodSeconds: + f:timeoutSeconds: + f:heapSizePercentage: + f:replicas: + f:storage: + .: + f:accessModes: + f:resources: + .: + f:requests: + .: + f:storage: + f:storageClassName: + f:storageType: + f:terminationPolicy: + f:version: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-12-27T05:25:39Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: + .: + v:"kubedb.com": + Manager: kubedb-provisioner + Operation: Update + Time: 2022-12-27T05:25:39Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-provisioner + Operation: Update + Subresource: status + Time: 2022-12-27T05:25:39Z + Resource Version: 313887 + UID: cf37390a-ab9f-4886-9f7e-1a5bedc975e7 +Spec: + Auth Secret: + Name: es-quickstart-elastic-cred + Auto Ops: + Enable SSL: true + Health Checker: + Failure Threshold: 1 + Period Seconds: 10 + Timeout Seconds: 10 + Heap Size Percentage: 50 + Internal Users: + apm_system: + Backend Roles: + apm_system + Secret Name: es-quickstart-apm-system-cred + beats_system: + Backend Roles: + beats_system + Secret Name: 
es-quickstart-beats-system-cred + Elastic: + Backend Roles: + superuser + Secret Name: es-quickstart-elastic-cred + kibana_system: + Backend Roles: + kibana_system + Secret Name: es-quickstart-kibana-system-cred + logstash_system: + Backend Roles: + logstash_system + Secret Name: es-quickstart-logstash-system-cred + remote_monitoring_user: + Backend Roles: + remote_monitoring_collector + remote_monitoring_agent + Secret Name: es-quickstart-remote-monitoring-user-cred + Kernel Settings: + Privileged: true + Sysctls: + Name: vm.max_map_count + Value: 262144 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: es-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: es-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: elasticsearches.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Run As User: 1000 + Resources: + Limits: + Memory: 1536Mi + Requests: + Cpu: 500m + Memory: 1536Mi + Service Account Name: es-quickstart + Replicas: 3 + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Storage Type: Durable + Termination Policy: Delete + Tls: + Certificates: + Alias: ca + Private Key: + Encoding: PKCS8 + Secret Name: es-quickstart-ca-cert + Subject: + Organizations: + kubedb + Alias: transport + Private Key: + Encoding: PKCS8 + Secret Name: es-quickstart-transport-cert + Subject: + Organizations: + kubedb + Alias: http + Private Key: + Encoding: PKCS8 + Secret Name: es-quickstart-http-cert + Subject: + Organizations: + kubedb + Alias: client + Private Key: + Encoding: PKCS8 + Secret Name: es-quickstart-client-cert + Subject: + Organizations: + kubedb + Version: xpack-8.2.3 +Status: + Conditions: + Last Transition Time: 2022-12-27T05:25:39Z + Message: The KubeDB operator has started the provisioning of Elasticsearch: demo/es-quickstart + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2022-12-27T05:25:41Z + Message: Internal Users for Elasticsearch: demo/es-quickstart is ready. + Observed Generation: 1 + Reason: InternalUsersCredentialsSyncedSuccessfully + Status: True + Type: InternalUsersSynced + Last Transition Time: 2022-12-27T05:28:48Z + Message: All desired replicas are ready. + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2022-12-27T05:29:05Z + Message: The Elasticsearch: demo/es-quickstart is accepting client requests. + Observed Generation: 1 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2022-12-27T05:29:05Z + Message: The Elasticsearch: demo/es-quickstart is ready. + Observed Generation: 1 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2022-12-27T05:29:06Z + Message: The Elasticsearch: demo/es-quickstart is accepting write requests. 
+ Observed Generation: 1 + Reason: DatabaseWriteAccessCheckSucceeded + Status: True + Type: DatabaseWriteAccess + Last Transition Time: 2022-12-27T05:29:13Z + Message: The Elasticsearch: demo/es-quickstart is successfully provisioned. + Observed Generation: 1 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Last Transition Time: 2022-12-27T05:29:15Z + Message: The Elasticsearch: demo/es-quickstart is accepting read requests. + Observed Generation: 1 + Reason: DatabaseReadAccessCheckSucceeded + Status: True + Type: DatabaseReadAccess + Observed Generation: 1 + Phase: Ready +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 4m48s KubeDB Operator Successfully created governing service + Normal Successful 4m47s KubeDB Operator Successfully created Service + Normal Successful 4m47s KubeDB Operator Successfully created Service + Normal Successful 4m40s KubeDB Operator Successfully created Elasticsearch + Normal Successful 4m40s KubeDB Operator Successfully created appbinding + Normal Successful 4m40s KubeDB Operator Successfully governing service + Normal Successful 4m32s KubeDB Operator Successfully governing service + Normal Successful 99s KubeDB Operator Successfully governing service + Normal Successful 82s KubeDB Operator Successfully governing service + Normal Successful 74s KubeDB Operator Successfully governing service + Normal Successful 66s KubeDB Operator Successfully governing service +``` + +### KubeDB Operator Generated Resources + +On deployment of an Elasticsearch CR, the operator creates the following resources: + +```bash +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-quickstart' +NAME READY STATUS RESTARTS AGE +pod/es-quickstart-0 1/1 Running 0 8m2s +pod/es-quickstart-1 1/1 Running 0 5m15s +pod/es-quickstart-2 1/1 Running 0 5m8s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/es-quickstart ClusterIP 10.96.209.204 9200/TCP 8m9s +service/es-quickstart-master ClusterIP None 9300/TCP 8m9s +service/es-quickstart-pods ClusterIP None 9200/TCP 8m10s + +NAME READY AGE +statefulset.apps/es-quickstart 3/3 8m2s + +NAME TYPE VERSION AGE +appbinding.appcatalog.appscode.com/es-quickstart kubedb.com/elasticsearch 8.2.0 8m2s + +NAME TYPE DATA AGE +secret/es-quickstart-apm-system-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-beats-system-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-ca-cert kubernetes.io/tls 2 8m9s +secret/es-quickstart-client-cert kubernetes.io/tls 3 8m8s +secret/es-quickstart-config Opaque 1 8m8s +secret/es-quickstart-elastic-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-http-cert kubernetes.io/tls 3 8m9s +secret/es-quickstart-kibana-system-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-logstash-system-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-remote-monitoring-user-cred kubernetes.io/basic-auth 2 8m8s +secret/es-quickstart-transport-cert kubernetes.io/tls 3 8m9s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/data-es-quickstart-0 Bound pvc-e5227633-2fc0-4a50-a599-57cba8b31d14 1Gi RWO standard 8m2s +persistentvolumeclaim/data-es-quickstart-1 Bound pvc-fbacd36c-4132-4e2a-a5c5-91149054044c 1Gi RWO standard 5m15s +persistentvolumeclaim/data-es-quickstart-2 Bound pvc-9f9c6eaf-1ba6-4167-a37d-86eaf1f7e103 1Gi RWO standard 5m8s +``` + +- `StatefulSet` - a StatefulSet named after the Elasticsearch instance. 
In topology mode, the operator creates 3 StatefulSets with the name `{Elasticsearch-Name}-{Suffix}`.
- `Services` - 3 services are generated for each Elasticsearch database.
  - `{Elasticsearch-Name}` - the client service which is used to connect to the database. It points to the `ingest` nodes.
  - `{Elasticsearch-Name}-master` - the master service which is used to connect to the master nodes. It is a headless service.
  - `{Elasticsearch-Name}-pods` - the node discovery service which is used by the Elasticsearch nodes to communicate with each other. It is a headless service.
- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database. It is also named after the Elasticsearch instance.
- `Secrets` - 3 types of secrets are generated for each Elasticsearch database.
  - `{Elasticsearch-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the Elasticsearch users. The auth secret `es-quickstart-elastic-cred` holds the `username` and `password` for the `elastic` user, which has administrative access.
  - `{Elasticsearch-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the Elasticsearch database.
  - `{Elasticsearch-Name}-config` - the default configuration secret created by the operator.
  - `data-{Elasticsearch-node-name}` - the persistent volume claims created by the StatefulSet.

## Connect with Elasticsearch Database

We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our Elasticsearch database. Then we will use `curl` to send HTTP requests to check the cluster health and verify that our Elasticsearch database is working well.

Let's port-forward port `9200` to the local machine:

```bash
$ kubectl port-forward -n demo svc/es-quickstart 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```

Now, our Elasticsearch cluster is accessible at `localhost:9200`.

**Connection information:**

- Address: `localhost:9200`
- Username:

  ```bash
  $ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.username}' | base64 -d
  elastic
  ```

- Password:

  ```bash
  $ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.password}' | base64 -d
  vIHoIfHn=!Z8F4gP
  ```

Now let's check the health of our Elasticsearch database.

```bash
$ curl -XGET -k -u 'elastic:vIHoIfHn=!Z8F4gP' "https://localhost:9200/_cluster/health?pretty"

{
  "cluster_name" : "es-quickstart",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```

From the health information above, we can see that our Elasticsearch cluster's status is `green`, which means the cluster is healthy.

## Halt Elasticsearch

KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` termination policy. If the admission webhook is enabled, it prevents the user from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
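For example, you can switch the policy to `DoNotTerminate` and watch a delete request get rejected. A quick sketch reusing the `kubectl patch` pattern from the cleanup section later in this guide:

```bash
# Guard the database against accidental deletion
kubectl patch -n demo elasticsearch es-quickstart --type="merge" \
  -p '{"spec":{"terminationPolicy":"DoNotTerminate"}}'

# With the validating webhook enabled, this delete request should now be denied
kubectl delete elasticsearch -n demo es-quickstart
```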
To halt the database, we have to set `spec.terminationPolicy` to `Halt` by updating it,

```bash
$ kubectl edit elasticsearch -n demo es-quickstart

>> spec:
>>   terminationPolicy: Halt
```

Now, if you delete the Elasticsearch object, the KubeDB operator will delete every resource created for this Elasticsearch CR, but will leave the auth secrets and PVCs.

```bash
$ kubectl delete elasticsearch -n demo es-quickstart
elasticsearch.kubedb.com "es-quickstart" deleted
```

Check the remaining resources:

```bash
$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-quickstart'
NAME                                               TYPE                       DATA   AGE
secret/es-quickstart-apm-system-cred               kubernetes.io/basic-auth   2      5m39s
secret/es-quickstart-beats-system-cred             kubernetes.io/basic-auth   2      5m39s
secret/es-quickstart-elastic-cred                  kubernetes.io/basic-auth   2      5m39s
secret/es-quickstart-kibana-system-cred            kubernetes.io/basic-auth   2      5m39s
secret/es-quickstart-logstash-system-cred          kubernetes.io/basic-auth   2      5m39s
secret/es-quickstart-remote-monitoring-user-cred   kubernetes.io/basic-auth   2      5m39s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-es-quickstart-0   Bound    pvc-5b657e2a-6c32-4631-bac9-eefebbcb129a   1Gi        RWO            standard       5m29s
persistentvolumeclaim/data-es-quickstart-1   Bound    pvc-e44d7ab8-fc2b-4cfe-9bef-74f2a2d875f5   1Gi        RWO            standard       5m23s
persistentvolumeclaim/data-es-quickstart-2   Bound    pvc-dad75b1b-37ed-4318-a82a-5e38f04d36bc   1Gi        RWO            standard       5m18s

```

## Resume Elasticsearch

Say, the Elasticsearch CR was deleted with `spec.terminationPolicy` set to `Halt`, and you want to re-create the Elasticsearch cluster using the existing auth secrets and PVCs.

You can do it by simply re-deploying the original Elasticsearch object:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch.yaml
elasticsearch.kubedb.com/es-quickstart created
```

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl patch -n demo elasticsearch es-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
elasticsearch.kubedb.com/es-quickstart patched

$ kubectl delete -n demo es/es-quickstart
elasticsearch.kubedb.com "es-quickstart" deleted

$ kubectl delete namespace demo
namespace "demo" deleted
```

## Tips for Testing

If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.

1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if the database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not be required to provide the `spec.storage` section.
2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume the database from the previous one. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the Elasticsearch CR.
For more details, please visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/#specterminationpolicy). + +## Next Steps + +- [Quickstart Kibana](/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/kibana/) with KubeDB Operator. +- Learn how to configure [Elasticsearch Topology Cluster](/docs/v2024.1.31/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/). +- Learn about [backup & restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) Elasticsearch database using Stash. +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus). +- Monitor your Elasticsearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Detail concepts of [Elasticsearch object](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/). +- Use [private Docker registry](/docs/v2024.1.31/guides/elasticsearch/private-registry/using-private-registry) to deploy Elasticsearch with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch.yaml b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch.yaml new file mode 100644 index 0000000000..6446a26577 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Elasticsearch +metadata: + name: es-quickstart + namespace: demo +spec: + version: xpack-8.2.3 + enableSSL: true + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/images/Lifecycle-of-an-Opensearch-CRD.png b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/images/Lifecycle-of-an-Opensearch-CRD.png new file mode 100644 index 0000000000..d8e70fc7aa Binary files /dev/null and b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/images/Lifecycle-of-an-Opensearch-CRD.png differ diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/index.md b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/index.md new file mode 100644 index 0000000000..b6d2e06475 --- /dev/null +++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/index.md @@ -0,0 +1,586 @@ +--- +title: OpenSearch Quickstart +menu: + docs_v2024.1.31: + identifier: es-opensearch-overview-elasticsearch + name: OpenSearch + parent: es-overview-elasticsearch + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](https://kubedb.com/docs/v2021.12.21/welcome/). + +# OpenSearch QuickStart + +This tutorial will show you how to use KubeDB to run an OpenSearch database. + +

+![lifecycle](images/Lifecycle-of-an-Opensearch-CRD.png)

+
+## Before You Begin
+
+* At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+* Now, install the KubeDB operator in your cluster following the steps [here](https://kubedb.com/docs/v2021.12.21/setup/).
+
+* Elasticsearch has many distributions like `ElasticStack`, `OpenSearch`, `SearchGuard`, `OpenDistro` etc. KubeDB supports all of these distributions through its Elasticsearch CR. So, in this tutorial, we will deploy OpenSearch using the KubeDB managed Elasticsearch CR.
+
+* A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required by the Elasticsearch CR specification. Check the StorageClasses available in your cluster:
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  11h
+```
+Here, we have the `standard` StorageClass in our cluster, provided by the [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME                 STATUS   AGE
+demo                 Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in the [guides/elasticsearch/quickstart/overview/opensearch/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/opensearch/yamls) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed OpenSearch. If you just want to try out KubeDB, you can bypass some of the safety features following the tips [here](/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/#tips-for-testing).
+
+## Find Available Versions
+
+When you install the KubeDB operator, it registers a CRD named [ElasticsearchVersion](/docs/v2024.1.31/guides/elasticsearch/concepts/catalog/). The installation process comes with a set of tested ElasticsearchVersion objects.
Let's check the available ElasticsearchVersions with the following command:
+
+```bash
+$ kubectl get elasticsearchversions
+NAME                   VERSION   DISTRIBUTION   DB_IMAGE                                          DEPRECATED   AGE
+kubedb-xpack-7.12.0    7.12.0    KubeDB         kubedb/elasticsearch:7.12.0-xpack-v2021.08.23                  17h
+kubedb-xpack-7.13.2    7.13.2    KubeDB         kubedb/elasticsearch:7.13.2-xpack-v2021.08.23                  17h
+kubedb-xpack-7.14.0    7.14.0    KubeDB         kubedb/elasticsearch:7.14.0-xpack-v2021.08.23                  17h
+kubedb-xpack-7.16.2    7.16.2    KubeDB         kubedb/elasticsearch:7.16.2-xpack-v2021.12.24                  17h
+kubedb-xpack-7.9.1     7.9.1     KubeDB         kubedb/elasticsearch:7.9.1-xpack-v2021.08.23                   17h
+opendistro-1.0.2       7.0.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.0.2                      17h
+opendistro-1.0.2-v1    7.0.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.0.2                      17h
+opendistro-1.1.0       7.1.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.1.0                      17h
+opendistro-1.1.0-v1    7.1.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.1.0                      17h
+opendistro-1.10.1      7.9.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.10.1                     17h
+opendistro-1.10.1-v1   7.9.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.10.1                     17h
+opendistro-1.12.0      7.10.0    OpenDistro     amazon/opendistro-for-elasticsearch:1.12.0                     17h
+opendistro-1.13.2      7.10.2    OpenDistro     amazon/opendistro-for-elasticsearch:1.13.2                     17h
+opendistro-1.2.1       7.2.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.2.1                      17h
+opendistro-1.2.1-v1    7.2.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.2.1                      17h
+opendistro-1.3.0       7.3.2     OpenDistro     amazon/opendistro-for-elasticsearch:1.3.0                      17h
+opendistro-1.3.0-v1    7.3.2     OpenDistro     amazon/opendistro-for-elasticsearch:1.3.0                      17h
+opendistro-1.4.0       7.4.2     OpenDistro     amazon/opendistro-for-elasticsearch:1.4.0                      17h
+opendistro-1.4.0-v1    7.4.2     OpenDistro     amazon/opendistro-for-elasticsearch:1.4.0                      17h
+opendistro-1.6.0       7.6.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.6.0                      17h
+opendistro-1.6.0-v1    7.6.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.6.0                      17h
+opendistro-1.7.0       7.6.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.7.0                      17h
+opendistro-1.7.0-v1    7.6.1     OpenDistro     amazon/opendistro-for-elasticsearch:1.7.0                      17h
+opendistro-1.8.0       7.7.0     OpenDistro     amazon/opendistro-for-elasticsearch:1.8.0                      17h
+opendistro-1.8.0-v1    7.7.0     OpenDistro     amazon/opendistro-for-elasticsearch:1.8.0                      17h
+opendistro-1.9.0       7.8.0     OpenDistro     amazon/opendistro-for-elasticsearch:1.9.0                      17h
+opendistro-1.9.0-v1    7.8.0     OpenDistro     amazon/opendistro-for-elasticsearch:1.9.0                      17h
+opensearch-1.1.0       1.1.0     OpenSearch     opensearchproject/opensearch:1.1.0                             17h
+opensearch-2.8.0       1.2.2     OpenSearch     opensearchproject/opensearch:1.2.2                             17h
+searchguard-6.8.1      6.8.1     SearchGuard    floragunncom/sg-elasticsearch:6.8.1-oss-25.1                   17h
+searchguard-6.8.1-v1   6.8.1     SearchGuard    floragunncom/sg-elasticsearch:6.8.1-oss-25.1                   17h
+searchguard-7.0.1      7.0.1     SearchGuard    floragunncom/sg-elasticsearch:7.0.1-oss-35.0.0                 17h
+searchguard-7.0.1-v1   7.0.1     SearchGuard    floragunncom/sg-elasticsearch:7.0.1-oss-35.0.0                 17h
+searchguard-7.1.1      7.1.1     SearchGuard    floragunncom/sg-elasticsearch:7.1.1-oss-35.0.0                 17h
+searchguard-7.1.1-v1   7.1.1     SearchGuard    floragunncom/sg-elasticsearch:7.1.1-oss-35.0.0                 17h
+searchguard-7.10.2     7.10.2    SearchGuard    floragunncom/sg-elasticsearch:7.10.2-oss-49.0.0                17h
+searchguard-7.14.2     7.14.2    SearchGuard    floragunncom/sg-elasticsearch:7.14.2-52.3.0                    17h
+searchguard-7.3.2      7.3.2     SearchGuard    floragunncom/sg-elasticsearch:7.3.2-oss-37.0.0                 17h
+searchguard-7.5.2      7.5.2     SearchGuard    floragunncom/sg-elasticsearch:7.5.2-oss-40.0.0                 17h
+searchguard-7.5.2-v1   7.5.2     SearchGuard    floragunncom/sg-elasticsearch:7.5.2-oss-40.0.0                 17h
+searchguard-7.8.1      7.8.1     SearchGuard    floragunncom/sg-elasticsearch:7.8.1-oss-43.0.0                 17h
+searchguard-7.9.3      7.9.3     SearchGuard    floragunncom/sg-elasticsearch:7.9.3-oss-47.1.0                 17h
+xpack-6.8.10-v1        6.8.10    ElasticStack   elasticsearch:6.8.10                                           17h
+xpack-6.8.16           6.8.16    ElasticStack   elasticsearch:6.8.16                                           17h
+xpack-6.8.22           6.8.22    ElasticStack   elasticsearch:6.8.22                                           17h
+xpack-7.0.1-v1         7.0.1     ElasticStack   elasticsearch:7.0.1                                            17h
+xpack-7.1.1-v1         7.1.1     ElasticStack   elasticsearch:7.1.1                                            17h
+xpack-7.12.0           7.12.0    ElasticStack   elasticsearch:7.12.0                                           17h
+xpack-7.12.0-v1        7.12.0    ElasticStack   elasticsearch:7.12.0                                           17h
+xpack-7.13.2           7.13.2    ElasticStack   elasticsearch:7.13.2                                           17h
+xpack-7.14.0           7.14.0    ElasticStack   elasticsearch:7.14.0                                           17h
+xpack-7.16.2           7.16.2    ElasticStack   elasticsearch:7.16.2                                           17h
+xpack-7.2.1-v1         7.2.1     ElasticStack   elasticsearch:7.2.1                                            17h
+xpack-7.3.2-v1         7.3.2     ElasticStack   elasticsearch:7.3.2                                            17h
+xpack-7.4.2-v1         7.4.2     ElasticStack   elasticsearch:7.4.2                                            17h
+xpack-7.5.2-v1         7.5.2     ElasticStack   elasticsearch:7.5.2                                            17h
+xpack-7.6.2-v1         7.6.2     ElasticStack   elasticsearch:7.6.2                                            17h
+xpack-7.7.1-v1         7.7.1     ElasticStack   elasticsearch:7.7.1                                            17h
+xpack-7.8.0-v1         7.8.0     ElasticStack   elasticsearch:7.8.0                                            17h
+xpack-7.9.1-v1         7.9.1     ElasticStack   elasticsearch:7.9.1                                            17h
+xpack-7.9.1-v2         7.9.1     ElasticStack   elasticsearch:7.9.1                                            17h
+```
+
+Notice the `DEPRECATED` column. Here, `true` means that this ElasticsearchVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated ElasticsearchVersion.
+
+In this tutorial, we will use the `opensearch-2.8.0` ElasticsearchVersion CR to create an OpenSearch cluster.
+
+> Note: An image with a higher modification tag will have more features and fixes than an image with a lower modification tag. Hence, it is recommended to use the ElasticsearchVersion CRD with the highest modification tag to take advantage of the latest features. For example, we are using `opensearch-2.8.0` over `opensearch-1.1.0`.
+
+## Create an OpenSearch Cluster
+
+The KubeDB operator implements an Elasticsearch CRD to define the specification of an OpenSearch database.
+
+Here is the YAML we will use for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: sample-opensearch
+  namespace: demo
+spec:
+  version: opensearch-2.8.0
+  enableSSL: true
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Here,
+
+- `spec.version` - is the name of the ElasticsearchVersion CR. Here, we are using version `opensearch-2.8.0`.
+- `spec.enableSSL` - specifies whether the HTTP layer is secured with certificates or not.
+- `spec.replicas` - specifies the number of OpenSearch nodes.
+- `spec.storageType` - specifies the type of storage that will be used for the OpenSearch database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the OpenSearch database using an `emptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. Unless you specify `spec.storageType: Ephemeral`, this field is required.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the Elasticsearch CR. The termination policy `DoNotTerminate` prevents a user from deleting this object while the admission webhook is enabled.
+
+> Note: The `spec.storage` section is used to create PVCs for the database pods, with the storage size specified in the `storage.resources.requests` field. Don't specify `limits` here; PVCs do not get resized automatically.
+
+Let's apply the YAML shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/opensearch/yamls/opensearch.yaml
+elasticsearch.kubedb.com/sample-opensearch created
+```
+
+Wait a few minutes for the `STATUS` to go from `Provisioning` to `Ready`. Once the `STATUS` is `Ready`, the database is ready to use.
+
+```bash
+$ kubectl get elasticsearch -n demo -w
+NAME                VERSION            STATUS         AGE
+sample-opensearch   opensearch-2.8.0   Provisioning   49s
+... ...
+$ kubectl get elasticsearch -n demo -w
+NAME                VERSION            STATUS   AGE
+sample-opensearch   opensearch-2.8.0   Ready    5m4s
+```
+
+If something goes wrong or the status does not change for a long period of time, describe the object to observe the progress:
+
+```bash
+$ kubectl describe elasticsearch -n demo sample-opensearch
+Name:         sample-opensearch
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  kubedb.com/v1alpha2
+Kind:         Elasticsearch
+Metadata:
+  Creation Timestamp:  2022-02-15T07:00:21Z
+  Finalizers:
+    kubedb.com
+  Generation:        1
+  Resource Version:  84343
+  UID:               20c388a6-54b1-4c0d-891b-879ec8e2a8c6
+Spec:
+  Auth Secret:
+    Name:      sample-opensearch-admin-cred
+  Enable SSL:  true
+  Internal Users:
+    Admin:
+      Backend Roles:
+        admin
+      Reserved:     true
+      Secret Name:  sample-opensearch-admin-cred
+    Kibanaro:
+      Secret Name:  sample-opensearch-kibanaro-cred
+    Kibanaserver:
+      Reserved:     true
+      Secret Name:  sample-opensearch-kibanaserver-cred
+    Logstash:
+      Secret Name:  sample-opensearch-logstash-cred
+    Readall:
+      Secret Name:  sample-opensearch-readall-cred
+    Snapshotrestore:
+      Secret Name:  sample-opensearch-snapshotrestore-cred
+  Kernel Settings:
+    Privileged:  true
+    Sysctls:
+      Name:   vm.max_map_count
+      Value:  262144
+  Pod Template:
+    Controller:
+    Metadata:
+    Spec:
+      Affinity:
+        Pod Anti Affinity:
+          Preferred During Scheduling Ignored During Execution:
+            Pod Affinity Term:
+              Label Selector:
+                Match Labels:
+                  app.kubernetes.io/instance:    sample-opensearch
+                  app.kubernetes.io/managed-by:  kubedb.com
+                  app.kubernetes.io/name:        elasticsearches.kubedb.com
+              Namespaces:
+                demo
+              Topology Key:  kubernetes.io/hostname
+            Weight:          100
+            Pod Affinity Term:
+              Label Selector:
+                Match Labels:
+                  app.kubernetes.io/instance:    sample-opensearch
+                  app.kubernetes.io/managed-by:  kubedb.com
+                  app.kubernetes.io/name:        elasticsearches.kubedb.com
+              Namespaces:
+                demo
+              Topology Key:  failure-domain.beta.kubernetes.io/zone
+            Weight:          50
+      Container Security Context:
+        Capabilities:
+          Add:
+            IPC_LOCK
+            SYS_RESOURCE
+        Privileged:  false
+      Resources:
+        Limits:
+          Memory:  1Gi
+        Requests:
+          Cpu:               500m
+          Memory:            1Gi
+      Service Account Name:  sample-opensearch
+  Replicas:                  3
+  Storage:
+    Access Modes:
+      ReadWriteOnce
+    Resources:
+      Requests:
+        Storage:         1Gi
+    Storage Class Name:  standard
+  Storage Type:          Durable
+  Termination Policy:    DoNotTerminate
+  Tls:
+    Certificates:
+      Alias:  ca
+      Private Key:
+        Encoding:   PKCS8
+      Secret Name:  sample-opensearch-ca-cert
+      Subject:
+        Organizations:
+          kubedb
+      Alias:  transport
+      Private Key:
+        Encoding:   PKCS8
+      Secret Name:  sample-opensearch-transport-cert
+      Subject:
+        Organizations:
+          kubedb
+      Alias:  admin
+      Private Key:
+        Encoding:   PKCS8
+      Secret Name:  sample-opensearch-admin-cert
+      Subject:
+        Organizations:
+          kubedb
+      Alias:  http
+      Private Key:
+        Encoding:   PKCS8
+      Secret Name:  sample-opensearch-http-cert
+      Subject:
+        Organizations:
+          kubedb
+      Alias:  archiver
+      Private Key:
+        Encoding:   PKCS8
+      Secret Name:  sample-opensearch-archiver-cert
+      Subject:
+        Organizations:
+          kubedb
+  Version:  opensearch-2.8.0
+Status:
+  Conditions:
+    Last Transition Time:  2022-02-15T07:00:21Z
+    Message:               The KubeDB operator has started the provisioning of Elasticsearch: demo/sample-opensearch
+    Reason:                DatabaseProvisioningStartedSuccessfully
+    Status:                True
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2022-02-15T07:00:44Z
+    Message:               All desired replicas are ready.
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2022-02-15T07:01:35Z
+    Message:               The Elasticsearch: demo/sample-opensearch is accepting client requests.
+    Observed Generation:   1
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2022-02-15T07:01:35Z
+    Message:               The Elasticsearch: demo/sample-opensearch is ready.
+    Observed Generation:   1
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2022-02-15T07:01:35Z
+    Message:               The Elasticsearch: demo/sample-opensearch is successfully provisioned.
+    Observed Generation:   1
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Observed Generation:     1
+  Phase:                   Ready
+Events:
+  Type    Reason      Age   From             Message
+  ----    ------      ----  ----             -------
+  Normal  Successful  56m   KubeDB Operator  Successfully governing service
+  Normal  Successful  56m   KubeDB Operator  Successfully governing service
+```
+
+### KubeDB Operator Generated Resources
+
+After the deployment, the operator creates the following resources:
+
+```bash
+$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=sample-opensearch'
+NAME                      READY   STATUS    RESTARTS   AGE
+pod/sample-opensearch-0   1/1     Running   0          23m
+pod/sample-opensearch-1   1/1     Running   0          23m
+pod/sample-opensearch-2   1/1     Running   0          23m
+
+NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+service/sample-opensearch          ClusterIP   10.96.29.157   <none>        9200/TCP   23m
+service/sample-opensearch-master   ClusterIP   None           <none>        9300/TCP   23m
+service/sample-opensearch-pods     ClusterIP   None           <none>        9200/TCP   23m
+
+NAME                                 READY   AGE
+statefulset.apps/sample-opensearch   3/3     23m
+
+NAME                                                    TYPE                       VERSION   AGE
+appbinding.appcatalog.appscode.com/sample-opensearch    kubedb.com/elasticsearch   1.2.2     23m
+
+NAME                                            TYPE                       DATA   AGE
+secret/sample-opensearch-admin-cert             kubernetes.io/tls          3      23m
+secret/sample-opensearch-admin-cred             kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-archiver-cert          kubernetes.io/tls          3      23m
+secret/sample-opensearch-ca-cert                kubernetes.io/tls          2      23m
+secret/sample-opensearch-config                 Opaque                     3      23m
+secret/sample-opensearch-http-cert              kubernetes.io/tls          3      23m
+secret/sample-opensearch-kibanaro-cred          kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-kibanaserver-cred      kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-logstash-cred          kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-readall-cred           kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-snapshotrestore-cred   kubernetes.io/basic-auth   2      23m
+secret/sample-opensearch-transport-cert         kubernetes.io/tls          3      23m
+
+```
+
+- `StatefulSet` - a StatefulSet named after the OpenSearch instance.
+- `Services` - 3 services are generated for each OpenSearch database.
+  - `{OpenSearch-Name}` - the client service, which is used to connect to the database. It points to the `ingest` nodes.
+  - `{OpenSearch-Name}-master` - the master service, which is used to connect to the master nodes. It is a headless service.
+  - `{OpenSearch-Name}-pods` - the node discovery service, which is used by the OpenSearch nodes to communicate with each other. It is a headless service.
+- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/elasticsearch/concepts/appbinding/) which holds the connection information for the database.
+- `Secrets` - 3 types of secrets are generated for each OpenSearch database.
+  - `{OpenSearch-Name}-{username}-cred` - the auth secrets, which hold the `username` and `password` for the OpenSearch users.
+  - `{OpenSearch-Name}-{alias}-cert` - the certificate secrets, which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the OpenSearch database.
+  - `{OpenSearch-Name}-config` - the default configuration secret created by the operator.
+
+### Insert Sample Data
+
+In this section, we are going to create a few indexes in the deployed OpenSearch. At first, we are going to port-forward the respective Service so that we can connect with the database from our local machine. Then, we are going to insert some data into the OpenSearch.
+
+#### Port-forward the Service
+
+KubeDB will create a few Services to connect with the database. Let's see the Services created by KubeDB for our OpenSearch,
+
+```bash
+$ kubectl get service -n demo
+NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+sample-opensearch          ClusterIP   10.48.14.99   <none>        9200/TCP   4m33s
+sample-opensearch-master   ClusterIP   None          <none>        9300/TCP   4m33s
+sample-opensearch-pods     ClusterIP   None          <none>        9200/TCP   4m33s
+```
+Here, we are going to use the `sample-opensearch` Service to connect with the database. Now, let's port-forward the `sample-opensearch` Service.
+
+```bash
+# Port-forward the service to local machine
+$ kubectl port-forward -n demo svc/sample-opensearch 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+#### Export the Credentials
+
+KubeDB will create some Secrets for the database. Let's check which Secrets have been created by KubeDB for our `sample-opensearch`.
+
+```bash
+$ kubectl get secret -n demo | grep sample-opensearch
+sample-opensearch-admin-cert             kubernetes.io/tls                     3      10m
+sample-opensearch-admin-cred             kubernetes.io/basic-auth              2      10m
+sample-opensearch-ca-cert                kubernetes.io/tls                     2      10m
+sample-opensearch-config                 Opaque                                3      10m
+sample-opensearch-kibanaro-cred          kubernetes.io/basic-auth              2      10m
+sample-opensearch-kibanaserver-cred      kubernetes.io/basic-auth              2      10m
+sample-opensearch-logstash-cred          kubernetes.io/basic-auth              2      10m
+sample-opensearch-readall-cred           kubernetes.io/basic-auth              2      10m
+sample-opensearch-snapshotrestore-cred   kubernetes.io/basic-auth              2      10m
+sample-opensearch-token-zbn46            kubernetes.io/service-account-token   3      10m
+sample-opensearch-transport-cert         kubernetes.io/tls                     3      10m
+```
+Now, we can connect to the database with any of these secrets that end with `-cred`. Here, we are using `sample-opensearch-admin-cred`, which contains the admin-level credentials to connect with the database.
+
+### Accessing Database Through CLI
+
+To access the database through the CLI, we first need the credentials.
Let’s export the credentials as environment variable to our current shell : + +```bash +$ kubectl get secret -n demo sample-opensearch-admin-cred -o jsonpath='{.data.username}' | base64 -d +admin +$ kubectl get secret -n demo sample-opensearch-admin-cred -o jsonpath='{.data.password}' | base64 -d +9aHT*ZhEK_qjPS~v +``` + +Then login and check the health of our OpenSearch database. + +```bash +$ curl -XGET -k -u 'admin:9aHT*ZhEK_qjPS~v' "https://localhost:9200/_cluster/health?pretty" +{ + "cluster_name" : "sample-opensearch", + "status" : "green", + "timed_out" : false, + "number_of_nodes" : 3, + "number_of_data_nodes" : 3, + "discovered_master" : true, + "active_primary_shards" : 1, + "active_shards" : 3, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 0, + "delayed_unassigned_shards" : 0, + "number_of_pending_tasks" : 0, + "number_of_in_flight_fetch" : 0, + "task_max_waiting_in_queue_millis" : 0, + "active_shards_percent_as_number" : 100.0 +} +``` + +Now, insert some data into OpenSearch: + +```bash +$ curl -XPOST -k --user 'admin:9aHT*ZhEK_qjPS~v' "https://localhost:9200/bands/_doc?pretty" -H 'Content-Type: application/json' -d' +{ + "Name": "Backstreet Boys", + "Album": "Millennium", + "Song": "Show Me The Meaning" +} +' +``` + +Let’s verify that the index have been created successfully. + +```bash +$ curl -XGET -k --user 'admin:9aHT*ZhEK_qjPS~v' "https://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size +green open .opendistro_security ARYAKuVwQsKel2_0Fl3H2w 1 2 9 0 150.3kb 59.9kb +green open bands 1z6Moj6XS12tpDwFPZpqYw 1 1 1 0 10.4kb 5.2kb +green open security-auditlog-2022.02.10 j8-mj4o_SKqCD1g-Nz2PAA 1 1 5 0 183.2kb 91.6kb +``` +Also, let’s verify the data in the indexes: + +```bash +$ curl -XGET -k --user 'admin:9aHT*ZhEK_qjPS~v' "https://localhost:9200/bands/_search?pretty" +{ + "took" : 183, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "bands", + "_type" : "_doc", + "_id" : "V1xW4n4BfiOqQRjndUdv", + "_score" : 1.0, + "_source" : { + "Name" : "Backstreet Boys", + "Album" : "Millennium", + "Song" : "Show Me The Meaning" + } + } + ] + } +} + +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl patch -n demo elasticsearch sample-opensearch -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +elasticsearch.kubedb.com/sample-opensearch patched + +$ kubectl delete -n demo es/sample-opensearch +elasticsearch.kubedb.com "sample-opensearch" deleted + +$ kubectl delete namespace demo +namespace "demo" deleted +``` + +## Tips for Testing + +If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them. + +1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if the database pod fails. So, we recommend to use `spec.storageType: Durable` and provide storage spec in `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not require to provide `spec.storage` section. 
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume the database from a previous one. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the Elasticsearch CR. For more details, please visit [here](/docs/v2024.1.31/guides/elasticsearch/concepts/elasticsearch/#specterminationpolicy).
+
+## Next Steps
+
+- Learn about [backup & restore](/docs/v2024.1.31/guides/elasticsearch/backup/overview/) OpenSearch database using Stash.
+- [Quickstart OpenSearch-Dashboards](/docs/v2024.1.31/guides/elasticsearch/elasticsearch-dashboard/opensearch-dashboards/) with KubeDB Operator.
+- Monitor your OpenSearch database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus).
+- Monitor your OpenSearch database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/yamls/opensearch.yaml b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/yamls/opensearch.yaml
new file mode 100644
index 0000000000..48b93f6dff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/elasticsearch/quickstart/overview/opensearch/yamls/opensearch.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Elasticsearch
+metadata:
+  name: sample-opensearch
+  namespace: demo
+spec:
+  version: opensearch-2.8.0
+  enableSSL: true
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/README.md b/content/docs/v2024.1.31/guides/kafka/README.md
new file mode 100644
index 0000000000..25433736d4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/README.md
@@ -0,0 +1,71 @@
+---
+title: Kafka
+menu:
+  docs_v2024.1.31:
+    identifier: kf-readme-kafka
+    name: Kafka
+    parent: kf-kafka-guides
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+url: /docs/v2024.1.31/guides/kafka/
+aliases:
+- /docs/v2024.1.31/guides/kafka/README/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+## Supported Kafka Features
+
+| Features                                                        | Community | Enterprise |
+|-----------------------------------------------------------------|:---------:|:----------:|
+| Clustering - Combined (shared controller and broker nodes)      |     ✓     |     ✓      |
+| Clustering - Topology (dedicated controllers and broker nodes)  |     ✓     |     ✓      |
+| Custom Docker Image                                             |     ✓     |     ✓      |
+| Authentication & Authorization                                  |     ✓     |     ✓      |
+| Persistent Volume                                               |     ✓     |     ✓      |
+| Custom Volume                                                   |     ✓     |     ✓      |
+| TLS: using ( [Cert Manager](https://cert-manager.io/docs/) )    |     ✗     |     ✓      |
+| Reconfigurable Health Checker                                   |     ✓     |     ✓      |
+| Externally manageable Auth Secret                               |     ✓     |     ✓      |
+| Monitoring with Prometheus & Grafana                            |     ✓     |     ✓      |
+
+## Supported Kafka Versions
+
+KubeDB supports the following Kafka versions.
The supported versions apply to KRaft mode, i.e., ZooKeeper-less releases:
+- `3.3.0`
+- `3.3.2`
+- `3.4.0`
+
+> The listed KafkaVersions are tested and provided as a part of the installation process (i.e. the catalog chart), but you can also create your own [KafkaVersion](/docs/v2024.1.31/guides/kafka/concepts/catalog) object with your custom Kafka image.
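+
+To verify which KafkaVersion objects your installation actually shipped with, you can list them the same way as other KubeDB catalog CRs (a quick check; the exact columns depend on your installer version):
+
+```bash
+# List the KafkaVersion objects registered by the KubeDB installer.
+# The DEPRECATED column works the same way as for ElasticsearchVersions.
+$ kubectl get kafkaversions
+```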

+## Lifecycle of Kafka Object
+
+*(figure: lifecycle of a Kafka object)*

+ +## User Guide +- [Quickstart Kafka](/docs/v2024.1.31/guides/kafka/quickstart/overview/) with KubeDB Operator. +- Kafka Clustering supported by KubeDB + - [Combined Clustering](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) + - [Topology Clustering](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) +- Use [kubedb cli](/docs/v2024.1.31/guides/kafka/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [Kafka object](/docs/v2024.1.31/guides/kafka/concepts/kafka). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/kafka/_index.md b/content/docs/v2024.1.31/guides/kafka/_index.md new file mode 100644 index 0000000000..cf8ed5fd1a --- /dev/null +++ b/content/docs/v2024.1.31/guides/kafka/_index.md @@ -0,0 +1,22 @@ +--- +title: Kafka +menu: + docs_v2024.1.31: + identifier: kf-kafka-guides + name: Kafka + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/kafka/cli/_index.md b/content/docs/v2024.1.31/guides/kafka/cli/_index.md new file mode 100755 index 0000000000..54a67ddf81 --- /dev/null +++ b/content/docs/v2024.1.31/guides/kafka/cli/_index.md @@ -0,0 +1,22 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: kf-cli-kafka + name: CLI + parent: kf-kafka-guides + weight: 100 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/kafka/cli/cli.md b/content/docs/v2024.1.31/guides/kafka/cli/cli.md new file mode 100644 index 0000000000..3d9f21f87a --- /dev/null +++ b/content/docs/v2024.1.31/guides/kafka/cli/cli.md @@ -0,0 +1,738 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: kf-cli-cli + name: Quickstart + parent: kf-cli-kafka + weight: 100 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Manage KubeDB managed Kafka objects using CLIs + +## KubeDB CLI + +KubeDB comes with its own cli. It is called `kubedb` cli. `kubedb` can be used to manage any KubeDB object. `kubedb` cli also performs various validations to improve ux. To install KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README). + +### How to Create objects + +`kubectl create` creates a database CRD object in `default` namespace by default. Following command will create a Kafka object as specified in `kafka.yaml`. + +```bash +$ kubectl create -f kafka.yaml +kafka.kubedb.com/kafka created +``` + +You can provide namespace as a flag `--namespace`. Provided namespace should match with namespace specified in input file. + +```bash +$ kubectl create -f kafka.yaml --namespace=kube-system +kafka.kubedb.com/kafka created +``` + +`kubectl create` command also considers `stdin` as input. 
+ +```bash +cat kafka.yaml | kubectl create -f - +``` + +### How to List Objects + +`kubectl get` command allows users to list or find any KubeDB object. To list all Kafka objects in `default` namespace, run the following command: + +```bash +$ kubectl get kafka +NAME TYPE VERSION STATUS AGE +kafka kubedb.com/v1alpha2 3.4.0 Ready 36m +``` + +You can also use short-form (`kf`) for kafka CR. + +```bash +$ kubectl get kf +NAME TYPE VERSION STATUS AGE +kafka kubedb.com/v1alpha2 3.4.0 Ready 36m +``` + +To get YAML of an object, use `--output=yaml` or `-oyaml` flag. Use `-n` flag for referring namespace. + +```yaml +$ kubectl get kf kafka -n demo -oyaml +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"authSecret":{"name":"kafka-admin-cred"},"enableSSL":true,"healthChecker":{"failureThreshold":3,"periodSeconds":20,"timeoutSeconds":10},"keystoreCredSecret":{"name":"kafka-keystore-cred"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","tls":{"certificates":[{"alias":"server","secretName":"kafka-server-cert"},{"alias":"client","secretName":"kafka-client-cert"}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"topology":{"broker":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"broker"},"controller":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"controller"}},"version":"3.4.0"}} + creationTimestamp: "2023-03-29T07:01:29Z" + finalizers: + - kubedb.com + generation: 1 + name: kafka + namespace: demo + resourceVersion: "570445" + uid: ed5f6197-0238-4aba-a7d9-7dc771b2564c +spec: + authSecret: + name: kafka-admin-cred + enableSSL: true + healthChecker: + failureThreshold: 3 + periodSeconds: 20 + timeoutSeconds: 10 + keystoreCredSecret: + name: kafka-keystore-cred + podTemplate: + controller: {} + metadata: {} + spec: + resources: {} + storageType: Durable + terminationPolicy: DoNotTerminate + tls: + certificates: + - alias: server + secretName: kafka-server-cert + - alias: client + secretName: kafka-client-cert + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: kafka-ca-issuer + topology: + broker: + replicas: 3 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + suffix: broker + controller: + replicas: 3 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + suffix: controller + version: 3.4.0 +status: + conditions: + - lastTransitionTime: "2023-03-29T07:01:29Z" + message: 'The KubeDB operator has started the provisioning of Kafka: demo/kafka' + observedGeneration: 1 + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2023-03-29T07:02:46Z" + message: All desired replicas are ready. 
+ observedGeneration: 1 + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2023-03-29T07:02:37Z" + message: 'The Kafka: demo/kafka is accepting client requests' + observedGeneration: 1 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2023-03-29T07:03:37Z" + message: 'The Kafka: demo/kafka is ready.' + observedGeneration: 1 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2023-03-29T07:03:41Z" + message: 'The Kafka: demo/kafka is successfully provisioned.' + observedGeneration: 1 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + phase: Ready +``` + +To get JSON of an object, use `--output=json` or `-ojson` flag. + +```bash +$ kubectl get kf kafka -n demo -ojson +{ + "apiVersion": "kubedb.com/v1alpha2", + "kind": "Kafka", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"kubedb.com/v1alpha2\",\"kind\":\"Kafka\",\"metadata\":{\"annotations\":{},\"name\":\"kafka\",\"namespace\":\"demo\"},\"spec\":{\"authSecret\":{\"name\":\"kafka-admin-cred\"},\"enableSSL\":true,\"healthChecker\":{\"failureThreshold\":3,\"periodSeconds\":20,\"timeoutSeconds\":10},\"keystoreCredSecret\":{\"name\":\"kafka-keystore-cred\"},\"storageType\":\"Durable\",\"terminationPolicy\":\"DoNotTerminate\",\"tls\":{\"certificates\":[{\"alias\":\"server\",\"secretName\":\"kafka-server-cert\"},{\"alias\":\"client\",\"secretName\":\"kafka-client-cert\"}],\"issuerRef\":{\"apiGroup\":\"cert-manager.io\",\"kind\":\"Issuer\",\"name\":\"kafka-ca-issuer\"}},\"topology\":{\"broker\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"broker\"},\"controller\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"controller\"}},\"version\":\"3.4.0\"}}\n" + }, + "creationTimestamp": "2023-03-29T07:01:29Z", + "finalizers": [ + "kubedb.com" + ], + "generation": 1, + "name": "kafka", + "namespace": "demo", + "resourceVersion": "570445", + "uid": "ed5f6197-0238-4aba-a7d9-7dc771b2564c" + }, + "spec": { + "authSecret": { + "name": "kafka-admin-cred" + }, + "enableSSL": true, + "healthChecker": { + "failureThreshold": 3, + "periodSeconds": 20, + "timeoutSeconds": 10 + }, + "keystoreCredSecret": { + "name": "kafka-keystore-cred" + }, + "podTemplate": { + "controller": {}, + "metadata": {}, + "spec": { + "resources": {} + } + }, + "storageType": "Durable", + "terminationPolicy": "DoNotTerminate", + "tls": { + "certificates": [ + { + "alias": "server", + "secretName": "kafka-server-cert" + }, + { + "alias": "client", + "secretName": "kafka-client-cert" + } + ], + "issuerRef": { + "apiGroup": "cert-manager.io", + "kind": "Issuer", + "name": "kafka-ca-issuer" + } + }, + "topology": { + "broker": { + "replicas": 3, + "resources": { + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } + }, + "storage": { + "accessModes": [ + "ReadWriteOnce" + ], + "resources": { + "requests": { + "storage": "1Gi" + } + }, + "storageClassName": "standard" + }, + "suffix": "broker" + }, + 
"controller": { + "replicas": 3, + "resources": { + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } + }, + "storage": { + "accessModes": [ + "ReadWriteOnce" + ], + "resources": { + "requests": { + "storage": "1Gi" + } + }, + "storageClassName": "standard" + }, + "suffix": "controller" + } + }, + "version": "3.4.0" + }, + "status": { + "conditions": [ + { + "lastTransitionTime": "2023-03-29T07:01:29Z", + "message": "The KubeDB operator has started the provisioning of Kafka: demo/kafka", + "observedGeneration": 1, + "reason": "DatabaseProvisioningStartedSuccessfully", + "status": "True", + "type": "ProvisioningStarted" + }, + { + "lastTransitionTime": "2023-03-29T07:02:46Z", + "message": "All desired replicas are ready.", + "observedGeneration": 1, + "reason": "AllReplicasReady", + "status": "True", + "type": "ReplicaReady" + }, + { + "lastTransitionTime": "2023-03-29T07:02:37Z", + "message": "The Kafka: demo/kafka is accepting client requests", + "observedGeneration": 1, + "reason": "DatabaseAcceptingConnectionRequest", + "status": "True", + "type": "AcceptingConnection" + }, + { + "lastTransitionTime": "2023-03-29T07:03:37Z", + "message": "The Kafka: demo/kafka is ready.", + "observedGeneration": 1, + "reason": "ReadinessCheckSucceeded", + "status": "True", + "type": "Ready" + }, + { + "lastTransitionTime": "2023-03-29T07:03:41Z", + "message": "The Kafka: demo/kafka is successfully provisioned.", + "observedGeneration": 1, + "reason": "DatabaseSuccessfullyProvisioned", + "status": "True", + "type": "Provisioned" + } + ], + "phase": "Ready" + } +} +``` + +To list all KubeDB objects managed by KubeDB including secrets, use following command: + +```bash +$ kubectl get all,secret -A -l app.kubernetes.io/managed-by=kubedb.com -owide +NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +demo pod/kafka-broker-0 1/1 Running 0 45m 10.244.0.49 kind-control-plane +demo pod/kafka-broker-1 1/1 Running 0 45m 10.244.0.53 kind-control-plane +demo pod/kafka-broker-2 1/1 Running 0 45m 10.244.0.57 kind-control-plane +demo pod/kafka-controller-0 1/1 Running 0 45m 10.244.0.51 kind-control-plane +demo pod/kafka-controller-1 1/1 Running 0 45m 10.244.0.55 kind-control-plane +demo pod/kafka-controller-2 1/1 Running 3 (45m ago) 45m 10.244.0.58 kind-control-plane + +NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +demo service/kafka-broker ClusterIP None 9092/TCP,29092/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker +demo service/kafka-controller ClusterIP None 9093/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=controller + +NAMESPACE NAME READY AGE CONTAINERS IMAGES +demo statefulset.apps/kafka-broker 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a +demo statefulset.apps/kafka-controller 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a + +NAMESPACE NAME TYPE VERSION AGE +demo appbinding.appcatalog.appscode.com/kafka kubedb.com/kafka 3.4.0 45m + +NAMESPACE NAME TYPE DATA AGE +demo secret/kafka-admin-cred kubernetes.io/basic-auth 2 46m +demo secret/kafka-broker-config Opaque 3 46m +demo secret/kafka-client-cert kubernetes.io/tls 3 46m +demo secret/kafka-controller-config Opaque 3 45m +demo 
secret/kafka-keystore-cred        Opaque                     3      46m
+demo        secret/kafka-server-cert          kubernetes.io/tls          5      46m
+```
+
+The flag `--output=wide` or `-owide` is used to print additional information. The list command supports short names for each object type; you can use them like `kubectl get <short-name>`.
+
+You can print labels with objects. The following command will list all pods with their corresponding labels.
+
+```bash
+$ kubectl get pods -n demo --show-labels
+NAME                 READY   STATUS    RESTARTS      AGE   LABELS
+kafka-broker-0       1/1     Running   0             47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-broker-5f568d57c9,kubedb.com/role=broker,statefulset.kubernetes.io/pod-name=kafka-broker-0
+kafka-broker-1       1/1     Running   0             47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-broker-5f568d57c9,kubedb.com/role=broker,statefulset.kubernetes.io/pod-name=kafka-broker-1
+kafka-broker-2       1/1     Running   0             47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-broker-5f568d57c9,kubedb.com/role=broker,statefulset.kubernetes.io/pod-name=kafka-broker-2
+kafka-controller-0   1/1     Running   0             47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-controller-96ddd885f,kubedb.com/role=controller,statefulset.kubernetes.io/pod-name=kafka-controller-0
+kafka-controller-1   1/1     Running   0             47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-controller-96ddd885f,kubedb.com/role=controller,statefulset.kubernetes.io/pod-name=kafka-controller-1
+kafka-controller-2   1/1     Running   3 (47m ago)   47m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,controller-revision-hash=kafka-controller-96ddd885f,kubedb.com/role=controller,statefulset.kubernetes.io/pod-name=kafka-controller-2
+```
+
+You can also filter the list using the `--selector` flag.
+
+```bash
+$ kubectl get services -n demo --selector='app.kubernetes.io/name=kafkas.kubedb.com' --show-labels
+NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE   LABELS
+kafka-broker       ClusterIP   None         <none>        9092/TCP,29092/TCP   49m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
+kafka-controller   ClusterIP   None         <none>        9093/TCP             49m   app.kubernetes.io/component=database,app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
+```
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -o name -n demo
+pod/kafka-broker-0
+pod/kafka-broker-1
+pod/kafka-broker-2
+pod/kafka-controller-0
+pod/kafka-controller-1
+pod/kafka-controller-2
+service/kafka-broker
+service/kafka-controller
+statefulset.apps/kafka-broker
+statefulset.apps/kafka-controller
+appbinding.appcatalog.appscode.com/kafka
+```
+
+### How to Describe Objects
+
+The `kubectl describe` command allows users to describe any KubeDB object.
The following command will describe Kafka instance `kafka` with relevant information. + +```bash +$ kubectl describe -n demo kf kafka +Name: kafka +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Kafka +Metadata: + Creation Timestamp: 2023-03-29T07:01:29Z + Finalizers: + kubedb.com + Generation: 1 + Managed Fields: + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: + .: + v:"kubedb.com": + Manager: kafka-operator + Operation: Update + Time: 2023-03-29T07:01:29Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:authSecret: + f:enableSSL: + f:healthChecker: + .: + f:failureThreshold: + f:periodSeconds: + f:timeoutSeconds: + f:keystoreCredSecret: + f:storageType: + f:terminationPolicy: + f:tls: + .: + f:certificates: + f:issuerRef: + f:topology: + .: + f:broker: + .: + f:replicas: + f:resources: + .: + f:limits: + .: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:storage: + .: + f:accessModes: + f:resources: + .: + f:requests: + .: + f:storage: + f:storageClassName: + f:suffix: + f:controller: + .: + f:replicas: + f:resources: + .: + f:limits: + .: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:storage: + .: + f:accessModes: + f:resources: + .: + f:requests: + .: + f:storage: + f:storageClassName: + f:suffix: + f:version: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2023-03-29T07:01:29Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:phase: + Manager: kafka-operator + Operation: Update + Subresource: status + Time: 2023-03-29T07:01:34Z + Resource Version: 570445 + UID: ed5f6197-0238-4aba-a7d9-7dc771b2564c +Spec: + Auth Secret: + Name: kafka-admin-cred + Enable SSL: true + Health Checker: + Failure Threshold: 3 + Period Seconds: 20 + Timeout Seconds: 10 + Keystore Cred Secret: + Name: kafka-keystore-cred + Pod Template: + Controller: + Metadata: + Spec: + Resources: + Storage Type: Durable + Termination Policy: DoNotTerminate + Tls: + Certificates: + Alias: server + Secret Name: kafka-server-cert + Alias: client + Secret Name: kafka-client-cert + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: kafka-ca-issuer + Topology: + Broker: + Replicas: 3 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: broker + Controller: + Replicas: 3 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Suffix: controller + Version: 3.4.0 +Status: + Conditions: + Last Transition Time: 2023-03-29T07:01:29Z + Message: The KubeDB operator has started the provisioning of Kafka: demo/kafka + Observed Generation: 1 + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2023-03-29T07:02:46Z + Message: All desired replicas are ready. 
+    Observed Generation:   1
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2023-03-29T07:02:37Z
+    Message:               The Kafka: demo/kafka is accepting client requests
+    Observed Generation:   1
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2023-03-29T07:03:37Z
+    Message:               The Kafka: demo/kafka is ready.
+    Observed Generation:   1
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2023-03-29T07:03:41Z
+    Message:               The Kafka: demo/kafka is successfully provisioned.
+    Observed Generation:   1
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Phase:                   Ready
+Events:
+  Type     Reason      Age   From                         Message
+  ----     ------      ----  ----                         -------
+  Warning  Failed      50m   KubeDB Ops-manager Operator  Fail to be ready database: "kafka". Reason: services "kafka-broker" not found
+  Warning  Failed      50m   KubeDB Ops-manager Operator  Fail to be ready database: "kafka". Reason: services "kafka-broker" not found
+  Warning  Failed      50m   KubeDB Ops-manager Operator  Fail to be ready database: "kafka". Reason: services "kafka-broker" not found
+  Warning  Failed      50m   KubeDB Ops-manager Operator  Fail to be ready database: "kafka". Reason: services "kafka-broker" not found
+  Warning  Failed      50m   KubeDB Ops-manager Operator  Fail to be ready database: "kafka". Reason: services "kafka-broker" not found
+  Normal   Successful  50m   KubeDB Ops-manager Operator  Successfully created Kafka server certificates
+  Normal   Successful  50m   KubeDB Ops-manager Operator  Successfully created Kafka client-certificates
+```
+
+The `kubectl describe` command provides the following basic information about a database:
+
+- StatefulSet
+- Storage (Persistent Volume)
+- Service
+- Secret (If available)
+- Topology (If available)
+- Monitoring system (If available)
+
+To hide details about the StatefulSet & Service, use the flag `--show-workload=false`.
+To hide details about the Secret, use the flag `--show-secret=false`.
+To hide events on the KubeDB object, use the flag `--show-events=false`.
+
+To describe all Kafka objects in the `default` namespace, use the following command:
+
+```bash
+$ kubectl describe kf
+```
+
+To describe all Kafka objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+$ kubectl describe kf --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all Kafka objects with the specified labels from every namespace.
+
+```bash
+$ kubectl describe kf --all-namespaces --selector='app.kubernetes.io/component=database'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+#### Edit restrictions
+
+Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:
+
+- apiVersion
+- kind
+- metadata.name
+- metadata.namespace
+- status
+
+If StatefulSets or Deployments exist for a database, the following fields can't be modified either.
+
+Kafka:
+
+- spec.init
+- spec.storageType
+- spec.storage
+- spec.podTemplate.spec.nodeSelector
+- spec.podTemplate.spec.env
+
+### How to Delete Objects
+
+The `kubectl delete` command will delete an object in the `default` namespace by default unless a namespace is provided. The following command will delete the Kafka instance `kafka` in the `demo` namespace.
+
+```bash
+$ kubectl delete kf kafka -n demo
+kafka.kubedb.com "kafka" deleted
+```
+
+You can also use YAML files to delete objects.
The following command will delete a Kafka using the type and name specified in `kafka.yaml`.
+
+```bash
+$ kubectl delete -f kafka.yaml
+kafka.kubedb.com "kafka" deleted
+```
+
+The `kubectl delete` command also takes input from `stdin`.
+
+```bash
+cat kafka.yaml | kubectl delete -f -
+```
+
+To delete databases with matching labels, use the `--selector` flag. The following command will delete the Kafka objects with the label `app.kubernetes.io/instance=kafka`.
+
+```bash
+$ kubectl delete kf -l app.kubernetes.io/instance=kafka
+```
+
+## Using Kubectl
+
+You can use kubectl with KubeDB objects like any other CRDs. Below are some common examples of using kubectl with KubeDB objects.
+
+```bash
+# List objects
+$ kubectl get kafka
+$ kubectl get kafka.kubedb.com
+
+# Delete objects
+$ kubectl delete kafka <name>
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run Apache Kafka [here](/docs/v2024.1.31/guides/kafka/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/kafka/clustering/_index.md b/content/docs/v2024.1.31/guides/kafka/clustering/_index.md
new file mode 100755
index 0000000000..549e869e4c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: Clustering Modes for Kafka
+menu:
+  docs_v2024.1.31:
+    identifier: kf-clustering
+    name: Kafka Clustering
+    parent: kf-kafka-guides
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/index.md b/content/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/index.md
new file mode 100644
index 0000000000..868934cb0e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/index.md
@@ -0,0 +1,326 @@
+---
+title: Kafka Combined Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: kf-combined-cluster
+    name: Combined Cluster
+    parent: kf-clustering
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Combined Cluster
+
+A Kafka combined cluster is a group of Kafka brokers where each broker also acts as a controller and participates in leader election as a voter. Combined mode can be used in development environments, but it should be avoided in critical production deployments.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME                 STATUS   AGE
+demo                 Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/clustering) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create Standalone Kafka Cluster
+
+Here, we are going to create a standalone (i.e. `replicas: 1`) Kafka cluster in KRaft mode. For this demo, we are going to provision Kafka version `3.3.2`. To learn more about the Kafka CR, visit [here](/docs/v2024.1.31/guides/kafka/concepts/kafka); to learn more about the KafkaVersion CR, visit [here](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka-standalone
+  namespace: demo
+spec:
+  replicas: 1
+  version: 3.3.2
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: DoNotTerminate
+```
+
+Let's deploy the above example with the following command:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/clustering/kf-standalone.yaml
+kafka.kubedb.com/kafka-standalone created
+```
+
+Watch the bootstrap progress:
+
+```bash
+$ kubectl get kf -n demo -w
+NAME               TYPE                  VERSION   STATUS         AGE
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Provisioning   8s
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Provisioning   14s
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Provisioning   35s
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Provisioning   35s
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Provisioning   36s
+kafka-standalone   kubedb.com/v1alpha2   3.3.2     Ready          41s
+```
+
+Hence, the cluster is ready to use.
+Let's check the Kubernetes resources created by the operator on the deployment of the Kafka CR:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-standalone'
+NAME                     READY   STATUS    RESTARTS   AGE
+pod/kafka-standalone-0   1/1     Running   0          8m56s
+
+NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                       AGE
+service/kafka-standalone-pods   ClusterIP   None         <none>        9092/TCP,9093/TCP,29092/TCP   8m59s
+
+NAME                                READY   AGE
+statefulset.apps/kafka-standalone   1/1     8m56s
+
+NAME                                                    TYPE               VERSION   AGE
+appbinding.appcatalog.appscode.com/kafka-standalone    kubedb.com/kafka   3.3.2     8m56s
+
+NAME                                 TYPE                       DATA   AGE
+secret/kafka-standalone-admin-cred   kubernetes.io/basic-auth   2      8m59s
+secret/kafka-standalone-config       Opaque                     2      8m59s
+
+NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/kafka-standalone-data-kafka-standalone-0   Bound    pvc-56f8284a-249e-4444-ab3d-31e01662a9a0   1Gi        RWO            standard       8m56s
+```
+
+## Create Multi-Node Combined Kafka Cluster
+
+Here, we are going to create a multi-node (say `replicas: 3`) Kafka cluster. We will use the KafkaVersion `3.3.2` for this demo. To learn more about the Kafka CR, visit [here](/docs/v2024.1.31/guides/kafka/concepts/kafka).
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-multinode + namespace: demo +spec: + replicas: 3 + version: 3.3.2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: DoNotTerminate +``` + +Let's deploy the above example by the following command: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/clustering/kf-multinode.yaml +kafka.kubedb.com/kafka-multinode created +``` + +Watch the bootstrap progress: + +```bash +$ kubectl get kf -n demo -w +kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 9s +kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 14s +kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 18s +kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m6s +kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m8s +kafka-multinode kubedb.com/v1alpha2 3.3.2 Ready 2m14s +``` + +Hence, the cluster is ready to use. +Let's check the k8s resources created by the operator on the deployment of Kafka CRO: + +```bash +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-multinode' +NAME READY STATUS RESTARTS AGE +pod/kafka-multinode-0 1/1 Running 0 6m2s +pod/kafka-multinode-1 1/1 Running 0 5m56s +pod/kafka-multinode-2 1/1 Running 0 5m51s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kafka-multinode-pods ClusterIP None 9092/TCP,9093/TCP,29092/TCP 6m7s + +NAME READY AGE +statefulset.apps/kafka-multinode 3/3 6m2s + +NAME TYPE VERSION AGE +appbinding.appcatalog.appscode.com/kafka-multinode kubedb.com/kafka 3.3.2 6m2s + +NAME TYPE DATA AGE +secret/kafka-multinode-admin-cred kubernetes.io/basic-auth 2 6m7s +secret/kafka-multinode-config Opaque 2 6m7s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/kafka-multinode-data-kafka-multinode-0 Bound pvc-15cc2329-15ba-4781-8b7f-f0fe6cf81614 1Gi RWO standard 6m2s +persistentvolumeclaim/kafka-multinode-data-kafka-multinode-1 Bound pvc-bc3773cc-dff0-458c-b71a-7ef6aa877549 1Gi RWO standard 5m56s +persistentvolumeclaim/kafka-multinode-data-kafka-multinode-2 Bound pvc-e4829946-b2bb-473e-84d9-c5f9c360f3f0 1Gi RWO standard 5m51s +``` + +## Publish & Consume messages with Kafka + +We will create a Kafka topic using `kafka-topics.sh` script which is provided by kafka container itself. We will use `kafka console producer` and `kafka console consumer` as clients for publishing messages to the topic and then consume those messages. Exec into one of the kafka brokers in interactive mode first. + +```bash +$ kubectl exec -it -n demo kafka-multinode-0 -- bash +root@kafka-multinode-0:~# pwd +/opt/kafka +``` + +You will find a file named `clientauth.properties` in the config directory. This file is generated by the operator which contains necessary authentication/authorization configurations that are required during publishing or subscribing messages to a kafka topic. + +```bash +root@kafka-multinode-0:~# cat config/clientauth.properties +security.protocol=SASL_PLAINTEXT +sasl.mechanism=PLAIN +sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="************"; +``` + +Now, we have to use a bootstrap server to perform operations in a kafka broker. For this demo, we are going to use the http endpoint of the headless service `kafka-multinode-pods` as bootstrap server for publishing & consuming messages to kafka brokers. 
These endpoints point to all the Kafka broker pods. We will set an environment variable for the `clientauth.properties` filepath as well. At first, describe the service to get the http endpoints.
+
+```bash
+$ kubectl describe svc -n demo kafka-multinode-pods
+Name:              kafka-multinode-pods
+Namespace:         demo
+Labels:            app.kubernetes.io/component=database
+                   app.kubernetes.io/instance=kafka-multinode
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=kafkas.kubedb.com
+Annotations:       <none>
+Selector:          app.kubernetes.io/instance=kafka-multinode,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
+Type:              ClusterIP
+IP Family Policy:  SingleStack
+IP Families:       IPv4
+IP:                None
+IPs:               None
+Port:              http  9092/TCP
+TargetPort:        http/TCP
+Endpoints:         10.244.0.69:9092,10.244.0.71:9092,10.244.0.73:9092
+Port:              controller  9093/TCP
+TargetPort:        controller/TCP
+Endpoints:         10.244.0.69:9093,10.244.0.71:9093,10.244.0.73:9093
+Port:              internal  29092/TCP
+TargetPort:        internal/TCP
+Endpoints:         10.244.0.69:29092,10.244.0.71:29092,10.244.0.73:29092
+Session Affinity:  None
+Events:            <none>
+```
+
+Use the `http endpoints` and the `clientauth.properties` file to set environment variables. These environment variables will be useful for handling console command operations easily.
+
+```bash
+root@kafka-multinode-0:~# export SERVER="10.244.0.69:9092,10.244.0.71:9092,10.244.0.73:9092"
+root@kafka-multinode-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
+```
+
+Let's describe the broker metadata for the quorum.
+
+```bash
+root@kafka-multinode-0:~# kafka-metadata-quorum.sh --command-config $CLIENTAUTHCONFIG --bootstrap-server $SERVER describe --status
+ClusterId:              11ed-957c-625c6a5f47bw
+LeaderId:               0
+LeaderEpoch:            19
+HighWatermark:          2601
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   0
+CurrentVoters:          [0,1,2]
+CurrentObservers:       []
+```
+
+It will show you important metadata information like the clusterID, the current leader ID, the broker IDs that are participating in leader election voting, and the IDs of those brokers that are observers. It is important to mention that each broker is assigned a numeric ID, which is called its broker ID. The ID is assigned sequentially with respect to the host pod name. In this case, the broker IDs assigned to the pods are as follows:
+
+| Pods              | Broker ID |
+|-------------------|:---------:|
+| kafka-multinode-0 |     0     |
+| kafka-multinode-1 |     1     |
+| kafka-multinode-2 |     2     |
+
+Let's create a topic named `sample` with one partition and a replication factor of 1. Describe the topic once it's created. You will see the leader ID for each partition and their replica IDs along with the in-sync replicas (ISR).
+
+```bash
+root@kafka-multinode-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --create --topic sample --partitions 1 --replication-factor 1 --bootstrap-server $SERVER
+Created topic sample.
+
+root@kafka-multinode-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --describe --topic sample --bootstrap-server $SERVER
+Topic: sample	TopicId: KVpw_JXfRjaeUHfoXLPBvQ	PartitionCount: 1	ReplicationFactor: 1	Configs: segment.bytes=1073741824
+	Topic: sample	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
+```
+
+Now, we are going to start a producer and a consumer for the topic `sample` using the console. Let's use this current terminal for producing messages and open a new terminal for consuming messages. Let's set the environment variables for the bootstrap server and the configuration file in the consumer terminal as well, as shown below.
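+
+For example, in the new consumer terminal you can exec into the same broker pod and export the same two variables used above (the endpoint values are taken from this demo; adjust them to your own cluster):
+
+```bash
+$ kubectl exec -it -n demo kafka-multinode-0 -- bash
+root@kafka-multinode-0:~# export SERVER="10.244.0.69:9092,10.244.0.71:9092,10.244.0.73:9092"
+root@kafka-multinode-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
+```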
+
+From the topic description, we can see that the leader for partition 0 is broker 0 (the broker that we are on). If we produce messages to the `kafka-multinode-0` broker (brokerID=0), it will store those messages in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.
+
+```bash
+root@kafka-multinode-0:~# kafka-console-producer.sh --producer.config $CLIENTAUTHCONFIG --topic sample --request-required-acks all --bootstrap-server $SERVER
+>message one
+>message two
+>message three
+>
+```
+
+```bash
+root@kafka-multinode-0:/# kafka-console-consumer.sh --consumer.config $CLIENTAUTHCONFIG --topic sample --from-beginning --bootstrap-server $SERVER --partition 0
+message one
+message two
+message three
+
+```
+
+Notice that messages keep arriving at the consumer as you continue sending messages via the producer. So, we have created a Kafka topic and used the Kafka console producer and consumer to test message publishing and consuming successfully.
+
+
+## Cleaning Up
+
+To clean up the k8s resources created by this tutorial, run:
+
+```bash
+# standalone cluster
+$ kubectl patch -n demo kf kafka-standalone -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete kf -n demo kafka-standalone
+
+# multinode cluster
+$ kubectl patch -n demo kf kafka-multinode -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete kf -n demo kafka-multinode
+
+# delete namespace
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Deploy [dedicated topology cluster](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) for Apache Kafka
+- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator).
+- Detail concepts of [Kafka object](/docs/v2024.1.31/guides/kafka/concepts/kafka).
+- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.1.31/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/index.md b/content/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/index.md
new file mode 100644
index 0000000000..418a69a372
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/index.md
@@ -0,0 +1,329 @@
+---
+title: Kafka Topology Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: kf-topology-cluster
+    name: Topology Cluster
+    parent: kf-clustering
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Topology Cluster
+
+A Kafka topology cluster is comprised of two groups of Kafka nodes (i.e. pods): one group of nodes is assigned the controller role, which manages cluster metadata and participates in leader election; the other group is assigned the dedicated broker role and acts only as Kafka brokers for message publishing and subscribing. Topology mode clustering is suitable for production deployments.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME   STATUS   AGE
+demo   Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/clustering) in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create Topology Kafka Cluster
+
+Here, we are going to create a TLS-secured Kafka topology cluster in KRaft mode.
+
+### Create Issuer/ ClusterIssuer
+
+At first, make sure you have cert-manager installed in your Kubernetes cluster for enabling TLS. The KubeDB operator uses cert-manager to inject certificates into Kubernetes secrets & uses them for secure `SASL` encrypted communication among Kafka brokers and controllers. We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in Kafka. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificate using openssl.
+
+```bash
+openssl req -newkey rsa:2048 -keyout ca.key -nodes -x509 -days 3650 -out ca.crt
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls kafka-ca \
+  --cert=ca.crt \
+  --key=ca.key \
+  --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: kafka-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: kafka-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/kf-issuer.yaml
+issuer.cert-manager.io/kafka-ca-issuer created
+```
+
+### Provision TLS-secured Kafka
+
+For this demo, we are going to provision Kafka version `3.3.2` with 3 controllers and 3 brokers. To learn more about the Kafka CR, visit [here](/docs/v2024.1.31/guides/kafka/concepts/kafka). Visit [here](/docs/v2024.1.31/guides/kafka/concepts/catalog) to learn more about the KafkaVersion CR.
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.3.2 + enableSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + name: kafka-ca-issuer + kind: Issuer + topology: + broker: + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: DoNotTerminate +``` + + Let's deploy the above example by the following command: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/clustering/kf-topology.yaml +kafka.kubedb.com/kafka-prod created +``` + +Watch the bootstrap progress: + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 6s +kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 14s +kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 50s +kafka-prod kubedb.com/v1alpha2 3.3.2 Ready 68s +``` + +Hence, the cluster is ready to use. +Let's check the k8s resources created by the operator on the deployment of Kafka CRO: + +```bash +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-prod' +NAME READY STATUS RESTARTS AGE +pod/kafka-prod-broker-0 1/1 Running 0 4m10s +pod/kafka-prod-broker-1 1/1 Running 0 4m4s +pod/kafka-prod-broker-2 1/1 Running 0 3m57s +pod/kafka-prod-controller-0 1/1 Running 0 4m8s +pod/kafka-prod-controller-1 1/1 Running 2 (3m35s ago) 4m +pod/kafka-prod-controller-2 1/1 Running 0 3m53s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kafka-prod-broker ClusterIP None 9092/TCP,29092/TCP 4m14s +service/kafka-prod-controller ClusterIP None 9093/TCP 4m14s + +NAME READY AGE +statefulset.apps/kafka-prod-broker 3/3 4m10s +statefulset.apps/kafka-prod-controller 3/3 4m8s + +NAME TYPE VERSION AGE +appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.3.2 4m8s + +NAME TYPE DATA AGE +secret/kafka-prod-admin-cred kubernetes.io/basic-auth 2 4m14s +secret/kafka-prod-broker-config Opaque 3 4m14s +secret/kafka-prod-client-cert kubernetes.io/tls 3 4m14s +secret/kafka-prod-controller-config Opaque 3 4m10s +secret/kafka-prod-keystore-cred Opaque 3 4m14s +secret/kafka-prod-server-cert kubernetes.io/tls 5 4m14s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/kafka-prod-data-kafka-prod-broker-0 Bound pvc-1ce9bf24-8d2d-4cae-9453-28df9f52ac44 1Gi RWO standard 4m10s +persistentvolumeclaim/kafka-prod-data-kafka-prod-broker-1 Bound pvc-5e2dc46b-0947-4de1-adb0-307fc881b2ba 1Gi RWO standard 4m4s +persistentvolumeclaim/kafka-prod-data-kafka-prod-broker-2 Bound pvc-b7ef2986-db7d-4089-95a2-474cd14c6282 1Gi RWO standard 3m57s +persistentvolumeclaim/kafka-prod-data-kafka-prod-controller-0 Bound pvc-8e3ae399-fb87-4906-91d8-3f5a09014d2a 1Gi RWO standard 4m8s +persistentvolumeclaim/kafka-prod-data-kafka-prod-controller-1 Bound pvc-faf53264-e125-430a-9a73-c2c73da1b97e 1Gi RWO standard 4m +persistentvolumeclaim/kafka-prod-data-kafka-prod-controller-2 Bound pvc-d962a03b-7af7-41ba-9d53-044e8ffa03f2 1Gi RWO standard 3m53s +``` + +## Publish & Consume messages with Kafka + +We will create a Kafka topic using `kafka-topics.sh` script which is provided by kafka container itself. 
We will use `kafka console producer` and `kafka console consumer` as clients for publishing messages to the topic and then consuming those messages. First, exec into one of the kafka broker pods in interactive mode.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash
+root@kafka-prod-broker-0:~# pwd
+/opt/kafka
+```
+
+You will find a file named `clientauth.properties` in the config directory. This file is generated by the operator and contains the necessary authentication/authorization configurations that are required while publishing or subscribing messages to a kafka topic.
+
+```bash
+root@kafka-prod-broker-0:~# cat config/clientauth.properties
+sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="*************";
+security.protocol=SASL_SSL
+sasl.mechanism=PLAIN
+ssl.truststore.location=/var/private/ssl/server.truststore.jks
+ssl.truststore.password=***********
+```
+
+Now, we have to use a bootstrap server to perform operations in a kafka broker. For this demo, we are going to use the http endpoint of the headless service `kafka-prod-broker` as the bootstrap server for publishing & consuming messages to kafka brokers. These endpoints point to all the kafka broker pods. We will set an environment variable for the `clientauth.properties` filepath as well. At first, describe the service to get the http endpoints.
+
+```bash
+$ kubectl describe svc -n demo kafka-prod-broker
+Name:              kafka-prod-broker
+Namespace:         demo
+Labels:            app.kubernetes.io/component=database
+                   app.kubernetes.io/instance=kafka-prod
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=kafkas.kubedb.com
+Annotations:       <none>
+Selector:          app.kubernetes.io/instance=kafka-prod,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker
+Type:              ClusterIP
+IP Family Policy:  SingleStack
+IP Families:       IPv4
+IP:                None
+IPs:               None
+Port:              http  9092/TCP
+TargetPort:        http/TCP
+Endpoints:         10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092
+Port:              internal  29092/TCP
+TargetPort:        internal/TCP
+Endpoints:         10.244.0.33:29092,10.244.0.37:29092,10.244.0.41:29092
+Session Affinity:  None
+Events:            <none>
+```
+
+Use the `http endpoints` and the `clientauth.properties` file to set environment variables. These environment variables will be useful for handling console command operations easily.
+
+```bash
+root@kafka-prod-broker-0:~# export SERVER="10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092"
+root@kafka-prod-broker-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
+```
+
+Let's describe the broker metadata for the quorum.
+
+```bash
+root@kafka-prod-broker-0:~# kafka-metadata-quorum.sh --command-config $CLIENTAUTHCONFIG --bootstrap-server localhost:9092 describe --status
+ClusterId:              11ed-82ed-2a2abab96b3w
+LeaderId:               2
+LeaderEpoch:            15
+HighWatermark:          1820
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   159
+CurrentVoters:          [0,1,2]
+CurrentObservers:       [3,4,5]
+```
+
+It will show you important metadata information like the clusterID, the current leader ID, the broker IDs that are participating in leader election voting, and the IDs of those brokers that are observers. It is important to mention that each broker is assigned a numeric ID, which is called its broker ID. The ID is assigned sequentially with respect to the host pod name.
In this case, the broker IDs assigned to the pods are as follows:
+
+| Pods                | Broker ID |
+|---------------------|:---------:|
+| kafka-prod-broker-0 |     3     |
+| kafka-prod-broker-1 |     4     |
+| kafka-prod-broker-2 |     5     |
+
+Let's create a topic named `sample` with one partition and a replication factor of 1. Describe the topic once it's created. You will see the leader ID for each partition and their replica IDs along with the in-sync replicas (ISR).
+
+```bash
+root@kafka-prod-broker-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --create --topic sample --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
+Created topic sample.
+
+
+root@kafka-prod-broker-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --describe --topic sample --bootstrap-server localhost:9092
+Topic: sample	TopicId: mqlupmBhQj6OQxxG9m51CA	PartitionCount: 1	ReplicationFactor: 1	Configs: segment.bytes=1073741824
+	Topic: sample	Partition: 0	Leader: 4	Replicas: 4	Isr: 4
+```
+
+Now, we are going to start a producer and a consumer for the topic `sample` using the console. Let's use this current terminal for producing messages and open a new terminal for consuming messages. Let's set the environment variables for the bootstrap server and the configuration file in the consumer terminal as well.
+
+From the topic description, we can see that the leader for partition 0 is broker 4, that is, `kafka-prod-broker-1`. If we produce messages to the `kafka-prod-broker-1` broker (brokerID=4), it will store those messages in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.
+
+```bash
+root@kafka-prod-broker-1:~# kafka-console-producer.sh --producer.config $CLIENTAUTHCONFIG --topic sample --request-required-acks all --bootstrap-server localhost:9092
+>hello
+>hi
+>this is a message from console producer client
+>I hope it's received by console consumer
+>
+```
+
+```bash
+root@kafka-prod-broker-1:~# kafka-console-consumer.sh --consumer.config $CLIENTAUTHCONFIG --topic sample --from-beginning --bootstrap-server localhost:9092 --partition 0
+hello
+hi
+this is a message from console producer client
+I hope it's received by console consumer
+```
+
+Notice that messages keep arriving at the consumer as you continue sending messages via the producer. So, we have created a Kafka topic and used the Kafka console producer and consumer to test message publishing and consuming successfully.
+
+
+## Cleaning Up
+
+To clean up the k8s resources created by this tutorial, run:
+
+```bash
+# topology cluster
+$ kubectl patch -n demo kf kafka-prod -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete kf -n demo kafka-prod
+
+# delete namespace
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Deploy [combined cluster](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) for Apache Kafka
+- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator).
+- Detail concepts of [Kafka object](/docs/v2024.1.31/guides/kafka/concepts/kafka).
+- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.1.31/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/concepts/_index.md b/content/docs/v2024.1.31/guides/kafka/concepts/_index.md
new file mode 100755
index 0000000000..1d6ed94cce
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: Kafka Concepts
+menu:
+  docs_v2024.1.31:
+    identifier: kf-concepts-kafka
+    name: Concepts
+    parent: kf-kafka-guides
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/kafka/concepts/appbinding.md b/content/docs/v2024.1.31/guides/kafka/concepts/appbinding.md
new file mode 100644
index 0000000000..2a4e3cc3b6
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/concepts/appbinding.md
@@ -0,0 +1,159 @@
+---
+title: AppBinding CRD
+menu:
+  docs_v2024.1.31:
+    identifier: kf-appbinding-concepts
+    name: AppBinding
+    parent: kf-concepts-kafka
+    weight: 21
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually, pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
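+
+Since `AppBinding` is a regular CRD, you can list the objects KubeDB has created with `kubectl`. For example (a sketch; the object name and age below are illustrative, borrowed from the demo clusters used elsewhere in this guide):
+
+```bash
+$ kubectl get appbindings -n demo
+NAME              TYPE               VERSION   AGE
+kafka-multinode   kubedb.com/kafka   3.3.2     6m2s
+```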
+ +An `AppBinding` object created by `KubeDB` for Kafka database is shown below, + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.4.0"}} + creationTimestamp: "2023-03-27T08:04:43Z" + generation: 1 + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: kafka + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: kafkas.kubedb.com + name: kafka + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Kafka + name: kafka + uid: a4d3bd6d-798d-4789-a228-6eed057ccbb2 + resourceVersion: "409855" + uid: 946988c0-15ef-4ee8-b489-b7ea9be3f97e +spec: + appRef: + apiGroup: kubedb.com + kind: Kafka + name: kafka + namespace: demo + clientConfig: + caBundle: dGhpcyBpcyBub3QgYSBjZXJ0 + service: + name: kafka-pods + port: 9092 + scheme: https + secret: + name: kafka-admin-cred + tlsSecret: + name: kafka-client-cert + type: kubedb.com/kafka + version: 3.4.0 +``` + +Here, we are going to describe the sections of an `AppBinding` crd. + +### AppBinding `Spec` + +An `AppBinding` object has the following fields in the `spec` section: + +#### spec.type + +`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. + + + + +#### spec.secret + +`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`. + +This secret must contain the following keys for Kafka: + +| Key | Usage | +| ---------- |------------------------------------------------| +| `username` | Username of the target Kafka instance. | +| `password` | Password for the user specified by `username`. | + + +#### spec.appRef +appRef refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`. + +#### spec.clientConfig + +`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them. + +You can configure following fields in `spec.clientConfig` section: + +- **spec.clientConfig.url** + + `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead. + +> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either. 
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/kafka/concepts/catalog.md b/content/docs/v2024.1.31/guides/kafka/concepts/catalog.md
new file mode 100644
index 0000000000..78de5872a9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/concepts/catalog.md
@@ -0,0 +1,117 @@
+---
+title: KafkaVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: kf-catalog-concepts
+    name: KafkaVersion
+    parent: kf-concepts-kafka
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KafkaVersion
+
+## What is KafkaVersion
+
+`KafkaVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for a [Kafka](https://kafka.apache.org) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka version. You have to specify the name of the `KafkaVersion` crd in the `spec.version` field of the [Kafka](/docs/v2024.1.31/guides/kafka/concepts/kafka) crd. Then, KubeDB will use the docker images specified in the `KafkaVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This will also allow the users to use a custom image for the database.
+
+## KafkaVersion Spec
+
+As with all other Kubernetes objects, a KafkaVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: KafkaVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb-catalog
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2023-03-23T10:15:24Z"
+  generation: 2
+  labels:
+    app.kubernetes.io/instance: kubedb-catalog
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2023.02.28
+    helm.sh/chart: kubedb-catalog-v2023.02.28
+  name: 3.4.0
+  resourceVersion: "472767"
+  uid: 36a167a3-5218-4e32-b96d-d6b5b0c86125
+spec:
+  db:
+    image: kubedb/kafka-kraft:3.4.0
+  podSecurityPolicies:
+    databasePolicyName: kafka-db
+  version: 3.4.0
+  cruiseControl:
+    image: ghcr.io/kubedb/cruise-control:3.4.0
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `KafkaVersion` crd. You have to specify this name in the `spec.version` field of the [Kafka](/docs/v2024.1.31/guides/kafka/concepts/kafka) crd.
+
+We follow this convention for naming KafkaVersion crd:
+
+- Name format: `{Original Kafka image version}-{modification tag}`
+
+We use official Apache Kafka release tar files to build docker images for the supported Kafka versions and re-tag the image with a v1, v2, etc. modification tag when there's any. An image with a higher modification tag will have more features than the images with a lower modification tag. Hence, it is recommended to use the KafkaVersion crd with the highest modification tag to enjoy the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the Kafka database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected Kafka database.
+
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you want to use additional custom pod security policies, you can pass them to the KubeDB installer chart, e.g.:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Kafka crd [here](/docs/v2024.1.31/guides/kafka/concepts/kafka).
+- Deploy your first Kafka database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/kafka/quickstart/overview/).
diff --git a/content/docs/v2024.1.31/guides/kafka/concepts/kafka.md b/content/docs/v2024.1.31/guides/kafka/concepts/kafka.md
new file mode 100644
index 0000000000..b55420a5ff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/concepts/kafka.md
@@ -0,0 +1,410 @@
+---
+title: Kafka CRD
+menu:
+  docs_v2024.1.31:
+    identifier: kf-kafka-concepts
+    name: Kafka
+    parent: kf-concepts-kafka
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Kafka
+
+## What is Kafka
+
+`Kafka` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [Kafka](https://kafka.apache.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a `Kafka` object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Kafka Spec
+
+As with all other Kubernetes objects, a Kafka needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Kafka object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka
+  namespace: demo
+spec:
+  authSecret:
+    name: kafka-admin-cred
+  enableSSL: true
+  healthChecker:
+    failureThreshold: 3
+    periodSeconds: 20
+    timeoutSeconds: 10
+  keystoreCredSecret:
+    name: kafka-keystore-cred
+  podTemplate:
+    metadata:
+      annotations:
+        passMe: ToDatabasePod
+      labels:
+        thisLabel: willGoToPod
+    controller:
+      annotations:
+        passMe: ToStatefulSet
+      labels:
+        thisLabel: willGoToSts
+  storageType: Durable
+  terminationPolicy: DoNotTerminate
+  tls:
+    certificates:
+      - alias: server
+        secretName: kafka-server-cert
+      - alias: client
+        secretName: kafka-client-cert
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: kafka-ca-issuer
+  topology:
+    broker:
+      replicas: 3
+      resources:
+        limits:
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+      suffix: broker
+    controller:
+      replicas: 3
+      resources:
+        limits:
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+      suffix: controller
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      exporter:
+        port: 9091
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+  version: 3.4.0
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/v2024.1.31/guides/kafka/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources,
+
+- `3.3.0`
+- `3.3.2`
+- `3.4.0`
+
+### spec.replicas
+
+`spec.replicas` specifies the number of members in the Kafka replicaset.
+
+If `spec.topology` is set, then `spec.replicas` needs to be empty. Instead, use `spec.topology.controller.replicas` and `spec.topology.broker.replicas`. You need to set both of them for topology clustering.
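+
+For instance, the two modes are configured like this (a minimal sketch based on the fields above; a single manifest uses only one of the two forms):
+
+```yaml
+# Combined mode: a single top-level replica count
+spec:
+  replicas: 3
+---
+# Topology mode: per-role replica counts, no top-level replicas
+spec:
+  topology:
+    broker:
+      replicas: 3
+    controller:
+      replicas: 3
+```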
+
+KubeDB uses `PodDisruptionBudget` to ensure that the majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `kafka` admin user. If not set, the KubeDB operator creates a new Secret `{kafka-object-name}-auth` for storing the password for the `admin` user for each Kafka object.
+
+We can use this field in 3 modes.
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the Kafka object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true.
+```yaml
+authSecret:
+  name: <secret-name>
+  externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the Kafka object using `spec.authSecret.name`. `externallyManaged` is by default false.
+```yaml
+authSecret:
+  name: <secret-name>
+```
+
+3. Let KubeDB do everything for you. In this case, no work for you.
+
+AuthSecret contains a `username` key and a `password` key which contain the username and password respectively for the Kafka `admin` user.
+
+Example:
+
+```bash
+$ kubectl create secret generic kf-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "kf-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  username: amhvbi1kb2U=
+kind: Secret
+metadata:
+  name: kf-auth
+  namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+### spec.topology
+
+`spec.topology` represents the topology configuration for the Kafka cluster in KRaft mode.
+
+When `spec.topology` is set, the following fields need to be empty, otherwise the validating webhook will throw an error.
+
+- `spec.replicas`
+- `spec.podTemplate`
+- `spec.storage`
+
+#### spec.topology.broker
+
+`broker` represents the configuration for the brokers of Kafka. In KRaft topology mode clustering, each pod can act as a single dedicated Kafka broker.
+
+Available configurable fields:
+
+- `topology.broker`:
+  - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (i.e. pods) that act as the dedicated Kafka `broker` pods. Defaults to `1`.
+  - `suffix` (`: "broker"`) - is an `optional` field that is added as the suffix of the broker StatefulSet name. Defaults to `broker`.
+  - `storage` is a `required` field that specifies how much storage to claim for each of the `broker` pods.
+  - `resources` (`: "cpu: 500m, memory: 1Gi"`) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `broker` pods.
+
+#### spec.topology.controller
+
+`controller` represents the configuration for the controllers of Kafka. In KRaft topology mode clustering, each pod can act as a single dedicated Kafka controller that preserves metadata for the whole cluster and participates in leader election.
+
+Available configurable fields:
+
+- `topology.controller`:
+  - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (i.e. pods) that act as the dedicated Kafka `controller` pods. Defaults to `1`.
+  - `suffix` (`: "controller"`) - is an `optional` field that is added as the suffix of the controller StatefulSet name.
Defaults to `controller`.
+  - `storage` is a `required` field that specifies how much storage to claim for each of the `controller` pods.
+  - `resources` (`: "cpu: 500m, memory: 1Gi"`) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `controller` pods.
+
+### spec.enableSSL
+
+`spec.enableSSL` is an `optional` field that specifies whether to enable TLS on the HTTP layer. The default value of this field is `false`.
+
+```yaml
+spec:
+  enableSSL: true
+```
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using the [cert-manager](https://cert-manager.io/). Currently, the operator only supports the `PKCS#8` encoded certificates.
+
+```yaml
+spec:
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: kf-issuer
+    certificates:
+      - alias: server
+        privateKey:
+          encoding: PKCS8
+        secretName: kf-client-cert
+        subject:
+          organizations:
+            - kubedb
+      - alias: http
+        privateKey:
+          encoding: PKCS8
+        secretName: kf-server-cert
+        subject:
+          organizations:
+            - kubedb
+```
+
+The `spec.tls` contains the following fields:
+
+- `tls.issuerRef` - is an `optional` field that references the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Kafka. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates the necessary certificate (valid: 365 days) secrets using that CA.
+  - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+  - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
+  - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.
+
+- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
+  - `alias` - represents the identifier of the certificate. It has the following possible values:
+    - `transport` - is used for the transport layer certificate configuration.
+    - `http` - is used for the HTTP layer certificate configuration.
+    - `admin` - is used for the admin certificate configuration.
+    - `metrics-exporter` - is used for the metrics-exporter sidecar certificate configuration.
+
+  - `secretName` - ( `string` | `"<database-name>-alias-cert"` ) - specifies the k8s secret name that holds the certificates.
+
+  - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields:
+    - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
+    - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
+    - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. Country Codes).
+    - `localities` ( `[]string` | `nil` ) - is a list of locality names.
+    - `provinces` ( `[]string` | `nil` ) - is a list of province names.
+    - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses.
+    - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes.
+    - `serialNumber` ( `string` | `""` ) is a serial number.
+
+    For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+
+  - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+  - `renewBefore` ( `string` | `""` ) - is a specifiable time before expiration duration.
+  - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names.
+  - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses.
+  - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names.
+  - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names.
+
+
+### spec.storageType
+
+`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Kafka cluster using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume.
+
+### spec.storage
+
+If you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+NB. If `spec.topology` is set, then `spec.storage` needs to be empty. Instead, use `spec.topology.<broker/controller>.storage`.
+
+### spec.monitor
+
+Kafka managed by KubeDB can be monitored with the Prometheus operator out-of-the-box. To learn more,
+- [Monitor Apache Kafka with Prometheus operator](/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator)
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the Kafka cluster, as the sketch below illustrates.
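+
+A minimal sketch (the label values are borrowed from the sample object above; adjust them to your needs):
+
+```yaml
+spec:
+  podTemplate:
+    metadata:
+      labels:
+        thisLabel: willGoToPod    # applied to the Kafka pods
+    controller:
+      labels:
+        thisLabel: willGoToSts    # applied to the StatefulSet
+    spec:
+      resources:
+        requests:
+          cpu: 500m
+          memory: 1Gi
+```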
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+  - labels (pod's labels)
+- controller:
+  - annotations (statefulset's annotation)
+  - labels (statefulset's labels)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some fields of `spec.podTemplate` are described below.
+
+NB. If `spec.topology` is set, then `spec.podTemplate` needs to be empty. Instead, use `spec.topology.<broker/controller>.podTemplate`.
+
+#### spec.podTemplate.spec.args
+
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database installation.
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Kafka docker image.
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the Kafka cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+  - `stats` is used for the exporter service identification.
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Kafka` crd or which resources KubeDB should keep or delete when you delete the `Kafka` crd. KubeDB provides the following termination policies:
+
+- DoNotTerminate
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+## spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know details about KubeDB Health checking from this [blog post](https://blog.byte.builders/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/v2024.1.31/guides/kafka/README).
+- Deploy [dedicated topology cluster](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) for Apache Kafka
+- Deploy [combined cluster](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) for Apache Kafka
+- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator).
+- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.1.31/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/kafka/monitoring/_index.md b/content/docs/v2024.1.31/guides/kafka/monitoring/_index.md
new file mode 100755
index 0000000000..40c1f4c41a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/monitoring/_index.md
@@ -0,0 +1,22 @@
+---
+title: Monitor Kafka with Prometheus & Grafana
+menu:
+  docs_v2024.1.31:
+    identifier: kf-monitoring-kafka
+    name: Kafka Monitoring
+    parent: kf-kafka-guides
+    weight: 50
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/kafka/monitoring/overview.md b/content/docs/v2024.1.31/guides/kafka/monitoring/overview.md
new file mode 100644
index 0000000000..28bfc95a6f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/monitoring/overview.md
@@ -0,0 +1,111 @@
+---
+title: Kafka Monitoring Overview
+description: Kafka Monitoring Overview
+menu:
+  docs_v2024.1.31:
+    identifier: kf-monitoring-overview
+    name: Overview
+    parent: kf-monitoring-kafka
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring Apache Kafka with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
+
+## Overview
+
As KubeDB supports Kafka versions in KRaft mode, and the officially recognized exporter image doesn't expose metrics for them yet - KubeDB managed Kafka instances use [JMX Exporter](https://github.com/prometheus/jmx_exporter) instead. This exporter is intended to be run as a Java Agent inside Kafka container, exposing a HTTP server and serving metrics of the local JVM. To Following diagram shows the logical flow of database monitoring with KubeDB. + +

+  Database Monitoring Flow +

+ +When a user creates a Kafka crd with `spec.monitor` section configured, KubeDB operator provisions the respective Kafka cluster while running the exporter as a Java agent inside the kafka containers. It also creates a dedicated stats service with name `{database-crd-name}-stats` for monitoring. Prometheus server can scrape metrics using this stats service. + +## Configure Monitoring + +In order to enable monitoring for a database, you have to configure `spec.monitor` section. KubeDB provides following options to configure `spec.monitor` section: + +| Field | Type | Uses | +| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. | +| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter side car will serve metrics. | +| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. | +| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. | +| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. | +| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. | +| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. | +| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. | + +## Sample Configuration + +A sample YAML for TLS secured Kafka crd with `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka + namespace: demo +spec: + enableSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + name: kafka-ca-issuer + kind: Issuer + replicas: 3 + version: 3.4.0 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + monitor: + agent: prometheus.io/operator + prometheus: + exporter: + port: 9091 + serviceMonitor: + labels: + release: prometheus + interval: 10s + storageType: Durable + terminationPolicy: WipeOut +``` + +Let's deploy the above example by the following command: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/monitoring/kf-with-monitoring.yaml +kafka.kubedb.com/kafka created +``` + +Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in databases namespace and this `ServiceMonitor` will have `release: prometheus` label. + +## Next Steps + +- Learn how to use KubeDB to run a Apache Kafka cluster [here](/docs/v2024.1.31/guides/kafka/README). +- Deploy [dedicated topology cluster](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) for Apache Kafka +- Deploy [combined cluster](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) for Apache Kafka +- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog). 
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/v2024.1.31/guides/kafka/README).
+- Deploy [dedicated topology cluster](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) for Apache Kafka
+- Deploy [combined cluster](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) for Apache Kafka
+- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.1.31/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..2ca61e4864
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/monitoring/using-prometheus-operator.md
@@ -0,0 +1,362 @@
+---
+title: Monitor Kafka using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: kf-using-prometheus-operator-monitoring
+    name: Prometheus Operator
+    parent: kf-monitoring-kafka
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring Kafka Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes-native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor a Kafka database deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one locally by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/kafka/monitoring/overview).
+
+- We need a running [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance. If you don't already have one, you can deploy it using the Helm chart [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the Prometheus operator Helm chart. Alternatively, you can pass the `--create-namespace` flag while deploying Prometheus. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels that a `Prometheus` crd uses to select `ServiceMonitor` objects. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the Kafka crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus servers in our cluster.
+ +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME VERSION DESIRED READY RECONCILED AVAILABLE AGE +monitoring prometheus-kube-prometheus-prometheus v2.42.0 1 1 True True 2d23h +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `monitoring` namespace. + +```bash +$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + meta.helm.sh/release-name: prometheus + meta.helm.sh/release-namespace: monitoring + creationTimestamp: "2023-03-27T07:56:04Z" + generation: 1 + labels: + app: kube-prometheus-stack-prometheus + app.kubernetes.io/instance: prometheus + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/part-of: kube-prometheus-stack + app.kubernetes.io/version: 45.7.1 + chart: kube-prometheus-stack-45.7.1 + heritage: Helm + release: prometheus + name: prometheus-kube-prometheus-prometheus + namespace: monitoring + resourceVersion: "638797" + uid: 0d1e7b8a-44ae-4794-ab45-95a5d7ae7f91 +spec: + alerting: + alertmanagers: + - apiVersion: v2 + name: prometheus-kube-prometheus-alertmanager + namespace: monitoring + pathPrefix: / + port: http-web + enableAdminAPI: false + evaluationInterval: 30s + externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090 + hostNetwork: false + image: quay.io/prometheus/prometheus:v2.42.0 + listenLocal: false + logFormat: logfmt + logLevel: info + paused: false + podMonitorNamespaceSelector: {} + podMonitorSelector: + matchLabels: + release: prometheus + portName: http-web + probeNamespaceSelector: {} + probeSelector: + matchLabels: + release: prometheus + replicas: 1 + retention: 10d + routePrefix: / + ruleNamespaceSelector: {} + ruleSelector: + matchLabels: + release: prometheus + scrapeInterval: 30s + securityContext: + fsGroup: 2000 + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 1000 + serviceAccountName: prometheus-kube-prometheus-prometheus + serviceMonitorNamespaceSelector: {} + serviceMonitorSelector: + matchLabels: + release: prometheus + shards: 1 + version: v2.42.0 + walCompression: true +status: + availableReplicas: 1 + conditions: + - lastTransitionTime: "2023-03-27T07:56:23Z" + observedGeneration: 1 + status: "True" + type: Available + - lastTransitionTime: "2023-03-30T03:39:18Z" + observedGeneration: 1 + status: "True" + type: Reconciled + paused: false + replicas: 1 + shardStatuses: + - availableReplicas: 1 + replicas: 1 + shardID: "0" + unavailableReplicas: 0 + updatedReplicas: 1 + unavailableReplicas: 0 + updatedReplicas: 1 +``` + +Notice the `spec.serviceMonitorSelector` section. Here, `release: prometheus` label is used to select `ServiceMonitor` crd. So, we are going to use this label in `spec.monitor.prometheus.serviceMonitor.labels` field of Kafka crd. + +## Deploy Kafka with Monitoring Enabled + +At first, let's deploy a Kafka database with monitoring enabled. Below is the Kafka object that we are going to create. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka
+  namespace: demo
+spec:
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      name: kafka-ca-issuer
+      kind: Issuer
+  replicas: 3
+  version: 3.4.0
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      exporter:
+        port: 9091
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+  storageType: Durable
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+- `monitor.prometheus.interval` indicates that the Prometheus server should scrape metrics from this database at a 10-second interval.
+
+Let's create the Kafka object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/monitoring/kf-with-monitoring.yaml
+kafka.kubedb.com/kafka created
+```
+
+Now, wait for the database to go into the `Ready` state.
+
+```bash
+$ kubectl get kf -n demo kafka
+NAME    TYPE                  VERSION   STATUS   AGE
+kafka   kubedb.com/v1alpha2   3.4.0     Ready    2m24s
+```
+
+KubeDB will create a separate stats service named `{Kafka crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=kafka"
+NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
+kafka-pods    ClusterIP   None            <none>        9092/TCP,9093/TCP,29092/TCP   3m22s
+kafka-stats   ClusterIP   10.96.235.251   <none>        9091/TCP                      3m19s
+```
+
+Here, the `kafka-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```bash
+$ kubectl describe svc -n demo kafka-stats
+Name:              kafka-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/component=database
+                   app.kubernetes.io/instance=kafka
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=kafkas.kubedb.com
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
+Type:              ClusterIP
+IP Family Policy:  SingleStack
+IP Families:       IPv4
+IP:                10.96.235.251
+IPs:               10.96.235.251
+Port:              metrics  9091/TCP
+TargetPort:        metrics/TCP
+Endpoints:         10.244.0.117:56790,10.244.0.119:56790,10.244.0.121:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. `ServiceMonitor` will use this information to target its endpoints.
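+
+If you want to see the raw metrics before Prometheus scrapes them, you can port-forward the stats service and fetch the `/metrics` path directly. This is an optional sanity check; the exact metric names in the output may vary:
+
+```bash
+# Forward the stats port locally (run in a separate terminal).
+$ kubectl port-forward -n demo svc/kafka-stats 9091
+
+# Fetch a few lines of raw exporter output.
+$ curl -s http://localhost:9091/metrics | head
+```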
+
+KubeDB will also create a `ServiceMonitor` crd in the `demo` namespace that selects the endpoints of the `kafka-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME          AGE
+kafka-stats   4m49s
+```
+
+Let's verify that the `ServiceMonitor` has the label that we specified in the `spec.monitor` section of the Kafka crd.
+
+```bash
+$ kubectl get servicemonitor -n demo kafka-stats -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2023-03-30T07:59:49Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: kafka
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: kafkas.kubedb.com
+    release: prometheus
+  name: kafka-stats
+  namespace: demo
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: Service
+    name: kafka-stats
+    uid: 4a95fc65-fe2c-4d9c-afdd-aa748642d6bc
+  resourceVersion: "668351"
+  uid: de76712d-4f51-4bab-a625-73966f4bd9f7
+spec:
+  endpoints:
+  - bearerTokenSecret:
+      key: ""
+    honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: metrics
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/component: database
+      app.kubernetes.io/instance: kafka
+      app.kubernetes.io/managed-by: kubedb.com
+      app.kubernetes.io/name: kafkas.kubedb.com
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we specified in the Kafka crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `kafka-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app.kubernetes.io/name=prometheus
+NAME                                                 READY   STATUS    RESTARTS        AGE
+prometheus-prometheus-kube-prometheus-prometheus-0   2/2     Running   8 (4h27m ago)   3d
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-kube-prometheus-prometheus` service, which points to the Prometheus pod,
+
+```bash
+$ kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `kafka-stats` service as one of the targets.
+

+  Prometheus Target +

+
+Check the `endpoint` and `service` labels. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+kubectl delete -n demo kf/kafka
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/v2024.1.31/guides/kafka/README).
+- Deploy [dedicated topology cluster](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/) for Apache Kafka
+- Deploy [combined cluster](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/) for Apache Kafka
+- Detail concepts of [KafkaVersion object](/docs/v2024.1.31/guides/kafka/concepts/catalog).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.1.31/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/quickstart/_index.md b/content/docs/v2024.1.31/guides/kafka/quickstart/_index.md
new file mode 100644
index 0000000000..d6ad5c1084
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: Kafka Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: kf-quickstart-kafka
+    name: Quickstart
+    parent: kf-kafka-guides
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/kafka/quickstart/overview/index.md b/content/docs/v2024.1.31/guides/kafka/quickstart/overview/index.md
new file mode 100644
index 0000000000..1af38786e5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/quickstart/overview/index.md
@@ -0,0 +1,422 @@
+---
+title: Kafka Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: kf-quickstart-quickstart
+    name: Overview
+    parent: kf-quickstart-kafka
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Kafka QuickStart
+
+This tutorial will show you how to use KubeDB to run an [Apache Kafka](https://kafka.apache.org/) cluster.
+

+  lifecycle +

+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/install/_index).
+
+To keep things isolated, we use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME                 STATUS   AGE
+demo                 Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/v2024.1.31/guides/kafka/quickstart/overview/#tips-for-testing).
+
+## Find Available StorageClass
+
+We will have to provide a `StorageClass` in the Kafka CRD specification. Check the available StorageClasses in your cluster using the following command,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  14h
+```
+
+Here, we have the `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+## Find Available KafkaVersion
+
+When you install the KubeDB operator, it registers a CRD named [KafkaVersion](/docs/v2024.1.31/guides/kafka/concepts/catalog). The installation process comes with a set of tested KafkaVersion objects. Let's check the available KafkaVersions by,
+
+```bash
+$ kubectl get kafkaversion
+NAME    VERSION   DB_IMAGE                   DEPRECATED   AGE
+3.3.0   3.3.0     kubedb/kafka-kraft:3.3.0                6d
+```
+
+Notice the `DEPRECATED` column. Here, `true` means that this KafkaVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated KafkaVersion. You can also use the short form `kfversion` to check available KafkaVersions.
+
+In this tutorial, we will use the `3.3.0` KafkaVersion CR to create a Kafka cluster.
+
+## Create a Kafka Cluster
+
+The KubeDB operator implements a Kafka CRD to define the specification of Kafka.
+
+The Kafka instance used for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka-quickstart
+  namespace: demo
+spec:
+  replicas: 3
+  version: 3.3.0
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: DoNotTerminate
+```
+
+Here,
+
+- `spec.version` - is the name of the KafkaVersion CR. Here, a Kafka of version `3.3.0` will be created.
+- `spec.replicas` - specifies the number of Kafka brokers.
+- `spec.storageType` - specifies the type of storage that will be used for Kafka. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Kafka using an `emptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this Kafka instance.
This storage spec will be passed to the StatefulSet created by the KubeDB operator to run Kafka pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required. +- `spec.terminationPolicy` specifies what KubeDB should do when a user try to delete Kafka CR. Termination policy `Delete` will delete the database pods, secret and PVC when the Kafka CR is deleted. + +> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVC does not get resized automatically. + +Let's create the Kafka CR that is shown above: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/Kafka/quickstart/overview/yamls/kafka.yaml +kafka.kubedb.com/kafka-quickstart created +``` + +The Kafka's `STATUS` will go from `Provisioning` to `Ready` state within few minutes. Once the `STATUS` is `Ready`, you are ready to use the Kafka. + +```bash +$ kubectl get kafka -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-quickstart kubedb.com/v1alpha2 3.3.0 Provisioning 2s +kafka-quickstart kubedb.com/v1alpha2 3.3.0 Provisioning 4s +. +. +kafka-quickstart kubedb.com/v1alpha2 3.3.0 Ready 112s + +``` + +Describe the kafka object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe kafka -n demo kafka-quickstart +Name: kafka-quickstart +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Kafka +Metadata: + Creation Timestamp: 2023-01-04T10:13:12Z + Finalizers: + kubedb.com + Generation: 2 + Managed Fields: + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:healthChecker: + .: + f:failureThreshold: + f:periodSeconds: + f:timeoutSeconds: + f:replicas: + f:storage: + .: + f:accessModes: + f:resources: + .: + f:requests: + .: + f:storage: + f:storageClassName: + f:storageType: + f:terminationPolicy: + f:version: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2023-01-04T10:13:12Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: + .: + v:"kubedb.com": + f:spec: + f:authSecret: + Manager: kubedb-provisioner + Operation: Update + Time: 2023-01-04T10:13:12Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:phase: + Manager: kubedb-provisioner + Operation: Update + Subresource: status + Time: 2023-01-04T10:13:14Z + Resource Version: 192231 + UID: 8a1eb48b-75f3-4b3d-b8ff-0634780a9f09 +Spec: + Auth Secret: + Name: kafka-quickstart-admin-cred + Health Checker: + Failure Threshold: 3 + Period Seconds: 20 + Timeout Seconds: 10 + Pod Template: + Controller: + Metadata: + Spec: + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Replicas: 3 + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Storage Type: Durable + Termination Policy: DoNotTerminate + Version: 3.3.0 +Status: + Conditions: + Last Transition Time: 2023-01-04T10:13:14Z + Message: The KubeDB operator has started the provisioning of Kafka: demo/kafka-quickstart + Observed Generation: 2 + Reason: DatabaseProvisioningStartedSuccessfully + Status: True 
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2023-01-04T10:13:20Z
+    Message:               All desired replicas are ready.
+    Observed Generation:   2
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2023-01-04T10:13:52Z
+    Message:               The Kafka: demo/kafka-quickstart is accepting client requests
+    Observed Generation:   2
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2023-01-04T10:15:00Z
+    Message:               The Kafka: demo/kafka-quickstart is ready.
+    Observed Generation:   2
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2023-01-04T10:15:02Z
+    Message:               The Kafka: demo/kafka-quickstart is successfully provisioned.
+    Observed Generation:   2
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Phase:                   Ready
+Events:                    <none>
+```
+
+### KubeDB Operator Generated Resources
+
+On deployment of a Kafka CR, the operator creates the following resources:
+
+```bash
+$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=kafka-quickstart'
+NAME                     READY   STATUS    RESTARTS   AGE
+pod/kafka-quickstart-0   1/1     Running   0          8m50s
+pod/kafka-quickstart-1   1/1     Running   0          8m48s
+pod/kafka-quickstart-2   1/1     Running   0          8m46s
+
+NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                       AGE
+service/kafka-quickstart-pods   ClusterIP   None         <none>        9092/TCP,9093/TCP,29092/TCP   8m52s
+
+NAME                                READY   AGE
+statefulset.apps/kafka-quickstart   3/3     8m50s
+
+NAME                                                  TYPE               VERSION   AGE
+appbinding.appcatalog.appscode.com/kafka-quickstart   kubedb.com/kafka   3.3.0     8m50s
+
+NAME                                 TYPE                       DATA   AGE
+secret/kafka-quickstart-admin-cred   kubernetes.io/basic-auth   2      8m52s
+secret/kafka-quickstart-config       Opaque                     2      8m52s
+```
+
+- `StatefulSet` - a StatefulSet named after the Kafka instance. In topology mode, the operator creates 3 StatefulSets with the name `{Kafka-Name}-{Suffix}`.
+- `Services` - For a combined Kafka instance only one service is created with the name `{Kafka-name}-{pods}`. For topology mode, two services are created.
+  - `{Kafka-Name}-{broker}` - the governing service which is used for inter-broker communications. This service is also used to connect to the brokers with external clients. This is a headless service.
+  - `{Kafka-Name}-{controller}` - the governing service which is used for inter-controller communications. It is a headless service too.
+- `AppBinding` - an [AppBinding](/docs/v2024.1.31/guides/kafka/concepts/appbinding) which holds the connection information for the Kafka brokers. It is also named after the Kafka instance.
+- `Secrets` - 3 types of secrets are generated for each Kafka cluster.
+  - `{Kafka-Name}-{username}-cred` - the auth secrets which hold the `username` and `password` for the Kafka users. The operator generates credentials for the `admin` user if not provided and creates a secret for authentication.
+  - `{Kafka-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the Kafka instance.
+  - `{Kafka-Name}-config` - the default configuration secret created by the operator.
+
+## Publish & Consume messages with Kafka
+
+We will use the `kafka console producer` and `kafka console consumer` to create a Kafka topic, publish messages to the Kafka brokers, and then consume those messages.
+Exec into one of the Kafka brokers in interactive mode first, then navigate to the `HOME` directory, which is at `/opt/kafka`.
+
+```bash
+$ kubectl exec -it -n demo kafka-quickstart-0 -- bash
+root@kafka-quickstart-0:/# cd $HOME
+root@kafka-quickstart-0:~# pwd
+/opt/kafka
+root@kafka-quickstart-0:~#
+```
+
+You will find a file named `clientauth.properties` in the config directory. This operator-generated file contains the authentication/authorization configuration required for publishing or subscribing to a Kafka topic.
+
+```bash
+root@kafka-quickstart-0:~# cat $HOME/config/clientauth.properties
+sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="lJEKu_!Rsf31L;tU";
+security.protocol=SASL_PLAINTEXT
+sasl.mechanism=PLAIN
+```
+
+Now, we have to use a bootstrap server to perform operations on a Kafka broker. For this demo, we are going to use the FQDN of the headless service for the Kafka brokers with the default broker port, which gives `kafka-quickstart-pods.demo.svc.cluster.local:9092`. We will set an environment variable for the `clientauth.properties` file path as well.
+
+```bash
+root@kafka-quickstart-0:~# export SERVER="kafka-quickstart-pods.demo.svc.cluster.local:9092"
+root@kafka-quickstart-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
+```
+
+Let's describe the broker metadata for the quorum.
+
+```bash
+root@kafka-quickstart-0:~# kafka-metadata-quorum.sh --command-config $CLIENTAUTHCONFIG --bootstrap-server $SERVER describe --status
+ClusterId:              11ed-8dd1-2e8877e5897w
+LeaderId:               2
+LeaderEpoch:            79
+HighWatermark:          125229
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   134
+CurrentVoters:          [0,1,2]
+CurrentObservers:       []
+```
+
+It will show you important metadata information like the cluster ID, the current leader ID, the broker IDs participating in leader-election voting, and the IDs of the brokers that are observers. It is important to mention that each broker is assigned a numeric ID, called its broker ID. The ID is assigned sequentially with respect to the host pod name. In this case, the pods are assigned the following broker IDs:
+
+| Pods               | Broker ID |
+|--------------------|:---------:|
+| kafka-quickstart-0 |     0     |
+| kafka-quickstart-1 |     1     |
+| kafka-quickstart-2 |     2     |
+
+Let's create a topic named `quickstart-topic` with 3 partitions and a replication factor of 3. Describe the topic once it's created. You will see the leader ID for each partition and their replica IDs along with the in-sync replicas (ISR).
+
+```bash
+root@kafka-quickstart-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --create --topic quickstart-topic --partitions 3 --replication-factor 3 --bootstrap-server $SERVER
+Created topic quickstart-topic.
+
+root@kafka-quickstart-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --describe --topic quickstart-topic --bootstrap-server $SERVER
+Topic: quickstart-topic	TopicId: E6IUqUQJQICCVqKREfVQ1Q	PartitionCount: 3	ReplicationFactor: 3	Configs: segment.bytes=1073741824
+	Topic: quickstart-topic	Partition: 0	Leader: 1	Replicas: 1,2,0	Isr: 1
+	Topic: quickstart-topic	Partition: 1	Leader: 2	Replicas: 2,0,1	Isr: 2
+	Topic: quickstart-topic	Partition: 2	Leader: 0	Replicas: 0,1,2	Isr: 0
+```
+
+Now, we are going to start a producer and a consumer for the topic `quickstart-topic` using the console. Let's use the current terminal for producing messages and open a new terminal for consuming messages.
+Let's set the environment variables for the bootstrap server and the configuration file in the consumer terminal as well.
+From the topic description, we can see that the leader for partition 2 is broker 0 (the broker that we are on). If we produce messages to the `kafka-quickstart-0` broker (broker ID 0), it will store those messages in partition 2. Let's produce messages in the producer terminal and consume them from the consumer terminal.
+
+```bash
+root@kafka-quickstart-0:~# kafka-console-producer.sh --producer.config $CLIENTAUTHCONFIG --topic quickstart-topic --request-required-acks all --bootstrap-server $SERVER
+>message one
+>message two
+>message three
+>
+```
+
+```bash
+root@kafka-quickstart-0:/# kafka-console-consumer.sh --consumer.config $CLIENTAUTHCONFIG --topic quickstart-topic --from-beginning --bootstrap-server $SERVER --partition 2
+message one
+message two
+message three
+
+```
+
+Notice that messages keep arriving at the consumer as you continue sending messages via the producer. So, we have created a Kafka topic and used the Kafka console producer and consumer to test message publishing and consuming successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo kafka kafka-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kafka.kubedb.com/kafka-quickstart patched
+
+$ kubectl delete kf kafka-quickstart -n demo
+kafka.kubedb.com "kafka-quickstart" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if the database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. Use **`terminationPolicy: WipeOut`**. It is nice to be able to resume the database from the previous one. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the Kafka CR. For more details, please visit [here](/docs/v2024.1.31/guides/kafka/concepts/kafka#specterminationpolicy).
+
+## Next Steps
+
+- [Quickstart Kafka](/docs/v2024.1.31/guides/kafka/quickstart/overview/) with KubeDB Operator.
+- Kafka Clustering supported by KubeDB
+  - [Combined Clustering](/docs/v2024.1.31/guides/kafka/clustering/combined-cluster/)
+  - [Topology Clustering](/docs/v2024.1.31/guides/kafka/clustering/topology-cluster/)
+- Use [kubedb cli](/docs/v2024.1.31/guides/kafka/cli/cli) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [Kafka object](/docs/v2024.1.31/guides/kafka/concepts/kafka).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/kafka/quickstart/overview/yamls/kafka.yaml b/content/docs/v2024.1.31/guides/kafka/quickstart/overview/yamls/kafka.yaml
new file mode 100644
index 0000000000..c81e479377
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/quickstart/overview/yamls/kafka.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka-quickstart
+  namespace: demo
+spec:
+  replicas: 3
+  version: 3.3.0
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: DoNotTerminate
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/kafka/tls/_index.md b/content/docs/v2024.1.31/guides/kafka/tls/_index.md
new file mode 100755
index 0000000000..003a875bdb
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/tls/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Kafka with TLS
+menu:
+  docs_v2024.1.31:
+    identifier: kf-tls
+    name: TLS/SSL Encryption
+    parent: kf-kafka-guides
+    weight: 45
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/kafka/tls/overview.md b/content/docs/v2024.1.31/guides/kafka/tls/overview.md
new file mode 100644
index 0000000000..1bf09f7dcb
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/kafka/tls/overview.md
@@ -0,0 +1,81 @@
+---
+title: Kafka TLS/SSL Encryption Overview
+menu:
+  docs_v2024.1.31:
+    identifier: kf-tls-overview
+    name: Overview
+    parent: kf-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Kafka TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `Kafka`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following crds of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define a desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**Kafka CRD Specification:**
+
+KubeDB uses the following crd fields to enable SSL/TLS encryption in `Kafka`.
+
+- `spec:`
+  - `enableSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [kafka concept](/docs/v2024.1.31/guides/kafka/concepts/kafka) doc.
+
+When `enableSSL` is set to `true`, the user must specify the `tls.issuerRef` field.
+`KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt` and `tls.key`, are used to configure the Kafka server and clients.
+
+## How TLS/SSL is configured in Kafka
+
+The following figure shows how `KubeDB` Enterprise is used to configure TLS/SSL in Kafka. Open the image in a new tab to see the enlarged version.
+
+
+Deploy Kafka with TLS/SSL +
Fig: Deploy Kafka with TLS/SSL
+
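+
+For orientation, here is a minimal sketch of the two objects involved. This is an illustrative example, not the full walkthrough: the `Issuer` assumes a CA key pair already stored in a secret named `kafka-ca` (a hypothetical name), while the `Kafka` fields mirror the spec described above.
+
+```yaml
+# Hypothetical CA issuer; the CA certificate and key live in the kafka-ca secret.
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: kafka-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: kafka-ca
+---
+# Kafka cluster that references the issuer above to enable TLS.
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka
+  namespace: demo
+spec:
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: kafka-ca-issuer
+  replicas: 3
+  version: 3.4.0
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  terminationPolicy: WipeOut
+```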
+
+The process of deploying Kafka with TLS/SSL configuration consists of the following steps:
+
+1. At first, a user creates an `Issuer/ClusterIssuer` cr.
+
+2. Then the user creates a `Kafka` cr which refers to the `Issuer/ClusterIssuer` cr that the user created in the previous step.
+
+3. `KubeDB` Provisioner operator watches for the `Kafka` cr.
+
+4. When it finds one, it creates `Secret`, `Service`, etc. for the `Kafka` database.
+
+5. `KubeDB` Ops-manager operator watches for `Kafka`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a).
+
+6. When it finds all the resources (`Kafka`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using the `tls.issuerRef` and `tls.certificates` field specifications from the `Kafka` cr.
+
+7. `cert-manager` watches for certificates.
+
+8. When it finds one, it creates certificate secrets `tls-secrets` (server, client, exporter secrets, etc.) that hold the actual certificate signed by the CA.
+
+9. `KubeDB` Provisioner operator watches for the certificate secrets `tls-secrets`.
+
+10. When it finds all the tls-secrets, it creates the related `StatefulSets` so that the Kafka database can be configured with TLS/SSL.
+
+In the next doc, we are going to show a step-by-step guide on how to configure a `Kafka` cluster with TLS/SSL.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/README.md b/content/docs/v2024.1.31/guides/mariadb/README.md
new file mode 100644
index 0000000000..7882c54dde
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/README.md
@@ -0,0 +1,61 @@
+---
+title: MariaDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-overview
+    name: MariaDB
+    parent: guides-mariadb
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+url: /docs/v2024.1.31/guides/mariadb/
+aliases:
+- /docs/v2024.1.31/guides/mariadb/README/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+## Supported MariaDB Features
+
+| Features                                                | Availability |
+| ------------------------------------------------------- | :----------: |
+| Clustering                                              |      ✓       |
+| Persistent Volume                                       |      ✓       |
+| Instant Backup                                          |      ✓       |
+| Scheduled Backup                                        |      ✓       |
+| Initialize using Snapshot                               |      ✓       |
+| Initialize using Script (\*.sql, \*sql.gz and/or \*.sh) |      ✓       |
+| Custom Configuration                                    |      ✓       |
+| Using Custom docker image                               |      ✓       |
+| Builtin Prometheus Discovery                            |      ✓       |
+| Using Prometheus operator                               |      ✓       |
+
+## Life Cycle of a MariaDB Object
+

+  lifecycle +

+
+## User Guide
+
+- [Quickstart MariaDB](/docs/v2024.1.31/guides/mariadb/quickstart/overview) with KubeDB Operator.
+- Detail concepts of [MariaDB object](/docs/v2024.1.31/guides/mariadb/concepts/mariadb).
+- Detail concepts of [MariaDBVersion object](/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version).
+- Create [MariaDB Cluster](/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster).
+- Create [MariaDB with Custom Configuration](/docs/v2024.1.31/guides/mariadb/configuration/using-config-file).
+- Use [Custom RBAC](/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mariadb/private-registry/quickstart) to deploy MariaDB with KubeDB.
+- Initialize [MariaDB with Script](/docs/v2024.1.31/guides/mariadb/initialization/using-script).
+- Backup and Restore [MariaDB](/docs/v2024.1.31/guides/mariadb/backup/overview).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mariadb/_index.md b/content/docs/v2024.1.31/guides/mariadb/_index.md
new file mode 100644
index 0000000000..ac9c733aaf
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/_index.md
@@ -0,0 +1,22 @@
+---
+title: MariaDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb
+    name: MariaDB
+    parent: guides
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/_index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/_index.md
new file mode 100644
index 0000000000..404b293161
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/_index.md
@@ -0,0 +1,22 @@
+---
+title: Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-autoscaling
+    name: Autoscaling
+    parent: guides-mariadb
+    weight: 47
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/_index.md
new file mode 100644
index 0000000000..6f2cadb524
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/_index.md
@@ -0,0 +1,22 @@
+---
+title: Compute Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-autoscaling-compute
+    name: Compute Autoscaling
+    parent: guides-mariadb-autoscaling
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/mdas-compute.yaml b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/mdas-compute.yaml
new file mode 100644
index 0000000000..8ac3fba9ca
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/mdas-compute.yaml
@@ -0,0 +1,24 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MariaDBAutoscaler
+metadata:
+
name: md-as-compute + namespace: demo +spec: + databaseRef: + name: sample-mariadb + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + mariadb: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 250m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + containerControlledValues: "RequestsAndLimits" + controlledResources: ["cpu", "memory"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..5337e55f96 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/examples/sample-mariadb.yaml @@ -0,0 +1,26 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.6.16" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/index.md new file mode 100644 index 0000000000..2f46ae8f02 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/cluster/index.md @@ -0,0 +1,545 @@ +--- +title: MariaDB Cluster Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mariadb-autoscaling-compute-cluster + name: Cluster + parent: guides-mariadb-autoscaling-compute + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of a MariaDB Cluster Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a MariaDB replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community, Ops-Manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) + - [MariaDBAutoscaler](/docs/v2024.1.31/guides/mariadb/concepts/autoscaler) + - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest) + - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` +## Autoscaling of Cluster Database + +Here, we are going to deploy a `MariaDB` Cluster using a supported version by `KubeDB` operator. Then we are going to apply `MariaDBAutoscaler` to set up autoscaling. 
+ +#### Deploy MariaDB Cluster + +In this section, we are going to deploy a MariaDB Cluster with version `10.6.16`. Then, in the next section we will set up autoscaling for this database using `MariaDBAutoscaler` CRD. Below is the YAML of the `MariaDB` CR that we are going to create, +> If you want to autoscale MariaDB `Standalone`, Just remove the `spec.Replicas` from the below yaml and rest of the steps are same. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.6.16" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut +``` + +Let's create the `MariaDB` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/autoscaler/compute/cluster/examples/sample-mariadb.yaml +mariadb.kubedb.com/sample-mariadb created +``` + +Now, wait until `sample-mariadb` has status `Ready`. i.e, + +```bash +$ kubectl get mariadb -n demo +NAME VERSION STATUS AGE +sample-mariadb 10.5.23 Ready 14m +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the MariaDB resources, +```bash +$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the mariadb. + +We are now ready to apply the `MariaDBAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a MariaDBAutoscaler Object. + +#### Create MariaDBAutoscaler Object + +In order to set up compute resource autoscaling for this database cluster, we have to create a `MariaDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MariaDBAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MariaDBAutoscaler +metadata: + name: md-as-compute + namespace: demo +spec: + databaseRef: + name: sample-mariadb + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + mariadb: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 250m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + containerControlledValues: "RequestsAndLimits" + controlledResources: ["cpu", "memory"] +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `sample-mariadb` database. +- `spec.compute.mariadb.trigger` specifies that compute autoscaling is enabled for this database. +- `spec.compute.mariadb.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling. +- `spec.compute.mariadb.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. 
+If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating. +- `spec.compute.mariadb.minAllowed` specifies the minimum allowed resources for the database. +- `spec.compute.mariadb.maxAllowed` specifies the maximum allowed resources for the database. +- `spec.compute.mariadb.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.mariadb.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions.apply` has two supported value : `IfReady` & `Always`. +Use `IfReady` if you want to process the opsReq only when the database is Ready. And use `Always` if you want to process the execution of opsReq irrespective of the Database state. +- `spec.opsRequestOptions.timeout` specifies the maximum time for each step of the opsRequest(in seconds). +If a step doesn't finish within the specified timeout, the ops request will result in failure. + + +Let's create the `MariaDBAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/autoscaler/compute/cluster/examples/mdas-compute.yaml +mariadbautoscaler.autoscaling.kubedb.com/mdas-compute created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `mariadbautoscaler` resource is created successfully, + +```bash +$ kubectl get mariadbautoscaler -n demo +NAME AGE +md-as-compute 5m56s + +$ kubectl describe mariadbautoscaler md-as-compute -n demo +Name: md-as-compute +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: MariaDBAutoscaler +Metadata: + Creation Timestamp: 2022-09-16T11:26:58Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:mariadb: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + .: + f:name: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-09-16T11:26:58Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-09-16T11:27:07Z + Resource Version: 846645 + UID: 44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58 +Spec: + Compute: + Mariadb: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 250m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: sample-mariadb + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Index: 46 + Weight: 555 + Reference Timestamp: 2022-09-16T00:00:00Z + Total Weight: 2.648440345821337 + First Sample Start: 2022-09-16T11:26:48Z + Last Sample Start: 2022-09-16T11:32:52Z + Last Update Time: 2022-09-16T11:33:02Z + Memory Histogram: + Bucket Weights: + Index: 1 + Weight: 10000 + Reference Timestamp: 
2022-09-17T00:00:00Z + Total Weight: 1.391848625060675 + Ref: + Container Name: md-coordinator + Vpa Object Name: sample-mariadb + Total Samples Count: 19 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Index: 3 + Weight: 556 + Reference Timestamp: 2022-09-16T00:00:00Z + Total Weight: 2.648440345821337 + First Sample Start: 2022-09-16T11:26:48Z + Last Sample Start: 2022-09-16T11:32:52Z + Last Update Time: 2022-09-16T11:33:02Z + Memory Histogram: + Reference Timestamp: 2022-09-17T00:00:00Z + Ref: + Container Name: mariadb + Vpa Object Name: sample-mariadb + Total Samples Count: 19 + Version: v3 + Conditions: + Last Transition Time: 2022-09-16T11:27:07Z + Message: Successfully created mariaDBOpsRequest demo/mdops-sample-mariadb-6xc1kc + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-09-16T11:27:02Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: mariadb + Lower Bound: + Cpu: 250m + Memory: 400Mi + Target: + Cpu: 250m + Memory: 400Mi + Uncapped Target: + Cpu: 25m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: sample-mariadb +Events: + +``` +So, the `mariadbautoscaler` resource is created successfully. + +We can verify from the above output that `status.vpas` contains the `RecommendationProvided` condition to true. And in the same time, `status.vpas.recommendation.containerRecommendations` contain the actual generated recommendation. + +Our autoscaler operator continuously watches the recommendation generated and creates an `mariadbopsrequest` based on the recommendations, if the database pod resources are needed to scaled up or down. + +Let's watch the `mariadbopsrequest` in the demo namespace to see if any `mariadbopsrequest` object is created. After some time you'll see that a `mariadbopsrequest` will be created based on the recommendation. + +```bash +$ kubectl get mariadbopsrequest -n demo +NAME TYPE STATUS AGE +mdops-sample-mariadb-6xc1kc VerticalScaling Progressing 7s +``` + +Let's wait for the ops request to become successful. + +```bash +$ kubectl get mariadbopsrequest -n demo +NAME TYPE STATUS AGE +mdops-vpa-sample-mariadb-z43wc8 VerticalScaling Successful 3m32s +``` + +We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest` we will get an overview of the steps that were followed to scale the database. 
+ +```bash +$ kubectl describe mariadbopsrequest -n demo mdops-vpa-sample-mariadb-z43wc8 +Name: mdops-sample-mariadb-6xc1kc +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MariaDBOpsRequest +Metadata: + Creation Timestamp: 2022-09-16T11:27:07Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58"}: + f:spec: + .: + f:apply: + f:databaseRef: + .: + f:name: + f:timeout: + f:type: + f:verticalScaling: + .: + f:mariadb: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-09-16T11:27:07Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-09-16T11:27:07Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: MariaDBAutoscaler + Name: md-as-compute + UID: 44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58 + Resource Version: 846324 + UID: c2b30107-c6d3-44bb-adf3-135edc5d615b +Spec: + Apply: IfReady + Database Ref: + Name: sample-mariadb + Timeout: 2m0s + Type: VerticalScaling + Vertical Scaling: + Mariadb: + Limits: + Cpu: 250m + Memory: 400Mi + Requests: + Cpu: 250m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-09-16T11:27:07Z + Message: Controller has started to Progress the MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-09-16T11:30:42Z + Message: Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 2022-09-16T11:30:47Z + Message: Vertical scale successful for MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc + Observed Generation: 1 + Reason: SuccessfullyPerformedVerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-09-16T11:30:47Z + Message: Controller has successfully scaled the MariaDB demo/mdops-sample-mariadb-6xc1kc + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 8m48s KubeDB Enterprise Operator Start processing for MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc + Normal Starting 8m48s KubeDB Enterprise Operator Pausing MariaDB databse: demo/sample-mariadb + Normal Successful 8m48s KubeDB Enterprise Operator Successfully paused MariaDB database: demo/sample-mariadb for MariaDBOpsRequest: mdops-sample-mariadb-6xc1kc + Normal Starting 8m43s KubeDB Enterprise Operator Restarting Pod: demo/sample-mariadb-0 + Normal Starting 7m33s KubeDB Enterprise Operator Restarting Pod: demo/sample-mariadb-1 + Normal Starting 6m23s KubeDB Enterprise Operator Restarting Pod: demo/sample-mariadb-2 + Normal Successful 5m13s KubeDB Enterprise Operator Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc + Normal Successful 5m8s KubeDB Enterprise Operator Vertical scale successful for MariaDBOpsRequest: demo/mdops-sample-mariadb-6xc1kc 
+  Normal  Starting    5m8s   KubeDB Enterprise Operator   Resuming MariaDB database: demo/sample-mariadb
+  Normal  Successful  5m8s   KubeDB Enterprise Operator   Successfully resumed MariaDB database: demo/sample-mariadb
+  Normal  Successful  5m8s   KubeDB Enterprise Operator   Controller has Successfully scaled the MariaDB database: demo/sample-mariadb
+```
+
+Now, we are going to verify from the Pod and from the MariaDB YAML whether the resources of the replicaset database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the MariaDB replicaset database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mariadb -n demo sample-mariadb
+kubectl delete mariadbautoscaler -n demo md-as-compute
+kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/images/mdas-compute.png b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/images/mdas-compute.png
new file mode 100644
index 0000000000..0f8a413f00
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/images/mdas-compute.png differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/index.md
new file mode 100644
index 0000000000..276a961dd8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: MariaDB Compute Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-autoscaling-compute-overview
+    name: Overview
+    parent: guides-mariadb-autoscaling-compute
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. CPU and memory, using the `mariadbautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBAutoscaler](/docs/v2024.1.31/guides/mariadb/concepts/autoscaler)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Auto Scaling process of MariaDB" src="/docs/v2024.1.31/guides/mariadb/autoscaler/compute/overview/images/mdas-compute.png">
+<figcaption align="center">Fig: Auto Scaling process of MariaDB</figcaption>
+</figure>
+ +The Auto Scaling process consists of the following steps: + +1. At first, the user creates a `MariaDB` Custom Resource Object (CRO). + +2. `KubeDB` Community operator watches the `MariaDB` CRO. + +3. When the operator finds a `MariaDB` CRO, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to set up autoscaling of the CPU & Memory resources of the `MariaDB` database the user creates a `MariaDBAutoscaler` CRO with desired configuration. + +5. `KubeDB` Autoscaler operator watches the `MariaDBAutoscaler` CRO. + +6. `KubeDB` Autoscaler operator utilizes the modified version of Kubernetes official [VPA-Recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg) for different components of the database, as specified in the `mariadbautoscaler` CRO. +It generates recommendations based on resource usages, & store them in the `status` section of the autoscaler CRO. + +7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `MariaDBOpsRequest` CRO to scale the database to match the recommendation provided by the VPA object. + +8. `KubeDB Ops-Manager operator` watches the `MariaDBOpsRequest` CRO. + +9. Lastly, the `KubeDB Ops-Manager operator` will scale the database component vertically as specified on the `MariaDBOpsRequest` CRO. + +In the next docs, we are going to show a step by step guide on Autoscaling of MariaDB database using `MariaDBAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/_index.md new file mode 100644 index 0000000000..6f5a88bd88 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/_index.md @@ -0,0 +1,22 @@ +--- +title: Storage Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mariadb-autoscaling-storage + name: Storage Autoscaling + parent: guides-mariadb-autoscaling + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/mdas-storage.yaml b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/mdas-storage.yaml new file mode 100644 index 0000000000..5533980e41 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/mdas-storage.yaml @@ -0,0 +1,14 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MariaDBAutoscaler +metadata: + name: md-as-st + namespace: demo +spec: + databaseRef: + name: sample-mariadb + storage: + mariadb: + trigger: "On" + usageThreshold: 20 + scalingThreshold: 20 + expansionMode: "Online" diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..6bee3afc10 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/examples/sample-mariadb.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + replicas: 3 + storageType: Durable + storage: + storageClassName: 
"topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/index.md new file mode 100644 index 0000000000..ce5f40b37b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/cluster/index.md @@ -0,0 +1,329 @@ +--- +title: MariaDB Cluster Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mariadb-autoscaling-storage-cluster + name: Cluster + parent: guides-mariadb-autoscaling-storage + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Storage Autoscaling of a MariaDB Cluster + +This guide will show you how to use `KubeDB` to autoscale the storage of a MariaDB Replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) + +- You must have a `StorageClass` that supports volume expansion. + +- You should be familiar with the following `KubeDB` concepts: + - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) + - [MariaDBAutoscaler](/docs/v2024.1.31/guides/mariadb/concepts/autoscaler) + - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest) + - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Storage Autoscaling of Cluster Database + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 79m +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 78m +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm) + +Now, we are going to deploy a `MariaDB` replicaset using a supported version by `KubeDB` operator. Then we are going to apply `MariaDBAutoscaler` to set up autoscaling. + +#### Deploy MariaDB Cluster + +In this section, we are going to deploy a MariaDB replicaset database with version `10.5.23`. Then, in the next section we will set up autoscaling for this database using `MariaDBAutoscaler` CRD. 
Below is the YAML of the `MariaDB` CR that we are going to create,
+
+> If you want to autoscale a MariaDB `Standalone`, just remove `spec.replicas` from the yaml below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/autoscaler/storage/cluster/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    3m46s
+```
+
+Let's check the volume size from the statefulset, and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS          REASON   AGE
+pvc-43266d76-f280-4cca-bd78-d13660a84db9   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-2   topolvm-provisioner            57s
+pvc-4a509b05-774b-42d9-b36d-599c9056af37   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-0   topolvm-provisioner            58s
+pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-1   topolvm-provisioner            57s
+```
+
+You can see the statefulset has 1Gi of storage, and the capacity of each persistent volume is also 1Gi.
+
+We are now ready to apply the `MariaDBAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a MariaDBAutoscaler Object.
+
+#### Create MariaDBAutoscaler Object
+
+In order to set up storage autoscaling for this replicaset database, we have to create a `MariaDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MariaDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MariaDBAutoscaler
+metadata:
+  name: md-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mariadb
+  storage:
+    mariadb:
+      trigger: "On"
+      usageThreshold: 20
+      scalingThreshold: 20
+      expansionMode: "Online"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are setting up storage autoscaling for the `sample-mariadb` database.
+- `spec.storage.mariadb.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.mariadb.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `20%`, storage autoscaling will be triggered.
+- `spec.storage.mariadb.scalingThreshold` specifies the scaling threshold; storage will be expanded by `20%` of the current amount.
+- `spec.storage.mariadb.expansionMode` specifies the expansion mode of the volume expansion `MariaDBOpsRequest` created by the `MariaDBAutoscaler`. `topolvm-provisioner` supports online volume expansion, so `expansionMode` is set to `"Online"` here. A quick way to check these numbers by hand is shown below.
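+
+To build intuition for these thresholds, here is a quick, hypothetical helper (not part of KubeDB; it assumes the `sample-mariadb-0` pod and the `data-sample-mariadb-0` PVC created above, and GNU `df` inside the image) that checks the current usage of the data volume against the `usageThreshold` by hand:
+
+```bash
+# Hypothetical sanity check: compare current usage of the data volume
+# against the 20% usageThreshold configured in the autoscaler above.
+USED=$(kubectl exec -n demo sample-mariadb-0 -- df --output=pcent /var/lib/mysql | tail -1 | tr -dc '0-9')
+SIZE=$(kubectl get pvc -n demo data-sample-mariadb-0 -o jsonpath='{.status.capacity.storage}')
+echo "current usage: ${USED}%, current capacity: ${SIZE}"
+# With scalingThreshold: 20, a triggered expansion requests ~20% more storage.
+if [ "${USED}" -gt 20 ]; then
+  echo "usage is above usageThreshold; expect a VolumeExpansion MariaDBOpsRequest"
+fi
+```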
+
+Let's create the `MariaDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/autoscaler/storage/cluster/examples/mdas-storage.yaml
+mariadbautoscaler.autoscaling.kubedb.com/md-as-st created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `mariadbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mariadbautoscaler -n demo
+NAME       AGE
+md-as-st   33s
+
+$ kubectl describe mariadbautoscaler md-as-st -n demo
+Name:         md-as-st
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MariaDBAutoscaler
+Metadata:
+  Creation Timestamp:  2022-01-14T06:08:02Z
+  Generation:          1
+  Managed Fields:
+    ...
+  Resource Version:  24009
+  UID:               4f45a3b3-fc72-4d04-b52c-a770944311f6
+Spec:
+  Database Ref:
+    Name:  sample-mariadb
+  Storage:
+    Mariadb:
+      Scaling Threshold:  20
+      Trigger:            On
+      Usage Threshold:    20
+Events:  <none>
+```
+
+So, the `mariadbautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the database pod and fill the database volume (`/var/lib/mysql`) using the following commands:
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ df -h /var/lib/mysql
+Filesystem                                          Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164  1014M  357M  658M  36% /var/lib/mysql
+root@sample-mariadb-0:/ dd if=/dev/zero of=/var/lib/mysql/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.340877 s, 1.5 GB/s
+root@sample-mariadb-0:/ df -h /var/lib/mysql
+Filesystem                                          Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164  1014M  857M  158M  85% /var/lib/mysql
+```
+
+So, from the above output we can see that the storage usage is 85%, which exceeded the `usageThreshold` 20%.
+
+Let's watch the `mariadbopsrequest` in the demo namespace to see if any `mariadbopsrequest` object is created. After some time you'll see that a `mariadbopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ kubectl get mariadbopsrequest -n demo
+NAME                         TYPE              STATUS        AGE
+mops-sample-mariadb-xojkua   VolumeExpansion   Progressing   15s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get mariadbopsrequest -n demo
+NAME                         TYPE              STATUS       AGE
+mops-sample-mariadb-xojkua   VolumeExpansion   Successful   97s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mariadbopsrequest -n demo mops-sample-mariadb-xojkua
+Name:         mops-sample-mariadb-xojkua
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=sample-mariadb
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=mariadbs.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MariaDBOpsRequest
+Metadata:
+  Creation Timestamp:  2022-01-14T06:13:10Z
+  Generation:          1
+  Managed Fields: ...
+ Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: MariaDBAutoscaler + Name: md-as-st + UID: 4f45a3b3-fc72-4d04-b52c-a770944311f6 + Resource Version: 25557 + UID: 90763a49-a03f-407c-a233-fb20c4ab57d7 +Spec: + Database Ref: + Name: sample-mariadb + Type: VolumeExpansion + Volume Expansion: + Mariadb: 1594884096 +Status: + Conditions: + Last Transition Time: 2022-01-14T06:13:10Z + Message: Controller has started to Progress the MariaDBOpsRequest: demo/mops-sample-mariadb-xojkua + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-01-14T06:14:25Z + Message: Volume Expansion performed successfully in MariaDB pod for MariaDBOpsRequest: demo/mops-sample-mariadb-xojkua + Observed Generation: 1 + Reason: SuccessfullyVolumeExpanded + Status: True + Type: VolumeExpansion + Last Transition Time: 2022-01-14T06:14:25Z + Message: Controller has successfully expand the volume of MariaDB demo/mops-sample-mariadb-xojkua + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m58s KubeDB Enterprise Operator Start processing for MariaDBOpsRequest: demo/mops-sample-mariadb-xojkua + Normal Starting 2m58s KubeDB Enterprise Operator Pausing MariaDB databse: demo/sample-mariadb + Normal Successful 2m58s KubeDB Enterprise Operator Successfully paused MariaDB database: demo/sample-mariadb for MariaDBOpsRequest: mops-sample-mariadb-xojkua + Normal Successful 103s KubeDB Enterprise Operator Volume Expansion performed successfully in MariaDB pod for MariaDBOpsRequest: demo/mops-sample-mariadb-xojkua + Normal Starting 103s KubeDB Enterprise Operator Updating MariaDB storage + Normal Successful 103s KubeDB Enterprise Operator Successfully Updated MariaDB storage + Normal Starting 103s KubeDB Enterprise Operator Resuming MariaDB database: demo/sample-mariadb + Normal Successful 103s KubeDB Enterprise Operator Successfully resumed MariaDB database: demo/sample-mariadb + Normal Successful 103s KubeDB Enterprise Operator Controller has Successfully expand the volume of MariaDB: demo/sample-mariadb +``` + +Now, we are going to verify from the `Statefulset`, and the `Persistent Volume` whether the volume of the replicaset database has expanded to meet the desired state, Let's check, + +```bash +$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1594884096" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-43266d76-f280-4cca-bd78-d13660a84db9 2Gi RWO Delete Bound demo/data-sample-mariadb-2 topolvm-provisioner 23m +pvc-4a509b05-774b-42d9-b36d-599c9056af37 2Gi RWO Delete Bound demo/data-sample-mariadb-0 topolvm-provisioner 24m +pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d 2Gi RWO Delete Bound demo/data-sample-mariadb-1 topolvm-provisioner 23m +``` + +The above output verifies that we have successfully autoscaled the volume of the MariaDB replicaset database. 
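+
+As a final sanity check, you can also confirm the expansion from inside one of the database pods by re-running the earlier `df` command; the filesystem mounted at `/var/lib/mysql` should now report the expanded size (the exact numbers will vary in your environment):
+
+```bash
+# The data volume should now be larger than the original 1Gi.
+$ kubectl exec -it -n demo sample-mariadb-0 -- df -h /var/lib/mysql
+```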
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mariadb -n demo sample-mariadb
+kubectl delete mariadbautoscaler -n demo md-as-st
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/images/mdas-storage.jpeg b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/images/mdas-storage.jpeg
new file mode 100644
index 0000000000..e8f516f05b
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/images/mdas-storage.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/index.md
new file mode 100644
index 0000000000..ad4f59dc8f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/index.md
@@ -0,0 +1,66 @@
+---
+title: MariaDB Storage Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mguides-mariadb-autoscaling-storage-overview
+    name: Overview
+    parent: guides-mariadb-autoscaling-storage
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `mariadbautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBAutoscaler](/docs/v2024.1.31/guides/mariadb/concepts/autoscaler)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Storage Autoscaling process of MariaDB" src="/docs/v2024.1.31/guides/mariadb/autoscaler/storage/overview/images/mdas-storage.jpeg">
+<figcaption align="center">Fig: Storage Autoscaling process of MariaDB</figcaption>
+</figure>
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `MariaDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `MariaDB` CR. + +3. When the operator finds a `MariaDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration. This Persistent Volume will be expanded by the `KubeDB` Enterprise operator. + +5. Then, in order to set up storage autoscaling of the `MariaDB` database the user creates a `MariaDBAutoscaler` CRO with desired configuration. + +6. `KubeDB` Autoscaler operator watches the `MariaDBAutoscaler` CRO. + +7. `KubeDB` Autoscaler operator continuously watches persistent volumes of the databases to check if it exceeds the specified usage threshold. + +8. If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `MariaDBOpsRequest` to expand the storage of the database. +9. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CRO. +10. Then the `KubeDB` Enterprise operator will expand the storage of the database component as specified on the `MariaDBOpsRequest` CRO. + +In the next docs, we are going to show a step by step guide on Autoscaling storage of various MariaDB database components using `MariaDBAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/_index.md b/content/docs/v2024.1.31/guides/mariadb/backup/_index.md new file mode 100644 index 0000000000..701910686b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/_index.md @@ -0,0 +1,22 @@ +--- +title: Backup & Restore MariaDB +menu: + docs_v2024.1.31: + identifier: guides-mariadb-backup + name: Backup & Restore + parent: guides-mariadb + weight: 70 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/backupblueprint.yaml new file mode 100644 index 0000000000..a8b951d27f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/backupblueprint.yaml @@ -0,0 +1,17 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: mariadb-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: mariadb-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-2.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-2.yaml new file mode 100644 index 0000000000..97348e1681 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-2.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: 
mariadb-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-3.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-3.yaml new file mode 100644 index 0000000000..c88d8d7056 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb-3.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: mariadb-backup-template + params.stash.appscode.com/args: --databases mysql +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..d76bbb3388 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/examples/sample-mariadb.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: mariadb-backup-template +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-2.png b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-2.png new file mode 100644 index 0000000000..8806af218d Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-2.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-3.png b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-3.png new file mode 100644 index 0000000000..3253bf52f4 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-3.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb.png b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb.png new file mode 100644 index 0000000000..eb9e1c9691 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/index.md new file mode 100644 index 0000000000..2d8b41e424 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/auto-backup/index.md @@ -0,0 +1,655 @@ +--- +title: MariaDB Auto-Backup | Stash +description: Backup MariaDB using Stash Auto-Backup +menu: + docs_v2024.1.31: + identifier: guides-mariadb-backup-auto-backup + name: Auto-Backup + parent: guides-mariadb-backup + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + 
dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# Backup MariaDB using Stash Auto-Backup + +Stash can be configured to automatically backup any MariaDB database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that the database owners can easily backup their database with just a few annotations. + +In this tutorial, we are going to show how you can configure a backup blueprint for MariaDB databases in your cluster and backup them with few annotations. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/). +- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README). +- If you are not familiar with how Stash backup and restore MariaDB databases, please check the following guide [here](/docs/v2024.1.31/guides/mariadb/backup/overview/). +- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/). +- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/). + +You should be familiar with the following `Stash` concepts: + +- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/) +- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/) +- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/) +- [Repository](https://stash.run/docs/latest/concepts/crds/repository/) +- [Function](https://stash.run/docs/latest/concepts/crds/function/) +- [Task](https://stash.run/docs/latest/concepts/crds/task/) + +In this tutorial, we are going to show backup of three different MariaDB databases on three different namespaces named `demo`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done it already. + +```bash +❯ kubectl create ns demo +namespace/demo created + +❯ kubectl create ns demo-2 +namespace/demo-2 created + +❯ kubectl create ns demo-3 +namespace/demo-3 created +``` + +When you install the Stash, it automatically installs all the official database addons. Verify that it has installed the MariaDB addons using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep mariadb +mariadb-backup-10.5.23 62m +mariadb-restore-10.5.23 62m +``` + +## Prepare Backup Blueprint + +To backup an MariaDB database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`. + +The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to backup any MariaDB database of any namespace just by creating the storage `Secret` in that namespace and adding few annotations to your MariaDB CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to backup the database. 
+ +Below is the `BackupBlueprint` object that we are going to use in this tutorial, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: mariadb-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: mariadb-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true +``` + +Here, we are using a GCS bucket as our backend. We are providing `gcs-secret` at the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint. + +Notice the `prefix` field of `backend` section. We have used some variables in form of `${VARIABLE_NAME}`. Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance. + +Let's create the `BackupBlueprint` we have shown above, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/auto-backup/examples/backupblueprint.yaml +backupblueprint.stash.appscode.com/mariadb-backup-template created +``` + +Now, we are ready to backup our MariaDB databases using few annotations. You can check available auto-backup annotations for a databases from [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database). + +## Auto-backup with default configurations + +In this section, we are going to backup an MariaDB database of `demo` namespace. We are going to use the default configurations specified in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo` namespace with the access credentials to our GCS bucket. + +```bash +❯ echo -n 'changeit' > RESTIC_PASSWORD +❯ echo -n '' > GOOGLE_PROJECT_ID +❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +❯ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an MariaDB CRO in `demo` namespace. Below is the YAML of the MariaDB object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: mariadb-backup-template +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier though `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`. 
+ +Let's create the above MariaDB CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/auto-backup/examples/sample-mariadb.yaml +mariadb.kubedb.com/sample-mariadb created +``` + +### Verify Auto-backup configured + +In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for our MariaDB database we have just deployed or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our MariaDB or not. + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-mariadb 10s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo app-sample-mariadb -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: +... + name: app-sample-mariadb + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: mariadb-backup/demo/mariadb/sample-mariadb + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our MariaDB in `demo` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-mariadb mariadb-backup-10.5.23 */5 * * * * Ready 7m28s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo app-sample-mariadb -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-mariadb + namespace: demo + ... + spec: + driver: Restic + repository: + name: app-sample-mariadb + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-25T05:14:51Z" + message: Repository demo/app-sample-mariadb exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-25T05:14:51Z" + message: Backend Secret demo/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-25T05:14:51Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mariadb + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-25T05:14:51Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 + +``` + +Notice the `target` section. Stash has automatically added the MariaDB as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. 
You can watch for `BackupSession` as below,
+
+```bash
+❯ kubectl get backupsession -n demo -w
+NAME                            INVOKER-TYPE          INVOKER-NAME         PHASE       AGE
+app-sample-mariadb-1614230401   BackupConfiguration   app-sample-mariadb   Succeeded   5m40s
+app-sample-mariadb-1614230701   BackupConfiguration   app-sample-mariadb   Running     39s
+```
+
+Once the backup has been completed successfully, you should see that the backed up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`.
+
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb.png">
+<figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
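+
+Besides checking the bucket, you can get a similar confirmation from the cluster itself. After a successful backup, Stash updates the status columns of the `Repository`; the exact size, snapshot count, and timing will differ in your setup:
+
+```bash
+# INTEGRITY, SIZE, SNAPSHOT-COUNT and LAST-SUCCESSFUL-BACKUP should now be populated.
+$ kubectl get repository -n demo app-sample-mariadb
+```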
+ +## Auto-backup with a custom schedule + +In this section, we are going to backup an MariaDB database of `demo-2` namespace. This time, we are going to overwrite the default schedule used in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-2` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-2 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an MariaDB CRO in `demo-2` namespace. Below is the YAML of the MariaDB object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: mariadb-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Notice the `annotations` section. This time, we have passed a schedule via `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above MariaDB CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/auto-backup/examples/sample-mariadb-2.yaml +mariadb.kubedb.com/sample-mariadb-2 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup has been configured properly or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our MariaDB or not. + +```bash +❯ kubectl get repository -n demo-2 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-mariadb-2 4s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-2 app-sample-mariadb-2 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-sample-mariadb-2 + namespace: demo-2 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: mariadb-backup/demo-2/mariadb/sample-mariadb-2 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our MariaDB in `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-2 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-mariadb-2 mariadb-backup-10.5.23 */3 * * * * Ready 3m24s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-sample-mariadb-2 -o yaml + +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-mariadb-2 + namespace: demo-2 + ... 
+ ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mariadb-2 + uid: 7cbdf140-5fd1-487a-b04f-1847def418e8 + resourceVersion: "56888" + selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo-2/backupconfigurations/app-sample-mariadb-2 + uid: e85dd3db-fa41-48b8-b253-5731ee8cc956 +spec: + driver: Restic + repository: + name: app-sample-mariadb-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb-2 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-25T06:10:14Z" + message: Repository demo-2/app-sample-mariadb-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-25T06:10:14Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-25T06:10:14Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mariadb-2 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-25T06:10:14Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new MariaDB as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-sample-mariadb-2-1614233715 BackupConfiguration app-sample-mariadb-2 Succeeded 3m2s +app-sample-mariadb-2-1614233880 BackupConfiguration app-sample-mariadb-2 Running 17s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+  <img alt="Backup data in GCS Bucket" src="/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-2.png">
+<figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
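+
+You can also verify that the custom schedule reached the backup triggering CronJob that Stash created; its SCHEDULE column should show the `*/3 * * * *` expression we passed via the annotation (the CronJob name depends on your Stash version, so we simply list them here):
+
+```bash
+# The SCHEDULE column should match the stash.appscode.com/schedule annotation.
+$ kubectl get cronjob -n demo-2
+```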
+ +## Auto-backup with custom parameters + +In this section, we are going to backup an MariaDB database of `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-3` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-3 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an MariaDB CRO in `demo-3` namespace. Below is the YAML of the MariaDB object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: mariadb-backup-template + params.stash.appscode.com/args: --databases mysql +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + +``` + +Notice the `annotations` section. This time, we have passed an argument via `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above MariaDB CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/auto-backup/examples/sample-mariadb-3.yaml +mariadb.kubedb.com/sample-mariadb-3 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup resources has been created or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our MariaDB or not. + +```bash +❯ kubectl get repository -n demo-3 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-mariadb-3 8s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-3 app-sample-mariadb-3 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-sample-mariadb-3 + namespace: demo-3 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: mariadb-backup/demo-3/mariadb/sample-mariadb-3 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our MariaDB in `demo-3` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-3 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-mariadb-3 mariadb-backup-10.5.23 */5 * * * * Ready 106s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo-3 app-sample-mariadb-3 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-mariadb-3 + namespace: demo-3 + ... 
+spec: + driver: Restic + repository: + name: app-sample-mariadb-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb-3 + task: + params: + - name: args + value: --databases mysql + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-25T11:58:12Z" + message: Repository demo-3/app-sample-mariadb-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-25T11:58:12Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-25T11:58:12Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mariadb-3 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-25T11:58:12Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new MariaDB as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-sample-mariadb-3-1614254408 BackupConfiguration app-sample-mariadb-3 Succeeded 5m23s +app-sample-mariadb-3-1614254708 BackupConfiguration app-sample-mariadb-3 Running 23s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+  <img alt="Backup data in GCS Bucket" src="/docs/v2024.1.31/guides/mariadb/backup/auto-backup/images/sample-mariadb-3.png">
+<figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
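+
+Because this backup was taken with `--databases mysql`, only that database is included in each snapshot. If the Stash snapshot API is available in your installation (this listing is an assumption; it requires Stash's aggregated API server to be enabled), you can list the snapshots of this repository directly from the cluster:
+
+```bash
+# Lists restic snapshots through the Stash snapshot API, if enabled.
+$ kubectl get snapshots -n demo-3
+```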
+ +## Cleanup + +To cleanup the resources crated by this tutorial, run the following commands, + +```bash +❯ kubectl delete -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/auto-backup/examples/ +backupblueprint.stash.appscode.com "mariadb-backup-template" deleted +mariadb.kubedb.com "sample-mariadb-2" deleted +mariadb.kubedb.com "sample-mariadb-3" deleted +mariadb.kubedb.com "sample-mariadb" deleted + +❯ kubectl delete repository -n demo --all +repository.stash.appscode.com "app-sample-mariadb" deleted +❯ kubectl delete repository -n demo-2 --all +repository.stash.appscode.com "app-sample-mariadb-2" deleted +❯ kubectl delete repository -n demo-3 --all +repository.stash.appscode.com "app-sample-mariadb-3" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/multi-retention-policy.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/multi-retention-policy.yaml new file mode 100644 index 0000000000..0af04272fd --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/multi-retention-policy.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mariadb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + retentionPolicy: + name: sample-mariadb-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/passing-args.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/passing-args.yaml new file mode 100644 index 0000000000..386e5a79bf --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/passing-args.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mariadb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --databases testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/resource-limit.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/resource-limit.yaml new file mode 100644 index 0000000000..d44836df7c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/resource-limit.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/specific-user.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/specific-user.yaml new file mode 100644 index 0000000000..c0cd05d315 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/backup/specific-user.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mariadb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/repository.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/repository.yaml new file mode 100644 index 0000000000..8a6aaab13b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/customizing + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/passing-args.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/passing-args.yaml new file mode 100644 index 0000000000..b1faff652b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/passing-args.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + task: + params: + - name: args + value: --one-database testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/resource-limit.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/resource-limit.yaml new file mode 100644 index 0000000000..3f03de9699 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] + + diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-snapshot.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-snapshot.yaml new file mode 100644 index 0000000000..0309cf2ec0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-snapshot.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + rules: + - snapshots: [4bc21d6f] diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-user.yaml 
b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-user.yaml
new file mode 100644
index 0000000000..3c828f9b39
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/restore/specific-user.yaml
@@ -0,0 +1,20 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/sample-mariadb.yaml
new file mode 100644
index 0000000000..697f492754
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/examples/sample-mariadb.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 1
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  terminationPolicy: Delete
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/customization/index.md b/content/docs/v2024.1.31/guides/mariadb/backup/customization/index.md
new file mode 100644
index 0000000000..b30ca2e3ef
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/customization/index.md
@@ -0,0 +1,286 @@
+---
+title: MariaDB Backup Customization | Stash
+description: Customizing MariaDB Backup and Restore process with Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-backup-customization
+    name: Customizing Backup & Restore Process
+    parent: guides-mariadb-backup
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Customizing Backup and Restore Process
+
+Stash provides rich customization support for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations.
+
+## Customizing Backup Process
+
+In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup process as a specific user, using multiple retention policies, etc.
+
+### Passing arguments to the backup process
+
+The Stash MariaDB addon uses [mysqldump](https://mariadb.com/kb/en/mysqldump) for backup. You can pass arguments to `mysqldump` through the `args` param under the `task.params` section.
+
+The below example shows how you can pass `--databases testdb` to back up only a specific MariaDB database named `testdb`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    params:
+      - name: args
+        value: --databases testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+> **WARNING**: Make sure that you have created the specific database before taking backup. In this case, the database `testdb` should exist before the backup job starts.
+
+### Running backup job as a specific user
+
+If your cluster requires running the backup job as a specific user, you can provide a `securityContext` under the `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Specifying Memory/CPU limit/request for the backup job
+
+If you want to specify the Memory/CPU limit/request for your backup job, you can specify the `resources` field under the `runtimeSettings.container` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Using multiple retention policies
+
+You can also specify multiple retention policies for your backed up data. For example, you may want to keep a few daily snapshots, a few weekly snapshots, a few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  retentionPolicy:
+    name: sample-mariadb-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
+```
+
+To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy).
+
+## Customizing Restore Process
+
+Stash also uses the `mysql` client during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run the restore job as a specific user, etc.
+
+### Passing arguments to the restore process
+
+Similar to the backup process, you can pass arguments to the restore process through the `args` param under the `task.params` section. This example will restore data from the database `testdb` only.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  task:
+    params:
+      - name: args
+        value: --one-database testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  rules:
+    - snapshots: [latest]
+```
+
+### Restore specific snapshot
+
+You can also restore a specific snapshot. At first, list the available snapshots as below,
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                ID         REPOSITORY   HOSTNAME   CREATED AT
+gcs-repo-4bc21d6f   4bc21d6f   gcs-repo     host-0     2022-01-12T14:54:27Z
+gcs-repo-f0ac7cbd   f0ac7cbd   gcs-repo     host-0     2022-01-12T14:56:26Z
+gcs-repo-9210ebb6   9210ebb6   gcs-repo     host-0     2022-01-12T14:58:27Z
+gcs-repo-0aff8890   0aff8890   gcs-repo     host-0     2022-01-12T15:00:28Z
+```
+
+>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot).
+
+You can use the respective ID of a snapshot to restore that snapshot.
+
+The below example shows how you can pass a specific snapshot ID through the `snapshots` field of the `rules` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  rules:
+    - snapshots: [4bc21d6f]
+```
+
+>Please do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore.
+
+### Running restore job as a specific user
+
+You can provide a `securityContext` under the `runtimeSettings.pod` section to run the restore job as a specific user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
+```
+
+### Specifying Memory/CPU limit/request for the restore job
+
+Similar to the backup process, you can also provide the `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job.
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/_index.md b/content/docs/v2024.1.31/guides/mariadb/backup/logical/_index.md new file mode 100644 index 0000000000..3d25eb92ea --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/_index.md @@ -0,0 +1,22 @@ +--- +title: Logical Backup of MariaDB Using Stash +menu: + docs_v2024.1.31: + identifier: guides-mariadb-backup-logical + name: Logical Backup + parent: guides-mariadb-backup + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/backupconfiguration.yaml new file mode 100644 index 0000000000..3689e0198a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/backupconfiguration.yaml @@ -0,0 +1,18 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mariadb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/repository.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/repository.yaml new file mode 100644 index 0000000000..8564e0b068 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/mariadb/sample-mariadb + storageSecretName: gcs-secret \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/restoresession.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/restoresession.yaml new file mode 100644 index 0000000000..5a6a4a394f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/restoresession.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + rules: + - snapshots: [latest] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..d2777e2bdb --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/examples/sample-mariadb.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/images/sample-mariadb-backup.png b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/images/sample-mariadb-backup.png
new file mode 100644
index 0000000000..8ec12b62b6
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/images/sample-mariadb-backup.png differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/index.md
new file mode 100644
index 0000000000..a3e64e5e35
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/index.md
@@ -0,0 +1,627 @@
+---
+title: Backup KubeDB managed MariaDB Cluster using Stash | Stash
+description: Backup KubeDB managed clustered MariaDB using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-backup-logical-clustered
+    name: Clustered MariaDB
+    parent: guides-mariadb-backup-logical
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup KubeDB managed MariaDB Cluster using Stash
+
+Stash `v0.11.8+` supports backup and restoration of MariaDB databases. This guide will show you how you can take a logical backup of your MariaDB database cluster and restore it using Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install the KubeDB operator in your cluster from [here](https://kubedb.com/docs/latest/setup).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores MariaDB databases, please check the overview guide [here](/docs/v2024.1.31/guides/mariadb/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/mariadb/concepts/appbinding/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare MariaDB
+
+In this section, we are going to deploy a MariaDB database using KubeDB. Then, we are going to insert some sample data into it.
+
+### Deploy MariaDB using KubeDB
+
+At first, let's deploy a MariaDB database named `sample-mariadb` with 3 replicas.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/cluster/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+This MariaDB object will create the necessary StatefulSet, Secret, Service etc. for the database. You can easily view all the resources created by the MariaDB object using the [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below,
+
+```bash
+$ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-mariadb
+NAME                                                     NAMESPACE  AGE
+endpoints/sample-mariadb                                 demo       28m
+endpoints/sample-mariadb-pods                            demo       28m
+persistentvolumeclaim/data-sample-mariadb-0              demo       28m
+pod/sample-mariadb-0                                     demo       28m
+secret/sample-mariadb-auth                               demo       28m
+serviceaccount/sample-mariadb                            demo       28m
+service/sample-mariadb                                   demo       28m
+service/sample-mariadb-pods                              demo       28m
+appbinding.appcatalog.appscode.com/sample-mariadb        demo       28m
+controllerrevision.apps/sample-mariadb-7b7f58b68f        demo       28m
+statefulset.apps/sample-mariadb                          demo       28m
+poddisruptionbudget.policy/sample-mariadb                demo       28m
+rolebinding.rbac.authorization.k8s.io/sample-mariadb     demo       28m
+role.rbac.authorization.k8s.io/sample-mariadb            demo       28m
+```
+
+Now, wait for the 3 database pods to go into the `Running` state,
+
+```bash
+$ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-mariadb
+NAME               READY   STATUS    RESTARTS   AGE
+sample-mariadb-0   1/1     Running   0          2m7s
+sample-mariadb-1   1/1     Running   0          101s
+sample-mariadb-2   1/1     Running   0          81s
+```
+
+Once the database pods are in the `Running` state, verify that all 3 nodes have joined the cluster.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 26
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show status like 'wsrep_cluster_size';
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 3     |
++--------------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+From the above output, we can see that 3 nodes are ready to accept connections.
+
+### Insert Sample Data
+
+Now, we are going to exec into the database pod and create some sample data. The `sample-mariadb` object creates a Secret containing the credentials of MariaDB and sets them as the pod's environment variables `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD`.
+
+Here, we are going to use the root user (`MYSQL_ROOT_USERNAME`) credential `MYSQL_ROOT_PASSWORD` to insert the sample data. Now, let's exec into one of the pods and insert some sample data,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Let's create a database named "company"
+MariaDB [(none)]> create database company;
+Query OK, 1 row affected (0.000 sec)
+
+# Verify that the database has been created successfully
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Now, let's create a table called "employees" in the "company" database
+MariaDB [(none)]> create table company.employees ( name varchar(50), salary int);
+Query OK, 0 rows affected (0.018 sec)
+
+# Verify that the table has been created successfully
+MariaDB [(none)]> show tables in company;
++-------------------+
+| Tables_in_company |
++-------------------+
+| employees         |
++-------------------+
+1 row in set (0.007 sec)
+
+# Now, let's insert a sample row in the table
+MariaDB [(none)]> insert into company.employees values ('John Doe', 5000);
+Query OK, 1 row affected (0.003 sec)
+
+# Insert another sample row
+MariaDB [(none)]> insert into company.employees values ('James William', 7000);
+Query OK, 1 row affected (0.002 sec)
+
+# Verify that the rows have been inserted into the table successfully
+MariaDB [(none)]> select * from company.employees;
++---------------+--------+
+| name          | salary |
++---------------+--------+
+| John Doe      |   5000 |
+| James William |   7000 |
++---------------+--------+
+2 rows in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+We have successfully deployed a MariaDB database and inserted some sample data into it. In the subsequent sections, we are going to back up this data using Stash.
+
+## Prepare for Backup
+
+In this section, we are going to prepare the necessary resources (i.e. database connection information, backend information, etc.) before backup.
+
+### Verify Stash MariaDB Addon Installed
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the MariaDB addons using the following command.
+
+```bash
+$ kubectl get tasks.stash.appscode.com | grep mariadb
+mariadb-backup-10.5.23          35s
+mariadb-restore-10.5.23         35s
+```
+
+### Ensure AppBinding
+
+Stash needs to know how to connect with the database. An `AppBinding` provides exactly this information. It holds the Service and Secret information of the database. You have to point to the respective `AppBinding` as a target of backup instead of the database itself.
+
+Stash expects your database Secret to have `username` and `password` keys. If your database Secret does not have them, the `AppBinding` can also help here. You can specify a `secretTransforms` section with the mapping between the current keys and the desired keys (see the sketch below).
+
+You don't need to worry about AppBindings if you are using KubeDB. It creates an AppBinding containing the necessary information when you deploy the database. Let's verify the AppBinding created by the KubeDB operator.
+
+```bash
+$ kubectl get appbinding -n demo
+NAME             TYPE                 VERSION   AGE
+sample-mariadb   kubedb.com/mariadb   10.5.23   62m
+```
+
+We have an AppBinding with the same name as the database, `sample-mariadb`. We will use it later to connect to this database.
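+
+For example, if your database Secret stored the credentials under keys named `user` and `pass` (hypothetical names, used here purely for illustration), a minimal sketch of the `secretTransforms` section in the `AppBinding` might look like this:
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  secretTransforms:
+    # map the assumed "user" key to the "username" key Stash expects
+    - renameKey:
+        from: user
+        to: username
+    # map the assumed "pass" key to the "password" key Stash expects
+    - renameKey:
+        from: pass
+        to: password
+```
+
+This is only the relevant fragment; the rest of the `AppBinding` (connection and Secret references) is filled in by KubeDB when it creates the object.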
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of the `Repository` object we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: /demo/mariadb/sample-mariadb
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/cluster/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to backup our database into our desired backend.
+
+### Backup
+
+To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our desired database. Then Stash will create a CronJob to periodically backup the database.
+
+#### Create BackupConfiguration
+
+Below is the YAML for the `BackupConfiguration` object we are going to use to backup the `sample-mariadb` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `.spec.schedule` specifies that we want to backup the database at 5-minute intervals.
+- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted database.
+
+Let's create the `BackupConfiguration` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/cluster/examples/backupconfiguration.yaml
+backupconfiguration.stash.appscode.com/sample-mariadb-backup created
+```
+
+#### Verify Backup Setup Successful
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```bash
+$ kubectl get backupconfiguration -n demo
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *            Ready   11s
+```
+
+#### Verify CronJob
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` object.
+
+Verify that the CronJob has been created using the following command,
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   False     0        15s             17s
+```
+
+#### Wait for BackupSession
+
+The `sample-mariadb-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object.
+
+Now, wait for a schedule to appear. Run the following command to watch for a `BackupSession` object,
+
+```bash
+$ kubectl get backupsession -n demo -w
+NAME                               INVOKER-TYPE          INVOKER-NAME            PHASE       AGE
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Running     24s
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Running     75s
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Succeeded   103s
+```
+
+Here, the phase `Succeeded` means that the backup process has been completed successfully.
+
+#### Verify Backup
+
+Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated using the following command,
+
+```bash
+$ kubectl get repository -n demo gcs-repo
+NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo   true        1.327 MiB   1                60s                      8m
+```
+
+Now, if we navigate to the GCS bucket, we will see that the backed up data has been stored in the `demo/mariadb/sample-mariadb` directory as specified by the `.spec.backend.gcs.prefix` field of the `Repository` object.
+
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="images/sample-mariadb-backup.png">
+  <figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
+
+> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted.
+
+## Restore MariaDB
+
+If you have followed the previous sections properly, you should have a successful logical backup of your MariaDB database. Now, we are going to show how you can restore the database from the backed up data.
+
+### Restore Into the Same Database
+
+You can restore your data into the same database you have backed up from or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted any data from the running database.
+
+#### Temporarily Pause Backup
+
+At first, let's stop taking any further backup of the database so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused.
+
+Let's pause the `sample-mariadb-backup` BackupConfiguration,
+
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mariadb-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-mariadb-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+
+```bash
+$ kubectl stash pause backup -n demo --backupconfig=sample-mariadb-backup
+BackupConfiguration demo/sample-mariadb-backup has been paused successfully.
+```
+
+Verify that the `BackupConfiguration` has been paused,
+
+```bash
+$ kubectl get backupconfiguration -n demo sample-mariadb-backup
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *   true     Ready   26m
+```
+
+Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused.
+
+Stash will also suspend the respective CronJob.
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   True      0        2m59s           20m
+```
+
+#### Simulate Disaster
+
+Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `company` database we had created earlier.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# View current databases
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Let's delete the "company" database
+MariaDB [(none)]> drop database company;
+Query OK, 1 row affected (0.268 sec)
+
+# Verify that the "company" database has been deleted
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+3 rows in set (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+#### Create RestoreSession
+
+To restore the database, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted database.
+
+Here is the YAML of the `RestoreSession` object that we are going to use for restoring our `sample-mariadb` database.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  rules:
+    - snapshots: [latest]
+```
+
+Here,
+
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored.
+- `.spec.target.ref` refers to the respective AppBinding of the `sample-mariadb` database.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the database.
+
+Let's create the `RestoreSession` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/cluster/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-mariadb-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,
+
+```bash
+$ kubectl get restoresession -n demo -w
+NAME                     REPOSITORY   PHASE       AGE
+sample-mariadb-restore   gcs-repo     Running     15s
+sample-mariadb-restore   gcs-repo     Succeeded   18s
+```
+
+The `Succeeded` phase means that the restore process has been completed successfully.
+
+#### Verify Restored Data
+
+Now, let's exec into the database pod and verify whether the actual data was restored or not,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Verify that the "company" database has been restored
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Verify that the tables of the "company" database have been restored
+MariaDB [(none)]> show tables from company;
++-------------------+
+| Tables_in_company |
++-------------------+
+| employees         |
++-------------------+
+1 row in set (0.000 sec)
+
+# Verify that the sample data of the "employees" table has been restored
+MariaDB [(none)]> select * from company.employees;
++---------------+--------+
+| name          | salary |
++---------------+--------+
+| John Doe      |   5000 |
+| James William |   7000 |
++---------------+--------+
+2 rows in set (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+Hence, we can see from the above output that the deleted data has been restored successfully from the backup.
+
+#### Resume Backup
+
+Since our data has been restored successfully, we can now resume our usual backup process. Resume the `BackupConfiguration` using the following command,
+
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mariadb-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-mariadb-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to resume the `BackupConfiguration`,
+
+```bash
+$ kubectl stash resume -n demo --backupconfig=sample-mariadb-backup
+BackupConfiguration demo/sample-mariadb-backup has been resumed successfully.
+```
+
+Verify that the `BackupConfiguration` has been resumed,
+
+```bash
+$ kubectl get backupconfiguration -n demo sample-mariadb-backup
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *   false    Ready   29m
+```
+
+Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob should also be resumed now.
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   False     0        2m59s           29m
+```
+
+Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger on the next schedule.
+
+### Restore Into Different Database of the Same Namespace
+
+If you want to restore the backed up data into a different database of the same namespace, you have to use the `AppBinding` of the desired database. Then, you have to create the `RestoreSession` pointing to the new `AppBinding`.
+
+### Restore Into Different Namespace
+
+If you want to restore into a different namespace of the same cluster, you have to create the Repository and backend Secret in the desired namespace. You can use the [Stash kubectl plugin](https://stash.run/docs/latest/guides/cli/kubectl-plugin/) to easily copy the resources into a new namespace. Then, you have to create the `RestoreSession` object in the desired namespace pointing to the Repository and AppBinding of that namespace, as sketched below.
+
+### Restore Into Different Cluster
+
+If you want to restore into a different cluster, you have to install Stash in the desired cluster. Then, you have to install the Stash MariaDB addon in that cluster too. After that, you have to create the Repository, backend Secret, and AppBinding in the desired cluster. Finally, you have to create the `RestoreSession` object in the desired cluster pointing to the Repository and AppBinding of that cluster.
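+
+To make the cross-namespace scenario concrete, here is a minimal sketch of such a `RestoreSession`, assuming a hypothetical `demo-2` namespace into which the `gcs-repo` Repository, the backend Secret, and a `sample-mariadb` AppBinding have already been copied:
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  # the RestoreSession lives in the target namespace
+  namespace: demo-2
+spec:
+  repository:
+    # Repository copied into demo-2 beforehand
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      # AppBinding of the target database in demo-2
+      name: sample-mariadb
+  rules:
+    - snapshots: [latest]
+```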
+ +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-mariadb-backup +kubectl delete -n demo restoresession sample-mariadb-restore +kubectl delete -n demo repository gcs-repo +# delete the database resources +kubectl delete ns demo +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/backupconfiguration.yaml new file mode 100644 index 0000000000..3689e0198a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/backupconfiguration.yaml @@ -0,0 +1,18 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mariadb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/repository.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/repository.yaml new file mode 100644 index 0000000000..8564e0b068 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/mariadb/sample-mariadb + storageSecretName: gcs-secret \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/restoresession.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/restoresession.yaml new file mode 100644 index 0000000000..5a6a4a394f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/restoresession.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mariadb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mariadb + rules: + - snapshots: [latest] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..c76991df39 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/examples/sample-mariadb.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/images/sample-mariadb-backup.png b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/images/sample-mariadb-backup.png new file mode 100644 index 0000000000..8ec12b62b6 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/images/sample-mariadb-backup.png differ diff --git 
a/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/index.md b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/index.md
new file mode 100644
index 0000000000..6eef3e73d2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/index.md
@@ -0,0 +1,615 @@
+---
+title: Backup KubeDB managed standalone MariaDB using Stash | Stash
+description: Backup KubeDB managed standalone MariaDB using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-backup-logical-standalone
+    name: Standalone MariaDB
+    parent: guides-mariadb-backup-logical
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup KubeDB managed standalone MariaDB using Stash
+
+Stash `v0.11.8+` supports backup and restoration of MariaDB databases. This guide will show you how you can take a logical backup of your MariaDB databases and restore them using Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install the KubeDB operator in your cluster from [here](https://kubedb.com/docs/latest/setup).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores MariaDB databases, please check the overview guide [here](/docs/v2024.1.31/guides/mariadb/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/mariadb/concepts/appbinding/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare MariaDB
+
+In this section, we are going to deploy a MariaDB database using KubeDB. Then, we are going to insert some sample data into it.
+
+### Deploy MariaDB using KubeDB
+
+At first, let's deploy a standalone MariaDB database named `sample-mariadb` using [KubeDB](https://kubedb.com/).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/standalone/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+This MariaDB object will create the necessary StatefulSet, Secret, Service etc. for the database.
You can easily view all the resources created by the MariaDB object using the [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below,
+
+```bash
+$ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-mariadb
+NAME                                                     NAMESPACE  AGE
+endpoints/sample-mariadb                                 demo       28m
+endpoints/sample-mariadb-pods                            demo       28m
+persistentvolumeclaim/data-sample-mariadb-0              demo       28m
+pod/sample-mariadb-0                                     demo       28m
+secret/sample-mariadb-auth                               demo       28m
+serviceaccount/sample-mariadb                            demo       28m
+service/sample-mariadb                                   demo       28m
+service/sample-mariadb-pods                              demo       28m
+appbinding.appcatalog.appscode.com/sample-mariadb        demo       28m
+controllerrevision.apps/sample-mariadb-7b7f58b68f        demo       28m
+statefulset.apps/sample-mariadb                          demo       28m
+poddisruptionbudget.policy/sample-mariadb                demo       28m
+rolebinding.rbac.authorization.k8s.io/sample-mariadb     demo       28m
+role.rbac.authorization.k8s.io/sample-mariadb            demo       28m
+```
+
+Now, wait for the database pod `sample-mariadb-0` to go into the `Running` state,
+
+```bash
+$ kubectl get pod -n demo sample-mariadb-0
+NAME               READY   STATUS    RESTARTS   AGE
+sample-mariadb-0   1/1     Running   0          29m
+```
+
+Once the database pod is in the `Running` state, verify that the database is ready to accept connections.
+
+```bash
+$ kubectl logs -n demo sample-mariadb-0
+2021-02-22  9:41:37 0 [Note] Reading of all Master_info entries succeeded
+2021-02-22  9:41:37 0 [Note] Added new Master_info '' to hash table
+2021-02-22  9:41:37 0 [Note] mysqld: ready for connections.
+Version: '10.5.23-MariaDB-1:10.5.23+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
+```
+
+From the above log, we can see the database is ready to accept connections.
+
+### Insert Sample Data
+
+Now, we are going to exec into the database pod and create some sample data. The `sample-mariadb` object creates a Secret containing the credentials of MariaDB and sets them as the pod's environment variables `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD`.
+
+Here, we are going to use the root user (`MYSQL_ROOT_USERNAME`) credential `MYSQL_ROOT_PASSWORD` to insert the sample data. Now, let's exec into the database pod and insert some sample data,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Let's create a database named "company"
+MariaDB [(none)]> create database company;
+Query OK, 1 row affected (0.000 sec)
+
+# Verify that the database has been created successfully
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Now, let's create a table called "employees" in the "company" database
+MariaDB [(none)]> create table company.employees ( name varchar(50), salary int);
+Query OK, 0 rows affected (0.018 sec)
+
+# Verify that the table has been created successfully
+MariaDB [(none)]> show tables in company;
++-------------------+
+| Tables_in_company |
++-------------------+
+| employees         |
++-------------------+
+1 row in set (0.007 sec)
+
+# Now, let's insert a sample row in the table
+MariaDB [(none)]> insert into company.employees values ('John Doe', 5000);
+Query OK, 1 row affected (0.003 sec)
+
+# Insert another sample row
+MariaDB [(none)]> insert into company.employees values ('James William', 7000);
+Query OK, 1 row affected (0.002 sec)
+
+# Verify that the rows have been inserted into the table successfully
+MariaDB [(none)]> select * from company.employees;
++---------------+--------+
+| name          | salary |
++---------------+--------+
+| John Doe      |   5000 |
+| James William |   7000 |
++---------------+--------+
+2 rows in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+We have successfully deployed a MariaDB database and inserted some sample data into it. In the subsequent sections, we are going to back up this data using Stash.
+
+## Prepare for Backup
+
+In this section, we are going to prepare the necessary resources (i.e. database connection information, backend information, etc.) before backup.
+
+### Verify Stash MariaDB Addon Installed
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the MariaDB addons using the following command.
+
+```bash
+$ kubectl get tasks.stash.appscode.com | grep mariadb
+mariadb-backup-10.5.23          35s
+mariadb-restore-10.5.23         35s
+```
+
+### Ensure AppBinding
+
+Stash needs to know how to connect with the database. An `AppBinding` provides exactly this information. It holds the Service and Secret information of the database. You have to point to the respective `AppBinding` as a target of backup instead of the database itself.
+
+Stash expects your database Secret to have `username` and `password` keys. If your database Secret does not have them, the `AppBinding` can also help here. You can specify a `secretTransforms` section with the mapping between the current keys and the desired keys.
+
+You don't need to worry about AppBindings if you are using KubeDB. It creates an AppBinding containing the necessary information when you deploy the database. Let's verify the AppBinding created by the KubeDB operator.
+
+```bash
+$ kubectl get appbinding -n demo
+NAME             TYPE                 VERSION   AGE
+sample-mariadb   kubedb.com/mariadb   10.5.23   62m
+```
+
+We have an AppBinding with the same name as the database, `sample-mariadb`. We will use it later to connect to this database.
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
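+
+For instance, if you wanted to use an S3-compatible backend instead of GCS, the `Repository` would look roughly like the sketch below. The endpoint, bucket, and the `s3-secret` Secret name are placeholder assumptions; see the backend overview linked above for the exact options:
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: s3-repo
+  namespace: demo
+spec:
+  backend:
+    s3:
+      # placeholder endpoint and bucket; replace with your own
+      endpoint: s3.amazonaws.com
+      bucket: stash-testing
+      prefix: /demo/mariadb/sample-mariadb
+    # Secret holding RESTIC_PASSWORD and the S3 credentials
+    storageSecretName: s3-secret
+```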
+
+**Create Storage Secret:**
+
+At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of the `Repository` object we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: /demo/mariadb/sample-mariadb
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/standalone/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to backup our database into our desired backend.
+
+### Backup
+
+To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our desired database. Then Stash will create a CronJob to periodically backup the database.
+
+#### Create BackupConfiguration
+
+Below is the YAML for the `BackupConfiguration` object we are going to use to backup the `sample-mariadb` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mariadb-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `.spec.schedule` specifies that we want to backup the database at 5-minute intervals.
+- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted database.
+
+Let's create the `BackupConfiguration` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/standalone/examples/backupconfiguration.yaml
+backupconfiguration.stash.appscode.com/sample-mariadb-backup created
+```
+
+#### Verify Backup Setup Successful
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```bash
+$ kubectl get backupconfiguration -n demo
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *            Ready   11s
+```
+
+#### Verify CronJob
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` object.
+
+Verify that the CronJob has been created using the following command,
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   False     0        15s             17s
+```
+
+#### Wait for BackupSession
+
+The `sample-mariadb-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object.
+
+Now, wait for a schedule to appear.
Run the following command to watch for a `BackupSession` object,
+
+```bash
+$ kubectl get backupsession -n demo -w
+NAME                               INVOKER-TYPE          INVOKER-NAME            PHASE       AGE
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Running     24s
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Running     75s
+sample-mariadb-backup-1606994706   BackupConfiguration   sample-mariadb-backup   Succeeded   103s
+```
+
+Here, the phase `Succeeded` means that the backup process has been completed successfully.
+
+#### Verify Backup
+
+Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated using the following command,
+
+```bash
+$ kubectl get repository -n demo gcs-repo
+NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo   true        1.327 MiB   1                60s                      8m
+```
+
+Now, if we navigate to the GCS bucket, we will see that the backed up data has been stored in the `demo/mariadb/sample-mariadb` directory as specified by the `.spec.backend.gcs.prefix` field of the `Repository` object.
+
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="images/sample-mariadb-backup.png">
+  <figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
+
+> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted.
+
+## Restore MariaDB
+
+If you have followed the previous sections properly, you should have a successful logical backup of your MariaDB database. Now, we are going to show how you can restore the database from the backed up data.
+
+### Restore Into the Same Database
+
+You can restore your data into the same database you have backed up from or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted any data from the running database.
+
+#### Temporarily Pause Backup
+
+At first, let's stop taking any further backup of the database so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused.
+
+Let's pause the `sample-mariadb-backup` BackupConfiguration,
+
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mariadb-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-mariadb-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+
+```bash
+$ kubectl stash pause backup -n demo --backupconfig=sample-mariadb-backup
+BackupConfiguration demo/sample-mariadb-backup has been paused successfully.
+```
+
+Verify that the `BackupConfiguration` has been paused,
+
+```bash
+$ kubectl get backupconfiguration -n demo sample-mariadb-backup
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *   true     Ready   26m
+```
+
+Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused.
+
+Stash will also suspend the respective CronJob.
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   True      0        2m59s           20m
+```
+
+#### Simulate Disaster
+
+Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `company` database we had created earlier.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# View current databases
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Let's delete the "company" database
+MariaDB [(none)]> drop database company;
+Query OK, 1 row affected (0.268 sec)
+
+# Verify that the "company" database has been deleted
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+3 rows in set (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+#### Create RestoreSession
+
+To restore the database, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted database.
+
+Here is the YAML of the `RestoreSession` object that we are going to use for restoring our `sample-mariadb` database.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mariadb-restore
+  namespace: demo
+spec:
+  task:
+    name: mariadb-restore-10.5.23
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mariadb
+  rules:
+  - snapshots: [latest]
+```
+
+Here,
+
+- `.spec.task.name` specifies the name of the `Task` object that defines the necessary `Functions` and their execution order to restore a MariaDB database.
+- `.spec.repository.name` specifies the `Repository` object that holds the backend information where our backed up data has been stored.
+- `.spec.target.ref` refers to the respective `AppBinding` of the `sample-mariadb` database.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the database.
+
+Let's create the `RestoreSession` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/backup/logical/standalone/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-mariadb-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,
+
+```bash
+$ kubectl get restoresession -n demo -w
+NAME                     REPOSITORY   PHASE       AGE
+sample-mariadb-restore   gcs-repo     Running     15s
+sample-mariadb-restore   gcs-repo     Succeeded   18s
+```
+
+The `Succeeded` phase means that the restore process has been completed successfully.
+
+#### Verify Restored Data
+
+Now, let's exec into the database pod and verify whether the actual data was restored or not,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 341
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Verify that the "company" database has been restored
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| company            |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.001 sec)
+
+# Verify that the tables of the "company" database have been restored
+MariaDB [(none)]> show tables from company;
++-------------------+
+| Tables_in_company |
++-------------------+
+| employees         |
++-------------------+
+1 row in set (0.000 sec)
+
+# Verify that the sample data of the "employees" table has been restored
+MariaDB [(none)]> select * from company.employees;
++---------------+--------+
+| name          | salary |
++---------------+--------+
+| John Doe      |   5000 |
+| James William |   7000 |
++---------------+--------+
+2 rows in set (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+Hence, we can see from the above output that the deleted data has been restored successfully from the backup.
+
+#### Resume Backup
+
+Since our data has been restored successfully, we can now resume our usual backup process. Resume the `BackupConfiguration` using the following command,
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mariadb-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-mariadb-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to resume the `BackupConfiguration`,
+```bash
+$ kubectl stash resume -n demo --backupconfig=sample-mariadb-backup
+BackupConfiguration demo/sample-mariadb-backup has been resumed successfully.
+```
+
+Verify that the `BackupConfiguration` has been resumed,
+```bash
+$ kubectl get backupconfiguration -n demo sample-mariadb-backup
+NAME                    TASK                     SCHEDULE      PAUSED   PHASE   AGE
+sample-mariadb-backup   mariadb-backup-10.5.23   */5 * * * *   false    Ready   29m
+```
+
+Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob should also be resumed now.
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-mariadb-backup   */5 * * * *   False     0        2m59s           29m
+```
+
+Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger on the next schedule.
+
+### Restore Into Different Database of the Same Namespace
+
+If you want to restore the backed up data into a different database of the same namespace, you have to use the `AppBinding` of the desired database. Then, you have to create the `RestoreSession` pointing to the new `AppBinding` (see the sketch after these sections).
+
+### Restore Into Different Namespace
+
+If you want to restore into a different namespace of the same cluster, you have to create the `Repository` and backend Secret in the desired namespace. You can use the [Stash kubectl plugin](https://stash.run/docs/latest/guides/cli/kubectl-plugin/) to easily copy the resources into a new namespace. Then, you have to create the `RestoreSession` object in the desired namespace pointing to the `Repository` and `AppBinding` of that namespace.
+
+### Restore Into Different Cluster
+
+If you want to restore into a different cluster, you have to install Stash and the Stash MariaDB addon in the desired cluster. Then, you have to create the `Repository`, backend Secret, and `AppBinding` in that cluster. Finally, you have to create the `RestoreSession` object in that cluster pointing to its `Repository` and `AppBinding`.
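+
+For any of the three scenarios above, only the referenced names change in the `RestoreSession`. Here is a minimal sketch; the `AppBinding` name `other-mariadb` is a hypothetical placeholder for the desired database, and the `Repository` must exist in the same namespace as the `RestoreSession`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: other-mariadb-restore
+  namespace: demo # use the namespace of the target AppBinding
+spec:
+  task:
+    name: mariadb-restore-10.5.23
+  repository:
+    name: gcs-repo # Repository created/copied into this namespace
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: other-mariadb # hypothetical AppBinding of the desired database
+  rules:
+  - snapshots: [latest]
+```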
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo backupconfiguration sample-mariadb-backup
+kubectl delete -n demo restoresession sample-mariadb-restore
+kubectl delete -n demo repository gcs-repo
+# delete the database resources
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-backup.svg b/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-backup.svg
new file mode 100644
index 0000000000..04faf0ae92
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-backup.svg
@@ -0,0 +1,987 @@
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-restore.svg b/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-restore.svg
new file mode 100644
index 0000000000..c930c8641b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/overview/images/mariadb-logical-restore.svg
@@ -0,0 +1,857 @@
diff --git a/content/docs/v2024.1.31/guides/mariadb/backup/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/backup/overview/index.md
new file mode 100644
index 0000000000..b74676ea41
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/backup/overview/index.md
@@ -0,0 +1,110 @@
+---
+title: Backup & Restore MariaDB Using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-backup-overview
+    name: Overview
+    parent: guides-mariadb-backup
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}}
+
+# MariaDB Backup & Restore Overview
+
+KubeDB uses [Stash](https://stash.run) to back up and restore databases. Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely back up stateful applications to any cloud or on-prem storage backends (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC etc.).
+
+Fig: Backup KubeDB Databases Using Stash
+
+# How Stash Backs Up & Restores MariaDB Database
+
+Stash 0.9.0+ supports backup and restore operations of many databases. This guide will give you an overview of how the MariaDB database backup and restore process works in Stash.
+
+## Logical Backup
+
+Stash supports taking [logical backup](https://mariadb.com/kb/en/backup-and-restore-overview/#logical-vs-physical-backups) of MariaDB databases using [mysqldump](https://mariadb.com/kb/en/mysqldump/). It is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small.
+
+### How Logical Backup Works
+
+The following diagram shows how Stash takes a logical backup of a MariaDB database. Open the image in a new tab to see the enlarged version.
+
+Fig: MariaDB Logical Backup Overview
+
+The backup process consists of the following steps:
+
+1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored.
+
+2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend.
+
+3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/mariadb/concepts/appbinding/) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to back up the database.
+
+4. Stash operator watches for `BackupConfiguration` crd.
+
+5. Once Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in the `BackupConfiguration` object to trigger backup periodically.
+
+6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd.
+
+7. Stash operator also watches for `BackupSession` crd.
+
+8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a backup Job definition.
+
+9. Then, it creates the Job to back up the targeted database.
+
+10. The backup Job reads necessary information to connect with the database from the `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and Storage Secret respectively.
+
+11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash pipes the output of the dump command to the uploading process. Hence, the backup Job does not require a large volume to hold the entire dump output.
+
+12. Finally, when the backup is complete, the Job sends Prometheus metrics to the Pushgateway running inside the Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure.
+
+### How Restore from Logical Backup Works
+
+The following diagram shows how Stash restores a MariaDB database from a logical backup. Open the image in a new tab to see the enlarged version.
+
+Fig: MariaDB Logical Restore Process Overview
+
+The restore process consists of the following steps:
+
+1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target.
+
+2. Stash operator watches for `RestoreSession` objects.
+
+3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a restore Job definition.
+
+4. Then, it creates the Job to restore the target.
+
+5. The Job reads necessary information to connect with the database from the respective `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and Storage Secret respectively.
+
+6. Then, the Job downloads the backed up data from the backend and injects it into the desired database. Stash pipes the downloaded data to the respective database tool to inject into the database. Hence, the restore Job does not require a large volume to download the entire backup data inside it.
+
+7. Finally, when the restore process is complete, the Job sends Prometheus metrics to the Pushgateway and updates the `RestoreSession` status to reflect restore completion.
+
+## Next Steps
+
+- Back up a standalone MariaDB database using Stash following the guide from [here](/docs/v2024.1.31/guides/mariadb/backup/logical/standalone/).
+- Back up a MariaDB cluster using Stash following the guide from [here](/docs/v2024.1.31/guides/mariadb/backup/logical/cluster/).
+- Configure a generic backup template for all the MariaDB databases of your cluster using Stash Auto-backup by following the guide from [here](/docs/v2024.1.31/guides/mariadb/backup/auto-backup/).
+- Customize the backup & restore process for your cluster by following the guides from [here](/docs/v2024.1.31/guides/mariadb/backup/customization/).
diff --git a/content/docs/v2024.1.31/guides/mariadb/clustering/_index.md b/content/docs/v2024.1.31/guides/mariadb/clustering/_index.md
new file mode 100644
index 0000000000..750f26a34e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: MariaDB Clustering
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-clustering
+    name: MariaDB Clustering
+    parent: guides-mariadb
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/examples/demo-1.yaml b/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/examples/demo-1.yaml
new file mode 100644
index 0000000000..d2777e2bdb
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/examples/demo-1.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/index.md
new file mode 100644
index 0000000000..175f6ee603
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/index.md
@@ -0,0 +1,469 @@
+---
+title: MariaDB Galera Cluster Guide
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-clustering-galeracluster
+    name: MariaDB Galera Cluster Guide
+    parent: guides-mariadb-clustering
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB - MariaDB Cluster
+
+This tutorial will show you how to use KubeDB to provision a MariaDB Galera Cluster.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [mariadb galera cluster concept](/docs/v2024.1.31/guides/mariadb/clustering/overview) to learn about MariaDB Galera Cluster.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/guides/mariadb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MariaDB Cluster
+
+The following is an example `MariaDB` object which creates a multi-master MariaDB group with three members.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/clustering/galera-cluster/examples/demo-1.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Here,
+
+- `spec.replicas` is the number of nodes in the cluster.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+KubeDB operator watches for `MariaDB` objects using Kubernetes API. When a `MariaDB` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching MariaDB object name. KubeDB operator will also create a governing service for the StatefulSet with the name `<mariadb-name>-pods`.
+
+```bash
+$ kubectl get mariadb -n demo sample-mariadb -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MariaDB","metadata":{"annotations":{},"name":"sample-mariadb","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"10.5.23"}}
+  creationTimestamp: "2021-03-16T09:39:01Z"
+  finalizers:
+  - kubedb.com
+  generation: 2
+  managedFields:
+    ...
+  name: sample-mariadb
+  namespace: demo
+spec:
+  authSecret:
+    name: sample-mariadb-auth
+  podTemplate:
+    ...
+  replicas: 3
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: WipeOut
+  version: 10.5.23
+status:
+  conditions:
+  - lastTransitionTime: "2021-03-16T09:39:01Z"
+    message: 'The KubeDB operator has started the provisioning of MariaDB: demo/sample-mariadb'
+    reason: DatabaseProvisioningStartedSuccessfully
+    status: "True"
+    type: ProvisioningStarted
+  - lastTransitionTime: "2021-03-16T09:40:00Z"
+    message: All desired replicas are ready.
+    reason: AllReplicasReady
+    status: "True"
+    type: ReplicaReady
+  - lastTransitionTime: "2021-03-16T09:39:09Z"
+    message: 'The MariaDB: demo/sample-mariadb is accepting client requests.'
+    observedGeneration: 2
+    reason: DatabaseAcceptingConnectionRequest
+    status: "True"
+    type: AcceptingConnection
+  - lastTransitionTime: "2021-03-16T09:39:50Z"
+    message: 'The MariaDB: demo/sample-mariadb is ready.'
+    observedGeneration: 2
+    reason: ReadinessCheckSucceeded
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2021-03-16T09:40:00Z"
+    message: 'The MariaDB: demo/sample-mariadb is successfully provisioned.'
+    observedGeneration: 2
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 2
+  phase: Ready
+
+
+$ kubectl get sts,svc,secret,pvc,pv,pod -n demo
+NAME                              READY   AGE
+statefulset.apps/sample-mariadb   3/3     116m
+
+NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+service/sample-mariadb        ClusterIP   10.97.162.171   <none>        3306/TCP   116m
+service/sample-mariadb-pods   ClusterIP   None            <none>        3306/TCP   116m
+
+NAME                                TYPE                                  DATA   AGE
+secret/default-token-696cj          kubernetes.io/service-account-token   3      121m
+secret/sample-mariadb-auth          kubernetes.io/basic-auth              2      116m
+secret/sample-mariadb-token-dk4dx   kubernetes.io/service-account-token   3      116m
+
+NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-sample-mariadb-0   Bound    pvc-1e259abc-5937-421a-990c-b903a83d2d8a   1Gi        RWO            standard       116m
+persistentvolumeclaim/data-sample-mariadb-1   Bound    pvc-1d0b5bcd-2699-4b87-b57b-3072ddc1027f   1Gi        RWO            standard       116m
+persistentvolumeclaim/data-sample-mariadb-2   Bound    pvc-5b85a06e-17f5-487a-9150-e928f5cf4590   1Gi        RWO            standard       116m
+
+NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
+persistentvolume/pvc-1d0b5bcd-2699-4b87-b57b-3072ddc1027f   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-1   standard                116m
+persistentvolume/pvc-1e259abc-5937-421a-990c-b903a83d2d8a   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-0   standard                116m
+persistentvolume/pvc-5b85a06e-17f5-487a-9150-e928f5cf4590   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-2   standard                116m
+
+NAME                   READY   STATUS    RESTARTS   AGE
+pod/sample-mariadb-0   1/1     Running   0          116m
+pod/sample-mariadb-1   1/1     Running   0          116m
+pod/sample-mariadb-2   1/1     Running   0          116m
+```
+
+## Connect with MariaDB Database
+
+Once the database is in the running state, we can connect to each of the three nodes. We will use the login credentials `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD` saved as the container's environment variables.
+
+```bash
+# First Node
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 26
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> SELECT 1;
++---+
+| 1 |
++---+
+| 1 |
++---+
+1 row in set (0.000 sec)
+
+MariaDB [(none)]> quit;
+Bye
+
+
+# Second Node
+$ kubectl exec -it -n demo sample-mariadb-1 -- bash
+root@sample-mariadb-1:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 94
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> SELECT 1;
++---+
+| 1 |
++---+
+| 1 |
++---+
+1 row in set (0.000 sec)
+
+MariaDB [(none)]> quit;
+Bye
+
+
+# Third Node
+$ kubectl exec -it -n demo sample-mariadb-2 -- bash
+root@sample-mariadb-2:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 78
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> SELECT 1;
++---+
+| 1 |
++---+
+| 1 |
++---+
+1 row in set (0.000 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+## Check the Cluster Status
+
+Now, we are ready to check the status of the newly created cluster. Connect to any of the hosts and run the following command; you will get the same result from each of them, that is, the cluster size is three.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 137
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show status like 'wsrep_cluster_size';
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 3     |
++--------------------+-------+
+1 row in set (0.001 sec)
+```
+
+## Data Availability
+
+In a MariaDB Galera Cluster, each member can read and write. In this section, we will insert data from one node and see whether we can read it from every other member.
+
+> Read the comments in the following command blocks. They contain the instructions and explanations of the commands.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 202
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> CREATE DATABASE playground;
+Query OK, 1 row affected (0.013 sec)
+
+# Create table in Node 1
+MariaDB [(none)]> CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
+Query OK, 0 rows affected (0.053 sec)
+
+# Insert sample data into Node 1
+MariaDB [(none)]> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');
+Query OK, 1 row affected (0.003 sec)
+
+# Read data from Node 1
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> quit;
+Bye
+root@sample-mariadb-0:/ exit
+exit
+~ $ kubectl exec -it -n demo sample-mariadb-1 -- bash
+root@sample-mariadb-1:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 209
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Read data from Node 2
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+1 row in set (0.001 sec)
+
+# Insert data into Node 2
+MariaDB [(none)]> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 4, 'red');
+Query OK, 1 row affected (0.032 sec)
+
+# Read data from Node 2 after insertion
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
+|  5 | slide |     4 | red   |
++----+-------+-------+-------+
+2 rows in set (0.000 sec)
+
+MariaDB [(none)]> quit;
+Bye
+root@sample-mariadb-1:/ exit
+exit
+~ $ kubectl exec -it -n demo sample-mariadb-2 -- bash
+root@sample-mariadb-2:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 209
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Insert data into Node 3
+MariaDB [(none)]> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 4, 'red');
+Query OK, 1 row affected (0.005 sec)
+
+# Read data from Node 3
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
+|  5 | slide |     4 | red   |
+|  6 | slide |     4 | red   |
++----+-------+-------+-------+
+3 rows in set (0.000 sec)
+
+MariaDB [(none)]> quit
+Bye
+root@sample-mariadb-2:/# exit
+exit
+```
+
+## Automatic Failover
+
+To test automatic failover, we will force one of the three pods to restart and check whether it can rejoin the cluster.
+
+> Read the comments in the following command blocks. They contain the instructions and explanations of the commands.
+
+```bash
+kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 11
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Check current data
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
+|  5 | slide |     4 | red   |
+|  6 | slide |     4 | red   |
++----+-------+-------+-------+
+3 rows in set (0.002 sec)
+
+MariaDB [(none)]> quit;
+Bye
+root@sample-mariadb-0:/ exit
+exit
+
+# Forcefully delete Node 1
+~ $ kubectl delete pod -n demo sample-mariadb-0
+pod "sample-mariadb-0" deleted
+
+# Wait for sample-mariadb-0 to restart
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 10
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# Check data after rejoining
+MariaDB [(none)]> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
+|  5 | slide |     4 | red   |
+|  6 | slide |     4 | red   |
++----+-------+-------+-------+
+3 rows in set (0.002 sec)
+
+# Check cluster size
+MariaDB [(none)]> show status like 'wsrep_cluster_size';
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 3     |
++--------------------+-------+
+1 row in set (0.002 sec)
+
+MariaDB [(none)]> quit
+Bye
+
+```
+
+## Cleaning up
+
+Let's clean up what we created in this tutorial.
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+mariadb.kubedb.com "sample-mariadb" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/mariadb/clustering/overview/images/galera_small.png b/content/docs/v2024.1.31/guides/mariadb/clustering/overview/images/galera_small.png
new file mode 100644
index 0000000000..f6796e2900
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/clustering/overview/images/galera_small.png differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/clustering/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/clustering/overview/index.md
new file mode 100644
index 0000000000..d5e165afa1
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/clustering/overview/index.md
@@ -0,0 +1,62 @@
+---
+title: MariaDB Galera Cluster Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-clustering-overview
+    name: MariaDB Galera Cluster Overview
+    parent: guides-mariadb-clustering
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Galera Cluster
+
+Here we'll discuss some concepts about MariaDB Galera Cluster.
+
+## So What is Replication
+
+Replication means that data is copied from one MariaDB server to one or more other MariaDB servers, instead of being stored on only one server. One can read from or write to any server of the cluster. The following figure shows a cluster of four MariaDB servers:
+
+![MariaDB Cluster](/docs/v2024.1.31/guides/mariadb/clustering/overview/images/galera_small.png)
+
+Image ref:
+
+## Galera Replication
+
+MariaDB Galera Cluster is a [virtually synchronous](https://mariadb.com/kb/en/about-galera-replication/#synchronous-vs-asynchronous-replication) multi-master cluster for MariaDB. The server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client connects directly to the DBMS and experiences behavior that is similar to native MariaDB in most cases. The wsrep API (write set replication API) defines the interface between Galera replication and MariaDB.
+
+Ref: [About Galera Replication](https://mariadb.com/kb/en/about-galera-replication/)
+
+## MariaDB Galera Cluster Features
+
+- Virtually synchronous replication
+- Active-active multi-master topology
+- Read and write to any cluster node
+- Automatic membership control, failed nodes drop from the cluster
+- Automatic node joining
+- True parallel replication, on row level
+- Direct client connections, native MariaDB look & feel
+
+Ref: [What is MariaDB Galera Cluster?](https://mariadb.com/kb/en/what-is-mariadb-galera-cluster/#features)
+
+### Limitations
+
+There are some limitations in MariaDB Galera Cluster that are listed [here](https://mariadb.com/kb/en/mariadb-galera-cluster-known-limitations/).
+
+## Next Steps
+
+- [Deploy MariaDB Galera Cluster](/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster) using KubeDB.
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/_index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/_index.md
new file mode 100644
index 0000000000..b08f6876ba
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: MariaDB Concepts
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts
+    name: Concepts
+    parent: guides-mariadb
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/appbinding/index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/appbinding/index.md
new file mode 100644
index 0000000000..17feff5414
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/appbinding/index.md
@@ -0,0 +1,152 @@
+---
+title: AppBinding CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts-appbinding
+    name: AppBinding
+    parent: guides-mariadb-concepts
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually, pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for a MariaDB database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mariadb
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mariadbs.kubedb.com
+  name: sample-mariadb
+  namespace: demo
+spec:
+  clientConfig:
+    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVVUg1V24wOSt6MnR6RU5ESnF4N1AxZFg5aWM4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNVEF5TURrdwpPVEkxTWpCYUZ3MHlNakF5TURrd09USTFNakJhTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM3ZDl5YUtMQ3UKYy9NclRBb0NkV1VORldcd3JHam1Xb1BFU0VjTEdKY09CVUk2eTV2eUF1RjBtU2pGbzc0d0hHUm1kZkthMTFodGIKOE1mdlBTcFl1RlhaVEovRm55MDZ1NnpGTFZucVdodzFHYmdmQlBOVytMNWRpM2ZlY2pwRGZrS203K3dGVFZzZgpVc1hlVXFEdFRxaXZSR1VEOWlEVE8zU1JnYXlSOVNCdEZ0cXB0bVdGKzgxamRpUktqUU1ZQlRidjA0bnlvVHR1CitISWZRY2xONENadzcyT3JKVHRXYmJzYlh1ZTFOUWVPZ0MyZklYZGRBdFhJMXd5Tk9OM3JMbkxRdUlCK2pFS0kKZGU5T2p3SkpCYUhVc0VWRG5ZZ2JSS0k3SHFXRXZOZC9vTlNlWUVxdk15NytYcExVdHAwWlV5cVA1dnAvT0lydwppVTFlcWRjWTMyQ3BBZ01CQUFHalV6QlJNQjBHQTFVZERnUVdCQlJTZzQ1aWsxZU95QlNVSlh5L0JZWVQ1S3JPCnlqQWZCZ05WSFNNRUdEQVdnQlJTZzQ1aWsxZU95QlNVSlh5L0JZWVQ1S3JPeWpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjU5YTZRRkJtWDE4dW9Xc0N3Rk9WNEduRmJ1QS9jaGlTegpBcEVVYXIyNS9kTStkaU9EVTZCbjFzN1pqanpuOVlPWlVSSS91MkRkYXZjZ0FDWEdhV1hyYUkvOFFGMWQwdzhrCjVxZUZjMUFhOVBIRXhLMVZtcW9tTFdsYTJHUzltRGxaRDhLbng3Y1N5aUZlakVyWHVrZXdQdVVwNHVFM04wK2gKcEI0NDA1UVNneFVXN0plYWphUHUzVzh4MkVBSjJ1Yk5HTFRJOS9MeFdWdGJMRnFBaEhaYWxkY2l6Tkh3U1BmMwpHTFhKN2EwVk1tSWNDbjFoeWtJNlJDa1E0S0hPbWwzTnF0UUtheUZ4VFR1aXc0Ymd5N3MwNVJzc0ZVVGljdVZnCnN4ZzIxVUFJL2FvVml6UDlaREhhNk5ldGJ6TXMyZnJmQXh4QWZPaVg5czdSbk5jNFh3eFQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+    service:
+      name: sample-mariadb
+      port: 3306
+      scheme: mysql
+  secret:
+    name: sample-mariadb-auth
+  type: kubedb.com/mariadb
+  version: 10.5.23
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of the `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of a [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to a `mariadb` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable              | Usage                                                                                                                             |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                             |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `mariadb`).                             |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/mariadb`).   |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MariaDB :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MongoDB :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+Elasticsearch :
+
+| Key              | Usage                   |
+| ---------------- | ----------------------- |
+| `ADMIN_USERNAME` | Admin username          |
+| `ADMIN_PASSWORD` | Password for admin user |
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+
+  Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
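+
+For illustration, here is a minimal sketch of a manually created `AppBinding` for a MariaDB server running outside the cluster, using the fields described above; the name, URL, and secret name are hypothetical placeholders.
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: external-mariadb # hypothetical name
+  namespace: demo
+spec:
+  type: kubedb.com/mariadb
+  clientConfig:
+    # no user:password, fragments, or query parameters in the URL;
+    # credentials come from the referenced secret
+    url: mysql://203.0.113.10:3306
+  secret:
+    name: external-mariadb-auth # must contain the `username` and `password` keys
+```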
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/autoscaler/index.md
new file mode 100644
index 0000000000..bd6a1dba1d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/autoscaler/index.md
@@ -0,0 +1,107 @@
+---
+title: MariaDBAutoscaler CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts-autoscaler
+    name: MariaDBAutoscaler
+    parent: guides-mariadb-concepts
+    weight: 26
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDBAutoscaler
+
+## What is MariaDBAutoscaler
+
+`MariaDBAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for autoscaling [MariaDB](https://www.mariadb.com/) compute resources and storage of database components in a Kubernetes native way.
+
+## MariaDBAutoscaler CRD Specifications
+
+Like any official Kubernetes resource, a `MariaDBAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, a sample `MariaDBAutoscaler` CRO for autoscaling different components of the database is given below:
+
+**Sample `MariaDBAutoscaler` for MariaDB:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MariaDBAutoscaler
+metadata:
+  name: md-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mariadb
+  compute:
+    mariadb:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: 250m
+        memory: 350Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+  storage:
+    mariadb:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+      expansionMode: "Online"
+```
+
+Here, we are going to describe the various sections of a `MariaDBAutoscaler` crd.
+
+A `MariaDBAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) object.
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the database components. This field consists of the following sub-field:
+
+- `spec.compute.mariadb` indicates the desired compute autoscaling configuration for a MariaDB standalone or cluster.
+
+It has the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
+- `InMemoryScalingThreshold` specifies the percentage of memory that will be passed as `inMemorySizeGB` for the in-memory database engine, which is only available for the Percona variant of MariaDB.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-field:
+
+- `spec.storage.mariadb` indicates the desired storage autoscaling configuration for a MariaDB standalone or cluster.
+
+It has the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, then storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` specifies the mode of volume expansion when the storage autoscaler performs a volume expansion OpsRequest. The default value is `Online`.
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version/index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version/index.md
new file mode 100644
index 0000000000..a6fbf40c31
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version/index.md
@@ -0,0 +1,114 @@
+---
+title: MariaDBVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts-mariadbversion
+    name: MariaDBVersion
+    parent: guides-mariadb-concepts
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDBVersion
+
+## What is MariaDBVersion
+
+`MariaDBVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for [MariaDB](https://www.mariadb.com) databases deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `MariaDBVersion` custom resource will be created automatically for every supported MariaDB version. You have to specify the name of the `MariaDBVersion` crd in the `spec.version` field of the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) crd. Then, KubeDB will use the docker images specified in the `MariaDBVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## MariaDBVersion Specification
+
+As with all other Kubernetes objects, a MariaDBVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MariaDBVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb-catalog
+    meta.helm.sh/release-namespace: kube-system
+  creationTimestamp: "2021-03-09T13:00:51Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb-catalog
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v0.16.2
+    helm.sh/chart: kubedb-catalog-v0.16.2
+  ...
+  name: 10.5.23
+spec:
+  db:
+    image: kubedb/mariadb:10.5.23
+  exporter:
+    image: kubedb/mysqld-exporter:v0.11.0
+  initContainer:
+    image: kubedb/busybox
+  podSecurityPolicies:
+    databasePolicyName: maria-db
+  stash:
+    addon:
+      backupTask:
+        name: mariadb-backup-10.5.23
+      restoreTask:
+        name: mariadb-restore-10.5.23
+  version: 10.5.23
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `MariaDBVersion` crd. You have to specify this name in the `spec.version` field of the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) crd.
+
+We follow this convention for naming MariaDBVersion crd:
+
+- Name format: `{Original MariaDB image version}-{modification tag}`
+
+We modify the original MariaDB docker image to support additional features. An image with a higher modification tag will have more features than the images with a lower modification tag. Hence, it is recommended to use the MariaDBVersion crd with the highest modification tag to take advantage of the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the MariaDB database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, KubeDB operator will not create the database and other respective resources for this version.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected MariaDB database.
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image which will be used to remove the `lost+found` directory and mount an `EmptyDir` data volume.
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running.
+
+### spec.stash
+
+`spec.stash` is an optional field that specifies the name of the task for Stash backup and restore. Learn more about the [Stash MariaDB addon](https://stash.run/docs/v2021.03.08/addons/mariadb/).
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb/index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb/index.md
new file mode 100644
index 0000000000..18dbe26e0b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/mariadb/index.md
@@ -0,0 +1,389 @@
+---
+title: MariaDB CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts-mariadb
+    name: MariaDB
+    parent: guides-mariadb-concepts
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB
+
+## What is MariaDB
+
+`MariaDB` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [MariaDB](https://www.mariadb.com/) in a Kubernetes native way. You only need to describe the desired database configuration in a MariaDB object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## MariaDB Spec
+
+As with all other Kubernetes objects, a MariaDB needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example MariaDB object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  authSecret:
+    name: sample-mariadb-auth
+  monitor:
+    agent: prometheus.io
+    prometheus:
+      exporter:
+        port: 56790
+        resources: {}
+      serviceMonitor:
+        interval: 10s
+        labels:
+          release: prometheus
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: sample-mariadb
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: mariadbs.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: kubernetes.io/hostname
+            weight: 100
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: sample-mariadb
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: mariadbs.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: failure-domain.beta.kubernetes.io/zone
+            weight: 50
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      serviceAccountName: sample-mariadb
+  replicas: 3
+  requireSSL: true
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: WipeOut
+  tls:
+    certificates:
+    - alias: server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - 127.0.0.1
+      secretName: sample-mariadb-server-cert
+      subject:
+        organizations:
+        - kubedb:server
+    - alias: archiver
+      secretName: sample-mariadb-archiver-cert
+    - alias: metrics-exporter
+      secretName: sample-mariadb-metrics-exporter-cert
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+  version: 10.5.23
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [MariaDBVersion](/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `MariaDBVersion` resources,
Currently, when you install KubeDB, it creates the following `MariaDBVersion` resources,
+
+- `10.5.23`, `10.4.32`
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `mariadb` root user. If not set, the KubeDB operator creates a new Secret `{mariadb-object-name}-auth` for storing the password for the `mariadb` root user for each MariaDB object. If you want to use an existing secret, please specify that when creating the MariaDB object using `spec.authSecret.name`.
+
+This secret contains a `user` key and a `password` key, which contain the `username` and `password` respectively for the `mariadb` root user. Here, the value of the `user` key is fixed to be `root`.
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+Example:
+
+```bash
+kubectl create secret generic mariadb-auth -n demo \
+    --from-literal=user=root \
+    --from-literal=password=6q8u_2jMOW-OOZXk
+secret/mariadb-auth created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  user: cm9vdA==
+kind: Secret
+metadata:
+  name: mariadb-auth
+  namespace: demo
+type: Opaque
+```
+
+### spec.storageType
+
+`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MariaDB database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field.
+
+### spec.storage
+
+If you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.init
+
+`spec.init` is an optional section that can be used to initialize a newly created MariaDB database. MariaDB databases can be initialized in one of two ways:
+
+- Initialize from Script
+- Initialize from Stash Restore
+
+#### Initialize via Script
+
+To initialize a MariaDB database using a script (shell script, sql script, etc.), set the `spec.init.script` section when creating a MariaDB object. It will execute files alphabetically with extensions `.sh`, `.sql` and `.sql.gz` that are found in the repository.
The scripts inside child folders will be skipped. The script must have the following information:
+
+- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): Where your script is loaded from.
+
+Below is an example showing how a script from a ConfigMap can be used to initialize a MariaDB database.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: 10.5.23
+  init:
+    script:
+      configMap:
+        name: md-init-script
+```
+
+In the above example, the KubeDB operator will launch a Job to execute all the scripts of `md-init-script` in alphabetical order once the StatefulSet pods are running.
+
+### spec.monitor
+
+MariaDB managed by KubeDB can be monitored with builtin Prometheus and Prometheus operator out-of-the-box.
+
+### spec.requireSSL
+
+`spec.requireSSL` specifies whether the client connections require SSL. If `spec.requireSSL` is `true`, then the server permits only TCP/IP connections that use SSL, or connections that use a socket file (on Unix) or shared memory (on Windows). The server rejects any non-secure connection attempt. For more details, please visit [here](https://mariadb.com/kb/en/securing-connections-for-client-and-server/#requiring-tls-for-specific-user-accounts).
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for the MariaDB.
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+  - `apiGroup` is the group name of the resource being referenced. The value for `Issuer` or `ClusterIssuer` is "cert-manager.io" (cert-manager v0.12.0 and later).
+  - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
+
+  - `alias` represents the identifier of the certificate. It has the following possible values:
+    - `server` is used for server certificate identification.
+    - `client` is used for client certificate identification.
+    - `metrics-exporter` is used for metrics exporter certificate identification.
+  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
+    This field is optional. If the user does not specify this field, a default secret name will be created in the following format: `<database-name>-<cert-alias>-cert`.
+  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields,
+    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+    - `organizationalUnits` (optional) are the list of different organization unit names to be used on the Certificate.
+    - `countries` (optional) are the list of country names to be used on the Certificate.
+    - `localities` (optional) are the list of locality names to be used on the Certificate.
+    - `provinces` (optional) are the list of province names to be used on the Certificate.
+    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
+    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
+    - `serialNumber` (optional) is a serial number to be used on the Certificate.
+
+    You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+
+  - `duration` (optional) is the period during which the certificate is valid.
+  - `renewBefore` (optional) is a specifiable time before expiration duration.
+  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+  - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+  - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
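+
+Note that `issuerRef` assumes a cert-manager `Issuer` (or `ClusterIssuer`) already exists. A minimal sketch of a CA-based `Issuer` named `md-issuer`, matching the sample at the top of this page, might look like the following; the CA secret name is an illustrative assumption:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: md-issuer
+  namespace: demo
+spec:
+  ca:
+    # a Secret containing the CA's tls.crt and tls.key; the name is illustrative
+    secretName: md-ca
+```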
+ You can found more details from [Here](https://golang.org/pkg/crypto/x509/pkix/#Name) + + - `duration` (optional) is the period during which the certificate is valid. + - `renewBefore` (optional) is a specifiable time before expiration duration. + - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate. + - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate. + - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate. + - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate. + +### spec.configSecret + +`spec.configSecret` is an optional field that allows users to provide custom configuration for MariaDB. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). + +### spec.podTemplate + +KubeDB allows providing a template for database pod through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MariaDB database. + +KubeDB accepts the following fields to set in `spec.podTemplate:` + +- metadata: + - annotations (pod's annotation) +- controller: + - annotations (statefulset's annotation) +- spec: + - args + - env + - resources + - initContainers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle + +Uses of some field of `spec.podTemplate` is described below, + +#### spec.podTemplate.spec.args + +`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments for database installation. To learn about available args of `mysqld`, visit [here](https://mariadb.com/kb/en/mysqld-options/). + +#### spec.podTemplate.spec.env + +`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the MariaDB docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/_/mariadb/). + +Note that, KubeDB does not allow `MYSQL_ROOT_PASSWORD`, `MYSQL_ALLOW_EMPTY_PASSWORD`, `MYSQL_RANDOM_ROOT_PASSWORD`, and `MYSQL_ONETIME_PASSWORD` environment variables to set in `spec.env`. If you want to set the root password, please use `spec.authSecret` instead described earlier. + +If you try to set any of the forbidden environment variables i.e. `MYSQL_ROOT_PASSWORD` in MariaDB crd, Kubed operator will reject the request with the following error, + +```bash +Error from server (Forbidden): error when creating "./mariadb.yaml": admission webhook "mariadb.validators.kubedb.com" denied the request: environment variable MYSQL_ROOT_PASSWORD is forbidden to use in MariaDB spec +``` + +Also, note that KubeDB does not allow to update the environment variables as updating them does not have any effect once the database is created. If you try to update environment variables, KubeDB operator will reject the request with the following error, + +```bash +Error from server (BadRequest): error when applying patch: +... 
+for: "./mariadb.yaml": admission webhook "mariadb.validators.kubedb.com" denied the request: precondition failed for: +...At least one of the following was changed: + apiVersion + kind + name + namespace + spec.authSecret + spec.init + spec.storageType + spec.storage + spec.podTemplate.spec.nodeSelector + spec.podTemplate.spec.env +``` + +#### spec.podTemplate.spec.imagePullSecrets + +`KubeDB` provides the flexibility of deploying MariaDB database from a private Docker registry. `spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker image if you are using a private docker registry. + +#### spec.podTemplate.spec.nodeSelector + +`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) . + +#### spec.podTemplate.spec.serviceAccountName + + `serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control. + + If this field is left empty, the KubeDB operator will create a service account name matching MariaDB crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + + If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + + If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. + +#### spec.podTemplate.spec.resources + +`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/). + +### spec.serviceTemplate + +You can also provide a template for the services created by KubeDB operator for MariaDB database through `spec.serviceTemplate`. This will allow you to set the type and other properties of the services. + +KubeDB allows following fields to set in `spec.serviceTemplate`: + +- metadata: + - annotations +- spec: + - type + - ports + - clusterIP + - externalIPs + - loadBalancerIP + - loadBalancerSourceRanges + - externalTrafficPolicy + - healthCheckNodePort + - sessionAffinityConfig + +See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail. + +### spec.halted + +`spec.halted` is an optional field. Suppose you want to delete the `MariaDB` resources(`StatefulSet`, `Service` etc.) except `MariaDB` object, `PVCs` and `Secret` then you need to set `spec.halted` to `true`. If you set `spec.halted` to `true` then the `terminationPolicy` in `MariaDB` object will be set `Halt` by-default. + +### spec.terminationPolicy + +`terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of `MariaDB` crd or which resources KubeDB should keep or delete when you delete `MariaDB` crd. 
KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the MariaDB crd for different termination policies,
+
+| Behavior                            | DoNotTerminate |   Halt   |  Delete  | WipeOut  |
+| ----------------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation           |    ✓           |    ✗     |    ✗     |    ✗     |
+| 2. Delete StatefulSet               |    ✗           |    ✓     |    ✓     |    ✓     |
+| 3. Delete Services                  |    ✗           |    ✓     |    ✓     |    ✓     |
+| 4. Delete PVCs                      |    ✗           |    ✗     |    ✓     |    ✓     |
+| 5. Delete Secrets                   |    ✗           |    ✗     |    ✗     |    ✓     |
+| 6. Delete Snapshots                 |    ✗           |    ✗     |    ✗     |    ✓     |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/index.md b/content/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/index.md
new file mode 100644
index 0000000000..b737b8ebe5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/index.md
@@ -0,0 +1,361 @@
+---
+title: MariaDBOpsRequest CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-concepts-mariadbopsrequest
+    name: MariaDBOpsRequest
+    parent: guides-mariadb-concepts
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDBOpsRequest
+
+## What is MariaDBOpsRequest
+
+`MariaDBOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for [MariaDB](https://www.mariadb.com/) administrative operations like database version update, horizontal scaling, vertical scaling, etc. in a Kubernetes native way.
+
+## MariaDBOpsRequest CRD Specifications
+
+Like any official Kubernetes resource, a `MariaDBOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+ +Here, some sample `MariaDBOpsRequest` CRs for different administrative operations is given below: + +**Sample `MariaDBOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: sample-mariadb + updateVersion: + targetVersion: 10.5.23 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MariaDBOpsRequest` Objects for Horizontal Scaling of database cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdps-scale-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-mariadb + horizontalScaling: + member : 5 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MariaDBOpsRequest` Objects for Vertical Scaling of the database cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: md-scale-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: sample-mariadb + verticalScaling: + mariadb: + resources: + requests: + memory: "600Mi" + cpu: "0.1" + limits: + memory: "600Mi" + cpu: "0.1" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MariaDBOpsRequest` Objects for Reconfiguring MariaDB Database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: md-reconfigure + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + inlineConfig: | + max_connections = 300 + read_buffer_size = 1234567 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MariaDBOpsRequest` Objects for Volume Expansion of MariaDB:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: md-volume-expansion + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: sample-mariadb + volumeExpansion: + mode: "Online" + mariadb: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MariaDBOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: md-recon-tls-add + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + requireSSL: true + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: md-issuer + certificates: + - alias: server + subject: + organizations: + - 
kubedb:server
+      dnsNames:
+        - localhost
+      ipAddresses:
+        - "127.0.0.1"
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: md-recon-tls-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-mariadb
+  tls:
+    rotateCertificates: true
+```
+
+Here, we are going to describe the various sections of a `MariaDBOpsRequest` crd.
+
+A `MariaDBOpsRequest` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) object for which the administrative operations will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) object.
+
+### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `MariaDBOpsRequest`.
+
+- `Upgrade` / `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `Restart`
+
+> You can perform only one type of operation on a single `MariaDBOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `MariaDBOpsRequest` CRs. At first, you have to create a `MariaDBOpsRequest` for updating. Once it is completed, then you can create another `MariaDBOpsRequest` for scaling. You should not create two `MariaDBOpsRequest` CRs simultaneously.
+
+### spec.updateVersion
+
+If you want to update your MariaDB version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [MariaDBVersion](/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version/) CR that contains the MariaDB version information to which you want to update.
+
+> You can only update between MariaDB versions. KubeDB does not support downgrading MariaDB.
+
+### spec.horizontalScaling
+
+If you want to scale up or scale down your MariaDB cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-field:
+
+- `spec.horizontalScaling.member` indicates the desired number of nodes for the MariaDB cluster after scaling. For example, if your cluster currently has 4 nodes, and you want to add 2 additional nodes, then you have to specify 6 in the `spec.horizontalScaling.member` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.member` field.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `MariaDB` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.mariadb` indicates the desired resources for the MariaDB standalone or cluster after scaling.
+- `spec.verticalScaling.exporter` indicates the desired resources for the `exporter` container.
+- `spec.verticalScaling.coordinator` indicates the desired resources for the `coordinator` container.
+
+All of them have the following structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature, the storage class must support volume expansion.
+
+If you want to expand the volume of your MariaDB standalone or cluster, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.mariadb` indicates the desired size for the persistent volume of a MariaDB.
+- `spec.volumeExpansion.mode` indicates the mode of volume expansion. It can be `Online` or `Offline` based on the storage class.
+
+The size values refer to the Kubernetes Quantity type.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    mariadb: "2Gi"
+```
+
+This will expand the volume size of all the MariaDB nodes to 2Gi.
+
+### spec.configuration
+
+If you want to reconfigure your running MariaDB cluster with new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `configSecret` points to a secret in the same namespace as the MariaDB resource, which contains the new custom configurations. If a configSecret was previously set for the database, this secret will replace it.
+- `inlineConfig` contains the new custom config as a string which will be merged with the previous configuration.
+- `removeCustomConfig` removes all the custom configs of the MariaDB server.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this database.
+- `spec.tls.remove` specifies that we want to remove TLS from this database.
+
+### MariaDBOpsRequest `Status`
+
+`.status` describes the current state and progress of a `MariaDBOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `MariaDBOpsRequest`. It can have the following three values:
+
+| Phase      | Meaning                                                                             |
+| ---------- | ----------------------------------------------------------------------------------- |
+| Successful | KubeDB has successfully performed the operation requested in the MariaDBOpsRequest |
+| Failed     | KubeDB has failed the operation requested in the MariaDBOpsRequest                 |
+| Denied     | KubeDB has denied the operation requested in the MariaDBOpsRequest                 |
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `MariaDBOpsRequest` controller.
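+
+While an ops request is in progress, you can watch its phase from the command line; the output below is a hedged sketch, as the exact printer columns may vary between KubeDB releases:
+
+```bash
+$ kubectl get mariadbopsrequest -n demo
+NAME           TYPE            STATUS       AGE
+mdops-update   UpdateVersion   Successful   3m
+```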
+ +### status.conditions + +`status.conditions` is an array that specifies the conditions of different steps of `MariaDBOpsRequest` processing. Each condition entry has the following fields: + +- `types` specifies the type of the condition. MariaDBOpsRequest has the following types of conditions: + +| Type | Meaning | +| ----------------------------- | ------------------------------------------------------------------------- | +| `Progressing` | Specifies that the operation is now in the progressing state | +| `Successful` | Specifies such a state that the operation on the database was successful. | +| `Failed` | Specifies such a state that the operation on the database failed. | +| `ScaleDownCluster` | Specifies such a state that the scale down operation of replicaset | +| `ScaleUpCluster` | Specifies such a state that the scale up operation of replicaset | +| `VolumeExpansion` | Specifies such a state that the volume expansion operaton of the database | +| `Reconfigure` | Specifies such a state that the reconfiguration of replicaset nodes | + +- The `status` field is a string, with possible values `True`, `False`, and `Unknown`. + - `status` will be `True` if the current transition succeeded. + - `status` will be `False` if the current transition failed. + - `status` will be `Unknown` if the current transition was denied. +- The `message` field is a human-readable message indicating details about the condition. +- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition. +- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another. +- The `observedGeneration` shows the most recent condition transition generation observed by the controller. diff --git a/content/docs/v2024.1.31/guides/mariadb/configuration/_index.md b/content/docs/v2024.1.31/guides/mariadb/configuration/_index.md new file mode 100755 index 0000000000..3a48ca3da3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/configuration/_index.md @@ -0,0 +1,22 @@ +--- +title: Run MariaDB with Custom Configuration +menu: + docs_v2024.1.31: + identifier: guides-mariadb-configuration + name: Custom Configuration + parent: guides-mariadb + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/examples/md-custom.yaml b/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/examples/md-custom.yaml new file mode 100644 index 0000000000..5b08c65384 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/examples/md-custom.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + configSecret: + name: md-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/index.md b/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/index.md new file mode 100644 index 0000000000..9b470dd509 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mariadb/configuration/using-config-file/index.md
@@ -0,0 +1,194 @@
+---
+title: Run MariaDB with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-configuration-usingconfigfile
+    name: Config File
+    parent: guides-mariadb-configuration
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom Configuration File
+
+KubeDB supports providing custom configuration for MariaDB. This tutorial will show you how to use KubeDB to run a MariaDB database with custom configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+
+  $ kubectl get ns demo
+  NAME   STATUS   AGE
+  demo   Active   5s
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/configuration/using-config-file/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+MariaDB allows configuring the database via configuration files. The default configuration for MariaDB can be found in the `/etc/mysql/my.cnf` file. When MariaDB starts, it will look for custom configuration files in the `/etc/mysql/conf.d` directory. If configuration files exist, the MariaDB instance will use the combined startup settings from both `/etc/mysql/my.cnf` and the `*.cnf` files in the `/etc/mysql/conf.d` directory. This custom configuration will overwrite the existing defaults. To know more about configuring MariaDB, see [here](https://mariadb.com/kb/en/configuring-mariadb-with-option-files/).
+
+At first, you have to create a config file with a `.cnf` extension containing your desired configuration. Then you have to put this file into a [volume](https://kubernetes.io/docs/concepts/storage/volumes/). You have to specify this volume in the `spec.configSecret` section while creating the MariaDB crd. KubeDB will mount this volume into the `/etc/mysql/conf.d` directory of the database pod.
+
+In this tutorial, we will configure [max_connections](https://mariadb.com/docs/reference/mdb/system-variables/max_connections/) and [read_buffer_size](https://mariadb.com/docs/reference/mdb/system-variables/read_buffer_size/) via a custom config file. We will use a Secret as the volume source.
+
+## Custom Configuration
+
+At first, let's create a `md-config.cnf` file setting the `max_connections` and `read_buffer_size` parameters.
+
+```bash
+cat <<EOF > md-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+EOF
+
+$ cat md-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `read_buffer_size` is set to 1 MiB (1048576 bytes).
+
+Now, create a Secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo md-configuration --from-file=./md-config.cnf
+secret/md-configuration created
+```
+
+Verify that the Secret has the configuration file.
+
+```yaml
+$ kubectl get secret -n demo md-configuration -o yaml
+apiVersion: v1
+stringData:
+  md-config.cnf: |
+    [mysqld]
+    max_connections = 200
+    read_buffer_size = 1048576
+kind: Secret
+metadata:
+  name: md-configuration
+  namespace: demo
+  ...
+```
+
+Now, create the MariaDB crd specifying the `spec.configSecret` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/configuration/using-config-file/examples/md-custom.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Below is the YAML for the MariaDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  configSecret:
+    name: md-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret, etc. If everything goes well, we will see that a pod with the name `sample-mariadb-0` has been created.
+
+Check that the StatefulSet's pod is running
+
+```bash
+$ kubectl get pod -n demo
+NAME               READY   STATUS    RESTARTS   AGE
+sample-mariadb-0   1/1     Running   0          21s
+
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    71s
+```
+
+We can see the database is in the Ready phase, so it can accept connections.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+> Read the comments on the following commands. They contain the instructions and explanations of the commands.
+
+```bash
+# Connecting to the database
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 23
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+ +# value of `max_conncetions` is same as provided +MariaDB [(none)]> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 200 | ++-----------------+-------+ +1 row in set (0.001 sec) + +# value of `read_buffer_size` is same as provided +MariaDB [(none)]> show variables like 'read_buffer_size'; ++------------------+---------+ +| Variable_name | Value | ++------------------+---------+ +| read_buffer_size | 1048576 | ++------------------+---------+ +1 row in set (0.001 sec) + +MariaDB [(none)]> exit +Bye +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo sample-mariadb +mariadb.kubedb.com "sample-mariadb" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/examples/md-misc-config.yaml b/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/examples/md-misc-config.yaml new file mode 100644 index 0000000000..1eef7d3203 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/examples/md-misc-config.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + env: + - name: MYSQL_DATABASE + value: mdDB + args: + - --character-set-server=utf8mb4 + resources: + requests: + memory: "1Gi" + cpu: "250m" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/index.md b/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/index.md new file mode 100644 index 0000000000..843dfc35c4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/configuration/using-pod-template/index.md @@ -0,0 +1,191 @@ +--- +title: Run MariaDB with Custom PodTemplate +menu: + docs_v2024.1.31: + identifier: guides-mariadb-configuration-usingpodtemplate + name: Customize PodTemplate + parent: guides-mariadb-configuration + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Run MariaDB with Custom PodTemplate + +KubeDB supports providing custom configuration for MariaDB via [PodTemplate](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#specpodtemplate). This tutorial will show you how to use KubeDB to run a MariaDB database with custom configuration using PodTemplate. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/configuration/using-pod-template/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MariaDB database.
+
+KubeDB accepts the following fields to set in `spec.podTemplate:`
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+
+Read about the fields in detail in the [PodTemplate concept](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#specpodtemplate).
+
+## CRD Configuration
+
+Below is the YAML for the MariaDB created in this example. Here, [`spec.podTemplate.spec.env`](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#specpodtemplatespecenv) specifies environment variables and [`spec.podTemplate.spec.args`](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#specpodtemplatespecargs) provides extra arguments for the [MariaDB Docker Image](https://hub.docker.com/_/mariadb/).
+
+In this tutorial, an initial database `mdDB` will be created by providing the env `MYSQL_DATABASE`, while the server character set will be set to `utf8mb4` by adding extra `args`.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      env:
+        - name: MYSQL_DATABASE
+          value: mdDB
+      args:
+        - --character-set-server=utf8mb4
+      resources:
+        requests:
+          memory: "1Gi"
+          cpu: "250m"
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/configuration/using-pod-template/examples/md-misc-config.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret, etc. If everything goes well, we will see that a pod with the name `sample-mariadb-0` has been created.
+
+Check that the StatefulSet's pod is running
+
+```bash
+$ kubectl get pod -n demo
+NAME               READY   STATUS    RESTARTS   AGE
+sample-mariadb-0   1/1     Running   0          96s
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+$ kubectl logs -f -n demo sample-mariadb-0
+2021-03-18 06:06:17+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started.
+2021-03-18 06:06:18+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
+2021-03-18 06:06:18+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started.
+2021-03-18 06:06:19+00:00 [Note] [Entrypoint]: Initializing database files
+...
+2021-03-18  6:06:33 0 [Note] mysqld: ready for connections.
+Version: '10.5.23-MariaDB-1:10.5.23+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
+```
+
+Once we see `[Note] mysqld: ready for connections.` in the log, the database is ready.
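+
+You can also run the check non-interactively. The one-liner below is a hedged sketch; it assumes the root credentials are exposed inside the container as the `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD` environment variables, as in the interactive session that follows:
+
+```bash
+$ kubectl exec -n demo sample-mariadb-0 -- bash -c 'mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} -e "show variables like \"character_set_server\";"'
+Variable_name	Value
+character_set_server	utf8mb4
+```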
+ +Now, we will check if the database has started with the custom configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-mariadb-0 -- bash +root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 22 +Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +# Check mdDB +MariaDB [(none)]> show databases; ++--------------------+ +| Database | ++--------------------+ +| information_schema | +| mdDB | +| mysql | +| performance_schema | ++--------------------+ +4 rows in set (0.001 sec) + +# Check character_set_server +MariaDB [(none)]> show variables like 'char%'; ++--------------------------+----------------------------+ +| Variable_name | Value | ++--------------------------+----------------------------+ +| character_set_client | latin1 | +| character_set_connection | latin1 | +| character_set_database | utf8mb4 | +| character_set_filesystem | binary | +| character_set_results | latin1 | +| character_set_server | utf8mb4 | +| character_set_system | utf8 | +| character_sets_dir | /usr/share/mysql/charsets/ | ++--------------------------+----------------------------+ +8 rows in set (0.001 sec) + +MariaDB [(none)]> quit; +Bye +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo sample-mariadb +mariadb.kubedb.com "sample-mariadb" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/_index.md new file mode 100755 index 0000000000..8bbf02099b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/_index.md @@ -0,0 +1,22 @@ +--- +title: Run MySQL with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: guides-mariadb-customrbac + name: Custom RBAC + parent: guides-mariadb + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db-2.yaml b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db-2.yaml new file mode 100644 index 0000000000..5242c8ff52 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db-2.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: another-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + podTemplate: + spec: + serviceAccountName: md-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db.yaml b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db.yaml new file mode 100644 index 0000000000..cabdf30dff --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + podTemplate: + spec: + serviceAccountName: md-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-role.yaml b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-role.yaml new file mode 100644 index 0000000000..349d0354d9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: md-custom-role + namespace: demo +rules: +- apiGroups: + - policy + resourceNames: + - maria-db + resources: + - podsecuritypolicies + verbs: + - use \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/index.md b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/index.md new file mode 100644 index 0000000000..23c3615327 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/custom-rbac/using-custom-rbac/index.md @@ -0,0 +1,271 @@ +--- +title: Run MariaDB with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: guides-mariadb-customrbac-usingcustomrbac + name: Custom RBAC + parent: guides-mariadb-customrbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom RBAC resources + +KubeDB (version 0.13.0 and higher) supports finer user control over role based access permissions provided to a MariaDB instance. This tutorial will show you how to use KubeDB to run MariaDB instance with custom RBAC resources. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/custom-rbac/using-custom-rbac/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for MariaDB. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in MariaDB crd. If this field is left empty, the KubeDB operator will create a service account name matching MariaDB crd name. 
Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+This guide will show you how to create a custom `Service Account`, `Role`, and `RoleBinding` for a MariaDB instance named `sample-mariadb` to provide the bare minimum access permissions.
+
+## Custom RBAC for MariaDB
+
+At first, let's create a `Service Account` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo md-custom-serviceaccount
+serviceaccount/md-custom-serviceaccount created
+```
+
+It should create a service account.
+
+```yaml
+$ kubectl get serviceaccount -n demo md-custom-serviceaccount -o yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2021-03-18T04:38:59Z"
+  name: md-custom-serviceaccount
+  namespace: demo
+  resourceVersion: "84669"
+  selfLink: /api/v1/namespaces/demo/serviceaccounts/md-custom-serviceaccount
+  uid: 788bd6c6-3eae-4797-b6ca-5722ef64c9dc
+secrets:
+- name: md-custom-serviceaccount-token-jnhvd
+```
+
+Now, we need to create a role that has the necessary access permissions for the MariaDB instance named `sample-mariadb`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-role.yaml
+role.rbac.authorization.k8s.io/md-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: md-custom-role
+  namespace: demo
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - maria-db
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
+```
+
+This permission is required for MariaDB pods running on PSP enabled clusters.
+
+Now create a `RoleBinding` to bind this `Role` with the already created service account.
+
+```bash
+$ kubectl create rolebinding md-custom-rolebinding --role=md-custom-role --serviceaccount=demo:md-custom-serviceaccount --namespace=demo
+rolebinding.rbac.authorization.k8s.io/md-custom-rolebinding created
+```
+
+It should bind `md-custom-role` and `md-custom-serviceaccount` successfully.
+
+So, all required resources for RBAC are created.
+
+```bash
+$ kubectl get serviceaccount,role,rolebindings -n demo
+NAME                                      SECRETS   AGE
+serviceaccount/default                    1         38m
+serviceaccount/md-custom-serviceaccount   1         36m
+
+NAME                                            CREATED AT
+role.rbac.authorization.k8s.io/md-custom-role   2021-03-18T05:13:27Z
+
+NAME                                                          ROLE                  AGE
+rolebinding.rbac.authorization.k8s.io/md-custom-rolebinding   Role/md-custom-role   79s
+```
+
+Now, create a MariaDB crd setting the `spec.podTemplate.spec.serviceAccountName` field to `md-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Below is the YAML for the MariaDB crd we just created.
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + podTemplate: + spec: + serviceAccountName: md-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Now, wait a few minutes. the KubeDB operator will create necessary PVC, StatefulSet, services, secret etc. If everything goes well, we should see that a pod with the name `sample-mariadb-0` has been created. + +Check that the statefulset's pod is running + +```bash +$ kubectl get pod -n demo sample-mariadb-0 +NAME READY STATUS RESTARTS AGE +sample-mariadb-0 1/1 Running 0 2m44s +``` + +Check the pod's log to see if the database is ready + +```bash +$ kubectl logs -f -n demo sample-mariadb-0 +2021-03-18 05:35:13+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started. +2021-03-18 05:35:13+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' +2021-03-18 05:35:13+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started. +2021-03-18 05:35:14+00:00 [Note] [Entrypoint]: Initializing database files +... +2021-03-18 5:35:22 0 [Note] Reading of all Master_info entries succeeded +2021-03-18 5:35:22 0 [Note] Added new Master_info '' to hash table +2021-03-18 5:35:22 0 [Note] mysqld: ready for connections. +Version: '10.5.23-MariaDB-1:10.5.23+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution +``` + +Once we see `mysqld: ready for connections.` in the log, the database is ready. + +## Reusing Service Account + +An existing service account can be reused in another MariaDB instance. No new access permission is required to run the new MariaDB instance. + +Now, create MariaDB crd `another-mariadb` using the existing service account name `md-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/custom-rbac/using-custom-rbac/examples/md-custom-db-2.yaml +mariadb.kubedb.com/another-mariadb created +``` + +Below is the YAML for the MariaDB crd we just created. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: another-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + podTemplate: + spec: + serviceAccountName: md-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Now, wait a few minutes. the KubeDB operator will create necessary PVC, statefulset, services, secret etc. If everything goes well, we should see that a pod with the name `another-mariadb` has been created. + +Check that the statefulset's pod is running + +```bash +$ kubectl get pod -n demo another-mariadb-0 +NAME READY STATUS RESTARTS AGE +another-mariadb-0 1/1 Running 0 37s +``` + +Check the pod's log to see if the database is ready + +```bash +... +$ kubectl logs -f -n demo another-mariadb-0 +2021-03-18 05:39:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started. +2021-03-18 05:39:50+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' +2021-03-18 05:39:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.23+maria~focal started. 
+2021-03-18 05:39:50+00:00 [Note] [Entrypoint]: Initializing database files +... +2021-03-18 5:39:59 0 [Note] mysqld: ready for connections. +Version: '10.5.23-MariaDB-1:10.5.23+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution +``` + +`mysqld: ready for connections.` in the log signifies that the database is running successfully. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo sample-mariadb +mariadb.kubedb.com "sample-mariadb" deleted +$ kubectl delete mariadb -n demo another-mariadb +mariadb.kubedb.com "another-mariadb" deleted +$ kubectl delete -n demo role md-custom-role +role.rbac.authorization.k8s.io "md-custom-role" deleted +$ kubectl delete -n demo rolebinding md-custom-rolebinding +rolebinding.rbac.authorization.k8s.io "md-custom-rolebinding" deleted +$ kubectl delete sa -n demo md-custom-serviceaccount +serviceaccount "md-custom-serviceaccount" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` + + diff --git a/content/docs/v2024.1.31/guides/mariadb/images/mariadb-lifecycle.png b/content/docs/v2024.1.31/guides/mariadb/images/mariadb-lifecycle.png new file mode 100644 index 0000000000..86d604a8e3 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/images/mariadb-lifecycle.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/initialization/_index.md b/content/docs/v2024.1.31/guides/mariadb/initialization/_index.md new file mode 100755 index 0000000000..16b2ed6329 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/initialization/_index.md @@ -0,0 +1,22 @@ +--- +title: MariaDB Initialization +menu: + docs_v2024.1.31: + identifier: guides-mariadb-initialization + name: Initialization + parent: guides-mariadb + weight: 80 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/example/demo-1.yaml b/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/example/demo-1.yaml new file mode 100644 index 0000000000..f3b169dd1e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/example/demo-1.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: md-init-script + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/index.md b/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/index.md new file mode 100644 index 0000000000..2723cd5736 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/initialization/using-script/index.md @@ -0,0 +1,179 @@ +--- +title: Initialize MariaDB using Script +menu: + docs_v2024.1.31: + identifier: guides-mariadb-initialization-usingscript + name: Using Script + parent: guides-mariadb-initialization + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: 
v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Initialize MariaDB using Script + +This tutorial will show you how to use KubeDB to initialize a MariaDB database with \*.sql, \*.sh and/or \*.sql.gz scripts. +In this tutorial, we will use a .sql script stored in the GitHub repository [kubedb/mysql-init-scripts](https://github.com/kubedb/mysql-init-scripts). + +> Note: The yaml files that are used in this tutorial are stored in [this](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/initialization/using-script/example) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash + $ kubectl create ns demo +namespace/demo created +``` + +## Prepare Initialization Scripts + +MariaDB supports initialization with `.sh`, `.sql` and `.sql.gz` files. In this tutorial, we will use the `init.sql` script from the [mysql-init-scripts](https://github.com/kubedb/mysql-init-scripts) git repository to create a TABLE `kubedb_table` in the `mysql` database. + +We will use a ConfigMap as the script source. You can use any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes) as the script source. + +At first, we will create a ConfigMap from the `init.sql` file. Then, we will provide this ConfigMap as the script source in `init.script` of the MariaDB crd spec. + +Let's create a ConfigMap with the initialization script, + +```bash +$ kubectl create configmap -n demo md-init-script \ +--from-literal=init.sql="$(curl -fsSL https://github.com/kubedb/mysql-init-scripts/raw/master/init.sql)" +configmap/md-init-script created +``` + +## Create a MariaDB database with Init-Script + +Below is the `MariaDB` object created in this tutorial. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: md-init-script + terminationPolicy: WipeOut +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/initialization/using-script/example/demo-1.yaml +mariadb.kubedb.com/sample-mariadb created +``` + +Here, + +- `spec.init.script` specifies a script source used to initialize the database before the database server starts. The scripts will be executed alphabetically. In this tutorial, a sample .sql script from the git repository `https://github.com/kubedb/mysql-init-scripts.git` is used to create a test database. You can use other [volume sources](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) instead of `ConfigMap`. The \*.sql, \*.sql.gz and/or \*.sh scripts that are stored inside the root folder will be executed alphabetically. The scripts inside child folders will be skipped. + +KubeDB operator watches for `MariaDB` objects using the Kubernetes API.
When a `MariaDB` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching MariaDB object name. KubeDB operator will also create a governing service for StatefulSets with the name `kubedb`, if one is not already present. No MariaDB specific RBAC roles are required for [RBAC enabled clusters](/docs/v2024.1.31/setup/README#using-yaml). + +```yaml +$ kubectl get mariadb -n demo sample-mariadb -oyaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + ... + name: sample-mariadb + namespace: demo + ... +spec: + authSecret: + name: sample-mariadb-auth + init: + initialized: true + script: + configMap: + name: md-init-script + ... + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: WipeOut + version: 10.5.23 +status: + ... + phase: Ready +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. + +Now, we will connect to this database and check the data inserted by the initialization script. + +```bash +# Connecting to the database +$ kubectl exec -it -n demo sample-mariadb-0 -- bash +root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 40 +Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +MariaDB [(none)]> use mysql; +Reading table information for completion of table and column names +You can turn off this feature to get a quicker startup with -A + +# Showing the inserted `kubedb_table` +Database changed +MariaDB [mysql]> select * from kubedb_table; ++----+-------+ +| id | name | ++----+-------+ +| 1 | name1 | +| 2 | name2 | +| 3 | name3 | ++----+-------+ +3 rows in set (0.001 sec) + +MariaDB [mysql]> quit; +Bye + +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo sample-mariadb +mariadb.kubedb.com "sample-mariadb" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/_index.md b/content/docs/v2024.1.31/guides/mariadb/monitoring/_index.md new file mode 100755 index 0000000000..e95e2b4588 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Monitoring MariaDB +menu: + docs_v2024.1.31: + identifier: guides-mariadb-monitoring + name: Monitoring + parent: guides-mariadb + weight: 120 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/builtin-prom-md.yaml b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/builtin-prom-md.yaml new file mode 100644 index 0000000000..292f7cf26e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/builtin-prom-md.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: builtin-prom-md + namespace: demo +spec: + version: "10.5.23" + 
terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/prom-config.yaml b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/prom-config.yaml new file mode 100644 index 0000000000..45aee6317a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/examples/prom-config.yaml @@ -0,0 +1,68 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server selects all Kubernetes services as possible targets. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that have "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases use only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which have the "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. 
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/images/built-prom.png b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/images/built-prom.png new file mode 100644 index 0000000000..c40435214a Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/images/built-prom.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/index.md b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/index.md new file mode 100644 index 0000000000..63d0265364 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus/index.md @@ -0,0 +1,364 @@ +--- +title: Monitor MariaDB using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: guides-mariadb-monitoring-builtinprometheus + name: Builtin Prometheus + parent: guides-mariadb-monitoring + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MariaDB with builtin Prometheus + +This tutorial will show you how to monitor MariaDB database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mariadb/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. 
We are going to deploy the database in the `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/guides/mariadb/monitoring/builtin-prometheus/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/monitoring/builtin-prometheus/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy MariaDB with Monitoring Enabled + +At first, let's deploy a MariaDB database with monitoring enabled. Below is the MariaDB object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: builtin-prom-md + namespace: demo +spec: + version: "10.5.23" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper. + +Let's create the MariaDB crd we have shown above. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/monitoring/builtin-prometheus/examples/builtin-prom-md.yaml +mariadb.kubedb.com/builtin-prom-md created +``` + +Now, wait for the database to go into `Ready` state. + +```bash +$ kubectl get mariadb -n demo builtin-prom-md +NAME VERSION STATUS AGE +builtin-prom-md 10.5.23 Ready 76s +``` + +KubeDB will create a separate stats service with name `{MariaDB crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-md" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +builtin-prom-md ClusterIP 10.106.32.194 3306/TCP 2m3s +builtin-prom-md-pods ClusterIP None 3306/TCP 2m3s +builtin-prom-md-stats ClusterIP 10.109.106.92 56790/TCP 2m2s +``` + +Here, `builtin-prom-md-stats` service has been created for monitoring purpose. Let's describe the service. + +```bash +$ kubectl describe svc -n demo builtin-prom-md-stats +Name: builtin-prom-md-stats +Namespace: demo +Labels: app.kubernetes.io/instance=builtin-prom-md + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mariadbs.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/builtin + prometheus.io/path: /metrics + prometheus.io/port: 56790 + prometheus.io/scrape: true +Selector: app.kubernetes.io/instance=builtin-prom-md,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mariadbs.kubedb.com +Type: ClusterIP +IP: 10.109.106.92 +Port: metrics 56790/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.34:56790 +Session Affinity: None +Events: +``` + +You can see that the service contains the following annotations. + +```bash +prometheus.io/path: /metrics +prometheus.io/port: 56790 +prometheus.io/scrape: true +``` + +The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter. + +## Configure Prometheus Server + +Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service.
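 + +Before wiring up the scraper, you can optionally sanity-check that the exporter is actually serving metrics through the stats service. This quick check is our suggestion rather than part of the original steps; it assumes `kubectl` access to the `demo` namespace and the `/metrics` path and `56790` port shown in the annotations above. + +```bash +# forward the stats service port to localhost (run in a separate terminal) +$ kubectl port-forward -n demo svc/builtin-prom-md-stats 56790 + +# fetch the first few samples in Prometheus exposition format +$ curl -s http://localhost:56790/metrics | head +```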
+ +Let's configure a Prometheus scraping job to collect metrics from this service. + +```yaml +- job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server selects all Kubernetes services as possible targets. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that have "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases use only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which have the "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +### Configure Existing Prometheus Server + +If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect. + +>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart. + +### Deploy New Prometheus Server + +If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service. + +**Create ConfigMap:** + +At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+ +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server selects all Kubernetes services as possible targets. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that have "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases use only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which have the "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +Let's create the above `ConfigMap`, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/monitoring/builtin-prometheus/examples/prom-config.yaml +configmap/prometheus-config created +``` + +**Create RBAC:** + +If you are using an RBAC enabled cluster, you have to give necessary RBAC permissions for Prometheus. Let's create the necessary RBAC resources for Prometheus, + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml +clusterrole.rbac.authorization.k8s.io/prometheus created +serviceaccount/prometheus created +clusterrolebinding.rbac.authorization.k8s.io/prometheus created +``` + +>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml). + +**Deploy Prometheus:** + +Now, we are ready to deploy the Prometheus server.
We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server. + +Let's deploy the Prometheus server. + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml +deployment.apps/prometheus created +``` + +### Verify Monitoring Metrics + +Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. + +At first, let's check if the Prometheus pod is in `Running` state. + +```bash +$ kubectl get pod -n monitoring -l=app=prometheus +NAME READY STATUS RESTARTS AGE +prometheus-5dff66b455-cz9td 1/1 Running 0 42s +``` + +Now, run the following command on a separate terminal to forward port 9090 of the `prometheus-5dff66b455-cz9td` pod, + +```bash +$ kubectl port-forward -n monitoring prometheus-5dff66b455-cz9td 9090 +Forwarding from 127.0.0.1:9090 -> 9090 +Forwarding from [::1]:9090 -> 9090 +``` + +Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `builtin-prom-md-stats` service as one of the targets.

+![Prometheus Target](images/built-prom.png)

+ +Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from `MariaDB` database `builtin-prom-md` through the stats service `builtin-prom-md-stats`. + +Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run the following commands + +```bash +kubectl delete mariadb -n demo builtin-prom-md + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/images/database-monitoring-overview.svg b/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/images/database-monitoring-overview.svg new file mode 100644 index 0000000000..395eefb334 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/images/database-monitoring-overview.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/index.md new file mode 100644 index 0000000000..f5037fe734 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/overview/index.md @@ -0,0 +1,118 @@ +--- +title: MariaDB Monitoring Overview +description: MariaDB Monitoring Overview +menu: + docs_v2024.1.31: + identifier: guides-mariadb-monitoring-overview + name: Overview + parent: guides-mariadb-monitoring + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MariaDB with KubeDB + +KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring. + +## Overview + +KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. The following diagram shows the logical flow of database monitoring with KubeDB. +

+![Database Monitoring Flow](images/database-monitoring-overview.svg)

+ +When a user creates a database crd with `spec.monitor` section configured, KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with name `{database-crd-name}-stats` for monitoring. Prometheus server can scrape metrics using this stats service. + +## Configure Monitoring + +In order to enable monitoring for a database, you have to configure `spec.monitor` section. KubeDB provides the following options to configure `spec.monitor` section: + +| Field | Type | Uses | +| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. | +| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter sidecar will serve metrics. | +| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. | +| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. | +| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. | +| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. | +| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. | +| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. | + +## Sample Configuration + +A sample YAML for Redis crd with `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: sample-redis + namespace: databases +spec: + version: 6.0.20 + terminationPolicy: WipeOut + configSecret: # configure Redis to use password for authentication + name: redis-config + storageType: Durable + storage: + storageClassName: default + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + exporter: + args: + - --redis.password=$(REDIS_PASSWORD) + env: + - name: REDIS_PASSWORD + valueFrom: + secretKeyRef: + name: _name_of_secret_with_redis_password + key: password # key with the password + resources: + requests: + memory: 512Mi + cpu: 200m + limits: + memory: 512Mi + cpu: 250m + securityContext: + runAsUser: 2000 + allowPrivilegeEscalation: false +``` + +Assume that the above Redis is configured to use basic authentication. So, the exporter sidecar also needs to provide the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field. + +Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `databases` namespace and this `ServiceMonitor` will have the `release: prometheus` label.
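 + +Since this page lives in the MariaDB guide, here is the same idea applied to a `MariaDB` object. This is only a minimal sketch composed from the fields in the table above; it assumes the Prometheus operator agent and the `release: prometheus` ServiceMonitor label used elsewhere in this guide. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator # or prometheus.io/builtin + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s + terminationPolicy: WipeOut +```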
+ +## Next Steps + +- Learn how to monitor `MariaDB` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mariadb/monitoring/builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator). +- Learn how to monitor `Elasticsearch` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor `PostgreSQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor `MySQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor `MongoDB` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor `Redis` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor `Memcached` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/examples/prom-operator-md.yaml b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/examples/prom-operator-md.yaml new file mode 100644 index 0000000000..ae274016e8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/examples/prom-operator-md.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: coreos-prom-md + namespace: demo +spec: + version: "10.5.23" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/images/prom-end.png b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/images/prom-end.png new file mode 100644 index 0000000000..15d055fd9c Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/images/prom-end.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/index.md b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/index.md new file mode 100644 index 0000000000..1587e5781f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/monitoring/prometheus-operator/index.md @@ -0,0 +1,335 @@ +--- +title: Monitor MariaDB using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: guides-mariadb-monitoring-prometheusoperator + name: Prometheus Operator + parent: guides-mariadb-monitoring + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MariaDB Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor a MariaDB database deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mariadb/monitoring/overview). + +- To keep database resources isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+ +- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server). + +> Note: YAML files used in this tutorial are stored in [/docs/guides/mariadb/monitoring/prometheus-operator/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/monitoring/prometheus-operator/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find out required labels for ServiceMonitor + +We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of MariaDB crd so that KubeDB creates the `ServiceMonitor` object accordingly. + +At first, let's find out the available Prometheus server in our cluster. + +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME VERSION REPLICAS AGE +default prometheus 1 2m19s +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `default` namespace. + +```yaml +$ kubectl get prometheus -n default prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"default"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorNamespaceSelector":{"matchLabels":{"prometheus":"prometheus"}},"serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}} + creationTimestamp: "2020-08-25T04:02:07Z" + generation: 1 + labels: + prometheus: prometheus + ... + manager: kubectl + operation: Update + time: "2020-08-25T04:02:07Z" + name: prometheus + namespace: default + resourceVersion: "2087" + selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus + uid: 972a50cb-b751-418b-b2bc-e0ecc9232730 +spec: + replicas: 1 + resources: + requests: + memory: 400Mi + serviceAccountName: prometheus + serviceMonitorNamespaceSelector: + matchLabels: + prometheus: prometheus + serviceMonitorSelector: + matchLabels: + release: prometheus +``` + +- `spec.serviceMonitorSelector` field specifies which ServiceMonitors should be included. The above label `release: prometheus` is used to select `ServiceMonitors` by its selector. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of MariaDB crd. +- `spec.serviceMonitorNamespaceSelector` field specifies that the `ServiceMonitors` can be selected outside the Prometheus namespace by Prometheus using namespace selector. The above label `prometheus: prometheus` is used to select the namespace where the `ServiceMonitor` is created. + +### Add Label to database namespace + +KubeDB creates a `ServiceMonitor` in the database namespace `demo`. We need to add a label to the `demo` namespace. Prometheus will select this namespace by using its `spec.serviceMonitorNamespaceSelector` field.
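 + +Concretely, after the patch in the next step, the `demo` namespace should carry the label that the selector above matches. The following is just an illustrative sketch of the expected namespace metadata, not output captured from a live cluster: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: demo + labels: + prometheus: prometheus +```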
+ +Let's add label `prometheus: prometheus` to `demo` namespace, + +```bash +$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}' +namespace/demo patched +``` + +## Deploy MariaDB with Monitoring Enabled + +At first, let's deploy a MariaDB database with monitoring enabled. Below is the MariaDB object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: coreos-prom-md + namespace: demo +spec: + version: "10.5.23" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s +``` + +Here, + +- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator. + +- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create `ServiceMonitor` with these labels. + +- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database with 10 seconds interval. + +Let's create the MariaDB object that we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/monitoring/prometheus-operator/examples/prom-operator-md.yaml +mariadb.kubedb.com/coreos-prom-md created +``` + +Now, wait for the database to go into `Ready` state. + +```bash +$ kubectl get mariadb -n demo coreos-prom-md +NAME VERSION STATUS AGE +coreos-prom-md 10.5.23 Ready 59s +``` + +KubeDB will create a separate stats service with name `{MariaDB crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-md" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +coreos-prom-md ClusterIP 10.99.96.226 3306/TCP 107s +coreos-prom-md-pods ClusterIP None 3306/TCP 107s +coreos-prom-md-stats ClusterIP 10.101.190.67 56790/TCP 107s +``` + +Here, `coreos-prom-md-stats` service has been created for monitoring purpose. + +Let's describe this stats service. + +```yaml +$ kubectl describe svc -n demo coreos-prom-md-stats +Name: coreos-prom-md-stats +Namespace: demo +Labels: app.kubernetes.io/instance=coreos-prom-md + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mariadbs.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/operator +Selector: app.kubernetes.io/instance=coreos-prom-md,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mariadbs.kubedb.com +Type: ClusterIP +IP: 10.101.190.67 +Port: metrics 56790/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.31:56790 +Session Affinity: None +Events: +``` + +Notice the `Labels` and `Port` fields. `ServiceMonitor` will use this information to target its endpoints. + +KubeDB will also create a `ServiceMonitor` crd in `demo` namespace that selects the endpoints of `coreos-prom-md-stats` service. Verify that the `ServiceMonitor` crd has been created. + +```bash +$ kubectl get servicemonitor -n demo +NAME AGE +coreos-prom-md-stats 4m8s +``` + +Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of MariaDB crd.
```yaml +$ kubectl get servicemonitor -n demo coreos-prom-md-stats -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + creationTimestamp: "2021-03-19T10:09:03Z" + generation: 1 + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: coreos-prom-md + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mariadbs.kubedb.com + release: prometheus + managedFields: + - apiVersion: monitoring.coreos.com/v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: {} + f:app.kubernetes.io/component: {} + f:app.kubernetes.io/instance: {} + f:app.kubernetes.io/managed-by: {} + f:app.kubernetes.io/name: {} + f:release: {} + f:ownerReferences: {} + f:spec: + .: {} + f:endpoints: {} + f:namespaceSelector: + .: {} + f:matchNames: {} + f:selector: + .: {} + f:matchLabels: + .: {} + f:app.kubernetes.io/instance: {} + f:app.kubernetes.io/managed-by: {} + f:app.kubernetes.io/name: {} + f:kubedb.com/role: {} + manager: mariadb-operator + operation: Update + time: "2021-03-19T10:09:03Z" + name: coreos-prom-md-stats + namespace: demo + ownerReferences: + - apiVersion: v1 + blockOwnerDeletion: true + controller: true + kind: Service + name: coreos-prom-md-stats + uid: 08260a99-0984-4d90-bf68-34080ad0ee5b + resourceVersion: "241637" + selfLink: /apis/monitoring.coreos.com/v1/namespaces/demo/servicemonitors/coreos-prom-md-stats + uid: 4f022d98-d2d8-490f-9548-f6367d03ae1f +spec: + endpoints: + - bearerTokenSecret: + key: "" + honorLabels: true + interval: 10s + path: /metrics + port: metrics + namespaceSelector: + matchNames: + - demo + selector: + matchLabels: + app.kubernetes.io/instance: coreos-prom-md + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mariadbs.kubedb.com + kubedb.com/role: stats +``` + +Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the MariaDB crd. + +Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-md-stats` service. It also targets the `metrics` port that we have seen in the stats service. + +## Verify Monitoring Metrics + +At first, let's find out the respective Prometheus pod for `prometheus` Prometheus server. + +```bash +$ kubectl get pod -n default -l=app=prometheus +NAME READY STATUS RESTARTS AGE +prometheus-prometheus-0 3/3 Running 1 16m +``` + +Prometheus server is listening to port `9090` of `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. + +Run the following command on a separate terminal to forward the port 9090 of `prometheus-prometheus-0` pod, + +```bash +$ kubectl port-forward -n default prometheus-prometheus-0 9090 +Forwarding from 127.0.0.1:9090 -> 9090 +Forwarding from [::1]:9090 -> 9090 +``` + +Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `coreos-prom-md-stats` service as one of the targets.

+![Prometheus Target](images/prom-end.png)

+ +Check the `endpoint` and `service` labels. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run the following commands + +```bash +# cleanup database +kubectl delete mariadb -n demo coreos-prom-md + +# cleanup Prometheus resources +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus.yaml + +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml + +# cleanup Prometheus operator resources +kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml + +# delete namespace +kubectl delete ns demo +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/private-registry/_index.md b/content/docs/v2024.1.31/guides/mariadb/private-registry/_index.md new file mode 100755 index 0000000000..ebf446edf9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/private-registry/_index.md @@ -0,0 +1,22 @@ +--- +title: Run MariaDB using Private Registry +menu: + docs_v2024.1.31: + identifier: guides-mariadb-privateregistry + name: Private Registry + parent: guides-mariadb + weight: 60 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/examples/demo.yaml b/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/examples/demo.yaml new file mode 100644 index 0000000000..89717beb4f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/examples/demo.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: md-pvt-reg + namespace: demo +spec: + version: "10.5.23" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/index.md b/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/index.md new file mode 100644 index 0000000000..2750fe22a6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/private-registry/quickstart/index.md @@ -0,0 +1,153 @@ +--- +title: Run MariaDB using Private Registry +menu: + docs_v2024.1.31: + identifier: guides-mariadb-privateregistry-quickstart + name: Quickstart + parent: guides-mariadb-privateregistry + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Deploy MariaDB from private Docker registry + +KubeDB operator supports using a private Docker registry.
This tutorial will show you how to use KubeDB to run a MariaDB database using private Docker images. + +## Before You Begin + +- Read [concept of MariaDB Version Catalog](/docs/v2024.1.31/guides/mariadb/concepts/mariadb-version) to learn detailed concepts of the `MariaDBVersion` object. + +- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial we will use a private repository on [Docker Hub](https://hub.docker.com/). + +- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/u/kubedb) into your private registry. For MariaDB, push `DB_IMAGE`, `EXPORTER_IMAGE`, `INITCONTAINER_IMAGE` of the following MariaDBVersions, where `deprecated` is not true, to your private registry. + +```bash +$ kubectl get mariadbversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image,INITCONTAINER_IMAGE:.spec.initContainer.image,DEPRECATED:.spec.deprecated +NAME VERSION DB_IMAGE EXPORTER_IMAGE INITCONTAINER_IMAGE DEPRECATED +10.4.32 10.4.32 kubedb/mariadb:10.4.32 kubedb/mysqld-exporter:v0.11.0 kubedb/busybox +10.5.23 10.5.23 kubedb/mariadb:10.5.23 kubedb/mysqld-exporter:v0.11.0 kubedb/busybox +``` + +Docker hub repositories: + +- [kubedb/operator](https://hub.docker.com/r/kubedb/operator) +- [kubedb/mariadb](https://hub.docker.com/r/kubedb/mariadb) +- [kubedb/mysqld-exporter](https://hub.docker.com/r/kubedb/mysqld-exporter) + +- Update KubeDB catalog for private Docker registry. Ex: + + ```yaml + apiVersion: catalog.kubedb.com/v1alpha1 + kind: MariaDBVersion + metadata: + name: 10.5.23 + spec: + db: + image: PRIVATE_REGISTRY/mariadb:10.5.23 + exporter: + image: PRIVATE_REGISTRY/mysqld-exporter:v0.11.0 + initContainer: + image: PRIVATE_REGISTRY/busybox + podSecurityPolicies: + databasePolicyName: maria-db + version: 10.5.23 + ``` + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +## Create ImagePullSecret + +ImagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, credentials for logging in and the image name of your private Docker image. + +Run the following command, substituting the appropriate uppercase values to create an image pull secret for your private Docker registry: + +```bash +$ kubectl create secret docker-registry -n demo myregistrykey \ + --docker-server=DOCKER_REGISTRY_SERVER \ + --docker-username=DOCKER_USER \ + --docker-email=DOCKER_EMAIL \ + --docker-password=DOCKER_PASSWORD +secret/myregistrykey created +``` + +If you wish to follow other ways to pull private images see [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes. + +NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
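 + +As a quick check (our suggestion, not part of the original steps), you can confirm that the secret was created with the expected type: + +```bash +$ kubectl get secret -n demo myregistrykey -o jsonpath='{.type}' +kubernetes.io/dockerconfigjson +```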
+ +## Install KubeDB operator + +When installing KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to the appropriate values. Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in your cluster so that it points to the DOCKER_REGISTRY you wish to pull images from. + +## Deploy MariaDB database from Private Registry + +While deploying `MariaDB` from a private repository, you have to add the `myregistrykey` secret to the `MariaDB` object's `spec.podTemplate.spec.imagePullSecrets` field. +Below is the MariaDB CRD object we will create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: md-pvt-reg + namespace: demo +spec: + version: "10.5.23" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey + terminationPolicy: WipeOut +``` + +Now run the command to deploy this `MariaDB` object: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/private-registry/quickstart/examples/demo.yaml +mariadb.kubedb.com/md-pvt-reg created +``` + +To check if the images were pulled successfully from the repository, see if the `MariaDB` pod is in `Running` state: + +```bash +$ kubectl get pods -n demo +NAME READY STATUS RESTARTS AGE +md-pvt-reg-0 1/1 Running 0 56s +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo md-pvt-reg +mariadb.kubedb.com "md-pvt-reg" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mariadb/quickstart/_index.md b/content/docs/v2024.1.31/guides/mariadb/quickstart/_index.md new file mode 100755 index 0000000000..45c56e033a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: MariaDB Quickstart +menu: + docs_v2024.1.31: + identifier: guides-mariadb-quickstart + name: Quickstart + parent: guides-mariadb + weight: 15 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..15265ca879 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/examples/sample-mariadb.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/images/mariadb-lifecycle.png b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/images/mariadb-lifecycle.png new file mode 100644 index 0000000000..86d604a8e3 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/images/mariadb-lifecycle.png differ diff --git a/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/index.md new file mode 100644 index 
0000000000..5484f3cb4c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/quickstart/overview/index.md @@ -0,0 +1,419 @@ +--- +title: MariaDB Quickstart +menu: + docs_v2024.1.31: + identifier: guides-mariadb-quickstart-overview + name: Overview + parent: guides-mariadb-quickstart + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MariaDB QuickStart + +This tutorial will show you how to use KubeDB to run a MariaDB database. + +
+<p align="center">
+  <img alt="lifecycle" src="images/mariadb-lifecycle.png">
+</p>
+
+> Note: The yaml files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/quickstart/overview/examples).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in your cluster.
+
+```bash
+$ kubectl get storageclasses
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  6h22m
+```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Find Available MariaDBVersion
+
+When you install KubeDB, it creates a `MariaDBVersion` CRD for all supported MariaDB versions. Check them by using the following command,
+
+```bash
+$ kubectl get mariadbversions
+NAME      VERSION   DB_IMAGE          DEPRECATED   AGE
+10.4.32   10.4.32   mariadb:10.4.32                9s
+10.5.23   10.5.23   mariadb:10.5.23                9s
+10.6.16   10.6.16   mariadb:10.6.16                9s
+```
+
+## Create a MariaDB database
+
+KubeDB implements a `MariaDB` CRD to define the specification of a MariaDB database. Below is the `MariaDB` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/quickstart/overview/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Here,
+
+- `spec.version` is the name of the MariaDBVersion CRD where the docker images are specified. In this tutorial, a MariaDB `10.5.23` database is going to be created.
+- `spec.storageType` specifies the type of storage that will be used for the MariaDB database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MariaDB database using an `emptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `MariaDB` CRD, or to specify which resources KubeDB should keep or delete when you delete the `MariaDB` CRD. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+> Note: The `spec.storage` section is used to create a PVC for the database pod. It will create a PVC with the storage size specified in the `storage.resources.requests` field.
Don't specify limits here. PVC does not get resized automatically.
+
+The KubeDB operator watches for `MariaDB` objects using the Kubernetes API. When a `MariaDB` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MariaDB object name. The KubeDB operator will also create a governing service for StatefulSets with the name `kubedb`, if one is not already present.
+
+```bash
+$ kubectl describe -n demo mariadb sample-mariadb
+Name:         sample-mariadb
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  kubedb.com/v1alpha2
+Kind:         MariaDB
+Metadata:
+  Creation Timestamp:  2022-06-06T04:42:27Z
+  Finalizers:
+    kubedb.com
+  Generation:  2
+  ...
+  Resource Version:  2673
+  UID:               2f9c9453-6e78-4521-91ea-34ad2da398bc
+Spec:
+  Allowed Schemas:
+    Namespaces:
+      From:  Same
+  Auth Secret:
+    Name:  sample-mariadb-auth
+  Coordinator:
+    Resources:
+  Pod Template:
+    ...
+  Replicas:  1
+  Storage:
+    Access Modes:
+      ReadWriteOnce
+    Resources:
+      Requests:
+        Storage:         1Gi
+    Storage Class Name:  standard
+  Storage Type:          Durable
+  Termination Policy:    WipeOut
+  Version:               10.5.23
+Status:
+  Conditions:
+    Last Transition Time:  2022-06-06T04:42:27Z
+    Message:               The KubeDB operator has started the provisioning of MariaDB: demo/sample-mariadb
+    Reason:                DatabaseProvisioningStartedSuccessfully
+    Status:                True
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2022-06-06T04:43:37Z
+    Message:               database sample-mariadb/demo is ready
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2022-06-06T04:43:37Z
+    Message:               database sample-mariadb/demo is accepting connection
+    Reason:                AcceptingConnection
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2022-06-06T04:43:26Z
+    Message:               All desired replicas are ready.
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2022-06-06T04:43:37Z
+    Message:               The MariaDB: demo/sample-mariadb is successfully provisioned.
+    Observed Generation:   2
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Observed Generation:     2
+  Phase:                   Ready
+Events:
+  Type    Reason      Age    From             Message
+  ----    ------      ----   ----             -------
+  Normal  Successful  3m49s  KubeDB Operator  Successfully created governing service
+  Normal  Successful  3m49s  KubeDB Operator  Successfully created Service
+  Normal  Successful  3m49s  KubeDB Operator  Successfully created StatefulSet demo/sample-mariadb
+  Normal  Successful  3m49s  KubeDB Operator  Successfully created MariaDB
+  Normal  Successful  3m49s  KubeDB Operator  Successfully created appbinding
+
+
+
+$ kubectl get statefulset -n demo
+NAME             READY   AGE
+sample-mariadb   1/1     27m
+
+$ kubectl get pvc -n demo
+NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+data-sample-mariadb-0   Bound    pvc-10651900-d975-467f-80ff-9c4755bdf917   1Gi        RWO            standard       27m
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
+pvc-10651900-d975-467f-80ff-9c4755bdf917   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-0   standard                27m
+
+$ kubectl get service -n demo
+NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+sample-mariadb        ClusterIP   10.105.207.172   <none>        3306/TCP   28m
+sample-mariadb-pods   ClusterIP   None             <none>        3306/TCP   28m
+```
+
+The KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created.
Run the following command to see the modified MariaDB object:
+
+```yaml
+$ kubectl get mariadb -n demo sample-mariadb -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MariaDB","metadata":{"annotations":{},"name":"sample-mariadb","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"10.5.23"}}
+  creationTimestamp: "2021-03-10T04:31:09Z"
+  finalizers:
+  - kubedb.com
+  generation: 2
+  ...
+  name: sample-mariadb
+  namespace: demo
+  resourceVersion: "7952"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/mariadbs/sample-mariadb
+  uid: 412a4739-ac65-4b5a-a943-5e148f3222b1
+spec:
+  authSecret:
+    name: sample-mariadb-auth
+  ...
+  replicas: 1
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Delete
+  version: 10.5.23
+status:
+  observedGeneration: 2
+  phase: Ready
+```
+
+## Connect with MariaDB database
+
+The KubeDB operator has created a new Secret called `sample-mariadb-auth` *(format: {mariadb-object-name}-auth)* for storing the password for the `mariadb` superuser. This secret contains a `username` key which holds the *username* for the MariaDB superuser and a `password` key which holds the *password* for the MariaDB superuser.
+
+If you want to use an existing secret, please specify it when creating the MariaDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`, and make sure to use `root` as the value of `username`.
+
+Now, we need the `username` and `password` to connect to this database from the `kubectl exec` command. In this example, the `sample-mariadb-auth` secret holds the username and password.
+
+```bash
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\password}' | base64 -d
+w*yOU$b53dTbjsjJ
+```
+
+We will exec into the pod `sample-mariadb-0` and connect to the database using the `username` and `password`.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- mariadb -u root --password='w*yOU$b53dTbjsjJ'
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 335
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+3 rows in set (0.001 sec)
+
+```
+
+## Database TerminationPolicy
+
+This field is used to regulate the deletion process of the related resources when the `MariaDB` object is deleted. Users can set the value of this field according to their needs. The available options and their use case scenarios are described below:
+
+**DoNotTerminate:**
+
+When `terminationPolicy` is set to `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature.
If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. If you create a database with `terminationPolicy` `DoNotTerminate` and try to delete it, you will see this:
+
+```bash
+$ kubectl delete mariadb sample-mariadb -n demo
+Error from server (BadRequest): admission webhook "mariadb.validators.kubedb.com" denied the request: mariadb "sample-mariadb" can't be halted. To delete, change spec.terminationPolicy
+```
+
+Now, run `kubectl edit mariadb sample-mariadb -n demo` to set `spec.terminationPolicy` to `Halt` (which deletes the mariadb object and keeps the PVC, snapshots, and Secrets intact) or remove this field (which defaults to `Delete`). Then you will be able to delete/halt the database.
+
+
+**Halt:**
+
+Suppose you want to reuse your database volume and credentials to deploy your database in the future using the same configuration. But, right now, you just want to delete the database except the database volumes and credentials. In this scenario, you must set the `MariaDB` object `terminationPolicy` to `Halt`.
+
+When the `terminationPolicy` is set to `Halt` and the MariaDB object is deleted, the KubeDB operator will delete the StatefulSet and its pods but leaves the `PVCs`, `secrets` and database backup data (`snapshots`) intact. You can set the `terminationPolicy` to `Halt` in an existing database using the `edit` command for testing.
+
+At first, run `kubectl edit mariadb sample-mariadb -n demo` to set `spec.terminationPolicy` to `Halt`. Then delete the mariadb object,
+
+```bash
+$ kubectl delete mariadb sample-mariadb -n demo
+mariadb.kubedb.com "sample-mariadb" deleted
+```
+
+Now, run the following command to get all mariadb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-w2pgw   kubernetes.io/service-account-token   3      31m
+secret/sample-mariadb-auth   kubernetes.io/basic-auth              2      39s
+
+NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-sample-mariadb-0   Bound    pvc-7502c222-2b02-4363-9027-91ab0e7b76dc   1Gi        RWO            standard       39s
+```
+
+From the above output, you can see that all mariadb resources (`StatefulSet`, `Service`, etc.) are deleted except the `PVC` and the `Secret`. You can recreate your mariadb using these resources.
+
+**Delete:**
+
+If you want to delete the existing database along with the volumes used, but want to restore the database from previously taken `snapshots` and `secrets`, then you might want to set the `MariaDB` object `terminationPolicy` to `Delete`. In this setting, the `StatefulSet` and the volumes will be deleted. If you decide to restore the database, you can do so using the snapshots and the credentials.
+
+When the `terminationPolicy` is set to `Delete` and the MariaDB object is deleted, the KubeDB operator will delete the StatefulSet and its pods along with the PVCs, but leaves the `secret` and database backup data (`snapshots`) intact.
+
+Suppose, we have a database with `terminationPolicy` set to `Delete`.
Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete mariadb sample-mariadb -n demo
+mariadb.kubedb.com "sample-mariadb" deleted
+```
+
+Now, run the following command to get all mariadb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-w2pgw   kubernetes.io/service-account-token   3      31m
+secret/sample-mariadb-auth   kubernetes.io/basic-auth              2      39s
+```
+
+From the above output, you can see that all mariadb resources (`StatefulSet`, `Service`, `PVCs`, etc.) are deleted except the `Secret`. You can initialize your mariadb using `snapshots` (if previously taken) and the `secret`.
+
+>If you don't set the `terminationPolicy`, then KubeDB sets the `terminationPolicy` to `Delete` by default.
+
+**WipeOut:**
+
+You can totally delete the `MariaDB` database and relevant resources without any tracking by setting `terminationPolicy` to `WipeOut`. The KubeDB operator will delete all relevant resources of this `MariaDB` database (i.e., `PVCs`, `Secrets`, `Snapshots`) when the `terminationPolicy` is set to `WipeOut`.
+
+Suppose, we have a database with `terminationPolicy` set to `WipeOut`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete mariadb sample-mariadb -n demo
+mariadb.kubedb.com "sample-mariadb" deleted
+```
+
+Now, run the following command to get all mariadb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+No resources found in demo namespace.
+```
+
+From the above output, you can see that all mariadb resources are deleted. There is no option to recreate/reinitialize your database if `terminationPolicy` is set to `WipeOut`.
+
+>Be careful when you set the `terminationPolicy` to `WipeOut`, because there is no option to trace the database resources once the database is deleted.
+
+## Database Halted
+
+If you want to delete the MariaDB resources (`StatefulSet`, `Service`, etc.) without deleting the `MariaDB` object, `PVCs` and `Secret`, you have to set `spec.halted` to `true`. The KubeDB operator will then delete the MariaDB-related resources except the `MariaDB` object, `PVCs` and `Secret`.
+
+Suppose we have a database named `mariadb-quickstart` running in our cluster. Now, we are going to set `spec.halted` to `true` in the `MariaDB` object by running the `kubectl edit -n demo mariadb/mariadb-quickstart` command.
+
+Run the following command to get MariaDB resources,
+
+```bash
+$ kubectl get mariadb,sts,secret,svc,pvc -n demo
+NAME                                    VERSION   STATUS   AGE
+mariadb.kubedb.com/mariadb-quickstart   10.5.23   Halted   22m
+
+NAME                             TYPE                                  DATA   AGE
+secret/default-token-lgbjm       kubernetes.io/service-account-token   3      27h
+secret/mariadb-quickstart-auth   Opaque                                2      22m
+
+NAME                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-mariadb-quickstart-0   Bound    pvc-7ab0ebb0-bb2e-45c1-9af1-4f175672605b   1Gi        RWO            standard       22m
+```
+
+From the above output, you can see that the `MariaDB` object, `PVCs`, and `Secret` are still alive. You can then recreate your `MariaDB` with the same configuration.
+
+>When you set `spec.halted` to `true` in the `MariaDB` object, the `terminationPolicy` is also set to `Halt` by the KubeDB operator.
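+If you prefer not to open an editor, the same change can be made with a one-line patch. This is just a sketch using `kubectl patch` with a merge patch against the `mariadb-quickstart` object from above:
+
+```bash
+# Set spec.halted to true without opening an editor
+$ kubectl patch -n demo mariadb/mariadb-quickstart --type=merge -p '{"spec":{"halted":true}}'
+mariadb.kubedb.com/mariadb-quickstart patched
+```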
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo mariadb/sample-mariadb
+
+kubectl delete ns demo
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for a production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to delete everything created by KubeDB for a particular MariaDB CRD when you delete the CRD.
+
+## Next Steps
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/_index.md
new file mode 100644
index 0000000000..a0d73f04aa
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/_index.md
@@ -0,0 +1,22 @@
+---
+title: Reconfigure MariaDB TLS/SSL
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-reconfigure-tls
+    name: Reconfigure TLS/SSL
+    parent: guides-mariadb
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/issuer.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/issuer.yaml
new file mode 100644
index 0000000000..7d8dc476c3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: md-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: md-ca
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-add-tls.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-add-tls.yaml
new file mode 100644
index 0000000000..1113eccab5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-add-tls.yaml
@@ -0,0 +1,24 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-mariadb
+  tls:
+    requireSSL: true
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-remove-tls.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-remove-tls.yaml
new file mode 100644
index 0000000000..658aaad141
--- /dev/null
+++
b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-remove-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-remove-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + remove: true diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-rotate-tls.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-rotate-tls.yaml new file mode 100644 index 0000000000..cf2d7f417d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-rotate-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-rotate-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + rotateCertificates: true diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-update-tls.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-update-tls.yaml new file mode 100644 index 0000000000..dc09605174 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/mdops-update-tls.yaml @@ -0,0 +1,17 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-update-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + certificates: + - alias: server + subject: + organizations: + - kubedb:server + emailAddresses: + - "kubedb@appscode.com" diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/sample-mariadb.yaml new file mode 100644 index 0000000000..0f4f3c241f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/examples/sample-mariadb.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/index.md new file mode 100644 index 0000000000..a63b830ec6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/cluster/index.md @@ -0,0 +1,583 @@ +--- +title: Reconfigure MariaDB TLS/SSL Encryption +menu: + docs_v2024.1.31: + identifier: guides-mariadb-reconfigure-tls-cluster + name: Reconfigure MariaDB TLS/SSL Encryption + parent: guides-mariadb-reconfigure-tls + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfigure MariaDB TLS/SSL (Transport Encryption) + +KubeDB supports reconfigure i.e. add, remove, update and rotation of TLS/SSL certificates for existing MariaDB database via a MariaDBOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption. 
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.6.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Add TLS to a MariaDB Cluster
+
+Here, we are going to create a MariaDB database without TLS and then reconfigure the database to use TLS.
+> **Note:** Steps for reconfiguring TLS of a MariaDB `Standalone` are the same as for a MariaDB `Cluster`.
+
+### Deploy MariaDB without TLS
+
+In this section, we are going to deploy a MariaDB Cluster database without TLS. In the next few sections, we will reconfigure TLS using the `MariaDBOpsRequest` CRD. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    9m17s
+```
+
+```bash
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\password}' | base64 -d
+U6(h_pYrekLZ2OOd
+
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 108
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+-----------------------------+
+| Variable_name       | Value                       |
++---------------------+-----------------------------+
+| have_openssl        | YES                         |
+| have_ssl            | DISABLED                    |
+| ssl_ca              |                             |
+| ssl_capath          |                             |
+| ssl_cert            |                             |
+| ssl_cipher          |                             |
+| ssl_crl             |                             |
+| ssl_crlpath         |                             |
+| ssl_key             |                             |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020 |
++---------------------+-----------------------------+
+10 rows in set (0.001 sec)
+
+```
+
+We can verify from the above output that TLS is disabled for this database.
+
+### Create Issuer/ ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial.
Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mariadb/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+$ kubectl create secret tls md-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/md-ca created
+```
+
+Now, we are going to create an `Issuer` using the `md-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: md-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: md-ca
+```
+
+Let's create the `Issuer` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/issuer.yaml
+issuer.cert-manager.io/md-issuer created
+```
+
+### Create MariaDBOpsRequest
+
+In order to add TLS to the database, we have to create a `MariaDBOpsRequest` CRO with our created issuer. Below is the YAML of the `MariaDBOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-mariadb
+  tls:
+    requireSSL: true
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `requireSSL` specifies that the clients connecting to the server are required to use a secured connection.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#spectls).
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/mdops-add-tls.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-add-tls created
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CRO,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME            TYPE             STATUS       AGE
+demo        mdops-add-tls   ReconfigureTLS   Successful   6m6s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded.
+
+Now, we are going to connect to the database to verify that the `MariaDB` server has been configured with TLS/SSL encryption.
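+Before going into the pod, you can optionally confirm that cert-manager has issued the certificates the operator requested. A quick sanity check — the exact certificate names may vary by version (the server certificate name is shown in the describe output later in this guide):
+
+```bash
+$ kubectl get certificates -n demo
+NAME                         READY   SECRET                       AGE
+sample-mariadb-client-cert   True    sample-mariadb-client-cert   2m
+sample-mariadb-server-cert   True    sample-mariadb-server-cert   2m
+```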
+
+Let's exec into the pod to verify the TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ ls /etc/mysql/certs/client
+ca.crt  tls.crt  tls.key
+root@sample-mariadb-0:/ ls /etc/mysql/certs/server
+ca.crt  tls.crt  tls.key
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 58
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+---------------------------------+
+| Variable_name       | Value                           |
++---------------------+---------------------------------+
+| have_openssl        | YES                             |
+| have_ssl            | YES                             |
+| ssl_ca              | /etc/mysql/certs/server/ca.crt  |
+| ssl_capath          | /etc/mysql/certs/server         |
+| ssl_cert            | /etc/mysql/certs/server/tls.crt |
+| ssl_cipher          |                                 |
+| ssl_crl             |                                 |
+| ssl_crlpath         |                                 |
+| ssl_key             | /etc/mysql/certs/server/tls.key |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020     |
++---------------------+---------------------------------+
+10 rows in set (0.005 sec)
+
+MariaDB [(none)]> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.005 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+We can see from the above output that `have_ssl` is set to `YES`. So, TLS has been enabled successfully for this database.
+
+> Note: Adding or updating TLS with `requireSSL: true` will cause downtime of the database while the `MariaDBOpsRequest` is in `Progressing` status.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ apt update
+root@sample-mariadb-0:/ apt install openssl
+root@sample-mariadb-0:/ openssl x509 -in /etc/mysql/certs/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Apr 13 05:18:43 2022 GMT
+```
+
+So, the certificate will expire at `Apr 13 05:18:43 2022 GMT`.
+
+### Create MariaDBOpsRequest
+
+Now we are going to rotate it using a MariaDBOpsRequest, which will renew the certificate and push the expiration date back. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-rotate-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-mariadb
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
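+You can also check a certificate's expiry without exec-ing into a pod by reading it straight from the Kubernetes secret. A sketch of that alternative, using the `sample-mariadb-server-cert` secret that holds the server certificate:
+
+```bash
+# Decode the server certificate from its secret and print the expiry date
+$ kubectl get secret -n demo sample-mariadb-server-cert \
+    -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
+notAfter=Apr 13 05:18:43 2022 GMT
+```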
+ +Let's create the `MariaDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/mdops-rotate-tls.yaml +mariadbopsrequest.ops.kubedb.com/mdops-rotate-tls created +``` + +#### Verify Certificate Rotated Successfully + +Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CRO, + +```bash +$ kubectl get mariadbopsrequest --all-namespaces +NAMESPACE NAME TYPE STATUS AGE +demo mdops-rotate-tls ReconfigureTLS Successful 3m +``` + +We can see from the above output that the `MariaDBOpsRequest` has succeeded. Now, let's check the expiration date of the certificate. + +```bash +$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash +root@sample-mariadb-0:/ apt update +root@sample-mariadb-0:/ apt install openssl +root@sample-mariadb-0:/# openssl x509 -in /etc/mysql/certs/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Apr 13 06:04:50 2022 GMT +``` + +As we can see from the above output, the certificate has been rotated successfully. + +## Update Certificate + +Now, we are going to update the server certificate. + +- Let's describe the server certificate `sample-mariadb-server-cert` +```bash +$ kubectl describe certificate -n demo sample-mariadb-server-cert +Name: sample-mariadb-server-cert +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=sample-mariadb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mariadbs.kubedb.com +Annotations: +API Version: cert-manager.io/v1 +Kind: Certificate +Metadata: + Creation Timestamp: 2022-01-13T05:18:42Z + Generation: 1 + ... + Owner References: + API Version: kubedb.com/v1alpha2 + Block Owner Deletion: true + Controller: true + Kind: MariaDB + Name: sample-mariadb + UID: ed8f45c7-7caf-4890-8a9c-b8437b6ca48b + Resource Version: 241340 + UID: 3343e971-395d-46df-9536-47194eb96dcc +Spec: + Common Name: sample-mariadb.demo.svc + Dns Names: + *.sample-mariadb-pods.demo.svc + *.sample-mariadb-pods.demo.svc.cluster.local + *.sample-mariadb.demo.svc + localhost + sample-mariadb + sample-mariadb.demo.svc + Ip Addresses: + 127.0.0.1 + Issuer Ref: + Group: cert-manager.io + Kind: Issuer + Name: md-issuer + Secret Name: sample-mariadb-server-cert + Subject: + Organizations: + kubedb:server + Usages: + digital signature + key encipherment + server auth + client auth +Status: + Conditions: + Last Transition Time: 2022-01-13T05:18:43Z + Message: Certificate is up to date and has not expired + Observed Generation: 1 + Reason: Ready + Status: True + Type: Ready + Not After: 2022-04-13T06:04:50Z + Not Before: 2022-01-13T06:04:50Z + Renewal Time: 2022-03-14T06:04:50Z + Revision: 6 +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-mariadb-server-cert-8tnj5" + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-mariadb-server-cert-fw6sk" + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-mariadb-server-cert-cvphm" + Normal Requested 20m cert-manager Created new CertificateRequest resource "sample-mariadb-server-cert-nvhp6" + Normal Requested 19m cert-manager Created new CertificateRequest resource "sample-mariadb-server-cert-p5287" + Normal Reused 19m (x5 over 22m) cert-manager Reusing private key stored in existing Secret resource 
"sample-mariadb-server-cert" + Normal Issuing 19m (x6 over 65m) cert-manager The certificate has been successfully issued +``` + +We want to add `subject` and `emailAddresses` in the spec of server sertificate. + +### Create MariaDBOpsRequest + +Below is the YAML of the `MariaDBOpsRequest` CRO that we are going to create ton update the server certificate, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-update-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + certificates: + - alias: server + subject: + organizations: + - kubedb:server + emailAddresses: + - "kubedb@appscode.com" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-mariadb` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. +- `spec.tls.certificates` specifies the changes that we want in certificate objects. +- `spec.tls.certificates[].alias` specifies the certificate type which is one of these: `server`, `client`, `metrics-exporter`. + +Let's create the `MariaDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/mdops-update-tls.yaml +mariadbopsrequest.ops.kubedb.com/mdops-update-tls created +``` + +#### Verify certificate is updated successfully + +Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CRO, + +```bash +$ kubectl get mariadbopsrequest -n demo +Every 2.0s: kubectl get mariadbopsrequest -n demo +NAME TYPE STATUS AGE +mdops-update-tls ReconfigureTLS Successful 7m + +``` + +We can see from the above output that the `MariaDBOpsRequest` has succeeded. + +Now, Let's exec into a database node and find out the ca subject to see if it matches the one we have provided. + +```bash +$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash +root@sample-mariadb-0:/ apt update +root@sample-mariadb-0:/ apt install openssl +root@sample-mariadb-0:/ openssl x509 -in /etc/mysql/certs/server/tls.crt -inform PEM -subject -email -nameopt RFC2253 -noout +subject=CN=sample-mariadb.demo.svc,O=kubedb:server +kubedb@appscode.com +``` + +We can see from the above output that, the subject name and email address match with the new ca certificate that we have created. So, the issuer is changed successfully. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a MariaDBOpsRequest. + +### Create MariaDBOpsRequest + +Below is the YAML of the `MariaDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-remove-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-mariadb + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-mariadb` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.remove` specifies that we want to remove tls from this database. 
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure-tls/cluster/examples/mdops-remove-tls.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-remove-tls created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CRO,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME               TYPE             STATUS       AGE
+demo        mdops-remove-tls   ReconfigureTLS   Successful   6m27s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest`, we will get an overview of the steps that were followed.
+
+Now, let's exec into the database and check whether TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 108
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+-----------------------------+
+| Variable_name       | Value                       |
++---------------------+-----------------------------+
+| have_openssl        | YES                         |
+| have_ssl            | DISABLED                    |
+| ssl_ca              |                             |
+| ssl_capath          |                             |
+| ssl_cert            |                             |
+| ssl_cipher          |                             |
+| ssl_crl             |                             |
+| ssl_crlpath         |                             |
+| ssl_key             |                             |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020 |
++---------------------+-----------------------------+
+10 rows in set (0.001 sec)
+
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo --all
+$ kubectl delete issuer -n demo --all
+$ kubectl delete mariadbopsrequest -n demo --all
+$ kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/images/reconfigure-tls.jpeg b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/images/reconfigure-tls.jpeg
new file mode 100644
index 0000000000..57791b06e4
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/images/reconfigure-tls.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/index.md
new file mode 100644
index 0000000000..4e1af62e56
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure-tls/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring TLS of MariaDB Database
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-reconfigure-tls-overview
+    name: Overview
+    parent: guides-mariadb-reconfigure-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring TLS of MariaDB Database
+
+This guide will give an overview of how KubeDB Ops Manager reconfigures the TLS configuration of a `MariaDB` database, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Reconfiguring MariaDB TLS Configuration Process Works
+
+The following diagram shows how KubeDB Ops Manager reconfigures TLS of a `MariaDB` database. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Reconfiguring TLS process of MariaDB" src="images/reconfigure-tls.jpeg">
+  <figcaption align="center">Fig: Reconfiguring TLS process of MariaDB</figcaption>
+</figure>
+ +The Reconfiguring MariaDB TLS process consists of the following steps: + +1. At first, a user creates a `MariaDB` Custom Resource Object (CRO). + +2. `KubeDB` Community operator watches the `MariaDB` CRO. + +3. When the operator finds a `MariaDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the TLS configuration of the `MariaDB` database the user creates a `MariaDBOpsRequest` CR with desired information. + +5. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR. + +6. When it finds a `MariaDBOpsRequest` CR, it pauses the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the reconfiguring TLS process. + +7. Then the `KubeDB` Enterprise operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml. + +8. Then the `KubeDB` Enterprise operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `MariaDBOpsRequest` CR. + +9. After the successful reconfiguring of the `MariaDB` TLS, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a MariaDB database using `MariaDBOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/_index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure/_index.md new file mode 100644 index 0000000000..627d58118f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure +menu: + docs_v2024.1.31: + identifier: guides-mariadb-reconfigure + name: Reconfigure + parent: guides-mariadb + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/md-config.cnf b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/md-config.cnf new file mode 100644 index 0000000000..ccd87f160c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/md-config.cnf @@ -0,0 +1,3 @@ +[mysqld] +max_connections = 200 +read_buffer_size = 1048576 diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/mdops-reconfigure-apply-config.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/mdops-reconfigure-apply-config.yaml new file mode 100644 index 0000000000..02d61acbc0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/mdops-reconfigure-apply-config.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-reconfigure-apply-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + applyConfig: + new-md-config.cnf: | + [mysqld] + max_connections = 230 + read_buffer_size = 1064960 + innodb-config.cnf: | + [mysqld] + innodb_log_buffer_size = 17408000 diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/new-md-config.cnf 
b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/new-md-config.cnf new file mode 100644 index 0000000000..7e27973b35 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/new-md-config.cnf @@ -0,0 +1,3 @@ +[mysqld] +max_connections = 250 +read_buffer_size = 122880 diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-remove.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-remove.yaml new file mode 100644 index 0000000000..d354522332 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-reconfigure-remove + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + removeCustomConfig: true diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-using-secret.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-using-secret.yaml new file mode 100644 index 0000000000..64d28c813e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/reconfigure-using-secret.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-reconfigure-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + configSecret: + name: new-md-configuration diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/sample-mariadb-config.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/sample-mariadb-config.yaml new file mode 100644 index 0000000000..b1c3f3e4ed --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/examples/sample-mariadb-config.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.6.16" + replicas: 3 + configSecret: + name: md-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/index.md new file mode 100644 index 0000000000..2726963880 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/cluster/index.md @@ -0,0 +1,607 @@ +--- +title: Reconfigure MariaDB Cluster +menu: + docs_v2024.1.31: + identifier: guides-mariadb-reconfigure-cluster + name: Cluster + parent: guides-mariadb-reconfigure + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfigure MariaDB Cluster Database + +This guide will show you how to use `KubeDB` Enterprise operator to reconfigure a MariaDB Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. 
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDB Cluster](/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+  - [Reconfigure Overview](/docs/v2024.1.31/guides/mariadb/reconfigure/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+Now, we are going to deploy a `MariaDB` Cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MariaDBOpsRequest` to reconfigure its configuration.
+
+### Prepare MariaDB Cluster
+
+Now, we are going to deploy a `MariaDB` Cluster database with version `10.6.16`.
+
+### Deploy MariaDB
+
+At first, we will create a `md-config.cnf` file containing the required configuration settings.
+
+```ini
+$ cat md-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `max_connections` is set to `200`, whereas the default value is `151`. Likewise, `read_buffer_size` has the default value `131072`.
+
+Now, we will create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo md-configuration --from-file=./md-config.cnf
+secret/md-configuration created
+```
+
+In this section, we are going to create a MariaDB object specifying the `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.6.16"
+  replicas: 3
+  configSecret:
+    name: md-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/cluster/examples/sample-mariadb-config.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.6.16   Ready    71s
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+First, we need to get the username and password to connect to a mariadb instance,
+
+```bash
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\password}' | base64 -d
+nrKuxni0wDSMrgwy
+```
+
+Now, let's connect to the database and verify the configuration:
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 23
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is same as provided
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 200   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+# value of `read_buffer_size` is same as provided
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 1048576 |
++------------------+---------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see from the configuration of the running mariadb, the value of `max_connections` has been set to `200` and `read_buffer_size` has been set to `1048576`.
+
+### Reconfigure using new config secret
+
+Now we will reconfigure this database to set `max_connections` to `250` and `read_buffer_size` to `122880`.
+
+Now, we will create a new file `new-md-config.cnf` containing the required configuration settings.
+
+```ini
+$ cat new-md-config.cnf
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-md-configuration --from-file=./new-md-config.cnf
+secret/new-md-configuration created
+```
+
+#### Create MariaDBOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MariaDBOpsRequest` CR. The `MariaDBOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    configSecret:
+      name: new-md-configuration
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.configSecret.name` specifies the name of the new secret.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/cluster/examples/reconfigure-using-secret.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-config created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, the `KubeDB` Enterprise operator will update the `configSecret` of the `MariaDB` object.
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME                       TYPE          STATUS       AGE
+demo        mdops-reconfigure-config   Reconfigure   Successful   3m8s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest`, we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe mariadbopsrequest -n demo mdops-reconfigure-config
+Name:         mdops-reconfigure-config
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MariaDBOpsRequest
+Metadata:
+  Creation Timestamp:  2022-06-10T04:43:50Z
+  Generation:          1
+  Resource Version:    1123451
+  UID:                 27a73fc6-1d25-4019-8975-f7d4daf782b7
+Spec:
+  Configuration:
+    Config Secret:
+      Name:  new-md-configuration
+  Database Ref:
+    Name:  sample-mariadb
+  Type:    Reconfigure
+Status:
+  Conditions:
+    Last Transition Time:  2022-06-10T04:43:50Z
+    Message:               Controller has started to Progress the MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-06-10T04:47:25Z
+    Message:               Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                SuccessfullyRestatedStatefulSet
+    Status:                True
+    Type:                  RestartStatefulSetPods
+    Last Transition Time:  2022-06-10T04:47:30Z
+    Message:               Successfully reconfigured MariaDB for MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                SuccessfullyDBReconfigured
+    Status:                True
+    Type:                  DBReady
+    Last Transition Time:  2022-06-10T04:47:30Z
+    Message:               Controller has successfully reconfigure the MariaDB demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 23
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is same as provided
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 250   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+# value of `read_buffer_size` is same as provided
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 122880  |
++------------------+---------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see, the configuration has changed: the value of `max_connections` has been changed from `200` to `250`, and `read_buffer_size` has been changed from `1048576` to `122880`. So the reconfiguration of the database is successful.
+
+
+### Reconfigure Existing Config Secret
+
+Now, we will create a new `MariaDBOpsRequest` to reconfigure our existing secret `new-md-configuration` by modifying our `new-md-config.cnf` file using `applyConfig`.
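+If you want to see what the secret currently contains before patching it, you can decode it directly. This is a quick sketch; the key name `new-md-config.cnf` comes from the file we used to create the secret earlier:
+
+```bash
+# decode the current contents of the config secret
+$ kubectl get secret -n demo new-md-configuration -o jsonpath='{.data.new-md-config\.cnf}' | base64 -d
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```
+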
The `MariaDBOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-apply-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    applyConfig:
+      new-md-config.cnf: |
+        [mysqld]
+        max_connections = 230
+        read_buffer_size = 1064960
+      innodb-config.cnf: |
+        [mysqld]
+        innodb_log_buffer_size = 17408000
+```
+> Note: You can modify multiple fields of your current configuration using `applyConfig`. If you don't have any secrets, then `applyConfig` will create a secret for you. Here, we modified the values of two existing fields, `max_connections` and `read_buffer_size`, and also added a new field, `innodb_log_buffer_size`, to our configuration.
+
+Here,
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.applyConfig` contains the configuration for the existing or newly created secret.
+
+Before applying this yaml, we are going to check the current value of our new field,
+
+```bash
+$ kubectl exec -it sample-mariadb-0 -n demo -c mariadb -- bash
+root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 23
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 16777216 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+Here, we can see the default value for `innodb_log_buffer_size` is `16777216`.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/cluster/examples/mdops-reconfigure-apply-config.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-apply-config created
+```
+
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MariaDB` object.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest mdops-reconfigure-apply-config -n demo
+NAME                             TYPE          STATUS       AGE
+mdops-reconfigure-apply-config   Reconfigure   Successful   4m59s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+ +```bash +$ kubectl describe mariadbopsrequest -n demo mdops-reconfigure-apply-config +Name: mdops-reconfigure-apply-config +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MariaDBOpsRequest +Metadata: + Creation Timestamp: 2022-06-10T09:13:49Z + Generation: 1 + Resource Version: 14120 + UID: eb8d5df5-a0ce-4011-890c-c18c0200b5ac +Spec: + Configuration: + Apply Config: + innodb-config.cnf: [mysqld] +innodb_log_buffer_size = 17408000 + + new-md-config.cnf: [mysqld] +max_connections = 230 +read_buffer_size = 1064960 + + Database Ref: + Name: sample-mariadb + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2022-06-10T09:13:49Z + Message: Controller has started to Progress the MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-10T09:13:49Z + Message: Successfully prepared user provided custom config secret + Observed Generation: 1 + Reason: PrepareSecureCustomConfig + Status: True + Type: PrepareCustomConfig + Last Transition Time: 2022-06-10T09:17:24Z + Message: Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSetPods + Last Transition Time: 2022-06-10T09:17:29Z + Message: Successfully reconfigured MariaDB for MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyDBReconfigured + Status: True + Type: DBReady + Last Transition Time: 2022-06-10T09:17:29Z + Message: Controller has successfully reconfigure the MariaDB demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +``` + +Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-mariadb-0 -- bash +root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 23 +Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
+
+# value of `max_connections` is same as provided
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 230   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+# value of `read_buffer_size` is same as provided
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 1064960 |
++------------------+---------+
+1 row in set (0.001 sec)
+
+# value of `innodb_log_buffer_size` is same as provided
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 17408000 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see from the above, the configuration has been changed: the value of `max_connections` has been changed from `250` to `230`, `read_buffer_size` from `122880` to `1064960`, and `innodb_log_buffer_size` from `16777216` to `17408000`. So the reconfiguration of the `sample-mariadb` database is successful.
+
+
+### Remove Custom Configuration
+
+We can also remove the existing custom config using `MariaDBOpsRequest`. Set the field `spec.configuration.removeCustomConfig` to `true` and make an Ops Request to remove the existing custom configuration.
+
+#### Create MariaDBOpsRequest
+
+Let's create a `MariaDBOpsRequest` with `spec.configuration.removeCustomConfig` set to `true`,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-remove
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    removeCustomConfig: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.removeCustomConfig` is a bool field that should be `true` when you want to remove the existing custom configuration.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/cluster/examples/reconfigure-remove.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-remove created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MariaDB` object.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME                       TYPE          STATUS       AGE
+demo        mdops-reconfigure-remove   Reconfigure   Successful   2m1s
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 23
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is default
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 151   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+# value of `read_buffer_size` is default
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 131072  |
++------------------+---------+
+1 row in set (0.001 sec)
+
+# value of `innodb_log_buffer_size` is default
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 16777216 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see, the configuration has been reset to its default values. So the removal of the existing custom configuration using `MariaDBOpsRequest` is successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+$ kubectl delete mariadbopsrequest -n demo mdops-reconfigure-config mdops-reconfigure-apply-config mdops-reconfigure-remove
+$ kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/images/reconfigure.jpeg b/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/images/reconfigure.jpeg
new file mode 100644
index 0000000000..824e74c267
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/images/reconfigure.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/index.md
new file mode 100644
index 0000000000..71e8b4f63c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring MariaDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-reconfigure-overview
+    name: Overview
+    parent: guides-mariadb-reconfigure
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring MariaDB
+
+This guide will give an overview of how KubeDB Ops Manager reconfigures `MariaDB`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Reconfiguring MariaDB Process Works
+
+The following diagram shows how KubeDB Ops Manager reconfigures `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Reconfiguring process of MariaDB" src="images/reconfigure.jpeg">
+  <figcaption align="center">Fig: Reconfiguring process of MariaDB</figcaption>
+</figure>
+
+The Reconfiguring MariaDB process consists of the following steps:
+
+1. At first, a user creates a `MariaDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MariaDB` CR.
+
+3. When the operator finds a `MariaDB` CR, it creates the required number of `StatefulSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the `MariaDB` standalone or cluster, the user creates a `MariaDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR.
+
+6. When it finds a `MariaDBOpsRequest` CR, it halts the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the reconfiguring process.
+
+7. Then the `KubeDB` Enterprise operator will replace the existing configuration with the new configuration provided or merge the new configuration with the existing configuration according to the `MariaDBOpsRequest` CR.
+
+8. Then the `KubeDB` Enterprise operator will restart the related StatefulSet Pods so that they restart with the new configuration defined in the `MariaDBOpsRequest` CR.
+
+9. After the successful reconfiguring of the `MariaDB`, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring MariaDB database components using `MariaDBOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/md-config.cnf b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/md-config.cnf
new file mode 100644
index 0000000000..ccd87f160c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/md-config.cnf
@@ -0,0 +1,3 @@
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/mdops-reconfigure-apply-config.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/mdops-reconfigure-apply-config.yaml
new file mode 100644
index 0000000000..02d61acbc0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/mdops-reconfigure-apply-config.yaml
@@ -0,0 +1,18 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-apply-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    applyConfig:
+      new-md-config.cnf: |
+        [mysqld]
+        max_connections = 230
+        read_buffer_size = 1064960
+      innodb-config.cnf: |
+        [mysqld]
+        innodb_log_buffer_size = 17408000
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/new-md-config.cnf b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/new-md-config.cnf
new file mode 100644
index 0000000000..7e27973b35
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/new-md-config.cnf
@@ -0,0 +1,3 @@
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-remove.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-remove.yaml
new file mode 100644
index 0000000000..d354522332
--- /dev/null
+++
b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-reconfigure-remove + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + removeCustomConfig: true diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-using-secret.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-using-secret.yaml new file mode 100644 index 0000000000..64d28c813e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/reconfigure-using-secret.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-reconfigure-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-mariadb + configuration: + configSecret: + name: new-md-configuration diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/sample-mariadb-config.yaml b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/sample-mariadb-config.yaml new file mode 100644 index 0000000000..d5a0039eb7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/examples/sample-mariadb-config.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.6.16" + configSecret: + name: md-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + diff --git a/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/index.md b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/index.md new file mode 100644 index 0000000000..7220b2f365 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/reconfigure/standalone/index.md @@ -0,0 +1,598 @@ +--- +title: Reconfigure MariaDB Standalone +menu: + docs_v2024.1.31: + identifier: guides-mariadb-reconfigure-standalone + name: Standalone + parent: guides-mariadb-reconfigure + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfigure MariaDB Standalone Database + +This guide will show you how to use `KubeDB` Enterprise operator to reconfigure a MariaDB Standalone. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb) + - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest) + - [Reconfigure Overview](/docs/v2024.1.31/guides/mariadb/reconfigure/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+Now, we are going to deploy a `MariaDB` Standalone using a supported version by `KubeDB` operator. Then we are going to apply `MariaDBOpsRequest` to reconfigure its configuration.
+
+### Prepare MariaDB Standalone
+
+Now, we are going to deploy a `MariaDB` Standalone database with version `10.6.16`.
+
+### Deploy MariaDB
+
+At first, we will create a `md-config.cnf` file containing the required configuration settings.
+
+```ini
+$ cat md-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `max_connections` is set to `200`, whereas the default value is `151`. Likewise, `read_buffer_size` is set to `1048576`, whereas the default value is `131072`.
+
+Now, we will create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo md-configuration --from-file=./md-config.cnf
+secret/md-configuration created
+```
+
+In this section, we are going to create a MariaDB object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.6.16"
+  configSecret:
+    name: md-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/standalone/examples/sample-mariadb-config.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.6.16   Ready    61s
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+First, we need to get the username and password to connect to a mariadb instance,
+
+```bash
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\password}' | base64 -d
+PlWA6JNLkNFudl4I
+```
+
+Now, let's exec into the database pod and verify that the custom configuration has been applied.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 11
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
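+
+# value of `max_connections` is same as provided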
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 200   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 1048576 |
++------------------+---------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see from the configuration of the running MariaDB, the value of `max_connections` has been set to `200` and `read_buffer_size` has been set to `1048576`.
+
+### Reconfigure using new config secret
+
+Now we will reconfigure this database to set `max_connections` to `250` and `read_buffer_size` to `122880`.
+
+Now, we will create a new file `new-md-config.cnf` containing the required configuration settings.
+
+```ini
+$ cat new-md-config.cnf
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-md-configuration --from-file=./new-md-config.cnf
+secret/new-md-configuration created
+```
+
+#### Create MariaDBOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MariaDBOpsRequest` CR. The `MariaDBOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    configSecret:
+      name: new-md-configuration
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.configSecret.name` specifies the name of the new secret.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/standalone/examples/reconfigure-using-secret.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-config created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MariaDB` object.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME                       TYPE          STATUS       AGE
+demo        mdops-reconfigure-config   Reconfigure   Successful   2m8s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe mariadbopsrequest -n demo mdops-reconfigure-config
+Name:         mdops-reconfigure-config
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MariaDBOpsRequest
+Metadata:
+  Creation Timestamp:  2022-06-14T10:56:01Z
+  Generation:          1
+  Resource Version:    21589
+  UID:                 43997fe8-fa12-4d38-a29f-d101889d4e72
+Spec:
+  Configuration:
+    Config Secret:
+      Name:  new-md-configuration
+  Database Ref:
+    Name:  sample-mariadb
+  Type:    Reconfigure
+Status:
+  Conditions:
+    Last Transition Time:  2022-06-14T10:56:01Z
+    Message:               Controller has started to Progress the MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-06-14T10:56:11Z
+    Message:               Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                SuccessfullyRestatedStatefulSet
+    Status:                True
+    Type:                  RestartStatefulSetPods
+    Last Transition Time:  2022-06-14T10:56:16Z
+    Message:               Successfully reconfigured MariaDB for MariaDBOpsRequest: demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                SuccessfullyDBReconfigured
+    Status:                True
+    Type:                  DBReady
+    Last Transition Time:  2022-06-14T10:56:16Z
+    Message:               Controller has successfully reconfigure the MariaDB demo/mdops-reconfigure-config
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 21
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 250   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+--------+
+| Variable_name    | Value  |
++------------------+--------+
+| read_buffer_size | 122880 |
++------------------+--------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see, the configuration has changed: the value of `max_connections` has been changed from `200` to `250`, and `read_buffer_size` has been changed from `1048576` to `122880`. So the reconfiguration of the database is successful.
+
+
+### Reconfigure Existing Config Secret
+
+Now, we will create a new `MariaDBOpsRequest` to reconfigure our existing secret `new-md-configuration` by modifying our `new-md-config.cnf` file using `applyConfig`.
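+If you want to see what the secret currently contains before patching it, you can decode it directly. This is a quick sketch; the key name `new-md-config.cnf` comes from the file we used to create the secret earlier:
+
+```bash
+# decode the current contents of the config secret
+$ kubectl get secret -n demo new-md-configuration -o jsonpath='{.data.new-md-config\.cnf}' | base64 -d
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```
+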
The `MariaDBOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-apply-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    applyConfig:
+      new-md-config.cnf: |
+        [mysqld]
+        max_connections = 230
+        read_buffer_size = 1064960
+      innodb-config.cnf: |
+        [mysqld]
+        innodb_log_buffer_size = 17408000
+```
+> Note: You can modify multiple fields of your current configuration using `applyConfig`. If you don't have any secrets, then `applyConfig` will create a secret for you. Here, we modified the values of two existing fields, `max_connections` and `read_buffer_size`, and also added a new field, `innodb_log_buffer_size`, to our configuration.
+
+Here,
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.applyConfig` contains the configuration for the existing or newly created secret.
+
+Before applying this yaml, we are going to check the current value of our new field,
+
+```bash
+$ kubectl exec -it sample-mariadb-0 -n demo -c mariadb -- bash
+root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 21
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 16777216 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+Here, we can see the default value for `innodb_log_buffer_size` is `16777216`.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/standalone/examples/mdops-reconfigure-apply-config.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-apply-config created
+```
+
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MariaDB` object.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest mdops-reconfigure-apply-config -n demo
+NAME                             TYPE          STATUS       AGE
+mdops-reconfigure-apply-config   Reconfigure   Successful   3m11s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+ +```bash +$ kubectl describe mariadbopsrequest -n demo mdops-reconfigure-apply-config +Name: mdops-reconfigure-apply-config +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MariaDBOpsRequest +Metadata: + Creation Timestamp: 2022-06-14T09:13:49Z + Generation: 1 + Resource Version: 14120 + UID: eb8d5df5-a0ce-4011-890c-c18c0200b5ac +Spec: + Configuration: + Apply Config: + innodb-config.cnf: [mysqld] +innodb_log_buffer_size = 17408000 + + new-md-config.cnf: [mysqld] +max_connections = 230 +read_buffer_size = 1064960 + + Database Ref: + Name: sample-mariadb + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2022-06-14T09:13:49Z + Message: Controller has started to Progress the MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-14T09:13:49Z + Message: Successfully prepared user provided custom config secret + Observed Generation: 1 + Reason: PrepareSecureCustomConfig + Status: True + Type: PrepareCustomConfig + Last Transition Time: 2022-06-14T09:17:24Z + Message: Successfully restarted MariaDB pods for MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSetPods + Last Transition Time: 2022-06-14T09:17:29Z + Message: Successfully reconfigured MariaDB for MariaDBOpsRequest: demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyDBReconfigured + Status: True + Type: DBReady + Last Transition Time: 2022-06-14T09:17:29Z + Message: Controller has successfully reconfigure the MariaDB demo/mdops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +``` + +Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash +root@sample-mariadb-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 24 +Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
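+
+# value of `max_connections` is same as provided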
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 230   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 1064960 |
++------------------+---------+
+1 row in set (0.002 sec)
+
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 17408000 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see from the above, the configuration has been changed: the value of `max_connections` has been changed from `250` to `230`, `read_buffer_size` from `122880` to `1064960`, and `innodb_log_buffer_size` from `16777216` to `17408000`. So the reconfiguration of the `sample-mariadb` database is successful.
+
+
+
+### Remove Custom Configuration
+
+We can also remove the existing custom config using `MariaDBOpsRequest`. Set the field `spec.configuration.removeCustomConfig` to `true` and make an Ops Request to remove the existing custom configuration.
+
+#### Create MariaDBOpsRequest
+
+Let's create a `MariaDBOpsRequest` with `spec.configuration.removeCustomConfig` set to `true`,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-reconfigure-remove
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mariadb
+  configuration:
+    removeCustomConfig: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.removeCustomConfig` is a bool field that should be `true` when you want to remove the existing custom configuration.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/reconfigure/standalone/examples/reconfigure-remove.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-reconfigure-remove created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MariaDB` object.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest --all-namespaces
+NAMESPACE   NAME                       TYPE          STATUS       AGE
+demo        mdops-reconfigure-remove   Reconfigure   Successful   2m5s
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -- bash
+root@sample-mariadb-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 8
+Server version: 10.6.16-MariaDB-1:10.6.16+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is default
+MariaDB [(none)]> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 151   |
++-----------------+-------+
+1 row in set (0.001 sec)
+
+# value of `read_buffer_size` is default
+MariaDB [(none)]> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 131072  |
++------------------+---------+
+1 row in set (0.001 sec)
+
+# value of `innodb_log_buffer_size` is default
+MariaDB [(none)]> show variables like 'innodb_log_buffer_size';
++------------------------+----------+
+| Variable_name          | Value    |
++------------------------+----------+
+| innodb_log_buffer_size | 16777216 |
++------------------------+----------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> exit
+Bye
+```
+
+As we can see, the configuration has been reset to its default values. So the removal of the existing custom configuration using `MariaDBOpsRequest` is successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+$ kubectl delete mariadbopsrequest -n demo mdops-reconfigure-config mdops-reconfigure-apply-config mdops-reconfigure-remove
+$ kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/_index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/_index.md
new file mode 100644
index 0000000000..97f09405dd
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Scaling MariaDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling
+    name: Scaling
+    parent: guides-mariadb
+    weight: 43
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..06596c29c6
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Horizontal Scaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling-horizontal
+    name: Horizontal Scaling
+    parent: guides-mariadb-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-downscale.yaml b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-downscale.yaml
new file mode 100644
index 0000000000..1e5cc7bc52
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-downscale.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-scale-horizontal-down
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: sample-mariadb
+  horizontalScaling:
+    member : 3
diff --git
a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-upscale.yaml b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-upscale.yaml new file mode 100644 index 0000000000..6193aa18db --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-upscale.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MariaDBOpsRequest +metadata: + name: mdops-scale-horizontal-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-mariadb + horizontalScaling: + member : 5 diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/sample-mariadb.yaml new file mode 100644 index 0000000000..518960fbd2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/example/sample-mariadb.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MariaDB +metadata: + name: sample-mariadb + namespace: demo +spec: + version: "10.5.23" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/index.md new file mode 100644 index 0000000000..741573c5a6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/cluster/index.md @@ -0,0 +1,283 @@ +--- +title: Horizontal Scaling MariaDB +menu: + docs_v2024.1.31: + identifier: guides-mariadb-scaling-horizontal-cluster + name: Cluster + parent: guides-mariadb-scaling-horizontal + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Horizontal Scale MariaDB + +This guide will show you how to use `KubeDB` Enterprise operator to scale the cluster of a MariaDB database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/) + - [MariaDB Cluster](/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster/) + - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/) + - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Apply Horizontal Scaling on Cluster + +Here, we are going to deploy a `MariaDB` cluster using a supported version by `KubeDB` operator. 
Then we are going to apply horizontal scaling on it.
+
+### Prepare MariaDB Cluster Database
+
+Now, we are going to deploy a `MariaDB` cluster with version `10.5.23`.
+
+### Deploy MariaDB Cluster
+
+In this section, we are going to deploy a MariaDB cluster. Then, in the next section we will scale the database using `MariaDBOpsRequest` CRD. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/scaling/horizontal-scaling/cluster/example/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    2m36s
+```
+
+Let's check the number of replicas this database has from the MariaDB object, and the number of pods the StatefulSet has,
+
+```bash
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.replicas'
+3
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both commands that the database has 3 replicas in the cluster.
+
+Also, we can verify the number of replicas from an internal mariadb command by exec-ing into a replica.
+
+First, we need to get the username and password to connect to a mariadb instance,
+```bash
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mariadb-auth -o jsonpath='{.data.\password}' | base64 -d
+nrKuxni0wDSMrgwy
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';"
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 3     |
++--------------------+-------+
+
+```
+
+We can see from the above output that the cluster has 3 nodes.
+
+We are now ready to apply the `MariaDBOpsRequest` CR to scale this database.
+
+## Scale Up Replicas
+
+Here, we are going to scale up the replicas of the cluster to meet the desired number of replicas after scaling.
+
+#### Create MariaDBOpsRequest
+
+In order to scale up the replicas of the cluster, we have to create a `MariaDBOpsRequest` CR with our desired replicas. Below is the YAML of the `MariaDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-scale-horizontal-up
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: sample-mariadb
+  horizontalScaling:
+    member : 5
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `sample-mariadb` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.member` specifies the desired replicas after scaling.
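+
+Before creating the ops request, you can optionally validate the manifest against the API server without persisting it. This is a sketch; it assumes the YAML above has been saved locally as `mdops-upscale.yaml`:
+
+```bash
+# a server-side dry-run catches schema and admission errors before the request is actually created
+$ kubectl apply --dry-run=server -f mdops-upscale.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-scale-horizontal-up created (server dry run)
+```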
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-upscale.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-scale-horizontal-up created
+```
+
+#### Verify Cluster replicas scaled up successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the replicas of `MariaDB` object and related `StatefulSets` and `Pods`.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`. Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mariadbopsrequest -n demo
+Every 2.0s: kubectl get mariadbopsrequest -n demo
+NAME                        TYPE                STATUS       AGE
+mdops-scale-horizontal-up   HorizontalScaling   Successful   106s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. Now, we are going to verify the number of replicas this database has from the MariaDB object, and the number of pods the StatefulSet has,
+
+```bash
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.replicas'
+5
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.replicas'
+5
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';"
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 5     |
++--------------------+-------+
+```
+
+From all the above outputs we can see that the cluster now has `5` replicas. That means we have successfully scaled up the replicas of the MariaDB cluster.
+
+### Scale Down Replicas
+
+Here, we are going to scale down the replicas of the cluster to meet the desired number of replicas after scaling.
+
+#### Create MariaDBOpsRequest
+
+In order to scale down the replicas of the cluster, we have to create a `MariaDBOpsRequest` CR with our desired replicas. Below is the YAML of the `MariaDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-scale-horizontal-down
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: sample-mariadb
+  horizontalScaling:
+    member : 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `sample-mariadb` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.member` specifies the desired replicas after scaling.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/scaling/horizontal-scaling/cluster/example/mdops-downscale.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-scale-horizontal-down created
+```
+
+#### Verify Cluster replicas scaled down successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the replicas of `MariaDB` object and related `StatefulSets` and `Pods`.
+
+Let's wait for `MariaDBOpsRequest` to be `Successful`.
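While waiting, you can poll just the phase directly (a quick sketch using the `.status.phase` field that appears in the describe output):
+
+```bash
+# print only the phase of the ops request
+$ kubectl get mariadbopsrequest -n demo mdops-scale-horizontal-down -o jsonpath='{.status.phase}'
+Successful
+```
+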
Run the following command to watch `MariaDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mariadbopsrequest -n demo
+Every 2.0s: kubectl get mariadbopsrequest -n demo
+NAME                          TYPE                STATUS       AGE
+mdops-scale-horizontal-down   HorizontalScaling   Successful   2m32s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. Now, we are going to verify the number of replicas this database has from the MariaDB object, and the number of pods the StatefulSet has,
+
+```bash
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.replicas'
+3
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.replicas'
+3
+```
+
+Now let's connect to a mariadb instance and run a mariadb internal command to check the number of replicas,
+```bash
+$ kubectl exec -it -n demo sample-mariadb-0 -c mariadb -- bash
+root@sample-mariadb-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';"
++--------------------+-------+
+| Variable_name      | Value |
++--------------------+-------+
+| wsrep_cluster_size | 3     |
++--------------------+-------+
+```
+
+From all the above outputs we can see that the cluster now has `3` replicas. That means we have successfully scaled down the replicas of the MariaDB cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+$ kubectl delete mariadbopsrequest -n demo mdops-scale-horizontal-up mdops-scale-horizontal-down
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg
new file mode 100644
index 0000000000..e48fdf93c0
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/index.md
new file mode 100644
index 0000000000..7fbcea610d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/horizontal-scaling/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: MariaDB Horizontal Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling-horizontal-overview
+    name: Overview
+    parent: guides-mariadb-scaling-horizontal
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Horizontal Scaling
+
+This guide will give an overview of how KubeDB Ops Manager scales up or down a `MariaDB Cluster`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how KubeDB Ops Manager scales up or down `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+  Horizontal scaling process of MariaDB +
Fig: Horizontal scaling process of MariaDB
+
+
+The horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `MariaDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MariaDB` CR.
+
+3. When the operator finds a `MariaDB` CR, it creates the required number of `StatefulSets` and the related necessary resources, such as secrets and services.
+
+4. Then, in order to scale the `MariaDB` database, the user creates a `MariaDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR.
+
+6. When it finds a `MariaDBOpsRequest` CR, it pauses the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the horizontal scaling process.
+
+7. Then the `KubeDB` Enterprise operator will scale the related StatefulSet Pods to reach the expected number of replicas defined in the `MariaDBOpsRequest` CR.
+
+8. After successfully scaling the replicas of the StatefulSet Pods, the `KubeDB` Enterprise operator updates the number of replicas in the `MariaDB` object to reflect the updated state.
+
+9. After the successful scaling of the `MariaDB` replicas, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on horizontal scaling of a MariaDB database using the `MariaDBOpsRequest` CRD.
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..d90aab7521
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Vertical Scaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling-vertical
+    name: Vertical Scaling
+    parent: guides-mariadb-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/mdops-vscale.yaml b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/mdops-vscale.yaml
new file mode 100644
index 0000000000..48059e62bf
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/mdops-vscale.yaml
@@ -0,0 +1,18 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-vscale
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: sample-mariadb
+  verticalScaling:
+    mariadb:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/sample-mariadb.yaml
new file mode 100644
index 0000000000..518960fbd2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/example/sample-mariadb.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/index.md
new file mode 100644
index 0000000000..cc9f6b058e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/cluster/index.md
@@ -0,0 +1,197 @@
+---
+title: Vertical Scaling MariaDB Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling-vertical-cluster
+    name: Cluster
+    parent: guides-mariadb-scaling-vertical
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Vertical Scale MariaDB Cluster
+
+This guide will show you how to use the `KubeDB` Enterprise operator to update the resources of a MariaDB cluster database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [Clustering](/docs/v2024.1.31/guides/mariadb/clustering/galera-cluster)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Apply Vertical Scaling on Cluster
+
+Here, we are going to deploy a `MariaDB` cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare MariaDB Cluster
+
+Now, we are going to deploy a `MariaDB` cluster database with version `10.5.23`.
+
+> Vertical scaling for a `MariaDB Standalone` can be performed in the same way as for a `MariaDB Cluster`. Simply remove the `spec.replicas` field from the YAML below to deploy a MariaDB Standalone.
+
+### Deploy MariaDB Cluster
+
+In this section, we are going to deploy a MariaDB cluster database. Then, in the next section we will update the resources of the database using the `MariaDBOpsRequest` CRD.
+Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/scaling/vertical-scaling/cluster/example/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    3m46s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see the Pod has the default resources, which are assigned by the KubeDB operator.
+
+We are now ready to apply the `MariaDBOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the database to the desired values.
+
+#### Create MariaDBOpsRequest
+
+In order to update the resources of the database, we have to create a `MariaDBOpsRequest` CR with our desired resources. Below is the YAML of the `MariaDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-vscale
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: sample-mariadb
+  verticalScaling:
+    mariadb:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.mariadb` specifies the desired resources after scaling.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/scaling/vertical-scaling/cluster/example/mdops-vscale.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-vscale created
+```
+
+#### Verify MariaDB Cluster resources updated successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `MariaDB` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mariadbopsrequest -n demo
+Every 2.0s: kubectl get mariadbopsrequest -n demo
+NAME           TYPE              STATUS       AGE
+mdops-vscale   VerticalScaling   Successful   3m56s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded.
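+
+If you would rather block until the request completes than watch the output, `kubectl wait` can poll the phase for you. Below is a minimal sketch, assuming the CR reports its phase at `.status.phase` (which is what the `STATUS` column above reflects) and kubectl v1.23+ for JSONPath-based waits:
+
+```bash
+# Block until the ops request reaches the Successful phase, or time out.
+$ kubectl wait mariadbopsrequest/mdops-vscale -n demo \
+    --for=jsonpath='{.status.phase}'=Successful \
+    --timeout=10m
+```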
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the database have been updated to the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the MariaDB database. (`1288490188800m` is simply how Kubernetes renders the fractional quantity `1.2Gi` in its canonical form.)
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+$ kubectl delete mariadbopsrequest -n demo mdops-vscale
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg
new file mode 100644
index 0000000000..2bea7f0ee2
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/index.md
new file mode 100644
index 0000000000..423fca7d09
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: MariaDB Vertical Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-scaling-vertical-overview
+    name: Overview
+    parent: guides-mariadb-scaling-vertical
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Vertical Scaling
+
+This guide will give an overview of how KubeDB Ops Manager vertically scales `MariaDB`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest/)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how KubeDB Ops Manager updates the resources of `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Vertical scaling process of MariaDB" src="/docs/v2024.1.31/guides/mariadb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg">
+<figcaption align="center">Fig: Vertical scaling process of MariaDB</figcaption>
+</figure>
+
+The vertical scaling process consists of the following steps:
+
+1. At first, a user creates a `MariaDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MariaDB` CR.
+
+3. When the operator finds a `MariaDB` CR, it creates the required number of `StatefulSets` and the related necessary resources, such as secrets and services.
+
+4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `MariaDB` database, the user creates a `MariaDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR.
+
+6. When it finds a `MariaDBOpsRequest` CR, it halts the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the vertical scaling process.
+
+7. Then the `KubeDB` Enterprise operator will update the resources of the StatefulSet Pods to reach the desired state.
+
+8. After the successful update of the resources of the StatefulSet's replicas, the `KubeDB` Enterprise operator updates the `MariaDB` object to reflect the updated state.
+
+9. After the successful update of the `MariaDB` resources, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on updating the resources of a MariaDB database using the `MariaDBOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/_index.md b/content/docs/v2024.1.31/guides/mariadb/tls/_index.md
new file mode 100644
index 0000000000..7e74953d15
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/_index.md
@@ -0,0 +1,22 @@
+---
+title: TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-tls
+    name: TLS/SSL Encryption
+    parent: guides-mariadb
+    weight: 110
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/issuer.yaml b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/issuer.yaml
new file mode 100644
index 0000000000..7d8dc476c3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: md-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: md-ca
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-cluster.yaml b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-cluster.yaml
new file mode 100644
index 0000000000..c1430169d4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-cluster.yaml
@@ -0,0 +1,32 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: md-cluster-tls
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-standalone.yaml b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-standalone.yaml
new file mode 100644
index 0000000000..9b5daf5341
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/configure/examples/tls-standalone.yaml
@@ -0,0 +1,31 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: md-standalone-tls
+  namespace: demo
+spec:
+  version: "10.5.23"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/configure/index.md b/content/docs/v2024.1.31/guides/mariadb/tls/configure/index.md
new file mode 100644
index 0000000000..51e65384a2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/configure/index.md
@@ -0,0 +1,519 @@
+---
+title: TLS/SSL (Transport Encryption)
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-tls-configure
+    name: MariaDB TLS/SSL Configuration
+    parent: guides-mariadb-tls
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Configure TLS/SSL in MariaDB
+
+`KubeDB` supports providing TLS/SSL encryption (via `requireSSL` mode) for `MariaDB`. This tutorial will show you how to use `KubeDB` to deploy a `MariaDB` database with TLS/SSL configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [docs/guides/mariadb/tls/configure/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mariadb/tls/configure/examples) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+### Deploy MariaDB database with TLS/SSL configuration
+
+As a prerequisite, we are first going to create an Issuer/ClusterIssuer. This Issuer/ClusterIssuer is used to create certificates. Then we are going to deploy a MariaDB standalone and a MariaDB cluster that will be configured with these certificates by the `KubeDB` operator.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout this tutorial.
+Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the steps below, we are going to create our desired issuer,
+
+- Start off by generating our CA certificate using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mariadb/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+$ kubectl create secret tls md-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/md-ca created
+```
+
+Now, we are going to create an `Issuer` using the `md-ca` secret that holds the CA certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: md-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: md-ca
+```
+
+Let's create the `Issuer` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/tls/configure/examples/issuer.yaml
+issuer.cert-manager.io/md-issuer created
+```
+
+### Deploy MariaDB Standalone with TLS/SSL configuration
+
+Now that our issuer `md-issuer` is ready, we can deploy a `MariaDB` standalone with TLS/SSL configuration. Below is the YAML for the MariaDB Standalone that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: md-standalone-tls
+  namespace: demo
+spec:
+  version: "10.5.23"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.requireSSL` specifies that SSL/TLS client connections to the server are required.
+
+- `spec.tls.issuerRef` refers to the `md-issuer` issuer.
+
+- `spec.tls.certificates` gives you a lot of options to configure so that the certificate will be renewed and kept up to date. You can find more details [here](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#spectls).
+
+**Deploy MariaDB Standalone:**
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/tls/configure/examples/tls-standalone.yaml
+mariadb.kubedb.com/md-standalone-tls created
+```
+
+**Wait for the database to be ready:**
+
+Now, wait for the `MariaDB` object to reach the `Ready` state, and for the `StatefulSet` and its pod to be created and running,
+
+```bash
+$ kubectl get mariadb -n demo md-standalone-tls
+NAME                VERSION   STATUS   AGE
+md-standalone-tls   10.5.23   Ready    5m48s
+
+$ kubectl get sts -n demo md-standalone-tls
+NAME                READY   AGE
+md-standalone-tls   1/1     7m5s
+```
+
+**Verify tls-secrets created successfully:**
+
+If everything goes well, you can see that the tls-secrets have been created, which contain the server, client, and exporter certificates.
+The server tls-secret will be used for server configuration and the client tls-secret will be used for a secure connection.
+
+All tls-secrets are created by the `KubeDB` Ops Manager. The default tls-secret name is formed as _{mariadb-object-name}-{cert-alias}-cert_.
+
+Let's check that the tls-secrets have been created,
+
+```bash
+$ kubectl get secrets -n demo | grep md-standalone-tls
+md-standalone-tls-archiver-cert             kubernetes.io/tls          3   7m53s
+md-standalone-tls-auth                      kubernetes.io/basic-auth   2   7m54s
+md-standalone-tls-metrics-exporter-cert     kubernetes.io/tls          3   7m53s
+md-standalone-tls-metrics-exporter-config   Opaque                     1   7m54s
+md-standalone-tls-server-cert               kubernetes.io/tls          3   7m53s
+md-standalone-tls-token-7hhg2
+```
+
+**Verify MariaDB Standalone configured with TLS/SSL:**
+
+Now, we are going to connect to the database to verify that the `MariaDB` server has been configured with TLS/SSL encryption.
+
+Let's exec into the pod to verify the TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo md-standalone-tls-0 -- bash
+
+root@md-standalone-tls-0:/ ls /etc/mysql/certs/client
+ca.crt  tls.crt  tls.key
+root@md-standalone-tls-0:/ ls /etc/mysql/certs/server
+ca.crt  tls.crt  tls.key
+
+root@md-standalone-tls-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 64
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+---------------------------------+
+| Variable_name       | Value                           |
++---------------------+---------------------------------+
+| have_openssl        | YES                             |
+| have_ssl            | YES                             |
+| ssl_ca              | /etc/mysql/certs/server/ca.crt  |
+| ssl_capath          | /etc/mysql/certs/server         |
+| ssl_cert            | /etc/mysql/certs/server/tls.crt |
+| ssl_cipher          |                                 |
+| ssl_crl             |                                 |
+| ssl_crlpath         |                                 |
+| ssl_key             | /etc/mysql/certs/server/tls.key |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020     |
++---------------------+---------------------------------+
+10 rows in set (0.002 sec)
+
+MariaDB [(none)]> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+The above output shows that the `MariaDB` server is configured with TLS/SSL. You can also see that the `.crt` and `.key` files are stored in the `/etc/mysql/certs/client/` and `/etc/mysql/certs/server/` directories for client and server respectively.
+
+**Verify secure connection for SSL required user:**
+
+Now, you can create an SSL required user that will be used to connect to the database with a secure connection.
+
+Let's connect to the database server with a secure connection,
+
+```bash
+$ kubectl exec -it -n demo md-standalone-tls-0 -- bash
+root@md-standalone-tls-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 92
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> CREATE USER 'new_user'@'localhost' IDENTIFIED BY '1234' REQUIRE SSL;
+Query OK, 0 rows affected (0.028 sec)
+
+MariaDB [(none)]> FLUSH PRIVILEGES;
+Query OK, 0 rows affected (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+
+# accessing the database server with the newly created user
+root@md-standalone-tls-0:/ mysql -unew_user -p1234
+ERROR 1045 (28000): Access denied for user 'new_user'@'localhost' (using password: YES)
+
+# accessing the database server with the newly created user and certificates
+root@md-standalone-tls-0:/ mysql -unew_user -p1234 --ssl-ca=/etc/mysql/certs/server/ca.crt --ssl-cert=/etc/mysql/certs/server/tls.crt --ssl-key=/etc/mysql/certs/server/tls.key
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 116
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> exit
+Bye
+```
+
+From the above output, you can see that we can only access the database securely by using the client certificate; otherwise, it shows "Access denied". Our client certificate is stored in the `/etc/mysql/certs/client/` directory.
+
+## Deploy MariaDB Cluster with TLS/SSL configuration
+
+Now, we are going to deploy a `MariaDB` cluster with TLS/SSL configuration. Below is the YAML for the MariaDB cluster that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: md-cluster-tls
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: md-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
+```
+
+**Deploy MariaDB Cluster:**
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/tls/configure/examples/tls-cluster.yaml
+mariadb.kubedb.com/md-cluster-tls created
+```
+
+**Wait for the database to be ready:**
+
+Now, wait for the `MariaDB` object to reach the `Ready` state, and for the `StatefulSet` and its pods to be created and running,
+
+```bash
+$ kubectl get mariadb -n demo md-cluster-tls
+NAME             VERSION   STATUS   AGE
+md-cluster-tls   10.5.23   Ready    2m49s
+
+$ kubectl get pod -n demo | grep md-cluster-tls
+md-cluster-tls-0   1/1     Running   0     3m29s
+md-cluster-tls-1   1/1     Running   0     3m9s
+md-cluster-tls-2   1/1     Running   0     2m49s
+```
+
+**Verify tls-secrets created successfully:**
+
+If everything goes well, you can see that the tls-secrets have been created, which contain the server, client, and exporter certificates. The server tls-secret will be used for server configuration and the client tls-secret will be used for a secure connection.
+
+All tls-secrets are created by the `KubeDB` Ops Manager. The default tls-secret name is formed as _{mariadb-object-name}-{cert-alias}-cert_.
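+
+As these are standard `kubernetes.io/tls` secrets, you can also decode one to see exactly what cert-manager issued. A quick sketch, assuming the naming pattern above (here, the server certificate of `md-cluster-tls`) and that `openssl` is available locally:
+
+```bash
+# Print the subject and SANs of the issued server certificate.
+$ kubectl get secret -n demo md-cluster-tls-server-cert \
+    -o jsonpath='{.data.tls\.crt}' | base64 -d \
+    | openssl x509 -noout -subject -ext subjectAltName
+```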
+
+Let's check that the tls-secrets have been created,
+
+```bash
+$ kubectl get secrets -n demo | grep md-cluster-tls
+md-cluster-tls-archiver-cert             kubernetes.io/tls          3   6m20s
+md-cluster-tls-auth                      kubernetes.io/basic-auth   2   6m22s
+md-cluster-tls-metrics-exporter-cert     kubernetes.io/tls          3   6m20s
+md-cluster-tls-metrics-exporter-config   Opaque                     1   6m21s
+md-cluster-tls-server-cert               kubernetes.io/tls          3   6m21s
+md-cluster-tls-token-nrs75
+```
+
+**Verify MariaDB Cluster configured with TLS/SSL:**
+
+Now, we are going to connect to the database to verify that the `MariaDB` server has been configured with TLS/SSL encryption.
+
+Let's exec into the first pod to verify the TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo md-cluster-tls-0 -- bash
+
+root@md-cluster-tls-0:/ ls /etc/mysql/certs/client
+ca.crt  tls.crt  tls.key
+root@md-cluster-tls-0:/ ls /etc/mysql/certs/server
+ca.crt  tls.crt  tls.key
+
+root@md-cluster-tls-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 64
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+---------------------------------+
+| Variable_name       | Value                           |
++---------------------+---------------------------------+
+| have_openssl        | YES                             |
+| have_ssl            | YES                             |
+| ssl_ca              | /etc/mysql/certs/server/ca.crt  |
+| ssl_capath          | /etc/mysql/certs/server         |
+| ssl_cert            | /etc/mysql/certs/server/tls.crt |
+| ssl_cipher          |                                 |
+| ssl_crl             |                                 |
+| ssl_crlpath         |                                 |
+| ssl_key             | /etc/mysql/certs/server/tls.key |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020     |
++---------------------+---------------------------------+
+10 rows in set (0.002 sec)
+
+MariaDB [(none)]> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+Now let's check the second database server,
+
+```bash
+$ kubectl exec -it -n demo md-cluster-tls-1 -- bash
+root@md-cluster-tls-1:/ ls /etc/mysql/certs/client
+ca.crt  tls.crt  tls.key
+root@md-cluster-tls-1:/ ls /etc/mysql/certs/server
+ca.crt  tls.crt  tls.key
+root@md-cluster-tls-1:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 34
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show variables like '%ssl%';
++---------------------+---------------------------------+
+| Variable_name       | Value                           |
++---------------------+---------------------------------+
+| have_openssl        | YES                             |
+| have_ssl            | YES                             |
+| ssl_ca              | /etc/mysql/certs/server/ca.crt  |
+| ssl_capath          | /etc/mysql/certs/server         |
+| ssl_cert            | /etc/mysql/certs/server/tls.crt |
+| ssl_cipher          |                                 |
+| ssl_crl             |                                 |
+| ssl_crlpath         |                                 |
+| ssl_key             | /etc/mysql/certs/server/tls.key |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020     |
++---------------------+---------------------------------+
+10 rows in set (0.001 sec)
+
+MariaDB [(none)]> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.001 sec)
+
+MariaDB [(none)]> quit;
+Bye
+```
+
+The above output shows that the `MariaDB` server is configured with TLS/SSL. You can also see that the `.crt` and `.key` files are stored in the `/etc/mysql/certs/client/` and `/etc/mysql/certs/server/` directories for client and server respectively.
+
+**Verify secure connection for SSL required user:**
+
+Now, you can create an SSL required user that will be used to connect to the database with a secure connection.
+
+Let's connect to the database server with a secure connection,
+
+```bash
+$ kubectl exec -it -n demo md-cluster-tls-0 -- bash
+root@md-cluster-tls-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 92
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> CREATE USER 'new_user'@'localhost' IDENTIFIED BY '1234' REQUIRE SSL;
+Query OK, 0 rows affected (0.028 sec)
+
+MariaDB [(none)]> FLUSH PRIVILEGES;
+Query OK, 0 rows affected (0.000 sec)
+
+MariaDB [(none)]> exit
+Bye
+
+# accessing the database server with the newly created user
+root@md-cluster-tls-0:/ mysql -unew_user -p1234
+ERROR 1045 (28000): Access denied for user 'new_user'@'localhost' (using password: YES)
+
+# accessing the database server with the newly created user and certificates
+root@md-cluster-tls-0:/ mysql -unew_user -p1234 --ssl-ca=/etc/mysql/certs/server/ca.crt --ssl-cert=/etc/mysql/certs/server/tls.crt --ssl-key=/etc/mysql/certs/server/tls.key
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 116
+Server version: 10.5.23-MariaDB-1:10.5.23+maria~focal mariadb.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> exit
+Bye
+```
+
+From the above output, you can see that we can only access the database securely by using the client certificate; otherwise, it shows "Access denied". Our client certificate is stored in the `/etc/mysql/certs/client/` directory.
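+
+You can also verify the TLS handshake from outside the pod. Below is a minimal sketch, assuming the usual KubeDB convention of a primary Service named after the database object, the `ca.crt` file generated earlier in this tutorial, and OpenSSL 1.1.1+ (MySQL/MariaDB negotiates TLS in-protocol, hence `-starttls mysql`):
+
+```bash
+# Forward the primary service to localhost in the background.
+$ kubectl port-forward -n demo svc/md-cluster-tls 3306:3306 &
+
+# Perform a TLS handshake against the forwarded port, verifying the server
+# certificate against our CA; look for "Verify return code: 0 (ok)".
+$ openssl s_client -connect 127.0.0.1:3306 -starttls mysql -CAfile ca.crt </dev/null
+```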
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo md-standalone-tls
+mariadb.kubedb.com "md-standalone-tls" deleted
+$ kubectl delete mariadb -n demo md-cluster-tls
+mariadb.kubedb.com "md-cluster-tls" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/overview/images/md-tls-ssl.png b/content/docs/v2024.1.31/guides/mariadb/tls/overview/images/md-tls-ssl.png
new file mode 100644
index 0000000000..70e62dd240
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/tls/overview/images/md-tls-ssl.png differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/tls/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/tls/overview/index.md
new file mode 100644
index 0000000000..09220bfb59
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/tls/overview/index.md
@@ -0,0 +1,81 @@
+---
+title: MariaDB TLS/SSL Encryption Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-tls-overview
+    name: Overview
+    parent: guides-mariadb-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `MariaDB`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following CRs of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**MariaDB CRD Specification:**
+
+KubeDB uses the following CR fields to enable SSL/TLS encryption in `MariaDB`.
+
+- `spec:`
+  - `requireSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [mariadb concept](/docs/v2024.1.31/guides/mariadb/concepts/mariadb/#spectls),
+
+When `requireSSL` is set, the user must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt` and `tls.key`, are used to configure the `MariaDB` server, exporter, etc.
+
+## How TLS/SSL is configured in MariaDB
+
+The following figure shows how the `KubeDB` Enterprise operator is used to configure TLS/SSL in MariaDB. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Deploy MariaDB with TLS/SSL" src="/docs/v2024.1.31/guides/mariadb/tls/overview/images/md-tls-ssl.png">
+<figcaption align="center">Fig: Deploy MariaDB with TLS/SSL</figcaption>
+</figure>
+
+The process of deploying MariaDB with TLS/SSL configuration consists of the following steps:
+
+1. At first, a user creates an `Issuer/ClusterIssuer` CR.
+
+2. Then the user creates a `MariaDB` CR.
+
+3. `KubeDB` Community operator watches for the `MariaDB` CR.
+
+4. When it finds one, it creates `Secret`, `Service`, etc. for the `MariaDB` database.
+
+5. `KubeDB` Ops Manager watches for `MariaDB` (5c), `Issuer/ClusterIssuer` (5b), `Secret` and `Service` (5a).
+
+6. When it finds all the resources (`MariaDB`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using the `tls.issuerRef` and `tls.certificates` field specifications from the `MariaDB` CR.
+
+7. `cert-manager` watches for certificates.
+
+8. When it finds one, it creates certificate secrets `tls-secrets` (server, client, exporter secrets, etc.) that hold the actual self-signed certificate.
+
+9. `KubeDB` Community operator watches for the certificate secrets `tls-secrets`.
+
+10. When it finds all the tls-secrets, it creates a `StatefulSet` so that the MariaDB server is configured with TLS/SSL.
+
+In the next doc, we are going to show a step-by-step guide on how to configure a `MariaDB` database with TLS/SSL.
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/_index.md b/content/docs/v2024.1.31/guides/mariadb/update-version/_index.md
new file mode 100644
index 0000000000..4d94b0f908
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/update-version/_index.md
@@ -0,0 +1,22 @@
+---
+title: Updating
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-updating
+    name: UpdateVersion
+    parent: guides-mariadb
+    weight: 45
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/mdops-update.yaml b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/mdops-update.yaml
new file mode 100644
index 0000000000..bf55c8f5da
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/mdops-update.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-update
+  namespace: demo
+spec:
+  type: UpdateVersion
+  databaseRef:
+    name: sample-mariadb
+  updateVersion:
+    targetVersion: "10.5.23"
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/sample-mariadb.yaml
new file mode 100644
index 0000000000..518960fbd2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/examples/sample-mariadb.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/index.md b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/index.md
new file mode 100644
index 0000000000..3c7d987308
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/update-version/cluster/index.md
@@ -0,0 +1,169 @@
+---
+title: Updating MariaDB Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-updating-cluster
+    name: Cluster
+    parent: guides-mariadb-updating
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Update Version of MariaDB Cluster
+
+This guide will show you how to use the `KubeDB` Enterprise operator to update the version of a `MariaDB` cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [Cluster](/docs/v2024.1.31/guides/mariadb/clustering/overview)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+  - [Updating Overview](/docs/v2024.1.31/guides/mariadb/update-version/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare MariaDB Cluster
+
+Now, we are going to deploy a `MariaDB` cluster database with version `10.4.32`.
+
+### Deploy MariaDB cluster
+
+In this section, we are going to deploy a MariaDB Cluster. Then, in the next section we will update the version of the database using the `MariaDBOpsRequest` CRD. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+> If you want to update a `MariaDB Standalone`, just remove the `spec.replicas` field from the YAML below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.4.32"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/update-version/cluster/examples/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.4.32   Ready    3m15s
+```
+
+We are now ready to apply the `MariaDBOpsRequest` CR to update this database.
+
+### Update MariaDB Version
+
+Here, we are going to update the `MariaDB` cluster from `10.4.32` to `10.5.23`.
+
+#### Create MariaDBOpsRequest:
+
+In order to update the database cluster, we have to create a `MariaDBOpsRequest` CR with our desired version that is supported by `KubeDB`.
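+
+Before picking a target version, you can check which versions the operator supports, since KubeDB ships them as cluster-scoped `MariaDBVersion` catalog objects (the exact rows depend on the catalog installed in your cluster):
+
+```bash
+# List the MariaDB versions known to the KubeDB catalog; any NAME from
+# this list can be used as spec.updateVersion.targetVersion.
+$ kubectl get mariadbversion
+```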
+Below is the YAML of the `MariaDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: mdops-update
+  namespace: demo
+spec:
+  type: UpdateVersion
+  databaseRef:
+    name: sample-mariadb
+  updateVersion:
+    targetVersion: "10.5.23"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `sample-mariadb` MariaDB database.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version of the database, `10.5.23`.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/update-version/cluster/examples/mdops-update.yaml
+mariadbopsrequest.ops.kubedb.com/mdops-update created
+```
+
+#### Verify MariaDB version updated successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the image of the `MariaDB` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mariadbopsrequest -n demo
+Every 2.0s: kubectl get mariadbopsrequest -n demo
+NAME           TYPE            STATUS       AGE
+mdops-update   UpdateVersion   Successful   84s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded.
+
+Now, we are going to verify whether the `MariaDB` and the related `StatefulSets` and their `Pods` have the new version image. Let's check,
+
+```bash
+$ kubectl get mariadb -n demo sample-mariadb -o=jsonpath='{.spec.version}{"\n"}'
+10.5.23
+
+$ kubectl get sts -n demo sample-mariadb -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+mariadb:10.5.23
+
+$ kubectl get pods -n demo sample-mariadb-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+mariadb:10.5.23
+```
+
+You can see from the above that our `MariaDB` cluster database has been updated with the new version. So, the update process was completed successfully.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mariadb -n demo sample-mariadb
+$ kubectl delete mariadbopsrequest -n demo mdops-update
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/overview/images/mdops-update.jpeg b/content/docs/v2024.1.31/guides/mariadb/update-version/overview/images/mdops-update.jpeg
new file mode 100644
index 0000000000..85c88b47e1
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/update-version/overview/images/mdops-update.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/update-version/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/update-version/overview/index.md
new file mode 100644
index 0000000000..422fdde73b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/update-version/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Updating MariaDB Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-updating-overview
+    name: Overview
+    parent: guides-mariadb-updating
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Updating MariaDB Version Overview
+
+This guide will give you an overview of how KubeDB Ops Manager updates the version of a `MariaDB` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Update Version Process Works
+
+The following diagram shows how KubeDB Ops Manager updates the version of `MariaDB`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Updating process of MariaDB" src="/docs/v2024.1.31/guides/mariadb/update-version/overview/images/mdops-update.jpeg">
+<figcaption align="center">Fig: Updating process of MariaDB</figcaption>
+</figure>
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `MariaDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MariaDB` CR.
+
+3. When the operator finds a `MariaDB` CR, it creates the required number of `StatefulSets` and the related necessary resources, such as secrets and services.
+
+4. Then, in order to update the version of the `MariaDB` database, the user creates a `MariaDBOpsRequest` CR with the desired version.
+
+5. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR.
+
+6. When it finds a `MariaDBOpsRequest` CR, it halts the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the updating process.
+
+7. By looking at the target version from the `MariaDBOpsRequest` CR, the `KubeDB` Enterprise operator updates the images of all the `StatefulSets`. After each image update, the operator performs the necessary checks to verify that the database is healthy before proceeding.
+
+8. After successfully updating the `StatefulSets` and their `Pods` images, the `KubeDB` Enterprise operator updates the image of the `MariaDB` object to reflect the updated state of the database.
+
+9. After the successful update of the `MariaDB` object, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on updating a MariaDB database using the update operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/_index.md b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/_index.md
new file mode 100644
index 0000000000..733dabca2b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/_index.md
@@ -0,0 +1,22 @@
+---
+title: Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-volume-expansion
+    name: Volume Expansion
+    parent: guides-mariadb
+    weight: 44
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/images/volume-expansion.jpeg b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/images/volume-expansion.jpeg
new file mode 100644
index 0000000000..cdb244dada
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/images/volume-expansion.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/index.md b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/index.md
new file mode 100644
index 0000000000..1a133fc01d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: MariaDB Volume Expansion Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-volume-expansion-overview
+    name: Overview
+    parent: guides-mariadb-volume-expansion
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Volume Expansion
+
+This guide will give an overview of how KubeDB Ops Manager expands the volume of `MariaDB`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+
+## How Volume Expansion Process Works
+
+The following diagram shows how KubeDB Ops Manager expands the volumes of `MariaDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Volume Expansion process of MariaDB" src="/docs/v2024.1.31/guides/mariadb/volume-expansion/overview/images/volume-expansion.jpeg">
+<figcaption align="center">Fig: Volume Expansion process of MariaDB</figcaption>
+</figure>
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `MariaDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MariaDB` CR.
+
+3. When the operator finds a `MariaDB` CR, it creates the required `StatefulSet` and the related necessary resources, such as secrets and services.
+
+4. The StatefulSet creates Persistent Volumes according to the Volume Claim Template provided in the StatefulSet configuration. These Persistent Volumes will be expanded by the `KubeDB` Enterprise operator.
+
+5. Then, in order to expand the volume of the `MariaDB` database, the user creates a `MariaDBOpsRequest` CR with the desired information.
+
+6. `KubeDB` Enterprise operator watches the `MariaDBOpsRequest` CR.
+
+7. When it finds a `MariaDBOpsRequest` CR, it pauses the `MariaDB` object which is referred from the `MariaDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MariaDB` object during the volume expansion process.
+
+8. Then the `KubeDB` Enterprise operator will expand the persistent volume to reach the expected size defined in the `MariaDBOpsRequest` CR.
+
+9. After successfully expanding the volume of the related StatefulSet Pods, the `KubeDB` Enterprise operator updates the new volume size in the `MariaDB` object to reflect the updated state.
+
+10. After the successful volume expansion of the `MariaDB`, the `KubeDB` Enterprise operator resumes the `MariaDB` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on volume expansion of MariaDB databases using the `MariaDBOpsRequest` CRD.
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml
new file mode 100644
index 0000000000..6312505801
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml
@@ -0,0 +1,12 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: md-online-volume-expansion
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: sample-mariadb
+  volumeExpansion:
+    mode: "Online"
+    mariadb: 2Gi
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/sample-mariadb.yaml
new file mode 100644
index 0000000000..6bee3afc10
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/example/sample-mariadb.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/index.md b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/index.md
new file mode 100644
index 0000000000..f7c1fc6cce
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mariadb/volume-expansion/volume-expansion/index.md
@@ -0,0 +1,255 @@
+---
+title: MariaDB Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mariadb-volume-expansion-volume-expansion
+    name: MariaDB Volume Expansion
+    parent: guides-mariadb-volume-expansion
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MariaDB Volume Expansion
+
+This guide will show you how to use the `KubeDB` Enterprise operator to expand the volume of a MariaDB database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MariaDB](/docs/v2024.1.31/guides/mariadb/concepts/mariadb)
+  - [MariaDBOpsRequest](/docs/v2024.1.31/guides/mariadb/concepts/opsrequest)
+  - [Volume Expansion Overview](/docs/v2024.1.31/guides/mariadb/volume-expansion/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Expand Volume of MariaDB
+
+Here, we are going to deploy a `MariaDB` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MariaDBOpsRequest` to expand its volume. The process of expanding a MariaDB `standalone` is the same as for a MariaDB cluster.
+
+### Prepare MariaDB Database
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  69s
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   37s
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to `true`. So, this storage class supports volume expansion. We will use this storage class. You can install topolvm from [here](https://github.com/topolvm/topolvm).
+
+Now, we are going to deploy a `MariaDB` database of 3 replicas with version `10.5.23`.
+
+### Deploy MariaDB
+
+In this section, we are going to deploy a MariaDB Cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using the `MariaDBOpsRequest` CRD. Below is the YAML of the `MariaDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MariaDB
+metadata:
+  name: sample-mariadb
+  namespace: demo
+spec:
+  version: "10.5.23"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MariaDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/volume-expansion/volume-expansion/example/sample-mariadb.yaml
+mariadb.kubedb.com/sample-mariadb created
+```
+
+Now, wait until `sample-mariadb` has status `Ready`:
+
+```bash
+$ kubectl get mariadb -n demo
+NAME             VERSION   STATUS   AGE
+sample-mariadb   10.5.23   Ready    5m4s
+```
+
+Let's check the volume size from the StatefulSet, and from the Persistent Volumes,
+
+```bash
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS          REASON   AGE
+pvc-331335d1-c8e0-4b73-9dab-dae57920e997   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-0   topolvm-provisioner            63s
+pvc-b90179f8-c40a-4273-ad77-74ca8470b782   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-1   topolvm-provisioner            62s
+pvc-f72411a4-80d5-4d32-b713-cb30ec662180   1Gi        RWO            Delete           Bound    demo/data-sample-mariadb-2   topolvm-provisioner            62s
+```
+
+You can see the StatefulSet has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MariaDBOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the MariaDB cluster.
+
+#### Create MariaDBOpsRequest
+
+In order to expand the volume of the database, we have to create a `MariaDBOpsRequest` CR with our desired volume size. Below is the YAML of the `MariaDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MariaDBOpsRequest
+metadata:
+  name: md-online-volume-expansion
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: sample-mariadb
+  volumeExpansion:
+    mode: "Online"
+    mariadb: 2Gi
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `sample-mariadb` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.mariadb` specifies the desired volume size.
+- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`). The StorageClass `topolvm-provisioner` supports `Online` volume expansion.
+
+> **Note:** If the StorageClass you are using doesn't support `Online` volume expansion, try offline volume expansion by setting `spec.volumeExpansion.mode: "Offline"`.
+
+Let's create the `MariaDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mariadb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml
+mariadbopsrequest.ops.kubedb.com/md-online-volume-expansion created
+```
+
+#### Verify MariaDB volume expanded successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the volume size of the `MariaDB` object and the related `StatefulSets` and `Persistent Volumes`.
+
+Let's wait for the `MariaDBOpsRequest` to be `Successful`. Run the following command to watch the `MariaDBOpsRequest` CR,
+
+```bash
+$ kubectl get mariadbopsrequest -n demo
+NAME                         TYPE              STATUS       AGE
+md-online-volume-expansion   VolumeExpansion   Successful   96s
+```
+
+We can see from the above output that the `MariaDBOpsRequest` has succeeded. If we describe the `MariaDBOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mariadbopsrequest -n demo md-online-volume-expansion
+Name:         md-online-volume-expansion
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MariaDBOpsRequest
+Metadata:
+  UID:  09a119aa-4f2a-4cb4-b620-2aa3a514df11
+Spec:
+  Database Ref:
+    Name:  sample-mariadb
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Mariadb:  2Gi
+    Mode:     Online
+Status:
+  Conditions:
+    Last Transition Time:  2022-01-07T06:38:29Z
+    Message:               Controller has started to Progress the MariaDBOpsRequest: demo/md-online-volume-expansion
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-01-07T06:39:49Z
+    Message:               Online Volume Expansion performed successfully in MariaDB pod for MariaDBOpsRequest: demo/md-online-volume-expansion
+    Observed Generation:   1
+    Reason:                SuccessfullyVolumeExpanded
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2022-01-07T06:39:49Z
+    Message:               Controller has successfully expand the volume of MariaDB demo/md-online-volume-expansion
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    2m1s   KubeDB Enterprise Operator  Start processing for MariaDBOpsRequest: demo/md-online-volume-expansion
+  Normal  Starting    2m1s   KubeDB Enterprise Operator  Pausing MariaDB databse: demo/sample-mariadb
+  Normal  Successful  2m1s   KubeDB Enterprise Operator  Successfully paused MariaDB database: demo/sample-mariadb for MariaDBOpsRequest: md-online-volume-expansion
+  Normal  Successful  41s    KubeDB Enterprise Operator  Online Volume Expansion performed successfully in MariaDB pod for MariaDBOpsRequest: demo/md-online-volume-expansion
+  Normal  Starting    41s    KubeDB Enterprise Operator  Updating MariaDB storage
+  Normal  Successful  41s    KubeDB Enterprise Operator  Successfully Updated MariaDB storage
+  Normal  Starting    41s    KubeDB Enterprise Operator  Resuming MariaDB database: demo/sample-mariadb
+  Normal  Successful  41s    KubeDB Enterprise Operator  Successfully resumed MariaDB database: demo/sample-mariadb
+  Normal  Successful  41s    KubeDB Enterprise Operator  Controller has Successfully expand the volume of MariaDB: demo/sample-mariadb
+```
+
+Now, we are going to verify from the `StatefulSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo sample-mariadb -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS          REASON   AGE
+pvc-331335d1-c8e0-4b73-9dab-dae57920e997   2Gi        RWO            Delete           Bound    demo/data-sample-mariadb-0   topolvm-provisioner            12m
+pvc-b90179f8-c40a-4273-ad77-74ca8470b782   2Gi        RWO            Delete           Bound    demo/data-sample-mariadb-1   topolvm-provisioner            12m
+pvc-f72411a4-80d5-4d32-b713-cb30ec662180   2Gi        RWO            Delete           Bound    demo/data-sample-mariadb-2   topolvm-provisioner            12m
+```
+
+The above output verifies that we have successfully expanded the volume of the MariaDB database.
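+
+If you prefer to look at the claims rather than the volumes, the same information is visible from the PersistentVolumeClaims. This is a plain `kubectl` check; the output below is illustrative, reconstructed from the PV listing above:
+
+```bash
+$ kubectl get pvc -n demo
+NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
+data-sample-mariadb-0   Bound    pvc-331335d1-c8e0-4b73-9dab-dae57920e997   2Gi        RWO            topolvm-provisioner   12m
+data-sample-mariadb-1   Bound    pvc-b90179f8-c40a-4273-ad77-74ca8470b782   2Gi        RWO            topolvm-provisioner   12m
+data-sample-mariadb-2   Bound    pvc-f72411a4-80d5-4d32-b713-cb30ec662180   2Gi        RWO            topolvm-provisioner   12m
+```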
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mariadb -n demo sample-mariadb +$ kubectl delete mariadbopsrequest -n demo md-online-volume-expansion +``` diff --git a/content/docs/v2024.1.31/guides/memcached/README.md b/content/docs/v2024.1.31/guides/memcached/README.md new file mode 100644 index 0000000000..fa20ea94bf --- /dev/null +++ b/content/docs/v2024.1.31/guides/memcached/README.md @@ -0,0 +1,58 @@ +--- +title: Memcached +menu: + docs_v2024.1.31: + identifier: mc-readme-memcached + name: Memcached + parent: mc-memcached-guides + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/memcached/ +aliases: +- /docs/v2024.1.31/guides/memcached/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +## Supported Memcached Features + +| Features | Availability | +| ---------------------------- | :----------: | +| Clustering | ✗ | +| Persistent Volume | ✗ | +| Instant Backup | ✗ | +| Scheduled Backup | ✗ | +| Initialize using Snapshot | ✗ | +| Initialize using Script | ✗ | +| Custom Configuration | ✓ | +| Using Custom docker image | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Using Prometheus operator | ✓ | + +## Life Cycle of a Memcached Object + +

+Fig: Life Cycle of a Memcached Object

+
+## User Guide
+
+- [Quickstart Memcached](/docs/v2024.1.31/guides/memcached/quickstart/quickstart) with KubeDB Operator.
+- Monitor your Memcached server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
+- Monitor your Memcached server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry) to deploy Memcached with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/memcached/cli/cli) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [Memcached object](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/_index.md b/content/docs/v2024.1.31/guides/memcached/_index.md
new file mode 100644
index 0000000000..303a36c79e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/_index.md
@@ -0,0 +1,22 @@
+---
+title: Memcached
+menu:
+  docs_v2024.1.31:
+    identifier: mc-memcached-guides
+    name: Memcached
+    parent: guides
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/cli/_index.md b/content/docs/v2024.1.31/guides/memcached/cli/_index.md
new file mode 100755
index 0000000000..8e33aec063
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/cli/_index.md
@@ -0,0 +1,22 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: mc-cli-memcached
+    name: Cli
+    parent: mc-memcached-guides
+    weight: 100
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/cli/cli.md b/content/docs/v2024.1.31/guides/memcached/cli/cli.md
new file mode 100644
index 0000000000..d8f91736a3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/cli/cli.md
@@ -0,0 +1,307 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: mc-cli-cli
+    name: Quickstart
+    parent: mc-cli-memcached
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Manage KubeDB objects using CLIs
+
+## KubeDB CLI
+
+KubeDB comes with its own cli, called the `kubedb` cli. `kubedb` can be used to manage any KubeDB object. The `kubedb` cli also performs various validations to improve the user experience. To install the KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README).
+
+### How to Create Objects
+
+`kubectl create` creates a database CRD object in the `default` namespace by default. The following command will create a Memcached object as specified in `memcached-demo.yaml`.
+
+```bash
+$ kubectl create -f memcached-demo.yaml
+memcached.kubedb.com/memcached-demo created
+```
+
+You can provide a namespace with the `--namespace` flag. The provided namespace should match the namespace specified in the input file.
+
+```bash
+$ kubectl create -f memcached-demo.yaml --namespace=kube-system
+memcached.kubedb.com/memcached-demo created
+```
+
+The `kubectl create` command also accepts input from `stdin`.
+
+```bash
+cat memcached-demo.yaml | kubectl create -f -
+```
+
+### How to List Objects
+
+The `kubectl get` command allows users to list or find any KubeDB object. To list all Memcached objects in the `default` namespace, run the following command:
+
+```bash
+$ kubectl get memcached
+NAME             VERSION   STATUS    AGE
+memcached-demo   1.6.22    Running   40s
+memcached-dev    1.6.22    Running   40s
+memcached-prod   1.6.22    Running   40s
+memcached-qa     1.6.22    Running   40s
+```
+
+To get the YAML of an object, use the `--output=yaml` flag.
+
+```yaml
+$ kubectl get memcached memcached-demo --output=yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  creationTimestamp: 2018-10-04T05:58:57Z
+  finalizers:
+  - kubedb.com
+  generation: 1
+  labels:
+    kubedb: cli-demo
+  name: memcached-demo
+  namespace: demo
+  resourceVersion: "6883"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/default/memcacheds/memcached-demo
+  uid: 953df4d1-c79a-11e8-bb11-0800272ad446
+spec:
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  replicas: 3
+  terminationPolicy: Halt
+  version: 1.6.22
+status:
+  observedGeneration: 1$7916315637361465932
+  phase: Running
+```
+
+To get the JSON of an object, use the `--output=json` flag.
+
+```bash
+kubectl get memcached memcached-demo --output=json
+```
+
+To list all KubeDB objects, use the following command:
+
+```bash
+$ kubectl get all -o wide
+NAME                VERSION   STATUS    AGE
+mc/memcached-demo   1.6.22    Running   3h
+mc/memcached-dev    1.6.22    Running   3h
+mc/memcached-prod   1.6.22    Running   3h
+mc/memcached-qa     1.6.22    Running   3h
+```
+
+The flag `--output=wide` is used to print additional information.
+
+The list command supports short names for each object type. You can use it like `kubectl get <short-name>`. Below are the short names for KubeDB objects:
+
+- Memcached: `mc`
+- DormantDatabase: `drmn`
+
+You can print labels with objects. The following command will list all Memcached objects with their corresponding labels.
+
+```bash
+$ kubectl get mc --show-labels
+NAME             VERSION   STATUS    AGE   LABELS
+memcached-demo   1.6.22    Running   2m    kubedb=cli-demo
+```
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -o name
+memcached/memcached-demo
+memcached/memcached-dev
+memcached/memcached-prod
+memcached/memcached-qa
+```
+
+### How to Describe Objects
+
+The `kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe the Memcached server `memcached-demo` with relevant information.
+
+```bash
+$ kubectl dba describe mc memcached-demo
+Name:               memcached-demo
+Namespace:          default
+CreationTimestamp:  Thu, 04 Oct 2018 11:58:57 +0600
+Labels:             kubedb=cli-demo
+Annotations:        <none>
+Replicas:           3  total
+Status:             Running
+
+Deployment:
+  Name:               memcached-demo
+  CreationTimestamp:  Thu, 04 Oct 2018 11:58:59 +0600
+  Labels:             kubedb=cli-demo
+                      app.kubernetes.io/name=memcacheds.kubedb.com
+                      app.kubernetes.io/instance=memcached-demo
+  Annotations:        deployment.kubernetes.io/revision=1
+  Replicas:           3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+  Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         memcached-demo
+  Labels:       kubedb=cli-demo
+                app.kubernetes.io/name=memcacheds.kubedb.com
+                app.kubernetes.io/instance=memcached-demo
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.102.208.191
+  Port:         db  11211/TCP
+  TargetPort:   db/TCP
+  Endpoints:    172.17.0.4:11211,172.17.0.14:11211,172.17.0.6:11211
+
+No Snapshots.
+
+Events:
+  Type    Reason      Age   From                Message
+  ----    ------      ----  ----                -------
+  Normal  Successful  2m    Memcached operator  Successfully created Service
+  Normal  Successful  2m    Memcached operator  Successfully created StatefulSet
+  Normal  Successful  2m    Memcached operator  Successfully created Memcached
+  Normal  Successful  2m    Memcached operator  Successfully patched StatefulSet
+  Normal  Successful  2m    Memcached operator  Successfully patched Memcached
+```
+
+The `kubectl dba describe` command provides the following basic information about a Memcached server.
+
+- Deployment
+- Service
+- Monitoring system (If available)
+
+To hide events of a KubeDB object, use the flag `--show-events=false`.
+
+To describe all Memcached objects in the `default` namespace, use the following command
+
+```bash
+kubectl dba describe mc
+```
+
+To describe all Memcached objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+kubectl dba describe mc --all-namespaces
+```
+
+To describe all KubeDB objects from every namespace, use the following command:
+
+```bash
+kubectl dba describe all --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all Memcached objects with the specified labels from every namespace.
+
+```bash
+kubectl dba describe mc --all-namespaces --selector='group=dev'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+### How to Edit Objects
+
+The `kubectl edit` command allows users to directly edit any KubeDB object. It will open the editor defined by the _KUBEDB_EDITOR_ or _EDITOR_ environment variables, or fall back to `nano`.
+
+Let's edit an existing running Memcached object to set up [Monitoring](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus). The following command will open Memcached `memcached-demo` in the editor.
+
+```bash
+$ kubectl edit mc memcached-demo
+
+#spec:
+#  monitor:
+#    agent: prometheus.io/builtin
+
+memcached "memcached-demo" edited
+```
+
+#### Edit Restrictions
+
+Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:
+
+- apiVersion
+- kind
+- metadata.name
+- metadata.namespace
+- status
+
+If a Deployment exists for a Memcached server, the following fields can't be modified as well.
+
+- spec.nodeSelector
+- spec.podTemplate.spec.nodeSelector
+- spec.podTemplate.spec.env
+
+For DormantDatabase, `spec.origin` can't be edited using `kubectl edit`.
+
+### How to Delete Objects
+
+The `kubectl delete` command will delete an object in the `default` namespace by default unless a namespace is provided. The following command will delete the Memcached `memcached-dev` in the default namespace.
+
+```bash
+$ kubectl delete memcached memcached-dev
+memcached.kubedb.com "memcached-dev" deleted
+```
+
+You can also use YAML files to delete objects. The following command will delete a memcached using the type and name specified in `memcached-demo.yaml`.
+
+```bash
+$ kubectl delete -f memcached-demo.yaml
+memcached.kubedb.com "memcached-demo" deleted
+```
+
+The `kubectl delete` command also takes input from `stdin`.
+
+```bash
+cat memcached-demo.yaml | kubectl delete -f -
+```
+
+To delete databases with matching labels, use the `--selector` flag. The following command will delete memcached objects with the label `memcached.app.kubernetes.io/instance=memcached-demo`.
+
+```bash
+kubectl delete memcached -l memcached.app.kubernetes.io/instance=memcached-demo
+```
+
+## Using Kubectl
+
+You can use kubectl with KubeDB objects like any other CRDs. Below are some common examples of using kubectl with KubeDB objects.
+
+```bash
+# List objects
+$ kubectl get memcached
+$ kubectl get memcached.kubedb.com
+
+# Delete objects
+$ kubectl delete memcached <name>
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Memcached server [here](/docs/v2024.1.31/guides/memcached/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/concepts/_index.md b/content/docs/v2024.1.31/guides/memcached/concepts/_index.md
new file mode 100755
index 0000000000..eaceb91b0d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: Memcached Concepts
+menu:
+  docs_v2024.1.31:
+    identifier: mc-concepts-memcached
+    name: Concepts
+    parent: mc-memcached-guides
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/concepts/appbinding.md b/content/docs/v2024.1.31/guides/memcached/concepts/appbinding.md
new file mode 100644
index 0000000000..e223671ed8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/concepts/appbinding.md
@@ -0,0 +1,162 @@
+---
+title: AppBinding CRD
+menu:
+  docs_v2024.1.31:
+    identifier: mc-appbinding-concepts
+    name: AppBinding
+    parent: mc-concepts-memcached
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually, pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for a PostgreSQL database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: quick-postgres
+  namespace: demo
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: quick-postgres
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: postgreses.kubedb.com
+    app.kubernetes.io/version: "10.2-v2"
+spec:
+  type: kubedb.com/postgres
+  secret:
+    name: quick-postgres-auth
+  clientConfig:
+    service:
+      name: quick-postgres
+      path: /
+      port: 5432
+      query: sslmode=disable
+      scheme: postgresql
+  secretTransforms:
+  - renameKey:
+      from: POSTGRES_USER
+      to: username
+  - renameKey:
+      from: POSTGRES_PASSWORD
+      to: password
+  version: "10.2"
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of the `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of the [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the format `<app group>/<app resource>`. The above AppBinding is pointing to a `postgres` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable              | Usage                                                                                                                              |
+| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                              |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`).                             |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`).   |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MongoDB :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+Elasticsearch:
+
+| Key              | Usage                   |
+| ---------------- | ----------------------- |
+| `ADMIN_USERNAME` | Admin username          |
+| `ADMIN_PASSWORD` | Password for admin user |
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+
+  > Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/concepts/catalog.md b/content/docs/v2024.1.31/guides/memcached/concepts/catalog.md
new file mode 100644
index 0000000000..c4bec34af4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/concepts/catalog.md
@@ -0,0 +1,100 @@
+---
+title: MemcachedVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: mc-catalog-concepts
+    name: MemcachedVersion
+    parent: mc-concepts-memcached
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MemcachedVersion
+
+## What is MemcachedVersion
+
+`MemcachedVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Memcached](https://memcached.org) databases deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `MemcachedVersion` custom resource will be created automatically for every supported Memcached version. You have to specify the name of the `MemcachedVersion` crd in the `spec.version` field of the [Memcached](/docs/v2024.1.31/guides/memcached/concepts/memcached) crd. Then, KubeDB will use the docker images specified in the `MemcachedVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## MemcachedVersion Specification
+
+As with all other Kubernetes objects, a MemcachedVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MemcachedVersion
+metadata:
+  name: "1.6.22"
+  labels:
+    app: kubedb
+spec:
+  version: "1.6.22"
+  db:
+    image: "${KUBEDB_DOCKER_REGISTRY}/memcached:1.6.22"
+  exporter:
+    image: "${KUBEDB_DOCKER_REGISTRY}/memcached-exporter:v0.4.1"
+  podSecurityPolicies:
+    databasePolicyName: "memcached-db"
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `MemcachedVersion` crd. You have to specify this name in the `spec.version` field of the [Memcached](/docs/v2024.1.31/guides/memcached/concepts/memcached) crd.
+
+We follow this convention for naming MemcachedVersion crds:
+
+- Name format: `{Original Memcached image version}-{modification tag}`
+
+We modify the original Memcached docker image to support additional features. An image with a higher modification tag will have more features than images with a lower modification tag. Hence, it is recommended to use the MemcachedVersion crd with the highest modification tag to take advantage of the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the Memcached server that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator. For example, we have modified the `kubedb/memcached:1.5.4` docker image to support custom configuration and re-tagged it as `kubedb/memcached:1.6.22`. Now, KubeDB `0.9.0-rc.0` supports providing custom configuration, which requires the `kubedb/memcached:1.6.22` docker image. So, we have marked `kubedb/memcached:1.5.4` as deprecated for KubeDB `0.9.0-rc.0`.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the database and other respective resources for this version.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which the KubeDB operator will use to create the expected Memcached server.
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
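+
+Since these fields are plain container image references, pointing them at your own registry is enough to run a custom build. Below is a hedged sketch of such a `MemcachedVersion`; the registry host `myregistry.example.com` and the name `1.6.22-custom` are illustrative placeholders, not resources shipped with KubeDB:
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MemcachedVersion
+metadata:
+  # hypothetical name; reference it from the Memcached object's spec.version
+  name: "1.6.22-custom"
+spec:
+  # upstream Memcached version the custom image is built from
+  version: "1.6.22"
+  db:
+    # assumed custom build hosted on a private registry
+    image: "myregistry.example.com/memcached:1.6.22-custom"
+  exporter:
+    # assumed mirror of the metrics exporter image
+    image: "myregistry.example.com/memcached-exporter:v0.4.1"
+  podSecurityPolicies:
+    databasePolicyName: "memcached-db"
+```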
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. To use a user-defined policy, the name of the policy has to be set in `spec.podSecurityPolicies` and in the list of allowed policy names in the KubeDB operator like below:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Memcached crd [here](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Deploy your first Memcached server with KubeDB by following the guide [here](/docs/v2024.1.31/guides/memcached/quickstart/quickstart).
diff --git a/content/docs/v2024.1.31/guides/memcached/concepts/memcached.md b/content/docs/v2024.1.31/guides/memcached/concepts/memcached.md
new file mode 100644
index 0000000000..8671c6892b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/concepts/memcached.md
@@ -0,0 +1,238 @@
+---
+title: Memcached
+menu:
+  docs_v2024.1.31:
+    identifier: mc-memcached-concepts
+    name: Memcached
+    parent: mc-concepts-memcached
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Memcached
+
+## What is Memcached
+
+`Memcached` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Memcached](https://memcached.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a Memcached object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Memcached Spec
+
+As with all other Kubernetes objects, a Memcached needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example of a Memcached object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: mc1
+  namespace: demo
+spec:
+  replicas: 3
+  version: 1.5.22
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          app: kubedb
+        interval: 10s
+  configSecret:
+    name: mc-custom-config
+  podTemplate:
+    metadata:
+      annotations:
+        passMe: ToDatabasePod
+    controller:
+      annotations:
+        passMe: ToDeployment
+    spec:
+      serviceAccountName: my-service-account
+      schedulerName: my-scheduler
+      nodeSelector:
+        disktype: ssd
+      imagePullSecrets:
+      - name: myregistrykey
+      args:
+      - "-u memcache"
+      env:
+      - name: TEST_ENV
+        value: "value"
+      resources:
+        requests:
+          memory: "64Mi"
+          cpu: "250m"
+        limits:
+          memory: "128Mi"
+          cpu: "500m"
+  serviceTemplates:
+  - alias: primary
+    metadata:
+      annotations:
+        passMe: ToService
+    spec:
+      type: NodePort
+      ports:
+      - name: http
+        port: 9200
+  terminationPolicy: Halt
+```
+
+### spec.replicas
+
+`spec.replicas` is an optional field that specifies the number of desired Instances/Replicas of the Memcached server. If you do not specify `.spec.replicas`, then it defaults to `1`.
+
+KubeDB uses `PodDisruptionBudget` to ensure that the majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [MemcachedVersion](/docs/v2024.1.31/guides/memcached/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `MemcachedVersion` resources,
+
+- `1.5.4`, `1.6.22`, `1.5`, `1.5-v1`
+
+### spec.monitor
+
+Memcached managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,
+
+- [Monitor Memcached with builtin Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus)
+- [Monitor Memcached with Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator)
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for Memcached. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). So you can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk` etc. To learn more about how to use a custom configuration file, see [here](/docs/v2024.1.31/guides/memcached/configuration/using-config-file).
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the Deployment created for the Memcached server.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata
+  - annotations (pod's annotation)
+- controller
+  - annotations (deployment's annotation)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+Uses of some fields of `spec.podTemplate` are described below,
+
+#### spec.podTemplate.spec.args
+
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database installation.
+
+#### spec.podTemplate.spec.env
+
+`spec.env` is an optional field that specifies the environment variables to pass to the Memcached docker image.
+
+Note that KubeDB does not allow updating the environment variables. If you try to update them, the KubeDB operator will reject the request with the following error,
+
+```ini
+Error from server (BadRequest): error when applying patch:
+...
+for: "./mc.yaml": admission webhook "memcached.validators.kubedb.com" denied the request: precondition failed for:
+...
+At least one of the following was changed:
+    apiVersion
+    kind
+    name
+    namespace
+    spec.podTemplate.spec.nodeSelector
+    spec.podTemplate.spec.env
+```
+
+#### spec.podTemplate.spec.imagePullSecrets
+
+`KubeDB` provides the flexibility of deploying a Memcached server from a private Docker registry. To learn how to deploy Memcached from a private registry, please visit [here](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry).
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+`serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.
+
+If this field is left empty, the KubeDB operator will create a service account with a name matching the Memcached crd's name. A Role and a RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and a RoleBinding that provide the necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/memcached/custom-rbac/using-custom-rbac) to grant necessary permissions in this scenario.
+
+#### spec.podTemplate.spec.resources
+
+`spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplate
+
+You can also provide a template for the services created by the KubeDB operator for the Memcached server through `spec.serviceTemplate`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to be set in `spec.serviceTemplate`:
+
+- metadata:
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Memcached` crd or which resources KubeDB should keep or delete when you delete the `Memcached` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the Memcached crd for different termination policies,
+
+| Behavior                   | DoNotTerminate | Halt | Delete | WipeOut |
+| -------------------------- | :------------: | :--: | :----: | :-----: |
+| 1. Block Delete operation  |       ✓        |  ✗   |   ✗    |    ✗    |
+| 2. Create Dormant Database |       ✗        |  ✓   |   ✗    |    ✗    |
+| 3. Delete StatefulSet      |       ✗        |  ✓   |   ✓    |    ✓    |
+| 4. Delete Services         |       ✗        |  ✓   |   ✓    |    ✓    |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
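+
+Because `DoNotTerminate` blocks deletion at the admission webhook, the usual workflow is to patch the policy first and then delete the object; the cleanup sections of these guides use the same pattern. For example, for the `mc1` object shown above:
+
+```bash
+# Relax the termination policy so the object and its resources can be removed,
+# then delete the Memcached object itself.
+$ kubectl patch -n demo mc/mc1 -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo mc/mc1
+```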
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Memcached server [here](/docs/v2024.1.31/guides/memcached/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/configuration/_index.md b/content/docs/v2024.1.31/guides/memcached/configuration/_index.md
new file mode 100755
index 0000000000..b341e4385a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/configuration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Memcached with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: mc-configuration
+    name: Custom Configuration
+    parent: mc-memcached-guides
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/configuration/using-config-file.md b/content/docs/v2024.1.31/guides/memcached/configuration/using-config-file.md
new file mode 100644
index 0000000000..dda81ca118
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/configuration/using-config-file.md
@@ -0,0 +1,221 @@
+---
+title: Run Memcached with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: mc-using-config-file-configuration
+    name: Config File
+    parent: mc-configuration
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom Configuration File
+
+KubeDB supports providing custom configuration for Memcached. This tutorial will show you how to use KubeDB to run Memcached with custom configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+
+  $ kubectl get ns demo
+  NAME   STATUS   AGE
+  demo   Active   5s
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/memcached](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/memcached) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+Memcached does not allow configuration via a file. However, configuration parameters can be set as arguments while starting the memcached docker image. To keep similarity with other KubeDB supported databases which support configuration through a config file, KubeDB has added an additional executable script on top of the official memcached docker image. This script parses the configuration file and then sets the parameters as arguments of the memcached binary.
+
+To know more about configuring the Memcached server, see [here](https://github.com/memcached/memcached/wiki/ConfiguringServer).
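+
+Conceptually, the wrapper behaves like the tiny shell script below. This is only an illustrative sketch of the idea under the assumptions stated in this section (config mounted at `/usr/config/memcached.conf`, one flag per line), not the actual script shipped in the image:
+
+```bash
+#!/bin/sh
+# Illustrative only: read the mounted config, drop comment lines,
+# and pass every remaining flag to the memcached binary as arguments.
+ARGS=$(grep -v '^#' /usr/config/memcached.conf | tr '\n' ' ')
+exec memcached $ARGS
+```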
+
+At first, you have to create a config file named `memcached.conf` with your desired configuration. Then you have to put this file into a [volume](https://kubernetes.io/docs/concepts/storage/volumes/). You have to specify this volume in the `spec.configSecret` section while creating the Memcached crd. KubeDB will mount this volume into the `/usr/config` directory of the database pod.
+
+In this tutorial, we will configure [max_connections](https://github.com/memcached/memcached/blob/ee171109b3afe1f30ff053166d205768ce635342/doc/protocol.txt#L672) and [limit_maxbytes](https://github.com/memcached/memcached/blob/ee171109b3afe1f30ff053166d205768ce635342/doc/protocol.txt#L720) via a custom config file. We will use a Secret as the volume source.
+
+**Configuration File Format:**
+KubeDB supports providing the `memcached.conf` file in the following formats,
+
+```ini
+# maximum simultaneous connections
+-c 500
+# maximum allowed memory for the database in MB.
+-m 128
+```
+
+or
+
+```ini
+# This is a comment line. It will be ignored.
+--conn-limit=500
+--memory-limit=128
+```
+
+or
+
+```ini
+# This is a comment line. It will be ignored.
+conn-limit = 500
+memory-limit = 128
+```
+
+## Custom Configuration
+
+At first, let's create a `memcached.conf` file setting the `max_connections` and `limit_maxbytes` parameters. The default value of `max_connections` is 1024, and of `limit_maxbytes` is 64MB (68157440 bytes).
+
+```bash
+$ cat <<EOF > memcached.conf
+-c 500
+# maximum allowed memory in MB
+-m 128
+EOF
+
+$ cat memcached.conf
+-c 500
+# maximum allowed memory in MB
+-m 128
+```
+
+> Note that the config file name must be `memcached.conf`.
+
+Now, create a Secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo mc-configuration --from-file=./memcached.conf
+secret/mc-configuration created
+```
+
+Verify that the Secret has the configuration file.
+
+```yaml
+$ kubectl get secrets -n demo mc-configuration -o yaml
+apiVersion: v1
+stringData:
+  memcached.conf: |
+    -c 500
+    # maximum allowed memory in MB
+    -m 128
+kind: Secret
+metadata:
+  creationTimestamp: 2018-10-04T05:29:37Z
+  name: mc-configuration
+  namespace: demo
+  resourceVersion: "4505"
+  selfLink: /api/v1/namespaces/demo/secrets/mc-configuration
+  uid: 7c38b5fd-c796-11e8-bb11-0800272ad446
+```
+
+Now, create the Memcached crd specifying the `spec.configSecret` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/configuration/mc-custom.yaml
+memcached.kubedb.com/custom-memcached created
+```
+
+Below is the YAML for the Memcached crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: custom-memcached
+  namespace: demo
+spec:
+  replicas: 1
+  version: "1.6.22"
+  configSecret:
+    name: mc-configuration
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary deployment, services, etc. If everything goes well, we will see that a deployment with the name `custom-memcached` has been created.
+
+Check that the pods for the deployment are running:
+
+```bash
+$ kubectl get pods -n demo
+NAME                                READY   STATUS    RESTARTS   AGE
+custom-memcached-747b866f4b-j6clt   1/1     Running   0          5m
+```
+
+Now, we will check if the database has started with the custom configuration we have provided. We will use the [stats](https://github.com/memcached/memcached/wiki/ConfiguringServer#inspecting-running-configuration) command to check the configuration.
+
+We will connect to the `custom-memcached-747b866f4b-j6clt` pod from the local machine using port-forwarding.
+
+```bash
+$ kubectl port-forward -n demo custom-memcached-747b866f4b-j6clt 11211
+Forwarding from 127.0.0.1:11211 -> 11211
+Forwarding from [::1]:11211 -> 11211
+```
+
+Now, connect to the memcached server from a different terminal through `telnet`.
+
+```bash
+$ telnet 127.0.0.1 11211
+Trying 127.0.0.1...
+Connected to 127.0.0.1.
+Escape character is '^]'.
+stats
+...
+STAT max_connections 500
+...
+STAT limit_maxbytes 134217728
+...
+END
+```
+
+Here, `limit_maxbytes` is represented in bytes.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mc/custom-memcached -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mc/custom-memcached
+
+kubectl patch -n demo drmn/custom-memcached -p '{"spec":{"wipeOut":true}}' --type="merge"
+kubectl delete -n demo drmn/custom-memcached
+
+kubectl delete -n demo secret mc-configuration
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Memcached server [here](/docs/v2024.1.31/guides/memcached/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/memcached/custom-rbac/_index.md
new file mode 100755
index 0000000000..3b38578ae7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/custom-rbac/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Memcached with Custom RBAC resources
+menu:
+  docs_v2024.1.31:
+    identifier: mc-custom-rbac
+    name: Custom RBAC
+    parent: mc-memcached-guides
+    weight: 31
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/custom-rbac/using-custom-rbac.md b/content/docs/v2024.1.31/guides/memcached/custom-rbac/using-custom-rbac.md
new file mode 100644
index 0000000000..f949ea0fe4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/custom-rbac/using-custom-rbac.md
@@ -0,0 +1,271 @@
+---
+title: Run Memcached with Custom RBAC resources
+menu:
+  docs_v2024.1.31:
+    identifier: mc-custom-rbac-quickstart
+    name: Custom RBAC
+    parent: mc-custom-rbac
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom RBAC resources
+
+KubeDB (version 0.13.0 and higher) supports finer user control over the role-based access permissions provided to a Memcached instance. This tutorial will show you how to use KubeDB to run a Memcached database with custom RBAC resources.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/memcached](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/memcached) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for Memcached. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in the Memcached crd. If this field is left empty, the KubeDB operator will create a service account with a name matching the Memcached crd's name. A Role and a RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and a RoleBinding that provide the necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually.
+
+This guide will show you how to create a custom `Service Account`, `Role`, and `RoleBinding` for a Memcached instance named `quick-memcached` to provide the bare minimum access permissions.
+
+## Custom RBAC for Memcached
+
+At first, let's create a `Service Account` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo my-custom-serviceaccount
+serviceaccount/my-custom-serviceaccount created
+```
+
+It should create a service account.
+
+```yaml
+$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2019-05-30T04:23:39Z"
+  name: my-custom-serviceaccount
+  namespace: demo
+  resourceVersion: "21657"
+  selfLink: /api/v1/namespaces/demo/serviceaccounts/myserviceaccount
+  uid: b2ec2b05-8292-11e9-8d10-080027a8b217
+secrets:
+- name: myserviceaccount-token-t8zxd
+```
+
+Now, we need to create a role that has the necessary access permissions for the Memcached instance named `quick-memcached`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/custom-rbac/mc-custom-role.yaml
+role.rbac.authorization.k8s.io/my-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-custom-role
+  namespace: demo
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - memcached-db
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
```
+
+This permission is required for Memcached pods running on PSP-enabled clusters.
+
+Now create a `RoleBinding` to bind this `Role` with the already created service account.
+
+```bash
+$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
+rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created
+```
+
+It should bind `my-custom-role` and `my-custom-serviceaccount` successfully.
+
+```yaml
+$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: my-custom-rolebinding
+  namespace: demo
+  resourceVersion: "1405"
+  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/rolebindings/my-custom-rolebinding
+  uid: 123afc02-8297-11e9-8d10-080027a8b217
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: my-custom-role
+subjects:
+- kind: ServiceAccount
+  name: my-custom-serviceaccount
+  namespace: demo
+```
+
+Now, create a Memcached crd setting the `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/custom-rbac/mc-custom-db.yaml
+memcached.kubedb.com/quick-memcached created
+```
+
+Below is the YAML for the Memcached crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: quick-memcached
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary deployment, services, secrets, etc. If everything goes well, we should see that the pods of the `quick-memcached` Deployment are running.
+
+Check that the deployment's pods are running:
+
+```bash
+$ kubectl get pods -n demo
+NAME                              READY   STATUS    RESTARTS   AGE
+quick-memcached-d866d6d89-sdlkx   1/1     Running   0          5m52s
+quick-memcached-d866d6d89-wpdz2   1/1     Running   0          5m52s
+quick-memcached-d866d6d89-wvg7c   1/1     Running   0          5m52s
+
+$ kubectl get pod -n demo quick-memcached-d866d6d89-wvg7c
+NAME                              READY   STATUS    RESTARTS   AGE
+quick-memcached-d866d6d89-wvg7c   1/1     Running   0          14m
+```
+
+## Reusing Service Account
+
+An existing service account can be reused in another Memcached instance. No new access permission is required to run the new Memcached instance.
+
+Now, create a Memcached crd `minute-memcached` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/custom-rbac/mc-custom-db-two.yaml
+memcached.kubedb.com/minute-memcached created
+```
+
+Below is the YAML for the Memcached crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: minute-memcached
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary deployment, services, secrets, etc. If everything goes well, we should see that the pods of the `minute-memcached` Deployment are running.
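+
+You can also confirm that the new pods actually run under the reused service account (an optional check; the label selector assumes the `app.kubernetes.io/instance` label that KubeDB sets on managed resources, as seen in the monitoring guides):
+
+```bash
+# Print the serviceAccountName of the first minute-memcached pod
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=minute-memcached \
+    -o jsonpath='{.items[0].spec.serviceAccountName}'
+my-custom-serviceaccount
+```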
+
+Check that the deployment's pods are running:
+
+```bash
+$ kubectl get pods -n demo
+NAME                               READY   STATUS    RESTARTS   AGE
+minute-memcached-58798985f-47tm8   1/1     Running   0          5m52s
+minute-memcached-58798985f-9kx2v   1/1     Running   0          5m52s
+minute-memcached-58798985f-qp5dz   1/1     Running   0          5m52s
+
+$ kubectl get pod -n demo minute-memcached-58798985f-47tm8
+NAME                               READY   STATUS    RESTARTS   AGE
+minute-memcached-58798985f-47tm8   1/1     Running   0          14m
+```
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mc/quick-memcached -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mc/quick-memcached
+
+kubectl patch -n demo mc/minute-memcached -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mc/minute-memcached
+
+kubectl delete -n demo role my-custom-role
+kubectl delete -n demo rolebinding my-custom-rolebinding
+
+kubectl delete sa -n demo my-custom-serviceaccount
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- [Quickstart Memcached](/docs/v2024.1.31/guides/memcached/quickstart/quickstart) with KubeDB Operator.
+- Monitor your Memcached database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
+- Monitor your Memcached database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry) to deploy Memcached with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/memcached/cli/cli) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [Memcached object](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
+
diff --git a/content/docs/v2024.1.31/guides/memcached/monitoring/_index.md b/content/docs/v2024.1.31/guides/memcached/monitoring/_index.md
new file mode 100755
index 0000000000..c77fa16907
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/monitoring/_index.md
@@ -0,0 +1,22 @@
+---
+title: Monitoring Memcached
+menu:
+  docs_v2024.1.31:
+    identifier: mc-monitoring-memcached
+    name: Monitoring
+    parent: mc-memcached-guides
+    weight: 50
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/monitoring/overview.md b/content/docs/v2024.1.31/guides/memcached/monitoring/overview.md
new file mode 100644
index 0000000000..de6d1592f4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/monitoring/overview.md
@@ -0,0 +1,117 @@
+---
+title: Memcached Monitoring Overview
+description: Memcached Monitoring Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mc-monitoring-overview
+    name: Overview
+    parent: mc-monitoring-memcached
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring Memcached with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+*Figure: Database Monitoring Flow*
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                               |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                  |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                    |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                           |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                     |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                             |
+
+## Sample Configuration
+
+A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: databases
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  configSecret: # configure Redis to use password for authentication
+    name: redis-config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+        - --redis.password=$(REDIS_PASSWORD)
+        env:
+        - name: REDIS_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: _name_of_secret_with_redis_password
+              key: password # key with the password
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Assume that the above Redis server is configured to use basic authentication. So, the exporter image also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
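+
+Since this overview lives in the Memcached guides, here is what the same `spec.monitor` section looks like on a Memcached crd (a minimal sketch; the name `sample-memcached` and the `demo` namespace are placeholders, and the `release: prometheus` label assumes a Prometheus server configured like the sample above):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: sample-memcached
+  namespace: demo
+spec:
+  replicas: 1
+  version: "1.6.22"
+  terminationPolicy: WipeOut
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Memcached's exporter does not need database credentials, so no `exporter.args` or `exporter.env` entries are required here.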
+ +## Next Steps + +- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator). diff --git a/content/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..f21f238a19 --- /dev/null +++ b/content/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus.md @@ -0,0 +1,361 @@ +--- +title: Monitor Memcached using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: mc-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: mc-monitoring-memcached + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Memcached with builtin Prometheus + +This tutorial will show you how to monitor Memcached server using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/memcached/monitoring/overview). 
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/memcached](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/memcached) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Memcached server with Monitoring Enabled
+
+At first, let's deploy a Memcached server with monitoring enabled. Below is the Memcached object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: builtin-prom-memcd
+  namespace: demo
+spec:
+  replicas: 1
+  version: "1.6.22"
+  terminationPolicy: WipeOut
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  monitor:
+    agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the Memcached crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/monitoring/builtin-prom-memcd.yaml
+memcached.kubedb.com/builtin-prom-memcd created
+```
+
+Now, wait for the database to go into `Running` state.
+
+```bash
+$ kubectl get mc -n demo builtin-prom-memcd
+NAME                 VERSION   STATUS    AGE
+builtin-prom-memcd   1.6.22    Running   1m
+```
+
+KubeDB will create a separate stats service with name `{Memcached crd name}-stats` for monitoring purpose.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-memcd"
+NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-memcd         ClusterIP   10.105.40.31    <none>        11211/TCP   2m6s
+builtin-prom-memcd-stats   ClusterIP   10.110.89.251   <none>        56790/TCP   94s
+```
+
+Here, the `builtin-prom-memcd-stats` service has been created for monitoring purpose. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-memcd-stats
+Name:              builtin-prom-memcd-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=memcacheds.kubedb.com
+                   app.kubernetes.io/instance=builtin-prom-memcd
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/name=memcacheds.kubedb.com,app.kubernetes.io/instance=builtin-prom-memcd
+Type:              ClusterIP
+IP:                10.110.89.251
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.14:56790,172.17.0.7:56790,172.17.0.8:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
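+
+Before wiring up Prometheus, you can optionally verify that the exporter really serves metrics through this stats service (a quick sanity check; run the two commands in separate terminals):
+
+```bash
+# Forward the stats port of the service to the local machine
+$ kubectl port-forward -n demo svc/builtin-prom-memcd-stats 56790
+
+# In another terminal, fetch a few metrics from the exporter
+$ curl -s http://localhost:56790/metrics | head
+```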
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_config is used to filter only desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path: <path>" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+  # add the service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    action: replace
+    target_label: kubernetes_namespace
+  # add the service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    action: replace
+    target_label: kubernetes_name
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job in the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-8568c86d86-95zhn   1/1     Running   0          77s
+```
+
+Now, run the following command on a separate terminal to forward port 9090 of the `prometheus-8568c86d86-95zhn` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoints of the `builtin-prom-memcd-stats` service as targets.
+
+*Figure: Prometheus Target*
+
+Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `Memcached` server `builtin-prom-memcd` through the stats service `builtin-prom-memcd-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with collected metrics.
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+$ kubectl delete -n demo mc/builtin-prom-memcd
+
+$ kubectl delete -n monitoring deployment.apps/prometheus
+
+$ kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+$ kubectl delete -n monitoring serviceaccount/prometheus
+$ kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+$ kubectl delete ns demo
+$ kubectl delete ns monitoring
+```
+
+## Next Steps
+
+- Monitor your Memcached server with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
+- Use [private Docker registry](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry) to deploy Memcached with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..e37e9de3ec
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator.md
@@ -0,0 +1,289 @@
+---
+title: Monitor Memcached using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: mc-using-prometheus-operator-monitoring
+    name: Prometheus Operator
+    parent: mc-monitoring-memcached
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring Memcached Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes-native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor a Memcached server deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/memcached/monitoring/overview).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/memcached](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/memcached) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used to select a `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in the `spec.monitor.prometheus.labels` field of the Memcached crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE    NAME         AGE
+monitoring   prometheus   18m
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in the **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `monitoring` namespace.
+
+```yaml
+$ kubectl get prometheus -n monitoring prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"monitoring"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: 2019-01-03T13:41:51Z
+  generation: 1
+  labels:
+    prometheus: prometheus
+  name: prometheus
+  namespace: monitoring
+  resourceVersion: "44402"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus
+  uid: 5324ad98-0f5d-11e9-b230-080027f306f3
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, the `release: prometheus` label is used to select `ServiceMonitor` crds. So, we are going to use this label in the `spec.monitor.prometheus.labels` field of the Memcached crd.
+
+## Deploy Memcached with Monitoring Enabled
+
+At first, let's deploy a Memcached server with monitoring enabled. Below is the Memcached object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: coreos-prom-memcd
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  terminationPolicy: WipeOut
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database with a 10 second interval.
+
+Let's create the Memcached object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/monitoring/coreos-prom-memcd.yaml
+memcached.kubedb.com/coreos-prom-memcd created
+```
+
+Now, wait for the database to go into `Running` state.
+
+```bash
+$ kubectl get mc -n demo coreos-prom-memcd
+NAME                VERSION   STATUS    AGE
+coreos-prom-memcd   1.6.22    Running   19s
+```
+
+KubeDB will create a separate stats service with name `{Memcached crd name}-stats` for monitoring purpose.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-memcd"
+NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-memcd         ClusterIP   10.100.207.76   <none>        11211/TCP   41s
+coreos-prom-memcd-stats   ClusterIP   10.97.230.149   <none>        56790/TCP   38s
+```
+
+Here, the `coreos-prom-memcd-stats` service has been created for monitoring purpose.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo coreos-prom-memcd-stats
+Name:              coreos-prom-memcd-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=memcacheds.kubedb.com
+                   app.kubernetes.io/instance=coreos-prom-memcd
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=memcacheds.kubedb.com,app.kubernetes.io/instance=coreos-prom-memcd
+Type:              ClusterIP
+IP:                10.97.230.149
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.7:56790,172.17.0.8:56790,172.17.0.9:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `monitoring` namespace that selects the endpoints of the `coreos-prom-memcd-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n monitoring
+NAME                            AGE
+kubedb-demo-coreos-prom-memcd   1m
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the Memcached crd.
+
+```yaml
+$ kubectl get servicemonitor -n monitoring kubedb-demo-coreos-prom-memcd -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: 2019-01-03T15:13:46Z
+  generation: 1
+  labels:
+    release: prometheus
+    monitoring.appscode.com/service: coreos-prom-memcd-stats.demo
+  name: kubedb-demo-coreos-prom-memcd
+  namespace: monitoring
+  resourceVersion: "51236"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubedb-demo-coreos-prom-memcd
+  uid: 2aa57b5a-0f6a-11e9-b230-080027f306f3
+spec:
+  endpoints:
+  - honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: memcacheds.kubedb.com
+      app.kubernetes.io/instance: coreos-prom-memcd
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the Memcached crd.
+
+Also notice that the `ServiceMonitor` has a selector which matches the labels we have seen in the `coreos-prom-memcd-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
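+
+Since the `serviceMonitorSelector` we inspected earlier matches on the `release: prometheus` label, you can double-check that this `ServiceMonitor` will be picked up by listing ServiceMonitors with that label (an optional check; the list should include `kubedb-demo-coreos-prom-memcd`):
+
+```bash
+$ kubectl get servicemonitor -n monitoring -l release=prometheus
+```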
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          63m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command on a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `prom-http` endpoint of the `coreos-prom-memcd-stats` service as one of the targets.
+
+*Figure: Prometheus Target*
+
+Check the `endpoint` and `service` labels marked by the red rectangle. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with collected metrics.
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+# cleanup database
+kubectl delete -n demo mc/coreos-prom-memcd
+
+# cleanup prometheus resources
+kubectl delete -n monitoring prometheus prometheus
+kubectl delete -n monitoring clusterrolebinding prometheus
+kubectl delete -n monitoring clusterrole prometheus
+kubectl delete -n monitoring serviceaccount prometheus
+kubectl delete -n monitoring service prometheus-operated
+
+# cleanup prometheus operator resources
+kubectl delete -n monitoring deployment prometheus-operator
+kubectl delete -n monitoring serviceaccount prometheus-operator
+kubectl delete clusterrolebinding prometheus-operator
+kubectl delete clusterrole prometheus-operator
+
+# delete namespace
+kubectl delete ns monitoring
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your Memcached server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus).
+- Detail concepts of [Memcached object](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Use [private Docker registry](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry) to deploy Memcached with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/private-registry/_index.md b/content/docs/v2024.1.31/guides/memcached/private-registry/_index.md
new file mode 100755
index 0000000000..9bca9ae480
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/private-registry/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Memcached using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: mc-private-registry-memcached
+    name: Private Registry
+    parent: mc-memcached-guides
+    weight: 35
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry.md
new file mode 100644
index 0000000000..6e95f83d91
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry.md
@@ -0,0 +1,172 @@
+---
+title: Run Memcached using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: mc-using-private-registry-private-registry
+    name: Quickstart
+    parent: mc-private-registry-memcached
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using private Docker registry
+
+KubeDB operator supports using a private Docker registry. This tutorial will show you how to use KubeDB to run a Memcached server using private Docker images.
+
+## Before You Begin
+
+- Read [concept of Memcached Version Catalog](/docs/v2024.1.31/guides/memcached/concepts/catalog) to learn detail concepts of the `MemcachedVersion` object.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial we will use a private repository of [docker hub](https://hub.docker.com/).
+
+- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/r/kubedb/) into your private registry. For memcached, push the `DB_IMAGE` and `EXPORTER_IMAGE` of the following MemcachedVersions, where `deprecated` is not true, to your private registry.
+
+  ```bash
+  $ kubectl get memcachedversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image,DEPRECATED:.spec.deprecated
+  NAME     VERSION   DB_IMAGE                  EXPORTER_IMAGE                     DEPRECATED
+  1.5      1.5       kubedb/memcached:1.5      kubedb/operator:0.8.0              true
+  1.5-v1   1.5       kubedb/memcached:1.5-v1   kubedb/memcached-exporter:v0.4.1   <none>
+  1.5.4    1.5.4     kubedb/memcached:1.5.4    kubedb/operator:0.8.0              true
+  1.6.22   1.6.22    kubedb/memcached:1.6.22   kubedb/memcached-exporter:v0.4.1   <none>
+  ```
+
+  Docker hub repositories:
+
+  - [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+  - [kubedb/memcached](https://hub.docker.com/r/kubedb/memcached)
+  - [kubedb/memcached-exporter](https://hub.docker.com/r/kubedb/memcached-exporter)
+
+- Update the KubeDB catalog for the private Docker registry. Ex:
+
+  ```yaml
+  apiVersion: catalog.kubedb.com/v1alpha1
+  kind: MemcachedVersion
+  metadata:
+    name: 1.5.22
+  spec:
+    db:
+      image: PRIVATE_REGISTRY/memcached:1.5.22
+    exporter:
+      image: PRIVATE_REGISTRY/memcached-exporter:v0.4.1
+    podSecurityPolicies:
+      databasePolicyName: memcached-db
+    version: 1.5.22
+  ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Create ImagePullSecret
+
+An image pull secret is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, credentials for logging in, and the image name of your private Docker image.
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo myregistrykey \
+    --docker-server=DOCKER_REGISTRY_SERVER \
+    --docker-username=DOCKER_USER \
+    --docker-email=DOCKER_EMAIL \
+    --docker-password=DOCKER_PASSWORD
+secret/myregistrykey created
+```
+
+If you wish to follow other ways to pull private images see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
+
+NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
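+
+Equivalently, the same image pull secret can be managed declaratively instead of with `kubectl create secret` (a sketch; the base64 payload below is a placeholder for your own `~/.docker/config.json` content):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: myregistrykey
+  namespace: demo
+type: kubernetes.io/dockerconfigjson
+data:
+  # base64-encoded docker config.json that can log in to your registry (placeholder value)
+  .dockerconfigjson: eyJhdXRocyI6ey4uLn19
+```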
+
+## Install KubeDB operator
+
+When installing KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to appropriate values. Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in the cluster so that it points to the DOCKER_REGISTRY you wish to pull images from.
+
+## Deploy Memcached server from Private Registry
+
+While deploying `Memcached` from a private repository, you have to add the `myregistrykey` secret in the `Memcached` `spec.imagePullSecrets`.
+Below is the Memcached CRD object we will create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: memcd-pvt-reg
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+      imagePullSecrets:
+      - name: myregistrykey
+```
+
+Now run the command to deploy this `Memcached` object:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/private-registry/demo-2.yaml
+memcached.kubedb.com/memcd-pvt-reg created
+```
+
+To check if the images were pulled successfully from the repository, see if the `Memcached` is in running state:
+
+```bash
+$ kubectl get pods -n demo -w
+NAME                             READY   STATUS              RESTARTS   AGE
+memcd-pvt-reg-694d4d44df-bwtk8   0/1     ContainerCreating   0          18s
+memcd-pvt-reg-694d4d44df-tkqc4   0/1     ContainerCreating   0          17s
+memcd-pvt-reg-694d4d44df-zhj4l   0/1     ContainerCreating   0          17s
+memcd-pvt-reg-694d4d44df-bwtk8   1/1     Running             0          25s
+memcd-pvt-reg-694d4d44df-zhj4l   1/1     Running             0          26s
+memcd-pvt-reg-694d4d44df-tkqc4   1/1     Running             0          27s
+
+$ kubectl get mc -n demo
+NAME            VERSION   STATUS    AGE
+memcd-pvt-reg   1.6.22    Running   59s
+```
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mc/memcd-pvt-reg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mc/memcd-pvt-reg
+
+kubectl patch -n demo drmn/memcd-pvt-reg -p '{"spec":{"wipeOut":true}}' --type="merge"
+kubectl delete -n demo drmn/memcd-pvt-reg
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your Memcached server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
+- Monitor your Memcached server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus).
+- Detail concepts of [Memcached object](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/memcached/quickstart/_index.md b/content/docs/v2024.1.31/guides/memcached/quickstart/_index.md
new file mode 100755
index 0000000000..3bc82666ca
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: Memcached Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: mc-quickstart-memcached
+    name: Quickstart
+    parent: mc-memcached-guides
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/memcached/quickstart/quickstart.md b/content/docs/v2024.1.31/guides/memcached/quickstart/quickstart.md
new file mode 100644
index 0000000000..ab75ecc861
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/memcached/quickstart/quickstart.md
@@ -0,0 +1,375 @@
+---
+title: Memcached Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: mc-quickstart-quickstart
+    name: Overview
+    parent: mc-quickstart-memcached
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Memcached QuickStart
+
+This tutorial will show you how to use KubeDB to run a Memcached server.
+
+*Figure: Memcached lifecycle*
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/memcached](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/memcached) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+
+$ kubectl get ns demo
+NAME   STATUS   AGE
+demo   Active   1s
+```
+
+## Find Available MemcachedVersion
+
+When you install KubeDB, it creates a `MemcachedVersion` crd for every supported Memcached version. Check them by using the following command:
+
+```bash
+$ kubectl get memcachedversions
+NAME     VERSION   DB_IMAGE                  DEPRECATED   AGE
+1.5      1.5       kubedb/memcached:1.5      true         2h
+1.5-v1   1.5       kubedb/memcached:1.5-v1                2h
+1.5.4    1.5.4     kubedb/memcached:1.5.4    true         2h
+1.6.22   1.6.22    kubedb/memcached:1.6.22                2h
+```
+
+## Create a Memcached server
+
+KubeDB implements a `Memcached` CRD to define the specification of a Memcached server. Below is the `Memcached` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  name: memcd-quickstart
+  namespace: demo
+spec:
+  replicas: 3
+  version: "1.6.22"
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  terminationPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/quickstart/demo-1.yaml
+memcached.kubedb.com/memcd-quickstart created
+```
+
+Here,
+
+- `spec.replicas` is an optional field that specifies the number of desired Instances/Replicas of the Memcached server. It defaults to 1.
+- `spec.version` is the version of the Memcached server. In this tutorial, a Memcached 1.6.22 database is going to be created.
+- `spec.podTemplate.spec.resources` is an optional field that specifies how much CPU and memory (RAM) each Container needs. To learn details about Managing Compute Resources for Containers, please visit [here](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/).
+- `spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Memcached` crd, or to decide which resources KubeDB should keep or delete when you delete the `Memcached` crd. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`. Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/memcached/concepts/memcached#specterminationpolicy)
+
+KubeDB operator watches for `Memcached` objects using the Kubernetes api. When a `Memcached` object is created, KubeDB operator will create a new Deployment and a ClusterIP Service with the matching Memcached object name.
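+
+You can watch both objects appear using the instance label that KubeDB sets on the resources it manages (the label value below assumes the `memcd-quickstart` name used in this tutorial):
+
+```bash
+$ kubectl get deployment,svc -n demo -l app.kubernetes.io/instance=memcd-quickstart
+```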
+
+```bash
+$ kubectl get mc -n demo
+NAME               VERSION   STATUS    AGE
+memcd-quickstart   1.6.22    Running   2m
+
+$ kubectl dba describe mc -n demo memcd-quickstart
+Name:               memcd-quickstart
+Namespace:          demo
+CreationTimestamp:  Wed, 03 Oct 2018 15:40:38 +0600
+Labels:             <none>
+Annotations:        <none>
+Replicas:           3  total
+Status:             Running
+
+Deployment:
+  Name:               memcd-quickstart
+  CreationTimestamp:  Wed, 03 Oct 2018 15:40:40 +0600
+  Labels:             app.kubernetes.io/name=memcacheds.kubedb.com
+                      app.kubernetes.io/instance=memcd-quickstart
+  Annotations:        deployment.kubernetes.io/revision=1
+  Replicas:           3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+  Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         memcd-quickstart
+  Labels:       app.kubernetes.io/name=memcacheds.kubedb.com
+                app.kubernetes.io/instance=memcd-quickstart
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.111.81.177
+  Port:         db  11211/TCP
+  TargetPort:   db/TCP
+  Endpoints:    172.17.0.4:11211,172.17.0.14:11211,172.17.0.6:11211
+
+No Snapshots.
+
+Events:
+  Type    Reason      Age   From                Message
+  ----    ------      ----  ----                -------
+  Normal  Successful  2m    Memcached operator  Successfully created Service
+  Normal  Successful  1m    Memcached operator  Successfully created StatefulSet
+  Normal  Successful  1m    Memcached operator  Successfully created Memcached
+  Normal  Successful  1m    Memcached operator  Successfully patched StatefulSet
+  Normal  Successful  1m    Memcached operator  Successfully patched Memcached
+```
+
+KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. Run the following command to see the modified Memcached object:
+
+```yaml
+$ kubectl get mc -n demo memcd-quickstart -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Memcached
+metadata:
+  creationTimestamp: 2018-10-03T09:40:38Z
+  finalizers:
+  - kubedb.com
+  generation: 1
+  name: memcd-quickstart
+  namespace: demo
+  resourceVersion: "23592"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/memcacheds/memcd-quickstart
+  uid: 62b08ec3-c6f0-11e8-8ebc-0800275bbbee
+spec:
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 128Mi
+        requests:
+          cpu: 250m
+          memory: 64Mi
+  replicas: 3
+  terminationPolicy: Delete
+  version: 1.6.22
+status:
+  observedGeneration: 1$4210395375389091791
+  phase: Running
+```
+
+Now, you can connect to this Memcached cluster using `telnet`.
+Here, we will connect to the Memcached server from the local machine through port-forwarding.
+
+```bash
+$ kubectl get pods -n demo
+NAME                                READY   STATUS    RESTARTS   AGE
+memcd-quickstart-57d88d6595-gfptm   1/1     Running   0          3m
+memcd-quickstart-57d88d6595-wmp5p   1/1     Running   0          3m
+memcd-quickstart-57d88d6595-xf4z2   1/1     Running   0          3m
+
+// We will connect to the `memcd-quickstart-57d88d6595-gfptm` pod from the local machine using port-forwarding.
+$ kubectl port-forward -n demo memcd-quickstart-57d88d6595-gfptm 11211
+Forwarding from 127.0.0.1:11211 -> 11211
+
+# Connect to the Memcached cluster from the local machine through telnet.
+~ $ telnet 127.0.0.1 11211
+Trying 127.0.0.1...
+Connected to 127.0.0.1.
+
+# Save data Command:
+set my_key 0 2592000 1
+2
+# Output:
+STORED
+
+# Meaning:
+# 0       => no flags
+# 2592000 => TTL (Time-To-Live) in [s]
+# 1       => size in byte
+# 2       => value
+
+# View data command
+get my_key
+# Output
+VALUE my_key 0 1
+2
+END
+
+# Exit
+quit
+```
+
+## DoNotTerminate Property
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete mc memcd-quickstart -n demo
+Error from server (BadRequest): admission webhook "memcached.validators.kubedb.com" denied the request: memcached "memcd-quickstart" can't be halted. To delete, change spec.terminationPolicy
+```
+
+Now, run `kubectl edit mc memcd-quickstart -n demo` to set `spec.terminationPolicy` to `Halt` (which creates a `dormantdatabase` when the memcached object is deleted and keeps PVCs, snapshots and Secrets intact) or remove this field (which defaults to `Halt`). Then you will be able to delete/halt the database.
+
+Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/memcached/concepts/memcached#specterminationpolicy)
+
+## Halt Database
+
+When [TerminationPolicy](/docs/v2024.1.31/guides/memcached/concepts/memcached#specterminationpolicy) is set to `Halt`, it will halt the Memcached server instead of deleting it. Here, when you delete the Memcached object, the KubeDB operator will delete the Deployment and its pods. In KubeDB parlance, we say that the `memcd-quickstart` Memcached server has entered into a dormant state. This is represented by the KubeDB operator by creating a matching DormantDatabase object.
+
+```bash
+$ kubectl delete mc memcd-quickstart -n demo
+memcached.kubedb.com "memcd-quickstart" deleted
+
+$ kubectl get drmn -n demo memcd-quickstart
+NAME               STATUS    AGE
+memcd-quickstart   Pausing   21s
+
+$ kubectl get drmn -n demo memcd-quickstart
+NAME               STATUS   AGE
+memcd-quickstart   Halted   2m
+```
+
+```yaml
+$ kubectl get drmn -n demo memcd-quickstart -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: DormantDatabase
+metadata:
+  creationTimestamp: 2018-10-03T09:49:16Z
+  finalizers:
+  - kubedb.com
+  generation: 1
+  labels:
+    app.kubernetes.io/name: memcacheds.kubedb.com
+  name: memcd-quickstart
+  namespace: demo
+  resourceVersion: "24242"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/dormantdatabases/memcd-quickstart
+  uid: 97ad28ef-c6f1-11e8-8ebc-0800275bbbee
+spec:
+  origin:
+    metadata:
+      creationTimestamp: 2018-10-03T09:40:38Z
+      name: memcd-quickstart
+      namespace: demo
+    spec:
+      memcached:
+        podTemplate:
+          controller: {}
+          metadata: {}
+          spec:
+            resources:
+              limits:
+                cpu: 500m
+                memory: 128Mi
+              requests:
+                cpu: 250m
+                memory: 64Mi
+        replicas: 3
+        terminationPolicy: Halt
+        version: 1.6.22
+status:
+  observedGeneration: 1$7678503742307285743
+  pausingTime: 2018-10-03T09:50:10Z
+  phase: Halted
+```
+
+Here,
+
+- `spec.origin` is the spec of the original Memcached object.
+- `status.phase` points to the current database state `Halted`.
+
+## Resume Dormant Database
+
+To resume the database from the dormant state, create the same `Memcached` object with the same spec.
+
+In this tutorial, the dormant database can be resumed by creating the `Memcached` database using the demo-1.yaml file.
+
+The below command resumes the dormant database `memcd-quickstart`.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/memcached/quickstart/demo-1.yaml
+memcached.kubedb.com/memcd-quickstart created
+```
+
+## Wipeout Dormant Database
+
+You can wipe out a DormantDatabase while deleting the object by setting `spec.wipeOut` to true. The KubeDB operator will delete any relevant resources of this `Memcached` database.
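+
+You can set this field either by editing the `DormantDatabase` object, as shown in the next snippet, or with a one-line merge patch (the same command the *Cleaning up* section below uses):
+
+```bash
+$ kubectl patch -n demo drmn/memcd-quickstart -p '{"spec":{"wipeOut":true}}' --type="merge"
+```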
+
+```yaml
+$ kubectl delete mc memcd-quickstart -n demo
+memcached "memcd-quickstart" deleted
+
+$ kubectl edit drmn -n demo memcd-quickstart
+apiVersion: kubedb.com/v1alpha2
+kind: DormantDatabase
+metadata:
+  name: memcd-quickstart
+  namespace: demo
+  ...
+spec:
+  wipeOut: true
+  ...
+status:
+  phase: Halted
+  ...
+```
+
+If `spec.wipeOut` is not set to true while deleting the `dormantdatabase` object, then only this object will be deleted and the `kubedb-operator` won't delete related Secrets.
+
+## Delete DormantDatabase
+
+As discussed above, a `DormantDatabase` can be deleted with or without wiping out the resources. To delete the `dormantdatabase`, run:
+
+```bash
+$ kubectl delete drmn memcd-quickstart -n demo
+dormantdatabase "memcd-quickstart" deleted
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mc/memcd-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mc/memcd-quickstart
+
+kubectl patch -n demo drmn/memcd-quickstart -p '{"spec":{"wipeOut":true}}' --type="merge"
+kubectl delete -n demo drmn/memcd-quickstart
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your Memcached server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
+- Monitor your Memcached server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/memcached/private-registry/using-private-registry) to deploy Memcached with KubeDB.
+- Detail concepts of [Memcached object](/docs/v2024.1.31/guides/memcached/concepts/memcached).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/README.md b/content/docs/v2024.1.31/guides/mongodb/README.md
new file mode 100644
index 0000000000..ff01c896cc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/README.md
@@ -0,0 +1,77 @@
+---
+title: MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-readme-mongodb
+    name: MongoDB
+    parent: mg-mongodb-guides
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+url: /docs/v2024.1.31/guides/mongodb/
+aliases:
+- /docs/v2024.1.31/guides/mongodb/README/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+## Supported MongoDB Features
+
+| Features                                                                          | Community | Enterprise |
+|-----------------------------------------------------------------------------------|:---------:|:----------:|
+| Clustering - Sharding                                                             |     ✓     |     ✓      |
+| Clustering - Replication                                                          |     ✓     |     ✓      |
+| Custom Configuration                                                              |     ✓     |     ✓      |
+| Using Custom Docker Image                                                         |     ✓     |     ✓      |
+| Initialization From Script (\*.js and/or \*.sh)                                   |     ✓     |     ✓      |
+| Initializing from Snapshot ([Stash](https://stash.run/))                          |     ✓     |     ✓      |
+| Authentication & Authorization                                                    |     ✓     |     ✓      |
+| Arbiter support                                                                   |     ✓     |     ✓      |
+| Persistent Volume                                                                 |     ✓     |     ✓      |
+| Instant Backup                                                                    |     ✓     |     ✓      |
+| Scheduled Backup                                                                  |     ✓     |     ✓      |
+| Builtin Prometheus Discovery                                                      |     ✓     |     ✓      |
+| Using Prometheus operator                                                         |     ✓     |     ✓      |
+| Automated Version Update                                                          |     ✗     |     ✓      |
+| Automatic Vertical Scaling                                                        |     ✗     |     ✓      |
+| Automated Horizontal Scaling                                                      |     ✗     |     ✓      |
+| Automated Database Reconfiguration                                                |     ✗     |     ✓      |
+| TLS: Add, Remove, Update, Rotate ([Cert Manager](https://cert-manager.io/docs/))  |     ✗     |     ✓      |
+| Automated Reprovision                                                             |     ✗     |     ✓      |
+| Automated Volume Expansion                                                        |     ✗     |     ✓      |
+| Autoscaling (vertically)                                                          |     ✗     |     ✓      |
+
+## Life Cycle of a MongoDB Object
+

+  *Figure: life cycle of a MongoDB object*

+ +## User Guide + +- [Quickstart MongoDB](/docs/v2024.1.31/guides/mongodb/quickstart/quickstart) with KubeDB Operator. +- [MongoDB Replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) with KubeDB Operator. +- [MongoDB Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding) with KubeDB Operator. +- [Backup & Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Start [MongoDB with Custom Config](/docs/v2024.1.31/guides/mongodb/configuration/using-config-file). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/_index.md b/content/docs/v2024.1.31/guides/mongodb/_index.md new file mode 100644 index 0000000000..6cce3aed1b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB +menu: + docs_v2024.1.31: + identifier: mg-mongodb-guides + name: MongoDB + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/arbiter/_index.md b/content/docs/v2024.1.31/guides/mongodb/arbiter/_index.md new file mode 100644 index 0000000000..542e63489c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/arbiter/_index.md @@ -0,0 +1,22 @@ +--- +title: Run mongodb with Arbiter +menu: + docs_v2024.1.31: + identifier: mg-arbiter + name: Arbiter + parent: mg-mongodb-guides + weight: 27 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/arbiter/concept.md b/content/docs/v2024.1.31/guides/mongodb/arbiter/concept.md new file mode 100644 index 0000000000..6155908897 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/arbiter/concept.md @@ -0,0 +1,85 @@ +--- +title: MongoDB Arbiter Concept +menu: + docs_v2024.1.31: + identifier: mg-arbiter-concept + name: Concept + parent: mg-arbiter + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? 
Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Arbiter
+
+An arbiter is a member of a MongoDB replica set that does not have a copy of the data set and cannot become a primary. In some circumstances (such as when you have a primary and a secondary, but cost constraints prohibit adding another secondary), you may choose to add an arbiter to your replica set. Replica sets may have arbiters to add a vote in elections for primary. Arbiters always have exactly 1 election vote, and thus allow replica sets to have an uneven number of voting members without the overhead of an additional member that replicates data. By default, an arbiter is a priority-0 member.
+
+For example, in the following replica set with two data-bearing members (the primary and a secondary), an arbiter allows the set to have an odd number of votes to break a tie:
+

+  *Figure: a replica set with two data-bearing members and an arbiter*
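+
+Outside of KubeDB, an arbiter is added to an existing replica set from the `mongo` shell with `rs.addArb()`. The snippet below is only a sketch to illustrate the concept: the hostname is a placeholder, and KubeDB performs the equivalent step for you when `spec.arbiter` is set.
+
+```bash
+# Hypothetical host name, shown for illustration only.
+rs0:PRIMARY> rs.addArb("mongo-arbiter.example.com:27017")
+{ "ok" : 1 }
+```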

+
+# Considerations
+
+There are some important considerations that database administrators should take care of when deploying MongoDB.
+
+## Priority
+
+Starting in MongoDB 3.6, arbiters have priority 0. When you update a replica set to MongoDB 3.6, if the existing configuration has an arbiter with priority 1, MongoDB 3.6 reconfigures the arbiter to have priority 0.
+
+> IMPORTANT: Do not run an arbiter on systems that also host the primary or the secondary members of the replica set. [[reference]](https://docs.mongodb.com/manual/core/replica-set-members/#arbiter).
+
+## Performance Issues
+
+If you are using a three-member primary-secondary-arbiter (PSA) architecture, consider the following:
+
+- The write concern "majority" can cause performance issues if a secondary is unavailable or lagging. See [Mitigate Performance Issues with PSA Replica Set](https://www.mongodb.com/docs/manual/tutorial/mitigate-psa-performance-issues/#std-label-performance-issues-psa) to mitigate these issues.
+
+- If you are using a global default "majority" and the write concern is less than the size of the majority, your queries may return stale (not fully replicated) data.
+
+## Concerns with multiple Arbiters
+
+Using multiple arbiters in the same replica set can cause data inconsistency. Multiple arbiters prevent the reliable use of the majority write concern. For more details on these concerns, read [this](https://www.mongodb.com/docs/manual/core/replica-set-arbiter/#concerns-with-multiple-arbiters).
+
+Taking this issue into account, KubeDB doesn't support deploying multiple arbiters in a single replica set.
+
+## Security
+
+As arbiters do not store data, they do not possess the internal table of user and role mappings used for authentication. Thus, when running with authorization, arbiters exchange credentials with other members of the set to authenticate.
+
+The [MongoDB docs](https://www.mongodb.com/docs/manual/core/replica-set-arbiter/#security) also suggest using TLS to avoid leaking unencrypted data when the arbiter communicates with other replica set members.
+
+## Protocol version
+
+For replica sets, the write concern { w: 1 } only provides acknowledgement of write operations on the primary. Data may be rolled back if the primary steps down before the write operations have replicated to any of the secondaries. This type of behaviour is called a w:1 rollback.
+
+For the following MongoDB versions, pv1 (protocol version 1, the default starting in MongoDB 4.0) increases the likelihood of w:1 rollbacks compared to pv0 (no longer supported in MongoDB 4.0+) for replica sets with arbiters:
+
+i) MongoDB 3.4.1
+ii) MongoDB 3.4.0
+iii) MongoDB 3.2.11 or earlier
+
+## Next Steps
+
+- [Deploy MongoDB ReplicaSet with Arbiter](/docs/v2024.1.31/guides/mongodb/arbiter/replicaset) using KubeDB.
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
+
+NB: The images on this page are taken from the [MongoDB website](https://www.mongodb.com/docs/manual/core/replica-set-arbiter/#example).
diff --git a/content/docs/v2024.1.31/guides/mongodb/arbiter/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/arbiter/replicaset.md
new file mode 100644
index 0000000000..a43fb97d6d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/arbiter/replicaset.md
@@ -0,0 +1,845 @@
+---
+title: MongoDB ReplicaSet with Arbiter
+menu:
+  docs_v2024.1.31:
+    identifier: mg-arbiter-replicaset
+    name: ReplicaSet with Arbiter
+    parent: mg-arbiter
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB - MongoDB ReplicaSet with Arbiter
+
+This tutorial will show you how to use KubeDB to run a MongoDB ReplicaSet with an arbiter.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [mongodb arbiter concept](/docs/v2024.1.31/guides/mongodb/arbiter/concept) to understand the concept of the MongoDB Replica Set Arbiter.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MongoDB ReplicaSet with arbiter
+
+To deploy a MongoDB ReplicaSet, users have to specify the `spec.replicaSet` option in the `MongoDB` CRD.
+
+The following is an example of a `MongoDB` object which creates a MongoDB replica set of three members (two data-bearing members and one arbiter).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-arb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "rs0"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 500Mi
+  arbiter:
+    podTemplate: {}
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/arbiter/replicaset.yaml
+mongodb.kubedb.com/mongo-arb created
+```
+
+Here,
+
+- `spec.replicaSet` represents the configuration for the replica set.
+  - `name` denotes the name of the mongodb replica set.
+- `spec.keyFileSecret` (optional) is the name of a secret that contains a keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replica set. Only mongod instances with the correct keyfile can join the replica set. _Users can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate a `keyFileSecret` itself.
+- `spec.replicas` denotes the number of data-bearing members in the `rs0` mongodb replica set.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.arbiter` denotes the arbiter spec of the deployed MongoDB CRD. There are two fields under it: `configSecret` & `podTemplate`. `spec.arbiter.configSecret` is an optional field to provide a custom configuration file for the database (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, the default configuration file will be used. `spec.arbiter.podTemplate` holds the arbiter podSpec. A `null` value instructs the KubeDB operator to use the default arbiter podTemplate.
+
+The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create two new StatefulSets (one for the replicas & one for the arbiter) and a Service with the matching MongoDB object name. This service will always point to the primary of the replica set. The KubeDB operator will also create a governing service for the pods of those two StatefulSets with the name `<mongodb-name>-pods`.
+
+```bash
+$ kubectl dba describe mg -n demo mongo-arb
+Name:               mongo-arb
+Namespace:          demo
+CreationTimestamp:  Thu, 21 Apr 2022 14:39:32 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-arb","namespace":"demo"},"spec":{"arbiter":{"podTemplat...
+Replicas: 2 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 500Mi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: mongo-arb + CreationTimestamp: Thu, 21 Apr 2022 14:39:32 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-arb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + mongodb.kubedb.com/node.type=replica + Annotations: + Replicas: 824639168104 desired | 2 total + Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed + +StatefulSet: + Name: mongo-arb-arbiter + CreationTimestamp: Thu, 21 Apr 2022 14:40:21 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-arb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + mongodb.kubedb.com/node.type=arbiter + Annotations: + Replicas: 824645537528 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mongo-arb + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-arb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.148.184 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.3.23:27017 + +Service: + Name: mongo-arb-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-arb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.1.9:27017,10.244.2.18:27017,10.244.3.23:27017 + +Auth Secret: + Name: mongo-arb-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-arb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-arb","namespace":"demo"},"spec":{"arbiter":{"podTemplate":null},"replicaSet":{"name":"rs0"},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"4.4.26"}} + + Creation Timestamp: 2022-04-21T08:40:21Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mongo-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mongo-arb + Namespace: demo + Spec: + Client Config: + Service: + Name: mongo-arb + Port: 27017 + Scheme: mongodb + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: rs0/mongo-arb-0.mongo-arb-pods.demo.svc:27017,mongo-arb-1.mongo-arb-pods.demo.svc:27017,mongo-arb-arbiter-0.mongo-arb-pods.demo.svc:27017 + Stash: + Addon: + Backup Task: + Name: mongodb-backup-4.4.6 + Restore Task: + Name: mongodb-restore-4.4.6 + Secret: + Name: mongo-arb-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 1m Postgres operator Successfully created governing service + Normal Successful 1m Postgres operator Successfully created Primary Service + Normal Successful 1m Postgres 
operator Successfully created appbinding + + + +$ kubectl get statefulset -n demo +NAME READY AGE +mongo-arb 2/2 2m37s +mongo-arb-arbiter 1/1 108s + + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +datadir-mongo-arb-0 Bound pvc-93a2681f-096d-4af1-b1fb-93cd7b7b6020 500Mi RWO standard 2m57s +datadir-mongo-arb-1 Bound pvc-fb06ea3b-a9dd-4479-87b2-de73ca272718 500Mi RWO standard 2m35s +datadir-mongo-arb-arbiter-0 Bound pvc-169fd172-0e41-48e3-81a5-3abae4a85056 500Mi RWO standard 2m8s + + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-169fd172-0e41-48e3-81a5-3abae4a85056 500Mi RWO Delete Bound demo/datadir-mongo-arb-arbiter-0 standard 2m23s +pvc-93a2681f-096d-4af1-b1fb-93cd7b7b6020 500Mi RWO Delete Bound demo/datadir-mongo-arb-0 standard 3m11s +pvc-fb06ea3b-a9dd-4479-87b2-de73ca272718 500Mi RWO Delete Bound demo/datadir-mongo-arb-1 standard 2m50s + + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +mongo-arb ClusterIP 10.96.148.184 27017/TCP 3m32s +mongo-arb-pods ClusterIP None 27017/TCP 3m32s +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mongo-arb -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-arb","namespace":"demo"},"spec":{"arbiter":{"podTemplate":null},"replicaSet":{"name":"rs0"},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"4.4.26"}} + creationTimestamp: "2022-04-21T08:39:32Z" + finalizers: + - kubedb.com + generation: 3 + name: mongo-arb + namespace: demo + resourceVersion: "22168" + uid: c4a3dc69-5556-42b6-a2b8-11d3547015d3 +spec: + allowedSchemas: + namespaces: + from: Same + arbiter: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --quiet --eval \"db.adminCommand('ping').ok\" + ) -eq \"1\" ]]; then \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --quiet --eval \"db.adminCommand('ping').ok\" + ) -eq \"1\" ]]; then \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + authSecret: + name: mongo-arb-auth + clusterAuthMode: keyFile + coordinator: + resources: {} + keyFileSecret: + 
name: mongo-arb-key + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-arb + replicaSet: + name: rs0 + replicas: 2 + sslMode: disabled + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + storageClassName: standard + storageEngine: wiredTiger + storageType: Durable + terminationPolicy: WipeOut + version: 4.4.26 +status: + conditions: + - lastTransitionTime: "2022-04-21T08:39:32Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mongo-arb' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-04-21T08:40:42Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-04-21T08:39:56Z" + message: 'The MongoDB: demo/mongo-arb is accepting client requests.' + observedGeneration: 3 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-04-21T08:39:56Z" + message: 'The MongoDB: demo/mongo-arb is ready.' + observedGeneration: 3 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-04-21T08:40:21Z" + message: 'The MongoDB: demo/mongo-arb is successfully provisioned.' + observedGeneration: 3 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 3 + phase: Ready + +``` + +Please note that KubeDB operator has created a new Secret called `mongo-arb-auth` *(format: {mongodb-object-name}-auth)* for storing the password for `mongodb` superuser. This secret contains a `username` key which contains the *username* for MongoDB superuser and a `password` key which contains the *password* for MongoDB superuser. + +If you want to use custom or existing secret please specify that when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys containing data `username` and `password`. 
For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret).
+
+## Redundancy and Data Availability
+
+Now, you can connect to this database through the [mongo shell](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we will insert a document on the primary member, and we will see if the data becomes available on the secondary members.
+
+First, insert data inside the primary member `rs0:PRIMARY`.
+
+```bash
+$ kubectl get secrets -n demo mongo-arb-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mongo-arb-auth -o jsonpath='{.data.\password}' | base64 -d
+OX4yb!IFm;~yAHkD
+
+$ kubectl exec -it mongo-arb-0 -n demo bash
+
+mongodb@mongo-arb-0:/$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:PRIMARY> rs.status()
+{
+    "set" : "rs0",
+    "date" : ISODate("2022-04-21T08:46:28.786Z"),
+    "myState" : 1,
+    "term" : NumberLong(1),
+    "syncSourceHost" : "",
+    "syncSourceId" : -1,
+    "heartbeatIntervalMillis" : NumberLong(2000),
+    "majorityVoteCount" : 2,
+    "writeMajorityCount" : 2,
+    "votingMembersCount" : 3,
+    "writableVotingMembersCount" : 2,
+    "optimes" : {
+        "lastCommittedOpTime" : {
+            "ts" : Timestamp(1650530787, 1),
+            "t" : NumberLong(1)
+        },
+        "lastCommittedWallTime" : ISODate("2022-04-21T08:46:27.247Z"),
+        "readConcernMajorityOpTime" : {
+            "ts" : Timestamp(1650530787, 1),
+            "t" : NumberLong(1)
+        },
+        "readConcernMajorityWallTime" : ISODate("2022-04-21T08:46:27.247Z"),
+        "appliedOpTime" : {
+            "ts" : Timestamp(1650530787, 1),
+            "t" : NumberLong(1)
+        },
+        "durableOpTime" : {
+            "ts" : Timestamp(1650530787, 1),
+            "t" : NumberLong(1)
+        },
+        "lastAppliedWallTime" : ISODate("2022-04-21T08:46:27.247Z"),
+        "lastDurableWallTime" : ISODate("2022-04-21T08:46:27.247Z")
+    },
+    "lastStableRecoveryTimestamp" : Timestamp(1650530747, 1),
+    "electionCandidateMetrics" : {
+        "lastElectionReason" : "electionTimeout",
+        "lastElectionDate" : ISODate("2022-04-21T08:39:47.205Z"),
+        "electionTerm" : NumberLong(1),
+        "lastCommittedOpTimeAtElection" : {
+            "ts" : Timestamp(0, 0),
+            "t" : NumberLong(-1)
+        },
+        "lastSeenOpTimeAtElection" : {
+            "ts" : Timestamp(1650530387, 1),
+            "t" : NumberLong(-1)
+        },
+        "numVotesNeeded" : 1,
+        "priorityAtElection" : 1,
+        "electionTimeoutMillis" : NumberLong(10000),
+        "newTermStartDate" : ISODate("2022-04-21T08:39:47.221Z"),
+        "wMajorityWriteAvailabilityDate" : ISODate("2022-04-21T08:39:47.234Z")
+    },
+    "members" : [
+        {
+            "_id" : 0,
+            "name" : "mongo-arb-0.mongo-arb-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 1,
+            "stateStr" : "PRIMARY",
+            "uptime" : 412,
+            "optime" : {
+                "ts" : Timestamp(1650530787, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDate" : ISODate("2022-04-21T08:46:27Z"),
+            "syncSourceHost" : "",
+            "syncSourceId" : -1,
+            "infoMessage" : "",
+            "electionTime" : Timestamp(1650530387, 2),
+            "electionDate" : ISODate("2022-04-21T08:39:47Z"),
+            "configVersion" : 3,
+            "configTerm" : 1,
+            "self" : true,
+            "lastHeartbeatMessage" : ""
+        },
+        {
+            "_id" : 1,
+            "name" : "mongo-arb-1.mongo-arb-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 2,
+            "stateStr" : "SECONDARY",
+            "uptime" : 375,
+            "optime" : {
+                "ts" : Timestamp(1650530787, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDurable" : {
+                "ts" : Timestamp(1650530787, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDate" : ISODate("2022-04-21T08:46:27Z"),
+            "optimeDurableDate" : ISODate("2022-04-21T08:46:27Z"),
+            "lastHeartbeat" : ISODate("2022-04-21T08:46:27.456Z"),
+            "lastHeartbeatRecv" : ISODate("2022-04-21T08:46:27.591Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "mongo-arb-0.mongo-arb-pods.demo.svc.cluster.local:27017",
+            "syncSourceId" : 0,
+            "infoMessage" : "",
+            "configVersion" : 3,
+            "configTerm" : 1
+        },
+        {
+            "_id" : 2,
+            "name" : "mongo-arb-arbiter-0.mongo-arb-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 7,
+            "stateStr" : "ARBITER",
+            "uptime" : 353,
+            "lastHeartbeat" : ISODate("2022-04-21T08:46:27.450Z"),
+            "lastHeartbeatRecv" : ISODate("2022-04-21T08:46:27.607Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "",
+            "syncSourceId" : -1,
+            "infoMessage" : "",
+            "configVersion" : 3,
+            "configTerm" : 1
+        }
+    ],
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1650530787, 1),
+        "signature" : {
+            "hash" : BinData(0,"N6pWJaxVqaZch7cKLKWX8bdfkBM="),
+            "keyId" : NumberLong("7088974033219223556")
+        }
+    },
+    "operationTime" : Timestamp(1650530787, 1)
+}
+```
+
+Here you can see the arbiter pod in the members list of the `rs.status()` output.
+
+```bash
+rs0:PRIMARY> rs.isMaster().primary
+mongo-arb-0.mongo-arb-pods.demo.svc.cluster.local:27017
+
+rs0:PRIMARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+
+rs0:PRIMARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("af3c1344-d052-496a-bdbb-5bd41486d878"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ],
+    "mechanisms" : [
+        "SCRAM-SHA-1",
+        "SCRAM-SHA-256"
+    ]
+}
+
+rs0:PRIMARY> use mydb
+switched to db mydb
+rs0:PRIMARY> db.songs.insert({"pink floyd": "shine on you crazy diamond"})
+WriteResult({ "nInserted" : 1 })
+rs0:PRIMARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("62611ae33583279dfca0a5e4"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+
+rs0:PRIMARY> exit
+bye
+```
+
+Now, check the redundancy and data availability in the secondary members.
+We will exec into `mongo-arb-1` (which is a secondary member right now) to check the data availability.
+
+```bash
+$ kubectl exec -it mongo-arb-1 -n demo bash
+mongodb@mongo-arb-1:/$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:SECONDARY> rs.slaveOk()
+rs0:SECONDARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+mydb           0.000GB
+
+rs0:SECONDARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("af3c1344-d052-496a-bdbb-5bd41486d878"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ],
+    "mechanisms" : [
+        "SCRAM-SHA-1",
+        "SCRAM-SHA-256"
+    ]
+}
+
+rs0:SECONDARY> use mydb
+switched to db mydb
+
+rs0:SECONDARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("62611ae33583279dfca0a5e4"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+
+rs0:SECONDARY> exit
+bye
+```
+
+## Automatic Failover
+
+To test automatic failover, we will force the primary member to restart. As the primary member (pod) becomes unavailable, the remaining members will hold an election and elect a new primary.
+
+```bash
+$ kubectl get pods -n demo
+NAME                  READY   STATUS    RESTARTS   AGE
+mongo-arb-0           2/2     Running   0          15m
+mongo-arb-1           2/2     Running   0          14m
+mongo-arb-arbiter-0   1/1     Running   0          14m
+
+$ kubectl delete pod -n demo mongo-arb-0
+pod "mongo-arb-0" deleted
+
+$ kubectl get pods -n demo
+NAME                  READY   STATUS        RESTARTS   AGE
+mongo-arb-0           2/2     Terminating   0          16m
+mongo-arb-1           2/2     Running       0          15m
+mongo-arb-arbiter-0   1/1     Running       0          15m
+```
+
+Now verify the automatic failover. Let's exec into the `mongo-arb-0` pod:
+
+```bash
+$ kubectl exec -it mongo-arb-0 -n demo bash
+mongodb@mongo-arb-0:/$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:SECONDARY> rs.isMaster().primary
+mongo-arb-1.mongo-arb-pods.demo.svc.cluster.local:27017
+
+# Also verify data persistence
+rs0:SECONDARY> rs.slaveOk()
+rs0:SECONDARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+mydb           0.000GB
+
+rs0:SECONDARY> use mydb
+switched to db mydb
+
+rs0:SECONDARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("62611ae33583279dfca0a5e4"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+```
+
+## Halt Database
+
+When [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt`, and you delete the mongodb object, the KubeDB operator will delete the StatefulSet and its pods but leave the PVCs, secrets and database backup (snapshots) intact. Learn details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the mongodb object and halt the database to resume it again later. If you halt the database, KubeDB will delete the StatefulSets and services but will keep the mongodb object, PVCs, secrets and backup (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database. You can use the below command to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mongo-arb -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mongo-arb patched
+```
+
+Then, you have to set `spec.halted` to true to move the database into a `Halted` state. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-arb -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mongo-arb patched
+```
+
+After that, KubeDB will delete the StatefulSets and services, and you can see the database phase as `Halted`.
+
+Now, you can run the following command to get all mongodb resources in the demo namespace:
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                           VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-arb   4.4.26    Halted   21m
+
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-nzk64   kubernetes.io/service-account-token   3      146m
+secret/mongo-arb-auth        Opaque                                2      21m
+secret/mongo-arb-key         Opaque                                1      21m
+
+NAME                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mongo-arb-0           Bound    pvc-93a2681f-096d-4af1-b1fb-93cd7b7b6020   500Mi      RWO            standard       21m
+persistentvolumeclaim/datadir-mongo-arb-1           Bound    pvc-fb06ea3b-a9dd-4479-87b2-de73ca272718   500Mi      RWO            standard       21m
+persistentvolumeclaim/datadir-mongo-arb-arbiter-0   Bound    pvc-169fd172-0e41-48e3-81a5-3abae4a85056   500Mi      RWO            standard       21m
+```
+
+## Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` to false.
You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-arb -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mongo-arb patched
+```
+
+When the database is resumed successfully, you can see the database status is set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME                           VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-arb   4.4.26    Ready    23m
+```
+
+Now, if you exec into the primary pod again and look for the previous data, you will see that all the data persists.
+
+```bash
+$ kubectl exec -it mongo-arb-1 -n demo bash
+
+mongodb@mongo-arb-1:/$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+
+rs0:PRIMARY> use mydb
+switched to db mydb
+rs0:PRIMARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("62611ae33583279dfca0a5e4"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mg/mongo-arb -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mg/mongo-arb
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Deploy MongoDB shard [with Arbiter](/docs/v2024.1.31/guides/mongodb/arbiter/sharding).
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/arbiter/sharding.md b/content/docs/v2024.1.31/guides/mongodb/arbiter/sharding.md
new file mode 100644
index 0000000000..b4571fd56c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/arbiter/sharding.md
@@ -0,0 +1,1078 @@
+---
+title: MongoDB Sharding Guide with Arbiter
+menu:
+  docs_v2024.1.31:
+    identifier: mg-arbiter-sharding
+    name: Sharding with Arbiter
+    parent: mg-arbiter
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Sharding
+
+This tutorial will show you how to use KubeDB to run a sharded MongoDB cluster with an arbiter.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [mongodb arbiter concept](/docs/v2024.1.31/guides/mongodb/arbiter/concept) to understand the concept of the MongoDB Replica Set Arbiter.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Sharded MongoDB Cluster
+
+To deploy MongoDB sharding, users have to specify the `spec.shardTopology` option in the `MongoDB` CRD.
+
+The following is an example of a `MongoDB` object which creates a sharded MongoDB cluster with three types of members.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-sh-arb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 500Mi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 2
+      podTemplate:
+        spec:
+          resources:
+            requests:
+              cpu: "400m"
+              memory: "300Mi"
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 500Mi
+        storageClassName: standard
+  arbiter:
+    podTemplate:
+      spec:
+        resources:
+          requests:
+            cpu: "200m"
+            memory: "200Mi"
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/arbiter/sharding.yaml
+mongodb.kubedb.com/mongo-sh-arb created
+```
+
+Here,
+
+- `spec.shardTopology` represents the topology configuration for sharding.
+  - `shard` represents the configuration for the Shard component of mongodb.
+    - `shards` represents the number of shards for a mongodb deployment. Each shard is deployed as a [replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept).
+    - `replicas` represents the number of replicas of each shard replicaset.
+    - `prefix` represents the prefix of each shard node.
+    - `configSecret` is an optional field to provide a custom configuration file for shards (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of the shard. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `configServer` represents the configuration for the ConfigServer component of mongodb.
+    - `replicas` represents the number of replicas for the configServer replicaset. Here, the configServer is deployed as a replicaset of mongodb.
+    - `prefix` represents the prefix of configServer nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for the configServer (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of the configServer. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `mongos` represents the configuration for the Mongos component of mongodb. `Mongos` instances run as stateless components (deployment).
+    - `replicas` represents the number of replicas of the `Mongos` instance. Here, Mongos is not deployed as a replicaset.
+    - `prefix` represents the prefix of mongos nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for mongos (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+- `spec.keyFileSecret` (optional) is the name of a secret that contains a keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replica set. Only mongod instances with the correct keyfile can join the replica set. _Users can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate a `keyFileSecret` itself.
+- `spec.arbiter` denotes the arbiter spec of the deployed MongoDB CRD. There are two fields under it: `configSecret` & `podTemplate`. `spec.arbiter.configSecret` is an optional field to provide a custom configuration file for the database (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, the default configuration file will be used. `spec.arbiter.podTemplate` holds the arbiter podSpec. A `null` value instructs the KubeDB operator to use the default arbiter podTemplate.
+
+The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create several new StatefulSets: one for mongos, one for the configServer, and one for each shard & arbiter. It creates a primary Service with the matching MongoDB object name. The KubeDB operator will also create governing services for the StatefulSets with the name `<mongodb-name>-<node-type>-pods`.
+
+MongoDB `mongo-sh-arb` state,
+
+```bash
+$ kubectl get mg -n demo
+NAME                              VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-sh-arb   4.4.26    Ready    97s
+```
+
+All the node types (`Shard`, `ConfigServer` & `Mongos`) are deployed as StatefulSets.
+
+```bash
+$ kubectl get statefulset -n demo
+NAME                                          READY   AGE
+statefulset.apps/mongo-sh-arb-configsvr       3/3     97s
+statefulset.apps/mongo-sh-arb-mongos          2/2     29s
+statefulset.apps/mongo-sh-arb-shard0          2/2     97s
+statefulset.apps/mongo-sh-arb-shard0-arbiter  1/1     53s
+statefulset.apps/mongo-sh-arb-shard1          2/2     97s
+statefulset.apps/mongo-sh-arb-shard1-arbiter  1/1     52s
+```
+
+All PVCs and PVs for MongoDB `mongo-sh-arb`,
+
+```bash
+$ kubectl get pvc -n demo
+NAME                                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-0        Bound    pvc-a9589ccb-24c2-4d17-8174-1e552d63d943   500Mi      RWO            standard       97s
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-1        Bound    pvc-697aa035-6ff2-45c4-8e00-0787b520159b   500Mi      RWO            standard       75s
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-2        Bound    pvc-2548ee7e-5416-4ddc-960b-33d17bd53b43   500Mi      RWO            standard       52s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-0           Bound    pvc-a5cdb597-ad01-4362-b56e-c5d6226a38bb   500Mi      RWO            standard       97s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-1           Bound    pvc-ae9e594a-7370-4339-9f51-6ec07588c8e0   500Mi      RWO            standard       75s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-arbiter-0   Bound    pvc-8296c2bc-dfc0-47f4-b651-01fb802bf751   500Mi      RWO            standard       53s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-0           Bound    pvc-33cde211-4ed5-49a9-b7a8-48e94690e12d   500Mi      RWO            standard       97s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-1           Bound    pvc-569cedf8-b16e-4616-ae1d-74168aacc227   500Mi      RWO            standard       74s
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-arbiter-0   Bound    pvc-c65c7054-a9de-40c4-9797-4d0a730e9c5b   500Mi      RWO            standard       52s
+
+$ kubectl get pv -n demo
+NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
+persistentvolume/pvc-2548ee7e-5416-4ddc-960b-33d17bd53b43   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-configsvr-2          standard                50s
+persistentvolume/pvc-33cde211-4ed5-49a9-b7a8-48e94690e12d   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard1-0             standard                93s
+persistentvolume/pvc-569cedf8-b16e-4616-ae1d-74168aacc227   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard1-1             standard                71s
+persistentvolume/pvc-697aa035-6ff2-45c4-8e00-0787b520159b   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-configsvr-1          standard                73s
+persistentvolume/pvc-8296c2bc-dfc0-47f4-b651-01fb802bf751   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard0-arbiter-0     standard                52s
+persistentvolume/pvc-a5cdb597-ad01-4362-b56e-c5d6226a38bb   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard0-0             standard                94s
+persistentvolume/pvc-a9589ccb-24c2-4d17-8174-1e552d63d943   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-configsvr-0          standard                94s
+persistentvolume/pvc-ae9e594a-7370-4339-9f51-6ec07588c8e0   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard0-1             standard                73s
+persistentvolume/pvc-c65c7054-a9de-40c4-9797-4d0a730e9c5b   500Mi      RWO            Delete           Bound    demo/datadir-mongo-sh-arb-shard1-arbiter-0     standard                49s
+```
+
+Services created for MongoDB `mongo-sh-arb`,
+
+```bash
+$ kubectl get svc -n demo
+NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
+service/mongo-sh-arb                  ClusterIP   10.96.34.129   <none>        27017/TCP   97s
+service/mongo-sh-arb-configsvr-pods   ClusterIP   None           <none>        27017/TCP   97s
+service/mongo-sh-arb-mongos-pods      ClusterIP   None           <none>        27017/TCP   97s
+service/mongo-sh-arb-shard0-pods      ClusterIP   None           <none>        27017/TCP   97s
+service/mongo-sh-arb-shard1-pods      ClusterIP   None           <none>        27017/TCP   97s
+```
+
+The KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. It has also defaulted some fields of the CRD object.
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mongo-sh-arb -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-sh-arb","namespace":"demo"},"spec":{"arbiter":{"podTemplate":{"spec":{"requests":{"cpu":"200m","memory":"200Mi"},"resources":null}}},"shardTopology":{"configServer":{"replicas":3,"storage":{"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"}},"mongos":{"replicas":2},"shard":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"400m","memory":"300Mi"}}}},"replicas":2,"shards":2,"storage":{"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"}}},"terminationPolicy":"WipeOut","version":"4.4.26"}} + creationTimestamp: "2022-04-21T09:29:07Z" + finalizers: + - kubedb.com + generation: 3 + name: mongo-sh-arb + namespace: demo + resourceVersion: "31916" + uid: 0a31ab30-0002-400e-a312-f7e343ec6894 +spec: + allowedSchemas: + namespaces: + from: Same + arbiter: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-arb-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-arb-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --quiet --eval \"db.adminCommand('ping').ok\" + ) -eq \"1\" ]]; then \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --quiet --eval \"db.adminCommand('ping').ok\" + ) -eq \"1\" ]]; then \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + authSecret: + name: mongo-sh-arb-auth + clusterAuthMode: keyFile + coordinator: + resources: {} + keyFileSecret: + name: mongo-sh-arb-key + shardTopology: + configServer: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.config: mongo-sh-arb-configsvr + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.config: mongo-sh-arb-configsvr + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + 
livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh-arb + replicas: 3 + storage: + resources: + requests: + storage: 500Mi + storageClassName: standard + mongos: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-arb-mongos + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-arb-mongos + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + lifecycle: + preStop: + exec: + command: + - bash + - -c + - 'mongo admin --username=$MONGO_INITDB_ROOT_USERNAME --password=$MONGO_INITDB_ROOT_PASSWORD + --quiet --eval "db.adminCommand({ shutdown: 1 })" || true' + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh-arb + replicas: 2 + shard: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-arb-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-arb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: 
mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-arb-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 300Mi + requests: + cpu: 400m + memory: 300Mi + serviceAccountName: mongo-sh-arb + replicas: 2 + shards: 2 + storage: + resources: + requests: + storage: 500Mi + storageClassName: standard + sslMode: disabled + storageEngine: wiredTiger + storageType: Durable + terminationPolicy: WipeOut + version: 4.4.26 +status: + conditions: + - lastTransitionTime: "2022-04-21T09:29:07Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mongo-sh-arb' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-04-21T09:30:39Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-04-21T09:30:37Z" + message: 'The MongoDB: demo/mongo-sh-arb is accepting client requests.' + observedGeneration: 3 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-04-21T09:30:37Z" + message: 'The MongoDB: demo/mongo-sh-arb is ready.' + observedGeneration: 3 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-04-21T09:30:39Z" + message: 'The MongoDB: demo/mongo-sh-arb is successfully provisioned.' + observedGeneration: 3 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 3 + phase: Ready + +``` + +Please note that KubeDB operator has created a new Secret called `mongo-sh-arb-auth` _(format: {mongodb-object-name}-auth)_ for storing the password for `mongodb` superuser. This secret contains a `username` key which contains the _username_ for MongoDB superuser and a `password` key which contains the _password_ for MongoDB superuser. + +If you want to use custom or existing secret please specify that when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys containing data `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret). 
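+
+For example, if you want to create the auth secret yourself, a minimal sketch could look like the following. The secret name `my-mongo-auth` is hypothetical; only the `username` and `password` keys are required:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-mongo-auth        # hypothetical name; reference it via spec.authSecret.name
+  namespace: demo
+type: Opaque
+stringData:
+  username: root             # the MongoDB superuser name
+  password: "ChangeMe123"    # pick a strong password of your own
+```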
+
+## Connection Information
+
+- Hostname/address: you can use any of these
+  - Service: `mongo-sh-arb.demo`
+  - Pod IP: (`$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-arb-mongos -o yaml | grep podIP`)
+- Port: `27017`
+- Username: Run the following command to get the _username_,
+
+  ```bash
+  $ kubectl get secrets -n demo mongo-sh-arb-auth -o jsonpath='{.data.\username}' | base64 -d
+  root
+  ```
+
+- Password: Run the following command to get the _password_,
+
+  ```bash
+  $ kubectl get secrets -n demo mongo-sh-arb-auth -o jsonpath='{.data.\password}' | base64 -d
+  6&UiN5;qq)Tnai=7
+  ```
+
+Now, you can connect to this database through [mongo-shell](https://docs.mongodb.com/v4.2/mongo/).
+
+## Sharded Data
+
+In this tutorial, we will insert both sharded and unsharded documents, and verify whether the data is actually sharded across the cluster.
+
+```bash
+$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-arb-mongos
+NAME                    READY   STATUS    RESTARTS   AGE
+mongo-sh-arb-mongos-0   1/1     Running   0          6m34s
+mongo-sh-arb-mongos-1   1/1     Running   0          6m20s
+
+$ kubectl exec -it mongo-sh-arb-mongos-0 -n demo bash
+
+mongodb@mongo-sh-mongos-0:/$ mongo admin -u root -p '6&UiN5;qq)Tnai=7'
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin?compressors=disabled&gssapiServiceName=mongodb
+Implicit session: session { "id" : UUID("bf87addd-4245-45b1-a470-fabb3dcc19ab") }
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+For interactive help, type "help".
+For more comprehensive documentation, see
+        https://docs.mongodb.com/
+Questions? Try the MongoDB Developer Community Forums
+        https://community.mongodb.com
+---
+The server generated these startup warnings when booting:
+        2022-04-21T09:30:28.259+00:00: You are running this process as the root user, which is not recommended
+---
+mongos>
+```
+
+To detect whether the MongoDB instance your client is connected to is a mongos, use the `isMaster` command. When a client connects to a mongos, `isMaster` returns a document with a `msg` field that holds the string `isdbgrid`.
+
+```bash
+mongos> rs.isMaster()
+{
+        "ismaster" : true,
+        "msg" : "isdbgrid",
+        "maxBsonObjectSize" : 16777216,
+        "maxMessageSizeBytes" : 48000000,
+        "maxWriteBatchSize" : 100000,
+        "localTime" : ISODate("2022-04-21T09:38:52.370Z"),
+        "logicalSessionTimeoutMinutes" : 30,
+        "connectionId" : 253,
+        "maxWireVersion" : 9,
+        "minWireVersion" : 0,
+        "topologyVersion" : {
+                "processId" : ObjectId("62612434ea3cf5a7339dd36d"),
+                "counter" : NumberLong(0)
+        },
+        "ok" : 1,
+        "operationTime" : Timestamp(1650533931, 30),
+        "$clusterTime" : {
+                "clusterTime" : Timestamp(1650533931, 30),
+                "signature" : {
+                        "hash" : BinData(0,"QhqwrAXFhPjlpvfTOPwNAESUR8c="),
+                        "keyId" : NumberLong("7088986810746929174")
+                }
+        }
+}
+```
+
+Check the shard status of `mongo-sh-arb`,
+
+```bash
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+        "_id" : 1,
+        "minCompatibleVersion" : 5,
+        "currentVersion" : 6,
+        "clusterId" : ObjectId("626123f2f1e4f6821ec73945")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-arb-shard0-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-arb-shard1-0.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard1-1.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: yes
+        Collections with active migrations:
+                config.system.sessions started at Thu Apr 21 2022 09:39:13 GMT+0000 (UTC)
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                279 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  745
+                                shard1  279
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true,  "version" : {  "uuid" : UUID("79db6e4a-dcb1-4f1a-86c2-dcd86a944893"),  "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+```
+
+As the `sh.status()` command only shows the data-bearing members, if we want to make sure the arbiter has been added correctly we need to exec into any shard pod and run the `rs.status()` command against the admin database. Open another terminal:
+
+```bash
+kubectl exec -it pod/mongo-sh-arb-shard0-1 -n demo bash
+
+root@mongo-sh-arb-shard0-1:/ mongo admin -u root -p '6&UiN5;qq)Tnai=7'
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+shard0:PRIMARY> rs.status().members
+[
+        {
+                "_id" : 0,
+                "name" : "mongo-sh-arb-shard0-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",
+                "health" : 1,
+                "state" : 2,
+                "stateStr" : "SECONDARY",
+                "uptime" : 350,
+                "optime" : {
+                        "ts" : Timestamp(1650535338, 18),
+                        "t" : NumberLong(3)
+                },
+                "optimeDurable" : {
+                        "ts" : Timestamp(1650535338, 18),
+                        "t" : NumberLong(3)
+                },
+                "optimeDate" : ISODate("2022-04-21T10:02:18Z"),
+                "optimeDurableDate" : ISODate("2022-04-21T10:02:18Z"),
+                "lastHeartbeat" : ISODate("2022-04-21T10:02:35.951Z"),
+                "lastHeartbeatRecv" : ISODate("2022-04-21T10:02:34.999Z"),
+                "pingMs" : NumberLong(0),
+                "lastHeartbeatMessage" : "",
+                "syncSourceHost" : "mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",
+                "syncSourceId" : 1,
+                "infoMessage" : "",
+                "configVersion" : 4,
+                "configTerm" : 3
+        },
+        {
+                "_id" : 1,
+                "name" : "mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",
+                "health" : 1,
+                "state" : 1,
+                "stateStr" : "PRIMARY",
+                "uptime" : 352,
+                "optime" : {
+                        "ts" : Timestamp(1650535338, 18),
+                        "t" : NumberLong(3)
+                },
+                "optimeDate" : ISODate("2022-04-21T10:02:18Z"),
+                "syncSourceHost" : "",
+                "syncSourceId" : -1,
+                "infoMessage" : "",
+                "electionTime" : Timestamp(1650535017, 1),
+                "electionDate" : ISODate("2022-04-21T09:56:57Z"),
+                "configVersion" : 4,
+                "configTerm" : 3,
+                "self" : true,
+                "lastHeartbeatMessage" : ""
+        },
+        {
+                "_id" : 2,
+                "name" : "mongo-sh-arb-shard0-arbiter-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",
+                "health" : 1,
+                "state" : 7,
+                "stateStr" : "ARBITER",
+                "uptime" : 328,
+                "lastHeartbeat" : ISODate("2022-04-21T10:02:35.950Z"),
+                "lastHeartbeatRecv" : ISODate("2022-04-21T10:02:35.585Z"),
+                "pingMs" : NumberLong(0),
+                "lastHeartbeatMessage" : "",
+                "syncSourceHost" : "",
+                "syncSourceId" : -1,
+                "infoMessage" : "",
+                "configVersion" : 4,
+                "configTerm" : 3
+        }
+]
+```
+
+Enable sharding for the `songs` database, shard the `songs.list` collection, and insert some documents. See [`sh.shardCollection(namespace, key, unique, options)`](https://docs.mongodb.com/manual/reference/method/sh.shardCollection/#sh.shardCollection) for details about the `shardCollection` command.
+
+```bash
+mongos> sh.enableSharding("songs");
+{
+        "ok" : 1,
+        "operationTime" : Timestamp(1650534119, 40),
+        "$clusterTime" : {
+                "clusterTime" : Timestamp(1650534119, 40),
+                "signature" : {
+                        "hash" : BinData(0,"vtfzghRf+pGMDwsY/W3y/irgF1s="),
+                        "keyId" : NumberLong("7088986810746929174")
+                }
+        }
+}
+
+mongos> sh.shardCollection("songs.list", {"myfield": 1});
+{
+        "collectionsharded" : "songs.list",
+        "collectionUUID" : UUID("320eccb3-1987-4ac9-affb-61fe2b9284a7"),
+        "ok" : 1,
+        "operationTime" : Timestamp(1650534144, 45),
+        "$clusterTime" : {
+                "clusterTime" : Timestamp(1650534144, 45),
+                "signature" : {
+                        "hash" : BinData(0,"F6KJ8uibwEmuoAi4YPvLYFR71eg="),
+                        "keyId" : NumberLong("7088986810746929174")
+                }
+        }
+}
+
+mongos> use songs
+switched to db songs
+
+mongos> db.list.insert({"led zeppelin": "stairway to heaven", "slipknot": "psychosocial"});
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.list.insert({"pink floyd": "us and them", "nirvana": "smells like teen spirit", "john lennon" : "imagine" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.list.find()
+{ "_id" : ObjectId("6261275c18807d1843328e08"), "led zeppelin" : "stairway to heaven", "slipknot" : "psychosocial" }
+{ "_id" : ObjectId("6261281c18807d1843328e09"), "pink floyd" : "us and them", "nirvana" : "smells like teen spirit", "john lennon" : "imagine" }
+```
+
+Run [`sh.status()`](https://docs.mongodb.com/manual/reference/method/sh.status/) to see whether the `songs` database has sharding enabled, and which shard is the primary shard for the `songs` database.
+
+The sharded collection section of the `sh.status()` output (under `databases`) provides the sharding details for each sharded collection (e.g. `songs.list`): the shard key, the number of chunks per shard, the distribution of documents across chunks, and the tag information, if any, for shard key ranges.
+
+```bash
+mongos> sh.status();
+--- Sharding Status ---
+  sharding version: {
+        "_id" : 1,
+        "minCompatibleVersion" : 5,
+        "currentVersion" : 6,
+        "clusterId" : ObjectId("626123f2f1e4f6821ec73945")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-arb-shard0-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-arb-shard1-0.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard1-1.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: no
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                512 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  512
+                                shard1  512
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true,  "version" : {  "uuid" : UUID("79db6e4a-dcb1-4f1a-86c2-dcd86a944893"),  "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+        {  "_id" : "songs",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("5a61681f-e427-463f-85ca-c1f0d8854a3b"),  "lastMod" : 1 } }
+                songs.list
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Now, create another database where partitioning is not applied and see how the data is stored.
+
+```bash
+mongos> use demo
+switched to db demo
+
+mongos> db.anothercollection.insert({"myfield": "ccc", "otherfield": "this is non sharded", "kube" : "db" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.anothercollection.insert({"myfield": "aaa", "more": "field" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.anothercollection.find()
+{ "_id" : ObjectId("626128f618807d1843328e0a"), "myfield" : "ccc", "otherfield" : "this is non sharded", "kube" : "db" }
+{ "_id" : ObjectId("6261293c18807d1843328e0b"), "myfield" : "aaa", "more" : "field" }
+```
+
+Now, run `sh.status()` again,
+
+```
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+        "_id" : 1,
+        "minCompatibleVersion" : 5,
+        "currentVersion" : 6,
+        "clusterId" : ObjectId("626123f2f1e4f6821ec73945")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-arb-shard0-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-arb-shard1-0.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard1-1.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: no
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                512 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  512
+                                shard1  512
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "demo",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("8af87f8c-b4ae-4d04-854f-d2ede7465acd"),  "lastMod" : 1 } }
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true,  "version" : {  "uuid" : UUID("79db6e4a-dcb1-4f1a-86c2-dcd86a944893"),  "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+        {  "_id" : "songs",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("5a61681f-e427-463f-85ca-c1f0d8854a3b"),  "lastMod" : 1 } }
+                songs.list
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Here, the `demo` database is not partitioned, so all collections under the `demo` database are stored in its primary shard, which is `shard1`.
+
+## Halt Database
+
+When [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt` and you delete the MongoDB object, the KubeDB operator will delete the StatefulSet and its pods but leave the PVCs, Secrets and database backups (snapshots) intact. Learn details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the MongoDB object and halt the database to resume it again later. If you halt the database, KubeDB will delete the StatefulSets and Services but will keep the MongoDB object, PVCs, Secrets and backups (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database.
+You can use the below command to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh-arb -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mongo-sh-arb patched
+```
+
+Then, you have to set `spec.halted` to `true` to put the database in a `Halted` state. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh-arb -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mongo-sh-arb patched
+```
+
+After that, KubeDB will delete the StatefulSets and Services, and you can see the database Phase as `Halted`.
+
+Now, you can run the following command to get all mongodb resources in the demo namespace,
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                              VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-sh-arb   4.4.26    Halted   26m
+
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-bg2wb   kubernetes.io/service-account-token   3      26m
+secret/mongo-sh-arb-auth     Opaque                                2      26m
+secret/mongo-sh-arb-key      Opaque                                1      26m
+
+NAME                                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-0        Bound    pvc-a9589ccb-24c2-4d17-8174-1e552d63d943   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-1        Bound    pvc-697aa035-6ff2-45c4-8e00-0787b520159b   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-configsvr-2        Bound    pvc-2548ee7e-5416-4ddc-960b-33d17bd53b43   500Mi      RWO            standard       25m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-0           Bound    pvc-a5cdb597-ad01-4362-b56e-c5d6226a38bb   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-1           Bound    pvc-ae9e594a-7370-4339-9f51-6ec07588c8e0   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard0-arbiter-0   Bound    pvc-8296c2bc-dfc0-47f4-b651-01fb802bf751   500Mi      RWO            standard       25m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-0           Bound    pvc-33cde211-4ed5-49a9-b7a8-48e94690e12d   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-1           Bound    pvc-569cedf8-b16e-4616-ae1d-74168aacc227   500Mi      RWO            standard       26m
+persistentvolumeclaim/datadir-mongo-sh-arb-shard1-arbiter-0   Bound    pvc-c65c7054-a9de-40c4-9797-4d0a730e9c5b   500Mi      RWO            standard       25m
+```
+
+From the above output, you can see that the MongoDB object, PVCs and Secrets are still there.
+
+## Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` to `false`. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh-arb -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mongo-sh-arb patched
+```
+
+When the database is resumed successfully, you can see the database Status is set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME                              VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-sh-arb   4.4.26    Ready    28m
+```
+
+Now, if you again exec into the pod and look for the previous data, you will see that all the data persists.
+ +```bash +$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-arb-mongos +NAME READY STATUS RESTARTS AGE +mongo-sh-arb-mongos-0 1/1 Running 0 89s +mongo-sh-arb-mongos-1 1/1 Running 0 29s + + +$ kubectl exec -it mongo-sh-arb-mongos-0 -n demo bash + +mongodb@mongo-sh-mongos-0:/$ mongo admin -u root -p '6&UiN5;qq)Tnai=7' + +mongos> use songs +switched to db songs + +mongos> db.list.find() +{ "_id" : ObjectId("6261275c18807d1843328e08"), "led zeppelin" : "stairway to heaven", "slipknot" : "psychosocial" } +{ "_id" : ObjectId("6261281c18807d1843328e09"), "pink floyd" : "us and them", "nirvana" : "smells like teen spirit", "john lennon" : "imagine" } + +mongos> sh.status() +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("626123f2f1e4f6821ec73945") + } + shards: + { "_id" : "shard0", "host" : "shard0/mongo-sh-arb-shard0-0.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard0-1.mongo-sh-arb-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mongo-sh-arb-shard1-0.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-arb-shard1-1.mongo-sh-arb-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 1 + Last reported error: Could not find host matching read preference { mode: "primary" } for set shard0 + Time of Reported error: Thu Apr 21 2022 09:57:04 GMT+0000 (UTC) + Migration Results for the last 24 hours: + 512 : Success + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 512 + shard1 512 + too many chunks to print, use verbose if you want to force print + { "_id" : "demo", "primary" : "shard1", "partitioned" : false, "version" : { "uuid" : UUID("8af87f8c-b4ae-4d04-854f-d2ede7465acd"), "lastMod" : 1 } } + { "_id" : "kubedb-system", "primary" : "shard0", "partitioned" : true, "version" : { "uuid" : UUID("79db6e4a-dcb1-4f1a-86c2-dcd86a944893"), "lastMod" : 1 } } + kubedb-system.health-check + shard key: { "id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) + { "_id" : "songs", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("5a61681f-e427-463f-85ca-c1f0d8854a3b"), "lastMod" : 1 } } + songs.list + shard key: { "myfield" : 1 } + unique: false + balancing: true + chunks: + shard1 1 + { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/mongo-sh-arb -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/mongo-sh-arb + +kubectl delete ns demo +``` + +## Next Steps + +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). 
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/_index.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/_index.md new file mode 100644 index 0000000000..fe847b181c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/_index.md @@ -0,0 +1,22 @@ +--- +title: Autoscaling +menu: + docs_v2024.1.31: + identifier: mg-auto-scaling + name: Autoscaling + parent: mg-mongodb-guides + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/_index.md new file mode 100644 index 0000000000..01599e29e5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/_index.md @@ -0,0 +1,22 @@ +--- +title: Compute Autoscaling +menu: + docs_v2024.1.31: + identifier: mg-compute-auto-scaling + name: Compute Autoscaling + parent: mg-auto-scaling + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview.md new file mode 100644 index 0000000000..8b4df65cd1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview.md @@ -0,0 +1,66 @@ +--- +title: MongoDB Compute Autoscaling Overview +menu: + docs_v2024.1.31: + identifier: mg-auto-scaling-overview + name: Overview + parent: mg-compute-auto-scaling + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Compute Resource Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database compute resources i.e. cpu and memory using `mongodbautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + +## How Compute Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `MongoDB` database components. 
Open the image in a new tab to see the enlarged version.
+
+Fig: Compute Auto Scaling process of MongoDB
+
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `MongoDB` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `MongoDB` CRO. + +3. When the operator finds a `MongoDB` CRO, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to set up autoscaling of the various components (ie. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `MongoDB` database the user creates a `MongoDBAutoscaler` CRO with desired configuration. + +5. `KubeDB` Autoscaler operator watches the `MongoDBAutoscaler` CRO. + +6. `KubeDB` Autoscaler operator generates recommendation using the modified version of kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for different components of the database, as specified in the `MongoDBAutoscaler` CRO. + +7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `MongoDBOpsRequest` CRO to scale the database to match the recommendation generated. + +8. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CRO. + +9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified on the `MongoDBOpsRequest` CRO. + +In the next docs, we are going to show a step by step guide on Autoscaling of various MongoDB database components using `MongoDBAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/replicaset.md new file mode 100644 index 0000000000..7de21d05ab --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/replicaset.md @@ -0,0 +1,544 @@ +--- +title: MongoDB Replicaset Autoscaling +menu: + docs_v2024.1.31: + identifier: mg-auto-scaling-replicaset + name: Replicaset + parent: mg-compute-auto-scaling + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of a MongoDB Replicaset Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a MongoDB replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of Replicaset Database + +Here, we are going to deploy a `MongoDB` Replicaset using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBAutoscaler` to set up autoscaling. + +#### Deploy MongoDB Replicaset + +In this section, we are going to deploy a MongoDB Replicaset database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut + +``` + +Let's create the `MongoDB` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-rs.yaml +mongodb.kubedb.com/mg-rs created +``` + +Now, wait until `mg-rs` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-rs 4.4.26 Ready 2m53s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the MongoDB resources, +```bash +$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the mongodb. + +We are now ready to apply the `MongoDBAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a MongoDBAutoscaler Object. + +#### Create MongoDBAutoscaler Object + +In order to set up compute resource autoscaling for this replicaset database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MongoDBAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `mg-rs` database. +- `spec.compute.replicaSet.trigger` specifies that compute autoscaling is enabled for this database. 
+- `spec.compute.replicaSet.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pods to initiate a vertical scaling.
+- `spec.compute.replicaSet.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current & recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will ignore the update.
+- `spec.compute.replicaSet.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.replicaSet.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.replicaSet.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.replicaSet.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here: [readinessCriteria](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria), [timeout](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#spectimeout), [apply](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specapply).
+
+If it were an inMemory database, we could also autoscale the inMemory resources using the MongoDB compute autoscaler, like below.
+
+#### Autoscale inMemory database
+To autoscale inMemory databases, you need to specify the `spec.compute.replicaSet.inMemoryStorage` section.
+
+```yaml
+  ...
+  inMemoryStorage:
+    usageThresholdPercentage: 80
+    scalingFactorPercentage: 30
+  ...
+```
+It has two fields inside it.
+- `usageThresholdPercentage`. If the db uses more than `usageThresholdPercentage` of the total memory, the memoryStorage should be increased. The default usage threshold is 70%.
+- `scalingFactorPercentage`. If the db uses more than `usageThresholdPercentage` of the total memory, the memoryStorage should be increased by this given scaling percentage. The default scaling percentage is 50%.
+
+> Note: We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage, respectively.
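+
+For reference, you can inspect these same values yourself from a mongo shell connected to the database. A quick sanity check might look like this; note that the `inMemory` section only exists when the inMemory storage engine is in use:
+
+```bash
+# Run inside a mongo shell; these fields are absent on wiredTiger deployments
+> db.serverStatus().inMemory.cache["bytes currently in the cache"]
+> db.serverStatus().inMemory.cache["maximum bytes configured"]
+```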
+ +Let's create the `MongoDBAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-as-rs.yaml +mongodbautoscaler.autoscaling.kubedb.com/mg-as-rs created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `mongodbautoscaler` resource is created successfully, + +```bash +$ kubectl get mongodbautoscaler -n demo +NAME AGE +mg-as-rs 102s + +$ kubectl describe mongodbautoscaler mg-as-rs -n demo +Name: mg-as-rs +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: MongoDBAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T06:56:34Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:replicaSet: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T06:56:34Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T07:01:05Z + Resource Version: 640314 + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 +Spec: + Compute: + Replica Set: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-rs + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 2 + Weight: 10000 + Index: 3 + Weight: 5000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: mongodb + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: replication-mode-detector + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: Successfully created mongoDBOpsRequest demo/mops-mg-rs-cxhsy1 + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T07:01:00Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: mongodb + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 49m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-rs +Events: 
+``` +So, the `mongodbautoscaler` resource is created successfully. + +you can see in the `Status.VPAs.Recommendation` section, that recommendation has been generated for our database. Our autoscaler operator continuously watches the recommendation generated and creates an `mongodbopsrequest` based on the recommendations, if the database pods are needed to scaled up or down. + +Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Successful 68s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-mg-rs-cxhsy1 +Name: mops-mg-rs-cxhsy1 +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T07:01:05Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"ab03414a-67a2-4da4-8960-6e67ae56b503"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:replicaSet: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T07:01:05Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T07:02:31Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: MongoDBAutoscaler + Name: mg-as-rs + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 + Resource Version: 640598 + UID: f7c6db00-dd0e-4850-8bad-5f0855ce3850 +Spec: + Apply: IfReady + Database Ref: + Name: mg-rs + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Replica Set: + Limits: + Cpu: 400m + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: MongoDB ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T07:02:30Z + Message: Successfully Vertically Scaled Replicaset Resources + Observed Generation: 1 + Reason: UpdateReplicaSetResources + Status: True + Type: UpdateReplicaSetResources + Last Transition Time: 2022-10-27T07:02:31Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m9s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-rs + Normal PauseDatabase 4m9s KubeDB Ops-manager Operator Successfully 
paused MongoDB demo/mg-rs + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal UpdateReplicaSetResources 2m44s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Resuming MongoDB demo/mg-rs + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Successfully resumed MongoDB demo/mg-rs + Normal Successful 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Database + Normal UpdateReplicaSetResources 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + +``` + +Now, we are going to verify from the Pod, and the MongoDB yaml whether the resources of the replicaset database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} + +$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} +``` + + +The above output verifies that we have successfully auto scaled the resources of the MongoDB replicaset database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-rs +kubectl delete mongodbautoscaler -n demo mg-as-rs +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/sharding.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/sharding.md new file mode 100644 index 0000000000..ecbfdc974c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/sharding.md @@ -0,0 +1,582 @@ +--- +title: MongoDB Shard Autoscaling +menu: + docs_v2024.1.31: + identifier: mg-auto-scaling-shard + name: Sharding + parent: mg-compute-auto-scaling + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of a MongoDB Sharded Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a MongoDB sharded database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+ +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of Sharded Database + +Here, we are going to deploy a `MongoDB` sharded database using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBAutoscaler` to set up autoscaling. + +#### Deploy MongoDB Sharded Database + +In this section, we are going to deploy a MongoDB sharded database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sh + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + shardTopology: + configServer: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + mongos: + replicas: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + shard: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + shards: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut +``` + +Let's create the `MongoDB` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-sh.yaml +mongodb.kubedb.com/mg-sh created +``` + +Now, wait until `mg-sh` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-sh 4.4.26 Ready 3m57s +``` + +Let's check a shard Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-sh-shard0-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the MongoDB resources, +```bash +$ kubectl get mongodb -n demo mg-sh -o json | jq '.spec.shardTopology.shard.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the mongodb. + +We are now ready to apply the `MongoDBAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a MongoDBAutoscaler Object. + +#### Create MongoDBAutoscaler Object + +In order to set up compute resource autoscaling for the shard pod of the database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration. 
Below is the YAML of the `MongoDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MongoDBAutoscaler
+metadata:
+  name: mg-as-sh
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-sh
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    shard:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a compute resource scaling operation on the `mg-sh` database.
+- `spec.compute.shard.trigger` specifies that compute autoscaling is enabled for the shard pods of this database.
+- `spec.compute.shard.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pods to initiate a vertical scaling.
+- `spec.compute.shard.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current & recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will ignore the update.
+- `spec.compute.shard.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.shard.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.shard.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.shard.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here: [readinessCriteria](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria), [timeout](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#spectimeout), [apply](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specapply).
+> Note: In this demo we are only setting up autoscaling for the shard pods, that's why we only specified the shard section of the autoscaler. You can enable autoscaling for the mongos and configServer pods in the same YAML, by specifying the `spec.compute.mongos` and `spec.compute.configServer` sections, similar to the `spec.compute.shard` section we have configured in this demo (see the sketch after this section).
+
+If it were an inMemory database, we could also autoscale the inMemory resources using the MongoDB compute autoscaler, like below.
+
+#### Autoscale inMemory database
+To autoscale inMemory databases, you need to specify the `spec.compute.shard.inMemoryStorage` section.
+
+```yaml
+  ...
+  inMemoryStorage:
+    usageThresholdPercentage: 80
+    scalingFactorPercentage: 30
+  ...
+```
+It has two fields inside it.
+- `usageThresholdPercentage`. If the db uses more than `usageThresholdPercentage` of the total memory, the memoryStorage should be increased. The default usage threshold is 70%.
+- `scalingFactorPercentage`. If the db uses more than `usageThresholdPercentage` of the total memory, the memoryStorage should be increased by this given scaling percentage. The default scaling percentage is 50%.
+
+> Note: We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage, respectively.
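+
+As a reference for the note above, a combined `spec.compute` section that also covers the mongos and configServer pods might look like the following sketch. The mongos and configServer bounds here are illustrative values for this demo-sized cluster, not tested recommendations:
+
+```yaml
+spec:
+  compute:
+    shard:
+      trigger: "On"
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+    mongos:
+      trigger: "On"          # bounds below are illustrative; tune for your workload
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+    configServer:
+      trigger: "On"          # bounds below are illustrative; tune for your workload
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+```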
+ + +Let's create the `MongoDBAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-as-sh.yaml +mongodbautoscaler.autoscaling.kubedb.com/mg-as-sh created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `mongodbautoscaler` resource is created successfully, + +```bash +$ kubectl get mongodbautoscaler -n demo +NAME AGE +mg-as-sh 102s + +$ kubectl describe mongodbautoscaler mg-as-sh -n demo +Name: mg-as-sh +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: MongoDBAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T09:46:48Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:shard: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T09:46:48Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T09:47:08Z + Resource Version: 654853 + UID: 36878e8e-f100-409e-aa76-e6f46569df76 +Spec: + Compute: + Shard: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-sh + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 1 + Weight: 5001 + Index: 2 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.397915611757652 + First Sample Start: 2022-10-27T09:46:43Z + Last Sample Start: 2022-10-27T09:46:57Z + Last Update Time: 2022-10-27T09:47:06Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: mongodb + Vpa Object Name: mg-sh-shard0 + Total Samples Count: 3 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 1 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.39793263724156597 + First Sample Start: 2022-10-27T09:46:50Z + Last Sample Start: 2022-10-27T09:46:56Z + Last Update Time: 2022-10-27T09:47:06Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: mongodb + Vpa Object Name: mg-sh-shard1 + Total Samples Count: 3 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T09:47:08Z + Message: Successfully created mongoDBOpsRequest demo/mops-vpa-mg-sh-shard-ml75qi + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T09:47:06Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: mongodb + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 35m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-sh-shard0 + 
Conditions:
+        Last Transition Time:  2022-10-27T09:47:06Z
+        Status:                True
+        Type:                  RecommendationProvided
+      Recommendation:
+        Container Recommendations:
+          Container Name:  mongodb
+          Lower Bound:
+            Cpu:     400m
+            Memory:  400Mi
+          Target:
+            Cpu:     400m
+            Memory:  400Mi
+          Uncapped Target:
+            Cpu:     25m
+            Memory:  262144k
+          Upper Bound:
+            Cpu:     1
+            Memory:  1Gi
+      Vpa Name:      mg-sh-shard1
+Events:
+
+```
+So, the `mongodbautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. The autoscaler operator continuously watches the generated recommendations and creates a `mongodbopsrequest` based on them if the database pods need to be scaled up or down.
+
+Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                          TYPE              STATUS        AGE
+mops-vpa-mg-sh-shard-ml75qi   VerticalScaling   Progressing   19s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                          TYPE              STATUS       AGE
+mops-vpa-mg-sh-shard-ml75qi   VerticalScaling   Successful   5m8s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-vpa-mg-sh-shard-ml75qi
+Name:         mops-vpa-mg-sh-shard-ml75qi
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2022-10-27T09:47:08Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:ownerReferences:
+          .:
+          k:{"uid":"36878e8e-f100-409e-aa76-e6f46569df76"}:
+      f:spec:
+        .:
+        f:apply:
+        f:databaseRef:
+        f:timeout:
+        f:type:
+        f:verticalScaling:
+          .:
+          f:shard:
+            .:
+            f:limits:
+              .:
+              f:memory:
+            f:requests:
+              .:
+              f:cpu:
+              f:memory:
+    Manager:      kubedb-autoscaler
+    Operation:    Update
+    Time:         2022-10-27T09:47:08Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:      kubedb-ops-manager
+    Operation:    Update
+    Subresource:  status
+    Time:         2022-10-27T09:49:49Z
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  MongoDBAutoscaler
+    Name:                  mg-as-sh
+    UID:                   36878e8e-f100-409e-aa76-e6f46569df76
+  Resource Version:        655347
+  UID:                     c44fbd53-40f9-42ca-9b4c-823d8e998d01
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:   mg-sh
+  Timeout:  3m0s
+  Type:     VerticalScaling
+  Vertical Scaling:
+    Shard:
+      Limits:
+        Memory:  400Mi
+      Requests:
+        Cpu:     400m
+        Memory:  400Mi
+Status:
+  Conditions:
+    Last Transition Time:  2022-10-27T09:47:08Z
+    Message:               MongoDB ops request is vertically scaling database
+    Observed Generation:   1
+    Reason:                VerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2022-10-27T09:49:49Z
+    Message:               Successfully Vertically Scaled Shard Resources
+    Observed Generation:   1
+    Reason:                UpdateShardResources
+    Status:                True
+    Type:                  UpdateShardResources
+    Last Transition Time:  2022-10-27T09:49:49Z
+    Message:               Successfully Vertically Scaled Database
Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                Age    From                         Message
+  ----    ------                ----   ----                         -------
+  Normal  PauseDatabase         3m27s  KubeDB Ops-manager Operator  Pausing MongoDB demo/mg-sh
+  Normal  PauseDatabase         3m27s  KubeDB Ops-manager Operator  Successfully paused MongoDB demo/mg-sh
+  Normal  Starting              3m27s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sh-shard0
+  Normal  Starting              3m27s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sh-shard1
+  Normal  UpdateShardResources  3m27s  KubeDB Ops-manager Operator  Successfully updated Shard Resources
+  Normal  Starting              3m27s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sh-shard0
+  Normal  Starting              3m27s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sh-shard1
+  Normal  UpdateShardResources  3m27s  KubeDB Ops-manager Operator  Successfully updated Shard Resources
+  Normal  UpdateShardResources  46s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Shard Resources
+  Normal  ResumeDatabase        46s    KubeDB Ops-manager Operator  Resuming MongoDB demo/mg-sh
+  Normal  ResumeDatabase        46s    KubeDB Ops-manager Operator  Successfully resumed MongoDB demo/mg-sh
+  Normal  Successful            46s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+```
+
+Now, we are going to verify from the Pod and the MongoDB YAML whether the resources of the shard pods of the database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-sh-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+
+$ kubectl get mongodb -n demo mg-sh -o json | jq '.spec.shardTopology.shard.podTemplate.spec.resources'
+{
+  "limits": {
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the MongoDB sharded database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sh
+kubectl delete mongodbautoscaler -n demo mg-as-sh
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/standalone.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/standalone.md
new file mode 100644
index 0000000000..7b57754dc7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/compute/standalone.md
@@ -0,0 +1,522 @@
+---
+title: MongoDB Standalone Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-auto-scaling-standalone
+    name: Standalone
+    parent: mg-compute-auto-scaling
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Autoscaling the Compute Resource of a MongoDB Standalone Database
+
+This guide will show you how to use `KubeDB` to autoscale compute resources, i.e. CPU and memory, of a MongoDB standalone database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/compute/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Standalone Database
+
+Here, we are going to deploy a `MongoDB` standalone using a version supported by the `KubeDB` operator. Then we are going to apply `MongoDBAutoscaler` to set up autoscaling.
+
+#### Deploy MongoDB standalone
+
+In this section, we are going to deploy a MongoDB standalone database with version `4.4.26`. Then, in the next section, we will set up autoscaling for this database using the `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MongoDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-standalone.yaml
+mongodb.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    2m53s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the MongoDB resources,
+```bash
+$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the MongoDB.
+
+We are now ready to apply the `MongoDBAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute (cpu and memory) autoscaling using a MongoDBAutoscaler Object.
+
+#### Create MongoDBAutoscaler Object
+
+In order to set up compute resource autoscaling for this standalone database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration.
Below is the YAML of the `MongoDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MongoDBAutoscaler
+metadata:
+  name: mg-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-standalone
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    standalone:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on the `mg-standalone` database.
+- `spec.compute.standalone.trigger` specifies that compute resource autoscaling is enabled for this database.
+- `spec.compute.standalone.podLifeTimeThreshold` specifies the minimum pod lifetime required before a vertical scaling can be initiated.
+- `spec.compute.standalone.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.standalone.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.standalone.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.standalone.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.standalone.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here: [readinessCriteria](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria), [timeout](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#spectimeout), [apply](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specapply).
+
+If this were an inMemory database, we could also autoscale the inMemory resources using the MongoDB compute autoscaler, as shown below.
+
+#### Autoscale inMemory database
+To autoscale inMemory databases, you need to specify the `spec.compute.standalone.inMemoryStorage` section.
+
+```yaml
+  ...
+  inMemoryStorage:
+    usageThresholdPercentage: 80
+    scalingFactorPercentage: 30
+  ...
+```
+It has two fields inside it:
+- `usageThresholdPercentage`: If the database uses more than `usageThresholdPercentage` of the total memory, the memory storage will be increased. The default usage threshold is 70%.
+- `scalingFactorPercentage`: When the usage threshold is exceeded, the memory storage will be increased by this scaling percentage. The default scaling percentage is 50%.
+
+> Note: We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage, respectively.
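+
+For orientation, the following sketch shows where that `inMemoryStorage` snippet would sit inside the full autoscaler object. This is a sketch only: the threshold values are the illustrative ones from the fragment above, and the other compute fields are elided.
+
+```yaml
+spec:
+  compute:
+    standalone:
+      trigger: "On"
+      # ... minAllowed, maxAllowed, and the other compute fields shown earlier ...
+      inMemoryStorage:
+        usageThresholdPercentage: 80  # scale once more than 80% of inMemory storage is used
+        scalingFactorPercentage: 30   # grow inMemory storage by 30% when triggered
+```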
+
+
+Let's create the `MongoDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-as-standalone.yaml
+mongodbautoscaler.autoscaling.kubedb.com/mg-as created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `mongodbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mongodbautoscaler -n demo
+NAME    AGE
+mg-as   102s
+
+$ kubectl describe mongodbautoscaler mg-as -n demo
+Name:         mg-as
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MongoDBAutoscaler
+Metadata:
+  Creation Timestamp:  2022-10-27T09:54:35Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:compute:
+          .:
+          f:standalone:
+            .:
+            f:containerControlledValues:
+            f:controlledResources:
+            f:maxAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:minAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:podLifeTimeThreshold:
+            f:resourceDiffPercentage:
+            f:trigger:
+        f:databaseRef:
+        f:opsRequestOptions:
+          .:
+          f:apply:
+          f:timeout:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2022-10-27T09:54:35Z
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:checkpoints:
+        f:conditions:
+        f:vpas:
+    Manager:      kubedb-autoscaler
+    Operation:    Update
+    Subresource:  status
+    Time:         2022-10-27T09:55:08Z
+  Resource Version:  656164
+  UID:               439c148f-7c22-456f-a4b4-758cead29932
+Spec:
+  Compute:
+    Standalone:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:     1
+        Memory:  1Gi
+      Min Allowed:
+        Cpu:                      400m
+        Memory:                   400Mi
+      Pod Life Time Threshold:    5m0s
+      Resource Diff Percentage:   20
+      Trigger:                    On
+  Database Ref:
+    Name:  mg-standalone
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  3m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              6
+        Weight:             10000
+      Reference Timestamp:  2022-10-27T00:00:00Z
+      Total Weight:         0.133158834498727
+    First Sample Start:     2022-10-27T09:54:56Z
+    Last Sample Start:      2022-10-27T09:54:56Z
+    Last Update Time:       2022-10-27T09:55:07Z
+    Memory Histogram:
+      Reference Timestamp:  2022-10-28T00:00:00Z
+    Ref:
+      Container Name:     mongodb
+      Vpa Object Name:    mg-standalone
+    Total Samples Count:  1
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2022-10-27T09:55:08Z
+    Message:               Successfully created mongoDBOpsRequest demo/mops-mg-standalone-57huq2
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2022-10-27T09:55:07Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  mongodb
+        Lower Bound:
+          Cpu:     400m
+          Memory:  400Mi
+        Target:
+          Cpu:     400m
+          Memory:  400Mi
+        Uncapped Target:
+          Cpu:     93m
+          Memory:  262144k
+        Upper Bound:
+          Cpu:     1
+          Memory:  1Gi
+    Vpa Name:      mg-standalone
+Events:
+
+```
+So, the `mongodbautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. The autoscaler operator continuously watches the generated recommendations and creates a `mongodbopsrequest` based on them if the database pods need to be scaled up or down.
+ +Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-standalone-57huq2 VerticalScaling Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-standalone-57huq2 VerticalScaling Successful 68s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-mg-standalone-57huq2 +Name: mops-mg-standalone-57huq2 +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T09:55:08Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"439c148f-7c22-456f-a4b4-758cead29932"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:standalone: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T09:55:08Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T09:55:33Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: MongoDBAutoscaler + Name: mg-as + UID: 439c148f-7c22-456f-a4b4-758cead29932 + Resource Version: 656279 + UID: 29908a23-7cba-4f81-b787-3f9d226993f8 +Spec: + Apply: IfReady + Database Ref: + Name: mg-standalone + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Standalone: + Limits: + Cpu: 400m + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T09:55:08Z + Message: MongoDB ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T09:55:33Z + Message: Successfully Vertically Scaled Standalone Resources + Observed Generation: 1 + Reason: UpdateStandaloneResources + Status: True + Type: UpdateStandaloneResources + Last Transition Time: 2022-10-27T09:55:33Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m40s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-standalone + Normal PauseDatabase 2m40s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-standalone + Normal Starting 2m40s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 2m40s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal Starting 2m40s KubeDB Ops-manager Operator Updating Resources of StatefulSet: 
mg-standalone
+  Normal  UpdateStandaloneResources  2m40s  KubeDB Ops-manager Operator  Successfully updated standalone Resources
+  Normal  UpdateStandaloneResources  2m15s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Standalone Resources
+  Normal  ResumeDatabase             2m15s  KubeDB Ops-manager Operator  Resuming MongoDB demo/mg-standalone
+  Normal  ResumeDatabase             2m15s  KubeDB Ops-manager Operator  Successfully resumed MongoDB demo/mg-standalone
+  Normal  Successful                 2m15s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+```
+
+Now, we are going to verify from the Pod and the MongoDB YAML whether the resources of the standalone database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the MongoDB standalone database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-standalone
+kubectl delete mongodbautoscaler -n demo mg-as
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..fd2606bf32
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/_index.md
@@ -0,0 +1,22 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-storage-auto-scaling
+    name: Storage Autoscaling
+    parent: mg-auto-scaling
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..cb9551bd0a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview.md
@@ -0,0 +1,68 @@
+---
+title: MongoDB Storage Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mg-storage-auto-scaling-overview
+    name: Overview
+    parent: mg-storage-auto-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `mongodbautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the storage of `MongoDB` database components. Open the image in a new tab to see the enlarged version.
+
+Fig: Storage Auto Scaling process of MongoDB
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `MongoDB` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `MongoDB` CR.
+
+3. When the operator finds a `MongoDB` CR, it creates the required number of `StatefulSets` along with related resources such as secrets, services, etc.
+
+- Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the StatefulSet configuration.
+
+4. Then, in order to set up storage autoscaling of the various components (i.e. ReplicaSet, Shard, ConfigServer, etc.) of the `MongoDB` database, the user creates a `MongoDBAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `MongoDBAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if their usage exceeds the specified usage threshold.
+- If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `MongoDBOpsRequest` to expand the storage of the database.
+
+7. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CRO.
+
+8. Then the `KubeDB` Ops-manager operator will expand the storage of the database component as specified in the `MongoDBOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of various MongoDB database components using the `MongoDBAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/replicaset.md
new file mode 100644
index 0000000000..0c8395170b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/replicaset.md
@@ -0,0 +1,397 @@
+---
+title: MongoDB Replicaset Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-storage-auto-scaling-replicaset
+    name: ReplicaSet
+    parent: mg-storage-auto-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of a MongoDB Replicaset Database
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a MongoDB Replicaset database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of ReplicaSet Database
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so this storage class supports volume expansion and we can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm)
+
+Now, we are going to deploy a `MongoDB` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply `MongoDBAutoscaler` to set up autoscaling.
+
+#### Deploy MongoDB replicaset
+
+In this section, we are going to deploy a MongoDB replicaset database with version `4.4.26`. Then, in the next section, we will set up autoscaling for this database using the `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-rs
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: topolvm-provisioner
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MongoDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-rs.yaml
+mongodb.kubedb.com/mg-rs created
+```
+
+Now, wait until `mg-rs` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME    VERSION   STATUS   AGE
+mg-rs   4.4.26    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset, and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo mg-rs -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
+pvc-b16daa50-83fc-4d25-b553-4a25f13166d5   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-0   topolvm-provisioner            2m12s
+pvc-d4616bef-359d-4b73-ab9f-38c24aaaec8c   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-1   topolvm-provisioner            61s
+pvc-ead21204-3dc7-453c-8121-d2fe48b1c3e2   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-2   topolvm-provisioner            18s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MongoDBAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a MongoDBAutoscaler Object.
+
+#### Create MongoDBAutoscaler Object
+
+In order to set up storage autoscaling for this replicaset database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MongoDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MongoDBAutoscaler
+metadata:
+  name: mg-as-rs
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-rs
+  storage:
+    replicaSet:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-rs` database.
+- `spec.storage.replicaSet.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.replicaSet.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, then storage autoscaling will be triggered.
+- `spec.storage.replicaSet.scalingThreshold` specifies the scaling threshold; storage will be scaled up by `50%` of the current amount.
+- It has another field, `spec.storage.replicaSet.expansionMode`, to set the OpsRequest `volumeExpansionMode`, which supports two values: `Online` & `Offline`. The default value is `Online`.
+
+Let's create the `MongoDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-as-rs.yaml
+mongodbautoscaler.autoscaling.kubedb.com/mg-as-rs created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `mongodbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mongodbautoscaler -n demo
+NAME       AGE
+mg-as-rs   20s
+
+$ kubectl describe mongodbautoscaler mg-as-rs -n demo
+Name:         mg-as-rs
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MongoDBAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-08T14:11:46Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:storage:
+          .:
+          f:replicaSet:
+            .:
+            f:scalingThreshold:
+            f:trigger:
+            f:usageThreshold:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2021-03-08T14:11:46Z
+  Resource Version:  152149
+  Self Link:         /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/mongodbautoscalers/mg-as-rs
+  UID:               a0dab64d-e7c4-4819-8ffe-360c70231577
+Spec:
+  Database Ref:
+    Name:  mg-rs
+  Storage:
+    Replica Set:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:
+```
+So, the `mongodbautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+ +Let's exec into the database pod and fill the database volume using the following commands: + +```bash +$ kubectl exec -it -n demo mg-rs-0 -- bash +root@mg-rs-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/760cb655-91fe-4497-ab4a-a771aa53ece4 1014M 335M 680M 33% /data/db +root@mg-rs-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1 +1+0 records in +1+0 records out +524288000 bytes (524 MB, 500 MiB) copied, 0.482378 s, 1.1 GB/s +root@mg-rs-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/760cb655-91fe-4497-ab4a-a771aa53ece4 1014M 835M 180M 83% /data/db +``` + +So, from the above output we can see that the storage usage is 83%, which exceeded the `usageThreshold` 60%. + +Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-mft11m VolumeExpansion Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-mft11m VolumeExpansion Successful 97s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-mg-rs-mft11m +Name: mops-mg-rs-mft11m +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-08T14:15:52Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: + f:app.kubernetes.io/component: + f:app.kubernetes.io/instance: + f:app.kubernetes.io/managed-by: + f:app.kubernetes.io/name: + f:ownerReferences: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:volumeExpansion: + .: + f:replicaSet: + Manager: kubedb-autoscaler + Operation: Update + Time: 2021-03-08T14:15:52Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-08T14:15:52Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: MongoDBAutoscaler + Name: mg-as-rs + UID: a0dab64d-e7c4-4819-8ffe-360c70231577 + Resource Version: 153496 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-mg-rs-mft11m + UID: 84567b84-6de4-4658-b0d2-2c374e03e63d +Spec: + Database Ref: + Name: mg-rs + Type: VolumeExpansion + Volume Expansion: + Replica Set: 1594884096 +Status: + Conditions: + Last Transition Time: 2021-03-08T14:15:52Z + Message: MongoDB ops request is expanding volume of database + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2021-03-08T14:17:02Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: 
ReplicasetVolumeExpansion
+    Status:                True
+    Type:                  ReplicasetVolumeExpansion
+    Last Transition Time:  2021-03-08T14:17:07Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:
+    Status:                True
+    Type:
+    Last Transition Time:  2021-03-08T14:17:12Z
+    Message:               StatefulSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyStatefulSets
+    Status:                True
+    Type:                  ReadyStatefulSets
+    Last Transition Time:  2021-03-08T14:17:12Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                     Age    From                         Message
+  ----    ------                     ----   ----                         -------
+  Normal  PauseDatabase              2m36s  KubeDB Ops-manager operator  Pausing MongoDB demo/mg-rs
+  Normal  PauseDatabase              2m36s  KubeDB Ops-manager operator  Successfully paused MongoDB demo/mg-rs
+  Normal  ReplicasetVolumeExpansion  86s    KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal                             81s    KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal  ResumeDatabase             81s    KubeDB Ops-manager operator  Resuming MongoDB demo/mg-rs
+  Normal  ResumeDatabase             81s    KubeDB Ops-manager operator  Successfully resumed MongoDB demo/mg-rs
+  Normal  ReadyStatefulSets          76s    KubeDB Ops-manager operator  StatefulSet is recreated
+  Normal  Successful                 76s    KubeDB Ops-manager operator  Successfully Expanded Volume
+```
+
+Now, we are going to verify from the `StatefulSet` and the `Persistent Volume` whether the volume of the replicaset database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo mg-rs -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1594884096"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
+pvc-b16daa50-83fc-4d25-b553-4a25f13166d5   2Gi        RWO            Delete           Bound    demo/datadir-mg-rs-0   topolvm-provisioner            11m
+pvc-d4616bef-359d-4b73-ab9f-38c24aaaec8c   2Gi        RWO            Delete           Bound    demo/datadir-mg-rs-1   topolvm-provisioner            10m
+pvc-ead21204-3dc7-453c-8121-d2fe48b1c3e2   2Gi        RWO            Delete           Bound    demo/datadir-mg-rs-2   topolvm-provisioner            9m52s
+```
+
+The above output verifies that we have successfully autoscaled the volume of the MongoDB replicaset database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-rs
+kubectl delete mongodbautoscaler -n demo mg-as-rs
+```
diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/sharding.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/sharding.md
new file mode 100644
index 0000000000..e9ffb1d3ff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/sharding.md
@@ -0,0 +1,422 @@
+---
+title: MongoDB Shard Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-storage-auto-scaling-shard
+    name: Sharding
+    parent: mg-storage-auto-scaling
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of a MongoDB Sharded Database
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a MongoDB Sharded database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Sharded Database
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so this storage class supports volume expansion and we can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm)
+
+Now, we are going to deploy a `MongoDB` sharded database using a version supported by the `KubeDB` operator. Then we are going to apply `MongoDBAutoscaler` to set up autoscaling.
+
+#### Deploy MongoDB Sharded Database
+
+In this section, we are going to deploy a MongoDB sharded database with version `4.4.26`. Then, in the next section, we will set up autoscaling for this database using the `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-sh
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  shardTopology:
+    configServer:
+      storage:
+        storageClassName: topolvm-provisioner
+        resources:
+          requests:
+            storage: 1Gi
+      replicas: 3
+    mongos:
+      replicas: 2
+    shard:
+      storage:
+        storageClassName: topolvm-provisioner
+        resources:
+          requests:
+            storage: 1Gi
+      replicas: 3
+      shards: 2
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MongoDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-sh.yaml
+mongodb.kubedb.com/mg-sh created
+```
+
+Now, wait until `mg-sh` has status `Ready`,
i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME    VERSION   STATUS   AGE
+mg-sh   4.4.26    Ready    3m51s
+```
+
+Let's check the volume size from one of the shard statefulsets, and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo mg-sh-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS          REASON   AGE
+pvc-031836c6-95ae-4015-938c-da183c205828   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-configsvr-0   topolvm-provisioner            5m1s
+pvc-2515233f-0f7d-4d0d-8b45-97a3cb9d4488   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard0-2      topolvm-provisioner            3m44s
+pvc-35f73708-3c11-4ead-a60b-e1679a294b81   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard0-0      topolvm-provisioner            5m
+pvc-4b329feb-8c92-4605-a37e-c02b3499e311   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-configsvr-2   topolvm-provisioner            3m55s
+pvc-52490270-1355-4045-b2a1-872a671ab006   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-configsvr-1   topolvm-provisioner            4m28s
+pvc-80dc91d3-f56f-4037-b6e1-f69e13fb434c   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard1-1      topolvm-provisioner            4m26s
+pvc-c1965a32-7471-4885-ac52-f9eab056d48e   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard1-2      topolvm-provisioner            3m57s
+pvc-c838a27d-c75d-4caa-9c1d-456af3bfaba0   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard1-0      topolvm-provisioner            4m59s
+pvc-d47f19be-f206-41c5-a0b1-5022776fea2f   1Gi        RWO            Delete           Bound    demo/datadir-mg-sh-shard0-1      topolvm-provisioner            4m25s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MongoDBAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a MongoDBAutoscaler Object.
+
+#### Create MongoDBAutoscaler Object
+
+In order to set up storage autoscaling for this sharded database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MongoDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MongoDBAutoscaler
+metadata:
+  name: mg-as-sh
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-sh
+  storage:
+    shard:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-sh` database.
+- `spec.storage.shard.trigger` specifies that storage autoscaling is enabled for the shard pods of this database.
+- `spec.storage.shard.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, then storage autoscaling will be triggered.
+- `spec.storage.shard.scalingThreshold` specifies the scaling threshold; storage will be scaled up by `50%` of the current amount.
+- It has another field, `spec.storage.shard.expansionMode`, to set the OpsRequest `volumeExpansionMode`, which supports two values: `Online` & `Offline`. The default value is `Online`.
+
+> Note: In this demo we are only setting up storage autoscaling for the shard pods, which is why we only specified the shard section of the autoscaler. You can enable autoscaling for the configServer pods in the same YAML by specifying the `spec.storage.configServer` section, similar to the `spec.storage.shard` section we have configured in this demo, as sketched below.
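+
+The following is a minimal sketch of what that could look like, assuming the `configServer` section takes the same fields as the `shard` section; the threshold values are illustrative:
+
+```yaml
+spec:
+  databaseRef:
+    name: mg-sh
+  storage:
+    shard:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+    configServer:          # illustrative; mirrors the shard section
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```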
+
+
+Let's create the `MongoDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-as-sh.yaml
+mongodbautoscaler.autoscaling.kubedb.com/mg-as-sh created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `mongodbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mongodbautoscaler -n demo
+NAME       AGE
+mg-as-sh   20s
+
+$ kubectl describe mongodbautoscaler mg-as-sh -n demo
+Name:         mg-as-sh
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MongoDBAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-08T14:26:06Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:storage:
+          .:
+          f:shard:
+            .:
+            f:scalingThreshold:
+            f:trigger:
+            f:usageThreshold:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2021-03-08T14:26:06Z
+  Resource Version:  156292
+  Self Link:         /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/mongodbautoscalers/mg-as-sh
+  UID:               203e332f-bdfe-470f-a429-a7b60c7be2ee
+Spec:
+  Database Ref:
+    Name:  mg-sh
+  Storage:
+    Shard:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:
+```
+So, the `mongodbautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up one of the persistent volumes to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the database pod and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo mg-sh-shard0-0 -- bash
+root@mg-sh-shard0-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/ad11042f-f4cc-4dfc-9680-2afbbb199d48 1014M  335M  680M  34% /data/db
+root@mg-sh-shard0-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.595358 s, 881 MB/s
+root@mg-sh-shard0-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/ad11042f-f4cc-4dfc-9680-2afbbb199d48 1014M  837M  178M  83% /data/db
+```
+
+So, from the above output we can see that the storage usage is 83%, which exceeds the 60% `usageThreshold`.
+
+Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                TYPE              STATUS        AGE
+mops-mg-sh-ba5ikn   VolumeExpansion   Progressing   41s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                TYPE              STATUS       AGE
+mops-mg-sh-ba5ikn   VolumeExpansion   Successful   2m54s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-mg-sh-ba5ikn
+Name:         mops-mg-sh-ba5ikn
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=mg-sh
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=mongodbs.kubedb.com
+Annotations:
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-08T14:31:52Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:labels:
+          .:
+          f:app.kubernetes.io/component:
+          f:app.kubernetes.io/instance:
+          f:app.kubernetes.io/managed-by:
+          f:app.kubernetes.io/name:
+        f:ownerReferences:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:type:
+        f:volumeExpansion:
+          .:
+          f:shard:
+    Manager:      kubedb-autoscaler
+    Operation:    Update
+    Time:         2021-03-08T14:31:52Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:      kubedb-enterprise
+    Operation:    Update
+    Time:         2021-03-08T14:31:52Z
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  MongoDBAutoscaler
+    Name:                  mg-as-sh
+    UID:                   203e332f-bdfe-470f-a429-a7b60c7be2ee
+  Resource Version:        158488
+  Self Link:               /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-mg-sh-ba5ikn
+  UID:                     c56236c2-5b64-4775-ba5a-35727b96a414
+Spec:
+  Database Ref:
+    Name:  mg-sh
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Shard:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-08T14:31:52Z
+    Message:               MongoDB ops request is expanding volume of database
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2021-03-08T14:34:32Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                ShardVolumeExpansion
+    Status:                True
+    Type:                  ShardVolumeExpansion
+    Last Transition Time:  2021-03-08T14:34:37Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:
+    Status:                True
+    Type:
+    Last Transition Time:  2021-03-08T14:34:42Z
+    Message:               StatefulSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyStatefulSets
+    Status:                True
+    Type:                  ReadyStatefulSets
+    Last Transition Time:  2021-03-08T14:34:42Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                Age    From                         Message
+  ----    ------                ----   ----                         -------
+  Normal  PauseDatabase         3m21s  KubeDB Ops-manager operator  Pausing MongoDB demo/mg-sh
+  Normal  PauseDatabase         3m21s  KubeDB Ops-manager operator  Successfully paused MongoDB demo/mg-sh
+  Normal  ShardVolumeExpansion  41s    KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal                        36s    KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal  ResumeDatabase        36s    KubeDB Ops-manager operator  Resuming MongoDB demo/mg-sh
+  Normal  ResumeDatabase        36s    KubeDB Ops-manager operator  Successfully resumed MongoDB demo/mg-sh
+  Normal  ReadyStatefulSets     31s    KubeDB Ops-manager operator  StatefulSet is recreated
+  Normal  Successful            31s    KubeDB Ops-manager operator  Successfully Expanded Volume
+```
+
+Now, we are going to verify from the `StatefulSet` and the `Persistent Volume` whether the volume of the shard nodes of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo mg-sh-shard0 -o json | jq
'.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1594884096" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-031836c6-95ae-4015-938c-da183c205828 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-0 topolvm-provisioner 13m +pvc-2515233f-0f7d-4d0d-8b45-97a3cb9d4488 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-2 topolvm-provisioner 11m +pvc-35f73708-3c11-4ead-a60b-e1679a294b81 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-0 topolvm-provisioner 13m +pvc-4b329feb-8c92-4605-a37e-c02b3499e311 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-2 topolvm-provisioner 11m +pvc-52490270-1355-4045-b2a1-872a671ab006 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-1 topolvm-provisioner 12m +pvc-80dc91d3-f56f-4037-b6e1-f69e13fb434c 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-1 topolvm-provisioner 12m +pvc-c1965a32-7471-4885-ac52-f9eab056d48e 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-2 topolvm-provisioner 11m +pvc-c838a27d-c75d-4caa-9c1d-456af3bfaba0 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-0 topolvm-provisioner 12m +pvc-d47f19be-f206-41c5-a0b1-5022776fea2f 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-1 topolvm-provisioner 12m +``` + +The above output verifies that we have successfully autoscaled the volume of the shard nodes of this MongoDB database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-sh +kubectl delete mongodbautoscaler -n demo mg-as-sh +``` diff --git a/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/standalone.md b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/standalone.md new file mode 100644 index 0000000000..93761f3fcb --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/autoscaler/storage/standalone.md @@ -0,0 +1,390 @@ +--- +title: MongoDB Standalone Autoscaling +menu: + docs_v2024.1.31: + identifier: mg-storage-auto-scaling-standalone + name: Standalone + parent: mg-storage-auto-scaling + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Storage Autoscaling of a MongoDB Standalone Database + +This guide will show you how to use `KubeDB` to autoscale the storage of a MongoDB standalone database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) + +- You must have a `StorageClass` that supports volume expansion. 
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBAutoscaler](/docs/v2024.1.31/guides/mongodb/concepts/autoscaler)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/mongodb/autoscaler/storage/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Standalone Database
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+From the output, we can see that the `topolvm-provisioner` storage class has its `ALLOWVOLUMEEXPANSION` field set to `true`, so this storage class supports volume expansion and we can use it. You can install TopoLVM from [here](https://github.com/topolvm/topolvm).
+
+Now, we are going to deploy a `MongoDB` standalone using a version supported by the `KubeDB` operator. Then we are going to apply a `MongoDBAutoscaler` to set up autoscaling.
+
+#### Deploy MongoDB standalone
+
+In this section, we are going to deploy a MongoDB standalone database with version `4.4.26`. Then, in the next section, we will set up autoscaling for this database using the `MongoDBAutoscaler` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: topolvm-provisioner
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MongoDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-standalone.yaml
+mongodb.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
+pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428   1Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   topolvm-provisioner            7m41s
+```
+
+You can see the statefulset has 1Gi of storage, and the capacity of the persistent volume is also 1Gi.
+
+We are now ready to apply the `MongoDBAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a `MongoDBAutoscaler` object.
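+
+Before creating the autoscaler, you can double-check that the storage class really permits expansion by reading the `allowVolumeExpansion` field directly. This is just a sanity check on what the `kubectl get storageclass` output above already showed:
+
+```bash
+# Should print "true" for a StorageClass that supports volume expansion
+$ kubectl get storageclass topolvm-provisioner -o jsonpath='{.allowVolumeExpansion}'
+true
+```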
+
+#### Create MongoDBAutoscaler Object
+
+In order to set up storage autoscaling for this standalone database, we have to create a `MongoDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `MongoDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MongoDBAutoscaler
+metadata:
+  name: mg-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-standalone
+  storage:
+    standalone:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-standalone` database.
+- `spec.storage.standalone.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.standalone.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.standalone.scalingThreshold` specifies the scaling threshold; the storage will be expanded by `50%` of the current amount.
+- There is another field, `spec.storage.standalone.expansionMode`, to set the `volumeExpansionMode` of the generated ops request, which supports two values: `Online` & `Offline`. The default value is `Online`.
+
+Let's create the `MongoDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/storage/mg-as-standalone.yaml
+mongodbautoscaler.autoscaling.kubedb.com/mg-as created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `mongodbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mongodbautoscaler -n demo
+NAME    AGE
+mg-as   102s
+
+$ kubectl describe mongodbautoscaler mg-as -n demo
+Name:         mg-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MongoDBAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-08T12:58:01Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:storage:
+          .:
+          f:standalone:
+            .:
+            f:scalingThreshold:
+            f:trigger:
+            f:usageThreshold:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2021-03-08T12:58:01Z
+  Resource Version:  134423
+  Self Link:         /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/mongodbautoscalers/mg-as
+  UID:               999a2dc9-7eb7-4ed2-9e90-d3f8b21c091a
+Spec:
+  Database Ref:
+    Name:  mg-standalone
+  Storage:
+    Standalone:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:                   <none>
+```
+
+So, the `mongodbautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see whether storage autoscaling works.
+
+Let's exec into the database pod and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo mg-standalone-0 -- bash
+root@mg-standalone-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0 1014M  334M  681M  33% /data/db
+root@mg-standalone-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.359202 s, 1.5 GB/s
+root@mg-standalone-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0 1014M  835M  180M  83% /data/db
+```
+
+So, from the above output, we can see that the storage usage is now 83%, which exceeds the `usageThreshold` of 60%.
+
+Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                        TYPE              STATUS        AGE
+mops-mg-standalone-p27c11   VolumeExpansion   Progressing   26s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                        TYPE              STATUS       AGE
+mops-mg-standalone-p27c11   VolumeExpansion   Successful   73s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-mg-standalone-p27c11
+Name:         mops-mg-standalone-p27c11
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=mg-standalone
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=mongodbs.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-08T13:19:51Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:labels:
+          .:
+          f:app.kubernetes.io/component:
+          f:app.kubernetes.io/instance:
+          f:app.kubernetes.io/managed-by:
+          f:app.kubernetes.io/name:
+        f:ownerReferences:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:type:
+        f:volumeExpansion:
+          .:
+          f:standalone:
+    Manager:      kubedb-autoscaler
+    Operation:    Update
+    Time:         2021-03-08T13:19:51Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:         kubedb-enterprise
+    Operation:       Update
+    Time:            2021-03-08T13:19:52Z
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  MongoDBAutoscaler
+    Name:                  mg-as
+    UID:                   999a2dc9-7eb7-4ed2-9e90-d3f8b21c091a
+  Resource Version:  139871
+  Self Link:         /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-mg-standalone-p27c11
+  UID:               9606485d-9dd8-4787-9c7c-61fc874c555e
+Spec:
+  Database Ref:
+    Name:  mg-standalone
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Standalone:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-08T13:19:52Z
+    Message:               MongoDB ops request is expanding volume of database
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  
2021-03-08T13:20:47Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                StandaloneVolumeExpansion
+    Status:                True
+    Type:                  StandaloneVolumeExpansion
+    Last Transition Time:  2021-03-08T13:20:52Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:
+    Status:                True
+    Type:
+    Last Transition Time:  2021-03-08T13:20:57Z
+    Message:               StatefulSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyStatefulSets
+    Status:                True
+    Type:                  ReadyStatefulSets
+    Last Transition Time:  2021-03-08T13:20:57Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                     Age   From                         Message
+  ----    ------                     ----  ----                         -------
+  Normal  PauseDatabase              110s  KubeDB Ops-manager operator  Pausing MongoDB demo/mg-standalone
+  Normal  PauseDatabase              110s  KubeDB Ops-manager operator  Successfully paused MongoDB demo/mg-standalone
+  Normal  StandaloneVolumeExpansion  55s   KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal                             50s   KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal  ResumeDatabase             50s   KubeDB Ops-manager operator  Resuming MongoDB demo/mg-standalone
+  Normal  ResumeDatabase             50s   KubeDB Ops-manager operator  Successfully resumed MongoDB demo/mg-standalone
+  Normal  ReadyStatefulSets          45s   KubeDB Ops-manager operator  StatefulSet is recreated
+  Normal  Successful                 45s   KubeDB Ops-manager operator  Successfully Expanded Volume
+```
+
+Now, we are going to verify from the `StatefulSet` and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1594884096"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
+pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428   2Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   topolvm-provisioner            26m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the MongoDB standalone database.
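+
+You can also read the expanded capacity straight from the PVC instead of scanning the `kubectl get pv` output; `datadir-mg-standalone-0` is the claim name shown in the `CLAIM` column above:
+
+```bash
+# The PVC's status reflects the provisioned capacity after expansion
+$ kubectl get pvc -n demo datadir-mg-standalone-0 -o jsonpath='{.status.capacity.storage}'
+2Gi
+```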
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-standalone +kubectl delete mongodbautoscaler -n demo mg-as +``` diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/_index.md b/content/docs/v2024.1.31/guides/mongodb/backup/_index.md new file mode 100755 index 0000000000..fc1beafdb9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/_index.md @@ -0,0 +1,22 @@ +--- +title: Backup & Restore MongoDB +menu: + docs_v2024.1.31: + identifier: guides-mongodb-backup + name: Backup & Restore + parent: mg-mongodb-guides + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/backupblueprint.yaml new file mode 100644 index 0000000000..11c91b4034 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/backupblueprint.yaml @@ -0,0 +1,17 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: mongodb-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: mongodb-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-2.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-2.yaml new file mode 100644 index 0000000000..c02836a447 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-2.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: mongodb-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-3.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-3.yaml new file mode 100644 index 0000000000..acd02f214b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb-3.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: mongodb-backup-template + params.stash.appscode.com/args: "--db=testdb" +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb.yaml new file mode 
100644 index 0000000000..a7a3c2f237 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/examples/sample-mongodb.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: mongodb-backup-template +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-2.png b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-2.png new file mode 100644 index 0000000000..f7ded6b8ea Binary files /dev/null and b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-2.png differ diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-3.png b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-3.png new file mode 100644 index 0000000000..9b6c72d15e Binary files /dev/null and b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb-3.png differ diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb.png b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb.png new file mode 100644 index 0000000000..2936be2cc3 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/images/sample-mongodb.png differ diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/index.md new file mode 100644 index 0000000000..793a3cdca1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/auto-backup/index.md @@ -0,0 +1,706 @@ +--- +title: MongoDB Auto-Backup | Stash +description: Backup MongoDB using Stash Auto-Backup +menu: + docs_v2024.1.31: + identifier: guides-mongodb-backup-auto-backup + name: Auto-Backup + parent: guides-mongodb-backup + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +# Backup MongoDB using Stash Auto-Backup + +Stash can be configured to automatically backup any MongoDB database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that the database owners can easily backup their database with just a few annotations. + +In this tutorial, we are going to show how you can configure a backup blueprint for MongoDB databases in your cluster and backup them with few annotations. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/). +- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README). +- If you are not familiar with how Stash backup and restore MongoDB databases, please check the following guide [here](/docs/v2024.1.31/guides/mongodb/backup/overview/). 
+- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/).
+- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/).
+
+You should be familiar with the following `Stash` concepts:
+
+- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [Repository](https://stash.run/docs/latest/concepts/crds/repository/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+
+In this tutorial, we are going to show the backup of three different MongoDB databases in three different namespaces named `demo`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done so already.
+
+```bash
+❯ kubectl create ns demo
+namespace/demo created
+
+❯ kubectl create ns demo-2
+namespace/demo-2 created
+
+❯ kubectl create ns demo-3
+namespace/demo-3 created
+```
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the MongoDB addons using the following command.
+
+```bash
+❯ kubectl get tasks.stash.appscode.com | grep mongodb
+mongodb-backup-3.4.17          23h
+mongodb-backup-3.4.22          23h
+mongodb-backup-3.6.13          23h
+mongodb-backup-3.6.8           23h
+mongodb-backup-4.0.11          23h
+mongodb-backup-4.0.3           23h
+mongodb-backup-4.0.5           23h
+mongodb-backup-4.1.13          23h
+mongodb-backup-4.1.4           23h
+mongodb-backup-4.1.7           23h
+mongodb-backup-4.4.6           23h
+mongodb-backup-4.4.6           23h
+mongodb-backup-5.0.3           23h
+mongodb-restore-3.4.17         23h
+mongodb-restore-3.4.22         23h
+mongodb-restore-3.6.13         23h
+mongodb-restore-3.6.8          23h
+mongodb-restore-4.0.11         23h
+mongodb-restore-4.0.3          23h
+mongodb-restore-4.0.5          23h
+mongodb-restore-4.1.13         23h
+mongodb-restore-4.1.4          23h
+mongodb-restore-4.1.7          23h
+mongodb-restore-4.4.6          23h
+mongodb-restore-4.4.6          23h
+mongodb-restore-5.0.3          23h
+
+```
+
+## Prepare Backup Blueprint
+
+To backup a MongoDB database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`.
+
+The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to backup any MongoDB database in any namespace just by creating the storage `Secret` in that namespace and adding a few annotations to your MongoDB CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to backup the database.
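+
+Because `BackupBlueprint` is cluster-scoped, you can confirm this yourself from the API discovery information; the `NAMESPACED` column should read `false` (output trimmed):
+
+```bash
+$ kubectl api-resources --api-group=stash.appscode.com | grep backupblueprint
+backupblueprints   stash.appscode.com/v1beta1   false   BackupBlueprint
+```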
+
+Below is the `BackupBlueprint` object that we are going to use in this tutorial,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupBlueprint
+metadata:
+  name: mongodb-backup-template
+spec:
+  # ============== Blueprint for Repository  ==========================
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mongodb-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
+    storageSecretName: gcs-secret
+  # ============== Blueprint for BackupConfiguration =================
+  schedule: "*/5 * * * *"
+  retentionPolicy:
+    name: 'keep-last-5'
+    keepLast: 5
+    prune: true
+```
+
+Here, we are using a GCS bucket as our backend. We are providing the Secret name `gcs-secret` in the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint.
+
+Notice the `prefix` field of the `backend` section. We have used some variables in the form `${VARIABLE_NAME}`. Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance.
+
+Let's create the `BackupBlueprint` we have shown above,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/auto-backup/examples/backupblueprint.yaml
+backupblueprint.stash.appscode.com/mongodb-backup-template created
+```
+
+Now, we are ready to backup our MongoDB databases using a few annotations. You can check the available auto-backup annotations for a database from [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database).
+
+## Auto-backup with default configurations
+
+In this section, we are going to backup a MongoDB database from the `demo` namespace using the default configurations specified in the `BackupBlueprint`.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ echo -n 'changeit' > RESTIC_PASSWORD
+❯ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+❯ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a MongoDB CRO in the `demo` namespace. Below is the YAML of the MongoDB object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mongodb
+  namespace: demo
+  annotations:
+    stash.appscode.com/backup-blueprint: mongodb-backup-template
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier through the `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`.
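+
+Note that the annotation does not have to be baked into the manifest. For a database that is already running, you could attach the same blueprint after the fact with `kubectl annotate` (an equivalent alternative; this demo sets the annotation in the YAML instead):
+
+```bash
+# Point an existing MongoDB object at the blueprint
+$ kubectl annotate mongodb -n demo sample-mongodb \
+    stash.appscode.com/backup-blueprint=mongodb-backup-template
+```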
+
+Let's create the above MongoDB CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/auto-backup/examples/sample-mongodb.yaml
+mongodb.kubedb.com/sample-mongodb created
+```
+
+### Verify Auto-backup configured
+
+In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for the MongoDB database we have just deployed.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our MongoDB or not.
+
+```bash
+❯ kubectl get repository -n demo
+NAME                 INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-mongodb                                                                10s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo app-sample-mongodb -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  creationTimestamp: "2022-02-02T05:49:00Z"
+  finalizers:
+  - stash
+  generation: 1
+  name: app-sample-mongodb
+  namespace: demo
+  resourceVersion: "283554"
+  uid: d025358c-2f60-4d35-8efb-27c42439d28e
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mongodb-backup/demo/mongodb/sample-mongodb
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our MongoDB in the `demo` namespace, and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` CRD by the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo
+NAMESPACE   NAME                 TASK   SCHEDULE      PAUSED   PHASE   AGE
+demo        app-sample-mongodb          */5 * * * *            Ready   4m11s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+
+```yaml
+❯ kubectl get backupconfiguration -n demo app-sample-mongodb -o yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  creationTimestamp: "2022-02-02T05:49:00Z"
+  finalizers:
+  - stash.appscode.com
+  generation: 1
+  name: app-sample-mongodb
+  namespace: demo
+  ownerReferences:
+  - apiVersion: appcatalog.appscode.com/v1alpha1
+    blockOwnerDeletion: true
+    controller: true
+    kind: AppBinding
+    name: sample-mongodb
+    uid: 481ea54c-5a77-43a9-8230-f906f9d240bf
+  resourceVersion: "283559"
+  uid: aa2a1195-8ed7-4238-b807-66fb5b09505f
+spec:
+  driver: Restic
+  repository:
+    name: app-sample-mongodb
+  retentionPolicy:
+    keepLast: 5
+    name: keep-last-5
+    prune: true
+  runtimeSettings: {}
+  schedule: '*/5 * * * *'
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mongodb
+  task: {}
+  tempDir: {}
+status:
+  conditions:
+  - lastTransitionTime: "2022-02-02T05:49:00Z"
+    message: Repository demo/app-sample-mongodb exist.
+    reason: RepositoryAvailable
+    status: "True"
+    type: RepositoryFound
+  - lastTransitionTime: "2022-02-02T05:49:00Z"
+    message: Backend Secret demo/ does not exist.
+    reason: BackendSecretNotAvailable
+    status: "False"
+    type: BackendSecretFound
+  observedGeneration: 1
+```
+
+Notice the `target` section. Stash has automatically added the MongoDB as the target of this `BackupConfiguration`.
+
+#### Verify Backup
+
+Now, let's wait for a backup run to complete.
You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-mongodb-1643781603 BackupConfiguration app-sample-mongodb Running 30s +app-sample-mongodb-1643781603 BackupConfiguration app-sample-mongodb Succeeded 31s 30s + +``` + +Once the backup has been completed successfully, you should see the backed up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
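+
+The `Repository` status is updated after a successful run as well. Re-listing it should now show non-empty `INTEGRITY`, `SIZE`, `SNAPSHOT-COUNT`, and `LAST-SUCCESSFUL-BACKUP` columns (the exact values depend on your data):
+
+```bash
+$ kubectl get repository -n demo app-sample-mongodb
+```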
+ +## Auto-backup with a custom schedule + +In this section, we are going to backup an MongoDB database from `demo-2` namespace. This time, we are going to overwrite the default schedule used in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-2` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-2 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create an MongoDB CRO in `demo-2` namespace. Below is the YAML of the MongoDB object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: mongodb-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Notice the `annotations` section. This time, we have passed a schedule via `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above MongoDB CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/auto-backup/examples/sample-mongodb-2.yaml +mongodb.kubedb.com/sample-mongodb-2 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup has been configured properly or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our MongoDB or not. + +```bash +❯ kubectl get repository -n demo-2 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-mongodb-2 4s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-2 app-sample-mongodb-2 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + creationTimestamp: "2022-02-02T06:19:21Z" + finalizers: + - stash + generation: 1 + name: app-sample-mongodb-2 + namespace: demo-2 + resourceVersion: "286925" + uid: e1948d2d-2a15-41ea-99f9-5b59394c10c1 +spec: + backend: + gcs: + bucket: stash-testing + prefix: mongodb-backup/demo-2/mongodb/sample-mongodb-2 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our MongoDB in `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-2 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-mongodb-2 mongodb-backup-10.5.23 */3 * * * * Ready 3m24s +``` + +Now, let's check the YAML of the `BackupConfiguration`. 
+ +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-sample-mongodb-2 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + creationTimestamp: "2022-02-02T06:19:21Z" + finalizers: + - stash.appscode.com + generation: 1 + name: app-sample-mongodb-2 + namespace: demo-2 + ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mongodb-2 + uid: 7c18485f-ed8e-4c01-b160-3bbc4e5049db + resourceVersion: "286938" + uid: 279c0471-0618-4b73-85d0-edd70ec2e132 +spec: + driver: Restic + repository: + name: app-sample-mongodb-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb-2 + task: {} + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-02-02T06:19:21Z" + message: Repository demo-2/app-sample-mongodb-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-02-02T06:19:21Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-02-02T06:19:21Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mongodb-2 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-02-02T06:19:21Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new MongoDB as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-mongodb-2-1643782861 BackupConfiguration app-sample-mongodb-2 Succeeded 31s 2m17s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
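+
+Behind the scenes, Stash drives the schedule with a backup triggering CronJob, as the `CronJobCreated` condition above indicates. If you want to confirm that the overridden 3-minute schedule was applied, you can list the CronJobs in the `demo-2` namespace (the CronJob name may vary between Stash versions):
+
+```bash
+# The SCHEDULE column should show the value from the annotation: */3 * * * *
+$ kubectl get cronjob -n demo-2
+```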
+
+## Auto-backup with custom parameters
+
+In this section, we are going to backup a MongoDB database in the `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo-3` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ kubectl create secret generic -n demo-3 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a MongoDB CRO in the `demo-3` namespace. Below is the YAML of the MongoDB object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mongodb-3
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: mongodb-backup-template
+    params.stash.appscode.com/args: "--db=testdb"
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. This time, we have passed an argument via the `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation.
+
+Let's create the above MongoDB CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/auto-backup/examples/sample-mongodb-3.yaml
+mongodb.kubedb.com/sample-mongodb-3 created
+```
+
+### Verify Auto-backup configured
+
+Now, let's verify whether the auto-backup resources have been created or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our MongoDB or not.
+
+```bash
+❯ kubectl get repository -n demo-3
+NAME                   INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-mongodb-3                                                                8s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo-3 app-sample-mongodb-3 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  creationTimestamp: "2022-02-02T06:45:56Z"
+  finalizers:
+  - stash
+  generation: 1
+  name: app-sample-mongodb-3
+  namespace: demo-3
+  resourceVersion: "302950"
+  uid: 00b74653-fd08-42ba-a699-1b012e1e7da8
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mongodb-backup/demo-3/mongodb/sample-mongodb-3
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this new database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our MongoDB in the `demo-3` namespace, and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` CRD by the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo-3
+NAME                   TASK                     SCHEDULE      PAUSED   PHASE   AGE
+app-sample-mongodb-3   mongodb-backup-10.5.23   */5 * * * *            Ready   106s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+ +```yaml +❯ kubectl get backupconfiguration -n demo-3 app-sample-mongodb-3 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + creationTimestamp: "2022-02-02T08:29:43Z" + finalizers: + - stash.appscode.com + generation: 1 + name: app-sample-mongodb-3 + namespace: demo-3 + ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mongodb-3 + uid: 54deac95-790b-4fc1-93ec-fd3758cac71e + resourceVersion: "301618" + uid: 6ecb511e-1c6c-4d0b-b241-277c0b0d1059 +spec: + driver: Restic + repository: + name: app-sample-mongodb-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb-3 + task: + params: + - name: args + value: --db=testdb + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-02-02T08:29:43Z" + message: Repository demo-3/app-sample-mongodb-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-02-02T08:29:43Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-02-02T08:29:43Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mongodb-3 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-02-02T08:29:43Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new MongoDB as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-mongodb-3-1643792101 BackupConfiguration app-sample-mongodb-3 Succeeded 39s 118s + +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
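+
+While testing parameters like these, you do not have to wait for the next scheduled run; you can trigger an instant backup by creating a `BackupSession` that references the auto-generated `BackupConfiguration`. A minimal sketch, assuming the Stash `v1beta1` invoker layout (the `metadata.name` here is arbitrary):
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupSession
+metadata:
+  name: app-sample-mongodb-3-manual
+  namespace: demo-3
+spec:
+  invoker:
+    apiGroup: stash.appscode.com
+    kind: BackupConfiguration
+    name: app-sample-mongodb-3
+```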
+ +## Cleanup + +To cleanup the resources crated by this tutorial, run the following commands, + +```bash +❯ kubectl delete -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/auto-backup/examples/ +backupblueprint.stash.appscode.com "mongodb-backup-template" deleted +mongodb.kubedb.com "sample-mongodb-2" deleted +mongodb.kubedb.com "sample-mongodb-3" deleted +mongodb.kubedb.com "sample-mongodb" deleted + +❯ kubectl delete repository -n demo --all +repository.stash.appscode.com "app-sample-mongodb" deleted +❯ kubectl delete repository -n demo-2 --all +repository.stash.appscode.com "app-sample-mongodb-2" deleted +❯ kubectl delete repository -n demo-3 --all +repository.stash.appscode.com "app-sample-mongodb-3" deleted +``` diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/multi-retention-policy.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/multi-retention-policy.yaml new file mode 100644 index 0000000000..c74ebdeaea --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/multi-retention-policy.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + retentionPolicy: + name: sample-mongodb-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/passing-args.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/passing-args.yaml new file mode 100644 index 0000000000..78becf33f0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/passing-args.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: --db=testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/resource-limit.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/resource-limit.yaml new file mode 100644 index 0000000000..4905792ba6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/resource-limit.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mongodb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/specific-user.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/specific-user.yaml new file mode 100644 index 0000000000..15ea03fea2 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/backup/specific-user.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/repository.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/repository.yaml new file mode 100644 index 0000000000..8a6aaab13b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/customizing + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/passing-args.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/passing-args.yaml new file mode 100644 index 0000000000..eacf98884a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/passing-args.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mongodb-restore + namespace: demo +spec: + task: + params: + - name: args + value: --db=testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/resource-limit.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/resource-limit.yaml new file mode 100644 index 0000000000..ee24216560 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,25 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mongodb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] + diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-snapshot.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-snapshot.yaml new file mode 100644 index 0000000000..a185701d26 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-snapshot.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mongodb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + rules: + - snapshots: [4bc21d6f] diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-user.yaml 
b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-user.yaml
new file mode 100644
index 0000000000..561cab3156
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/restore/specific-user.yaml
@@ -0,0 +1,21 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mongodb
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/sample-mariadb.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/sample-mariadb.yaml
new file mode 100644
index 0000000000..d254e8cc4f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/examples/sample-mariadb.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/customization/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/customization/index.md
new file mode 100644
index 0000000000..a38c091ee5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/customization/index.md
@@ -0,0 +1,293 @@
+---
+title: MongoDB Backup Customization | Stash
+description: Customizing MongoDB Backup and Restore process with Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mongodb-backup-customization
+    name: Customizing Backup & Restore Process
+    parent: guides-mongodb-backup
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Customizing Backup and Restore Process
+
+Stash provides rich customization support for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations.
+
+## Customizing Backup Process
+
+In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup job as a specific user, limiting the resources of the backup job, using multiple retention policies, etc.
+
+### Passing arguments to the backup process
+
+Stash MongoDB addon uses [mongodump](https://docs.mongodb.com/database-tools/mongodump/) for backup. You can pass arguments to `mongodump` through the `args` param under the `task.params` section.
+
+The below example shows how you can pass `--db=testdb` to take a backup of a specific MongoDB database named `testdb`.
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + params: + - name: args + value: --db=testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +> **WARNING**: Make sure that you have the specific database created before taking backup. In this case, Database `testdb` should exist before the backup job starts. + +### Running backup job as a specific user + +If your cluster requires running the backup job as a specific user, you can provide `securityContext` under `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Specifying Memory/CPU limit/request for the backup job + +If you want to specify the Memory/CPU limit/request for your backup job, you can specify `resources` field under `runtimeSettings.container` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Using multiple retention policies + +You can also specify multiple retention policies for your backed up data. For example, you may want to keep few daily snapshots, few weekly snapshots, and few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + retentionPolicy: + name: sample-mongodb-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true +``` + +To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy). + +## Customizing Restore Process + +Stash also uses `mongorestore` during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run restore job as a specific user, etc. + +### Passing arguments to the restore process + +Similar to the backup process, you can pass arguments to the restore process through the `args` params under `task.params` section. This example will restore data from database `testdb`. 
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  task:
+    params:
+    - name: args
+      value: --db=testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mongodb
+  rules:
+    - snapshots: [latest]
+```
+
+### Restore specific snapshot
+
+You can also restore a specific snapshot. At first, list the available snapshots as below,
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                ID         REPOSITORY   HOSTNAME   CREATED AT
+gcs-repo-4bc21d6f   4bc21d6f   gcs-repo     host-0     2021-02-12T14:54:27Z
+gcs-repo-f0ac7cbd   f0ac7cbd   gcs-repo     host-0     2021-02-12T14:56:26Z
+gcs-repo-9210ebb6   9210ebb6   gcs-repo     host-0     2021-02-12T14:58:27Z
+gcs-repo-0aff8890   0aff8890   gcs-repo     host-0     2021-02-12T15:00:28Z
+```
+
+>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot).
+
+The below example shows how you can pass a specific snapshot ID through the `snapshots` field of the `rules` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mongodb
+  rules:
+    - snapshots: [4bc21d6f]
+```
+
+>Please do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore.
+
+### Running restore job as a specific user
+
+You can provide `securityContext` under the `runtimeSettings.pod` section to run the restore job as a specific user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mongodb
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
+```
+
+### Specifying Memory/CPU limit/request for the restore job
+
+Similar to the backup process, you can also provide the `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job.
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mongodb-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] +``` + +## Cleanup +To cleanup the resources crated by this tutorial, run the following commands, + +```bash +❯ kubectl delete backupconfiguration -n demo +❯ kubectl delete restoresession -n demo +``` + diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/_index.md b/content/docs/v2024.1.31/guides/mongodb/backup/logical/_index.md new file mode 100644 index 0000000000..6c5691263e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/_index.md @@ -0,0 +1,22 @@ +--- +title: Logical Backup of MongoDB Using Stash +menu: + docs_v2024.1.31: + identifier: guides-mongodb-backup-logical + name: Logical Backup + parent: guides-mongodb-backup + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/backupconfiguration-replicaset.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/backupconfiguration-replicaset.yaml new file mode 100644 index 0000000000..b0103b18d7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/backupconfiguration-replicaset.yaml @@ -0,0 +1,18 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mgo-rs-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo-replicaset + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mgo-rs + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/mongodb-replicaset.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/mongodb-replicaset.yaml new file mode 100644 index 0000000000..9cbd6dc992 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/mongodb-replicaset.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mgo-rs + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/repository-replicaset.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/repository-replicaset.yaml new file mode 100644 index 0000000000..02c7d4b469 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/repository-replicaset.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo-replicaset + namespace: demo +spec: + backend: + gcs: + bucket: appscode-qa + prefix: demo/mongodb/sample-mgo-rs + storageSecretName: gcs-secret diff --git 
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-mongodb-replicaset.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-mongodb-replicaset.yaml
new file mode 100644
index 0000000000..228e44dd78
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-mongodb-replicaset.yaml
@@ -0,0 +1,20 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: restored-mgo-rs
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicas: 3
+  replicaSet:
+    name: rs0
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+  init:
+    waitForInitialRestore: true
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-standalone.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-standalone.yaml
new file mode 100644
index 0000000000..43d845dba4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restored-standalone.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: restored-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-replicaset.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-replicaset.yaml
new file mode 100644
index 0000000000..e8aed12fb5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-replicaset.yaml
@@ -0,0 +1,15 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mgo-rs-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo-replicaset
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mgo-rs
+  rules:
+  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-standalone.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-standalone.yaml
new file mode 100644
index 0000000000..d22fea850e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/restoresession-standalone.yaml
@@ -0,0 +1,17 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  task:
+    name: mongodb-restore-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mongodb
+  rules:
+  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/standalone-backup.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/standalone-backup.yaml
new file mode 100644
index 0000000000..da807d2839
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/examples/standalone-backup.yaml
@@ -0,0 +1,47 @@
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: sample-mgo-rs-custom
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-mgo-rs
+      port: 27017
+      scheme: mongodb
+  secret:
+    name: sample-mgo-rs-auth
+  type: kubedb.com/mongodb
+---
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-custom
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-rs/standalone
+    storageSecretName: gcs-secret
+---
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-rs-backup2
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    name: mongodb-backup-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-rs-custom
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/index.md
new file mode 100644
index 0000000000..a3b75a7c60
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/index.md
@@ -0,0 +1,735 @@
+---
+title: Backup & Restore MongoDB ReplicaSet Cluster | Stash
+description: Backup and restore MongoDB ReplicaSet cluster using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mongodb-backup-logical-replicaset
+    name: MongoDB ReplicaSet Cluster
+    parent: guides-mongodb-backup-logical
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup and Restore MongoDB ReplicaSet Clusters using Stash
+
+Stash supports backing up and restoring MongoDB ReplicaSet clusters in an ["idiomatic" way](https://docs.mongodb.com/manual/tutorial/restore-replica-set-from-backup/). This guide will show you how you can back up and restore your MongoDB ReplicaSet clusters with Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores MongoDB databases, please check the guide [here](/docs/v2024.1.31/guides/mongodb/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.
+
+```console
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Backup MongoDB ReplicaSet using Stash
+
+This section will demonstrate how to back up a MongoDB ReplicaSet cluster. Here, we are going to deploy a MongoDB ReplicaSet using KubeDB. Then, we are going to back up this database into a GCS bucket. Finally, we are going to restore the backed up data into another MongoDB ReplicaSet.
+
+### Deploy Sample MongoDB ReplicaSet
+
+Let's deploy a sample MongoDB ReplicaSet database and insert some data into it.
+
+**Create MongoDB CRD:**
+
+Below is the YAML of a sample MongoDB crd that we are going to create for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mgo-rs
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicas: 3
+  replicaSet:
+    name: rs0
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Create the above `MongoDB` crd,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/mongodb-replicaset.yaml
+mongodb.kubedb.com/sample-mgo-rs created
+```
+
+KubeDB will deploy a MongoDB database according to the above specification. It will also create the necessary secrets and services to access the database.
+
+Let's check if the database is ready to use,
+
+```console
+$ kubectl get mg -n demo sample-mgo-rs
+NAME            VERSION   STATUS   AGE
+sample-mgo-rs   4.4.26    Ready    1m
+```
+
+The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands,
+
+```console
+$ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mgo-rs
+NAME                 TYPE     DATA   AGE
+sample-mgo-rs-auth   Opaque   2      117s
+sample-mgo-rs-cert   Opaque   4      116s
+
+$ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-mgo-rs
+NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
+sample-mgo-rs       ClusterIP   10.107.13.16   <none>        27017/TCP   2m14s
+sample-mgo-rs-gvr   ClusterIP   None           <none>        27017/TCP   2m14s
+```
+
+KubeDB creates an [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding) crd that holds the necessary information to connect with the database.
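+
+If you are scripting these steps, you can block until KubeDB reports the database phase as `Ready` instead of polling manually. This is an optional sketch, assuming your `kubectl` supports `--for=jsonpath` (v1.23+):
+
+```console
+$ kubectl wait -n demo mg/sample-mgo-rs --for=jsonpath='{.status.phase}'=Ready --timeout=10m
+mongodb.kubedb.com/sample-mgo-rs condition met
+```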
+
+**Verify AppBinding:**
+
+Verify that the `AppBinding` has been created successfully using the following command,
+
+```console
+$ kubectl get appbindings -n demo
+NAME            AGE
+sample-mgo-rs   58s
+```
+
+Let's check the YAML of the above `AppBinding`,
+
+```console
+$ kubectl get appbindings -n demo sample-mgo-rs -o yaml
+```
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"sample-mgo-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"4.4.26"}}
+  creationTimestamp: "2022-10-26T04:42:05Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mgo-rs
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mongodbs.kubedb.com
+  name: sample-mgo-rs
+  namespace: demo
+  ownerReferences:
+  - apiVersion: kubedb.com/v1alpha2
+    blockOwnerDeletion: true
+    controller: true
+    kind: MongoDB
+    name: sample-mgo-rs
+    uid: 658bf7d1-3772-4c89-84db-5ac74a6c5851
+  resourceVersion: "577375"
+  uid: b0cd9885-53d9-4a2b-93a9-cf9fa90594fd
+spec:
+  appRef:
+    apiGroup: kubedb.com
+    kind: MongoDB
+    name: sample-mgo-rs
+    namespace: demo
+  clientConfig:
+    service:
+      name: sample-mgo-rs
+      port: 27017
+      scheme: mongodb
+  parameters:
+    apiVersion: config.kubedb.com/v1alpha1
+    kind: MongoConfiguration
+    replicaSets:
+      host-0: rs0/sample-mgo-rs-0.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-1.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-2.sample-mgo-rs-pods.demo.svc:27017
+    stash:
+      addon:
+        backupTask:
+          name: mongodb-backup-4.4.6
+        restoreTask:
+          name: mongodb-restore-4.4.6
+  secret:
+    name: sample-mgo-rs-auth
+  type: kubedb.com/mongodb
+  version: 4.4.26
+```
+
+Stash uses the `AppBinding` crd to connect with the target database. It requires the following fields to be set in the AppBinding's `spec` section.
+
+- `spec.appRef` refers to the underlying application.
+- `spec.clientConfig` defines how to communicate with the application.
+- `spec.clientConfig.service.name` specifies the name of the service that connects to the database.
+- `spec.secret` specifies the name of the secret that holds the necessary credentials to access the database.
+- `spec.parameters.replicaSets` contains the DSN of the replicaset as key-value pairs. If there is only one replicaset (there can be multiple replicasets because of sharding), the `replicaSets` field contains a single key-value pair where the key is `host-0` and the value is the DSN of that replicaset.
+- `spec.parameters.stash` specifies the Stash addons that will be used to back up and restore this MongoDB.
+- `spec.type` specifies the type of the app this AppBinding points to. A KubeDB-generated AppBinding follows the `<app group>/<app resource type>` format.
+
+**Insert Sample Data:**
+
+Now, we are going to exec into the database pod and create some sample data.
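+
+Before inserting data, you can optionally double-check the DSN that Stash will use for the idiomatic replicaset backup by reading it straight from the AppBinding. A minimal sketch, assuming the `host-0` key shown in the AppBinding above:
+
+```console
+$ kubectl get appbinding -n demo sample-mgo-rs -o jsonpath='{.spec.parameters.replicaSets.host-0}'
+rs0/sample-mgo-rs-0.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-1.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-2.sample-mgo-rs-pods.demo.svc:27017
+```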
+At first, find out the database pod using the following command,
+
+```console
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mgo-rs"
+NAME              READY   STATUS    RESTARTS   AGE
+sample-mgo-rs-0   1/1     Running   0          16m
+sample-mgo-rs-1   1/1     Running   0          15m
+sample-mgo-rs-2   1/1     Running   0          15m
+```
+
+Now, let's exec into the pod and insert some sample data,
+
+```console
+$ export USER=$(kubectl get secrets -n demo sample-mgo-rs-auth -o jsonpath='{.data.\username}' | base64 -d)
+
+$ export PASSWORD=$(kubectl get secrets -n demo sample-mgo-rs-auth -o jsonpath='{.data.\password}' | base64 -d)
+
+$ kubectl exec -it -n demo sample-mgo-rs-0 -- mongo admin -u $USER -p $PASSWORD
+
+rs0:PRIMARY> rs.isMaster().primary
+sample-mgo-rs-0.sample-mgo-rs-gvr.demo.svc.cluster.local:27017
+
+rs0:PRIMARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+
+rs0:PRIMARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("0e9345cc-27ea-4175-acc4-295c987ac06b"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ]
+}
+
+rs0:PRIMARY> use newdb
+switched to db newdb
+
+rs0:PRIMARY> db.movie.insert({"name":"batman"});
+WriteResult({ "nInserted" : 1 })
+
+rs0:PRIMARY> db.movie.find().pretty()
+{ "_id" : ObjectId("5d31b9d44db670db130d7a5c"), "name" : "batman" }
+
+rs0:PRIMARY> exit
+bye
+```
+
+Now, we are ready to back up this sample database.
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. At first, we need to create a secret with GCS credentials, then we need to create a `Repository` crd. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```console
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` using this secret. Below is the YAML of the Repository crd we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-replicaset
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-rs
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/repository-replicaset.yaml
+repository.stash.appscode.com/gcs-repo-replicaset created
+```
+
+Now, we are ready to back up our database to our desired backend.
+
+### Backup MongoDB ReplicaSet
+
+We have to create a `BackupConfiguration` targeting the respective AppBinding crd of our desired database. Then Stash will create a CronJob to periodically back up the database.
+
+**Create BackupConfiguration:**
+
+Below is the YAML of the `BackupConfiguration` crd to back up the `sample-mgo-rs` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-rs-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo-replicaset
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-rs
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `spec.schedule` specifies that we want to back up the database at a 5-minute interval.
+- `spec.target.ref` refers to the `AppBinding` crd that was created for the `sample-mgo-rs` database.
+
+Let's create the `BackupConfiguration` crd we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/backupconfiguration-replicaset.yaml
+backupconfiguration.stash.appscode.com/sample-mgo-rs-backup created
+```
+
+**Verify Backup Setup Successful:**
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```console
+$ kubectl get backupconfiguration -n demo
+NAME                   TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-mgo-rs-backup   mongodb-backup-4.4.6   */5 * * * *            Ready   11s
+```
+
+**Verify CronJob:**
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` crd.
+
+Verify that the CronJob has been created using the following command,
+
+```console
+$ kubectl get cronjob -n demo
+NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+sample-mgo-rs-backup   */5 * * * *   False     0        <none>          62s
+```
+
+**Wait for BackupSession:**
+
+The `sample-mgo-rs-backup` CronJob will trigger a backup on each schedule by creating a `BackupSession` crd.
+
+Wait for the next schedule. Run the following command to watch the `BackupSession` crd,
+
+```console
+$ kubectl get backupsession -n demo -w
+NAME                              INVOKER-TYPE          INVOKER-NAME           PHASE       AGE
+sample-mgo-rs-backup-1563540308   BackupConfiguration   sample-mgo-rs-backup   Running     5m19s
+sample-mgo-rs-backup-1563540308   BackupConfiguration   sample-mgo-rs-backup   Succeeded   5m45s
+```
+
+We can see above that the backup session has succeeded. Now, we are going to verify that the backed up data has been stored in the backend.
+
+**Verify Backup:**
+
+Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo-replicaset` has been updated by the following command,
+
+```console
+$ kubectl get repository -n demo gcs-repo-replicaset
+NAME                  INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo-replicaset   true        3.844 KiB   2                14s                      10m
+```
+
+Now, if we navigate to the GCS bucket, we are going to see that the backed up data has been stored in the `demo/mongodb/sample-mgo-rs` directory as specified by the `spec.backend.gcs.prefix` field of the Repository crd.
+
+> Note: Stash keeps all the backed up data encrypted. So, the data in the backend will not make any sense until it is decrypted.
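+
+If you want to inspect the individual snapshots stored in this repository, you can list them through the Snapshot API and filter by the repository label. This is a hedged sketch; the snapshot name, ID, and timestamp below are illustrative and will differ in your cluster:
+
+```console
+$ kubectl get snapshots -n demo -l repository=gcs-repo-replicaset
+NAME                           ID         REPOSITORY            HOSTNAME   CREATED AT
+gcs-repo-replicaset-4bc21d6f   4bc21d6f   gcs-repo-replicaset   host-0     2022-10-26T05:10:02Z
+```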
+
+## Restore MongoDB ReplicaSet
+
+You can restore your data into the same database you have backed up from or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted data from the running database.
+
+#### Temporarily Pause Backup
+
+At first, let's stop taking any further backup of the old database so that no backup is taken during the restore process. We are going to pause the `BackupConfiguration` crd that we had created to back up the `sample-mgo-rs` database. Then, Stash will stop taking any further backup for this database.
+
+Let's pause the `sample-mgo-rs-backup` BackupConfiguration,
+
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mgo-rs-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-mgo-rs-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+
+```bash
+$ kubectl stash pause backup -n demo --backupconfig=sample-mgo-rs-backup
+BackupConfiguration demo/sample-mgo-rs-backup has been paused successfully.
+```
+
+Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused,
+
+```console
+$ kubectl get backupconfiguration -n demo sample-mgo-rs-backup
+NAME                   TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-mgo-rs-backup   mongodb-backup-4.4.6   */5 * * * *   true     Ready   26m
+```
+
+Notice the `PAUSED` column. Value `true` for this field means that the BackupConfiguration has been paused.
+
+#### Simulate Disaster
+
+Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier.
+
+```console
+$ kubectl exec -it -n demo sample-mgo-rs-0 -- mongo admin -u $USER -p $PASSWORD
+
+rs0:PRIMARY> rs.isMaster().primary
+sample-mgo-rs-0.sample-mgo-rs-gvr.demo.svc.cluster.local:27017
+
+rs0:PRIMARY> use newdb
+switched to db newdb
+
+rs0:PRIMARY> db.dropDatabase()
+{ "dropped" : "newdb", "ok" : 1 }
+
+rs0:PRIMARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+
+rs0:PRIMARY> exit
+bye
+```
+
+#### Create RestoreSession:
+
+Now, we need to create a `RestoreSession` crd pointing to the AppBinding of the `sample-mgo-rs` database.
+
+Below is the YAML for the `RestoreSession` crd that we are going to create to restore the backed up data,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mgo-rs-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo-replicaset
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-rs
+  rules:
+  - snapshots: [latest]
+```
+
+Here,
+
+- `spec.repository.name` specifies the `Repository` crd that holds the backend information where our backed up data has been stored.
+- `spec.target.ref` refers to the AppBinding crd of the `sample-mgo-rs` database.
+- `spec.rules` specifies that we are restoring from the latest backup snapshot of the database.
+
+Let's create the `RestoreSession` crd we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/restoresession-replicaset.yaml
+restoresession.stash.appscode.com/sample-mgo-rs-restore created
+```
+
+Once you have created the `RestoreSession` crd, Stash will create a restore job. We can watch the `RestoreSession` phase to check whether the restore process has succeeded or not.
+
+Run the following command to watch the `RestoreSession` phase,
+
+```console
+$ kubectl get restoresession -n demo sample-mgo-rs-restore -w
+NAME                    REPOSITORY-NAME       PHASE       AGE
+sample-mgo-rs-restore   gcs-repo-replicaset   Running     5s
+sample-mgo-rs-restore   gcs-repo-replicaset   Succeeded   43s
+```
+
+So, we can see from the output of the above command that the restore process succeeded.
+
+#### Verify Restored Data:
+
+In this section, we are going to verify that the desired data has been restored successfully. We are going to exec into the database pod and check whether the data we had created in the original database has been restored or not.
+
+Let's exec into the database pod and list the available databases,
+
+```console
+$ kubectl exec -it -n demo sample-mgo-rs-0 -- mongo admin -u $USER -p $PASSWORD
+
+rs0:PRIMARY> rs.isMaster().primary
+sample-mgo-rs-0.sample-mgo-rs-gvr.demo.svc.cluster.local:27017
+
+rs0:PRIMARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+newdb   0.000GB
+
+rs0:PRIMARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("00f521b5-2b43-4712-ba80-efaa6b382813"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ]
+}
+
+rs0:PRIMARY> use newdb
+switched to db newdb
+
+rs0:PRIMARY> db.movie.find().pretty()
+{ "_id" : ObjectId("5d31b9d44db670db130d7a5c"), "name" : "batman" }
+
+rs0:PRIMARY> exit
+bye
+```
+
+So, from the above output, we can see that the database `newdb` we had created earlier is restored.
+
+## Backup MongoDB ReplicaSet Cluster and Restore into a Standalone database
+
+It is possible to take a backup of a MongoDB ReplicaSet cluster and restore it into a standalone database, but the user needs to create the AppBinding for this process.
+
+### Backup a replicaset cluster
+
+Keep all the fields of the AppBinding explained earlier in this guide, except `spec.parameters`. Do not set `spec.parameters.configServer` and `spec.parameters.replicaSets`. By doing this, the job will use `spec.clientConfig.service.name` as the host, which is the replicaset DSN. So, the backup will treat this cluster as a standalone and will skip the [`idiomatic way` of taking backups of a replicaset cluster](https://docs.mongodb.com/manual/tutorial/restore-replica-set-from-backup/). Then follow the rest of the procedure as described above.
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: sample-mgo-rs-custom
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-mgo-rs
+      port: 27017
+      scheme: mongodb
+  secret:
+    name: sample-mgo-rs-auth
+  type: kubedb.com/mongodb
+
+---
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-custom
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-rs/standalone
+    storageSecretName: gcs-secret
+
+---
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-rs-backup2
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    name: mongodb-backup-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-rs-custom
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+This time, we have to provide the Stash addon information in the `spec.task` section of the `BackupConfiguration` object, as it is not present in the `AppBinding` object that we are creating manually.
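+
+The task name such as `mongodb-backup-4.4.6` refers to a cluster-scoped Stash `Task` object. If you are unsure which task versions are installed in your cluster, you can list them first; the `grep` filter and the names shown below are illustrative:
+
+```console
+$ kubectl get tasks.stash.appscode.com | grep mongodb
+mongodb-backup-4.4.6     ...
+mongodb-restore-4.4.6    ...
+```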
+
+```console
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/standalone-backup.yaml
+appbinding.appcatalog.appscode.com/sample-mgo-rs-custom created
+repository.stash.appscode.com/gcs-repo-custom created
+backupconfiguration.stash.appscode.com/sample-mgo-rs-backup2 created
+
+
+$ kubectl get backupsession -n demo
+NAME                               BACKUPCONFIGURATION     PHASE       AGE
+sample-mgo-rs-backup2-1563541509   sample-mgo-rs-backup2   Succeeded   35s
+
+
+$ kubectl get repository -n demo gcs-repo-custom
+NAME              INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo-custom   true        1.640 KiB   1                1m                       5m
+```
+
+### Restore to a standalone database
+
+No additional configuration is needed to restore the replicaset cluster to a standalone database. Follow the normal procedure of restoring a MongoDB database.
+
+Standalone MongoDB,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: restored-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
+```
+
+You have to provide the respective Stash restore `Task` info in the `spec.task` section of the `RestoreSession` object,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  task:
+    name: mongodb-restore-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mongodb
+  rules:
+  - snapshots: [latest]
+```
+
+```console
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/restored-standalone.yaml
+mongodb.kubedb.com/restored-mongodb created
+
+$ kubectl get mg -n demo restored-mongodb
+NAME               VERSION   STATUS         AGE
+restored-mongodb   4.4.26    Provisioning   56s
+
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/restoresession-standalone.yaml
+restoresession.stash.appscode.com/sample-mongodb-restore created
+
+$ kubectl get mg -n demo restored-mongodb
+NAME               VERSION   STATUS   AGE
+restored-mongodb   4.4.26    Ready    2m
+```
+
+Now, exec into the database pod and list the available data,
+
+```console
+$ export USER=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d)
+
+$ export PASSWORD=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d)
+
+$ kubectl exec -it -n demo restored-mongodb-0 -- mongo admin -u $USER -p $PASSWORD
+
+> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+newdb   0.000GB
+
+> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("11e00a38-7b08-4864-b452-ae356350e50f"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ]
+}
+
+> use newdb
+switched to db newdb
+
+> db.movie.find().pretty()
+{ "_id" : ObjectId("5d31b9d44db670db130d7a5c"), "name" : "batman" }
+
+> exit
+bye
+```
+
+So, from the above output, we can see that the database `newdb` we had created in the original database `sample-mgo-rs` is restored in the restored database `restored-mongodb`.
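+
+Earlier, we paused the `sample-mgo-rs-backup` BackupConfiguration before restoring. If you want to resume the scheduled backups of the original database instead of deleting everything, you can patch it back, mirroring the pause command used earlier:
+
+```console
+$ kubectl patch backupconfiguration -n demo sample-mgo-rs-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-mgo-rs-backup patched
+```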
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```console
+kubectl delete -n demo restoresession sample-mgo-rs-restore sample-mongodb-restore
+kubectl delete -n demo backupconfiguration sample-mgo-rs-backup sample-mgo-rs-backup2
+kubectl delete -n demo mg sample-mgo-rs restored-mongodb
+kubectl delete -n demo repository gcs-repo-replicaset gcs-repo-custom
+kubectl delete -n demo appbinding sample-mgo-rs-custom
+```
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/backupconfiguration-sharding.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/backupconfiguration-sharding.yaml
new file mode 100644
index 0000000000..2a14d1072c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/backupconfiguration-sharding.yaml
@@ -0,0 +1,18 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-sh-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo-sharding
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-sh
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/mongodb-sharding.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/mongodb-sharding.yaml
new file mode 100644
index 0000000000..c4538f6e4f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/mongodb-sharding.yaml
@@ -0,0 +1,26 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mgo-sh
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/repository-sharding.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/repository-sharding.yaml
new file mode 100644
index 0000000000..fb69d936e9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/repository-sharding.yaml
@@ -0,0 +1,11 @@
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-sharding
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-sh
+    storageSecretName: gcs-secret
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-mongodb-sharding.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-mongodb-sharding.yaml
new file mode 100644
index 0000000000..a3d78a9ecc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-mongodb-sharding.yaml
@@ -0,0 +1,28 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: restored-mgo-sh
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-standalone.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-standalone.yaml
new file mode 100644
index 0000000000..43d845dba4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restored-standalone.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: restored-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-sharding.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-sharding.yaml
new file mode 100644
index 0000000000..3079d4613e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-sharding.yaml
@@ -0,0 +1,15 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mgo-sh-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo-sharding
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mgo-sh
+  rules:
+  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-standalone.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-standalone.yaml
new file mode 100644
index 0000000000..d22fea850e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/restoresession-standalone.yaml
@@ -0,0 +1,17 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mongodb-restore
+  namespace: demo
+spec:
+  task:
+    name: mongodb-restore-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mongodb
+  rules:
+  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/standalone-backup.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/standalone-backup.yaml
new file mode 100644
index 0000000000..4a9aecf3e4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/examples/standalone-backup.yaml
@@ -0,0 +1,47 @@
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: sample-mgo-sh-custom
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-mgo-sh
+      port: 27017
+      scheme: mongodb
+  secret:
+    name: sample-mgo-sh-auth
+  type: kubedb.com/mongodb
+---
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-custom
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-sh/standalone
+    storageSecretName: gcs-secret
+---
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-sh-backup2
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    name: mongodb-backup-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-sh-custom
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/index.md
new file mode 100644
index 0000000000..ddbdeba844
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/index.md
@@ -0,0 +1,741 @@
+---
+title: Backup & Restore Sharded MongoDB Cluster | Stash
+description: Backup and restore sharded MongoDB cluster using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mongodb-backup-logical-sharded-cluster
+    name: MongoDB Sharded Cluster
+    parent: guides-mongodb-backup-logical
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup and Restore MongoDB Sharded Clusters using Stash
+
+Stash 0.9.0+ supports taking [backups](https://docs.mongodb.com/manual/tutorial/backup-sharded-cluster-with-database-dumps/) and [restores](https://docs.mongodb.com/manual/tutorial/restore-sharded-cluster/) of MongoDB Sharded clusters in an ["idiomatic" way](https://docs.mongodb.com/manual/administration/backup-sharded-clusters/). This guide will show you how you can back up and restore your MongoDB Sharded clusters with Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores MongoDB databases, please check the guide [here](/docs/v2024.1.31/guides/mongodb/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.
+
+```console
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Backup Sharded MongoDB Cluster
+
+This section will demonstrate how to back up a sharded MongoDB cluster. We are going to use [KubeDB](https://kubedb.com) to deploy a sample database. Then, we are going to back up this database into a GCS bucket. Finally, we are going to restore the backed up data into another MongoDB cluster.
+
+### Deploy Sample MongoDB Sharding
+
+Let's deploy a sample sharded MongoDB database and insert some data into it.
+
+**Create MongoDB CRD:**
+
+Below is the YAML of a sample MongoDB crd that we are going to create for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mgo-sh
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  terminationPolicy: WipeOut
+```
+
+Create the above `MongoDB` crd,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/mongodb-sharding.yaml
+mongodb.kubedb.com/sample-mgo-sh created
+```
+
+KubeDB will deploy a MongoDB database according to the above specification. It will also create the necessary secrets and services to access the database.
+
+Let's check if the database is ready to use,
+
+```console
+$ kubectl get mg -n demo sample-mgo-sh
+NAME            VERSION   STATUS   AGE
+sample-mgo-sh   4.4.26    Ready    35m
+```
+
+The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands,
+
+```console
+$ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mgo-sh
+NAME                 TYPE     DATA   AGE
+sample-mgo-sh-auth   Opaque   2      36m
+sample-mgo-sh-cert   Opaque   4      36m
+
+$ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-mgo-sh
+NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+sample-mgo-sh                 ClusterIP   10.107.11.117   <none>        27017/TCP   36m
+sample-mgo-sh-configsvr-gvr   ClusterIP   None            <none>        27017/TCP   36m
+sample-mgo-sh-shard0-gvr      ClusterIP   None            <none>        27017/TCP   36m
+sample-mgo-sh-shard1-gvr      ClusterIP   None            <none>        27017/TCP   36m
+sample-mgo-sh-shard2-gvr      ClusterIP   None            <none>        27017/TCP   36m
+```
+
+KubeDB creates an [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding) crd that holds the necessary information to connect with the database.
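+
+Under the hood, KubeDB runs the config servers and each shard as separate StatefulSets, while the `mongos` routers run as a Deployment. If you want a quick sanity check of the topology, you can list the StatefulSets by the instance label; the READY/AGE values below are illustrative:
+
+```console
+$ kubectl get statefulsets -n demo -l app.kubernetes.io/instance=sample-mgo-sh
+NAME                      READY   AGE
+sample-mgo-sh-configsvr   3/3     36m
+sample-mgo-sh-shard0      3/3     36m
+sample-mgo-sh-shard1      3/3     36m
+sample-mgo-sh-shard2      3/3     36m
+```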
+
+**Verify AppBinding:**
+
+Verify that the `AppBinding` has been created successfully using the following command,
+
+```console
+$ kubectl get appbindings -n demo
+NAME            AGE
+sample-mgo-sh   30m
+```
+
+Let's check the YAML of the above `AppBinding`,
+
+```console
+$ kubectl get appbindings -n demo sample-mgo-sh -o yaml
+```
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"sample-mgo-sh","namespace":"demo"},"spec":{"shardTopology":{"configServer":{"replicas":3,"storage":{"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}},"mongos":{"replicas":2},"shard":{"replicas":3,"shards":3,"storage":{"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}}},"terminationPolicy":"WipeOut","version":"4.4.26"}}
+  creationTimestamp: "2022-10-26T05:11:20Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mgo-sh
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mongodbs.kubedb.com
+  name: sample-mgo-sh
+  namespace: demo
+  ownerReferences:
+  - apiVersion: kubedb.com/v1alpha2
+    blockOwnerDeletion: true
+    controller: true
+    kind: MongoDB
+    name: sample-mgo-sh
+    uid: 22f704c3-1a4d-468c-9404-7efa739ad0da
+  resourceVersion: "580483"
+  uid: 69092658-2f4a-45f2-a899-14884bf74a8b
+spec:
+  appRef:
+    apiGroup: kubedb.com
+    kind: MongoDB
+    name: sample-mgo-sh
+    namespace: demo
+  clientConfig:
+    service:
+      name: sample-mgo-sh
+      port: 27017
+      scheme: mongodb
+  parameters:
+    apiVersion: config.kubedb.com/v1alpha1
+    configServer: cnfRepSet/sample-mgo-sh-configsvr-0.sample-mgo-sh-configsvr-pods.demo.svc:27017,sample-mgo-sh-configsvr-1.sample-mgo-sh-configsvr-pods.demo.svc:27017,sample-mgo-sh-configsvr-2.sample-mgo-sh-configsvr-pods.demo.svc:27017
+    kind: MongoConfiguration
+    replicaSets:
+      host-0: shard0/sample-mgo-sh-shard0-0.sample-mgo-sh-shard0-pods.demo.svc:27017,sample-mgo-sh-shard0-1.sample-mgo-sh-shard0-pods.demo.svc:27017,sample-mgo-sh-shard0-2.sample-mgo-sh-shard0-pods.demo.svc:27017
+      host-1: shard1/sample-mgo-sh-shard1-0.sample-mgo-sh-shard1-pods.demo.svc:27017,sample-mgo-sh-shard1-1.sample-mgo-sh-shard1-pods.demo.svc:27017,sample-mgo-sh-shard1-2.sample-mgo-sh-shard1-pods.demo.svc:27017
+      host-2: shard2/sample-mgo-sh-shard2-0.sample-mgo-sh-shard2-pods.demo.svc:27017,sample-mgo-sh-shard2-1.sample-mgo-sh-shard2-pods.demo.svc:27017,sample-mgo-sh-shard2-2.sample-mgo-sh-shard2-pods.demo.svc:27017
+    stash:
+      addon:
+        backupTask:
+          name: mongodb-backup-4.4.6
+        restoreTask:
+          name: mongodb-restore-4.4.6
+  secret:
+    name: sample-mgo-sh-auth
+  type: kubedb.com/mongodb
+  version: 4.4.26
+```
+
+Stash uses the `AppBinding` crd to connect with the target database. It requires the following fields to be set in the AppBinding's `spec` section.
+
+- `spec.appRef` refers to the underlying application.
+- `spec.clientConfig` defines how to communicate with the application.
+- `spec.clientConfig.service.name` specifies the name of the service that connects to the database.
+- `spec.secret` specifies the name of the secret that holds the necessary credentials to access the database.
+- `spec.parameters.configServer` specifies the DSN of the config server of the sharded MongoDB cluster. The DSN includes the port number too.
+- `spec.parameters.replicaSets` contains the DSN of each shard's replicaset as key-value pairs, where the keys are `host-0`, `host-1`, etc., and the values are the DSNs of the corresponding replicasets. If there is no sharding but only one replicaset, the `replicaSets` field contains a single key-value pair where the key is `host-0` and the value is the DSN of that replicaset.
+- `spec.parameters.stash` contains the Stash addon information that will be used to back up and restore this MongoDB.
+- `spec.type` specifies the type of the app this AppBinding points to. A KubeDB-generated AppBinding follows the `<app group>/<app resource type>` format.
+
+**Insert Sample Data:**
+
+Now, we are going to exec into the database pod and create some sample data. At first, find out the database pod using the following command,
+
+```console
+$ kubectl get pods -n demo --selector="mongodb.kubedb.com/node.mongos=sample-mgo-sh-mongos"
+NAME                                   READY   STATUS    RESTARTS   AGE
+sample-mgo-sh-mongos-9459cfc44-4jthd   1/1     Running   0          60m
+sample-mgo-sh-mongos-9459cfc44-6d2st   1/1     Running   0          60m
+```
+
+Now, let's exec into the pod and insert some sample data,
+
+```console
+$ export USER=$(kubectl get secrets -n demo sample-mgo-sh-auth -o jsonpath='{.data.\username}' | base64 -d)
+
+$ export PASSWORD=$(kubectl get secrets -n demo sample-mgo-sh-auth -o jsonpath='{.data.\password}' | base64 -d)
+
+$ kubectl exec -it -n demo sample-mgo-sh-mongos-9459cfc44-4jthd -- mongo admin -u $USER -p $PASSWORD
+
+mongos> show dbs
+admin   0.000GB
+config  0.001GB
+
+
+mongos> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("b9a1551b-83cf-4ebb-852b-dd23c890f301"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ]
+}
+
+mongos> use newdb
+switched to db newdb
+
+mongos> db.movie.insert({"name":"batman"});
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.movie.find().pretty()
+{ "_id" : ObjectId("5d3064bf144a1b8fda04cd4f"), "name" : "batman" }
+
+mongos> exit
+bye
+```
+
+Now, we are ready to back up this sample database.
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. At first, we need to create a secret with GCS credentials, then we need to create a `Repository` crd. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```console
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` using this secret. Below is the YAML of the Repository crd we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-sharding
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-sh
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/repository-sharding.yaml
+repository.stash.appscode.com/gcs-repo-sharding created
+```
+
+Now, we are ready to back up our database to our desired backend.
+
+### Backup MongoDB Sharding
+
+We have to create a `BackupConfiguration` targeting the respective AppBinding crd of our desired database. Then Stash will create a CronJob to periodically back up the database.
+
+**Create BackupConfiguration:**
+
+Below is the YAML of the `BackupConfiguration` crd to back up the `sample-mgo-sh` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-sh-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo-sharding
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-sh
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `spec.schedule` specifies that we want to back up the database at a 5-minute interval.
+- `spec.target.ref` refers to the `AppBinding` crd that was created for the `sample-mgo-sh` database.
+
+Let's create the `BackupConfiguration` crd we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/backupconfiguration-sharding.yaml
+backupconfiguration.stash.appscode.com/sample-mgo-sh-backup created
+```
+
+**Verify Backup Setup Successful:**
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```console
+$ kubectl get backupconfiguration -n demo
+NAME                   TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-mgo-sh-backup   mongodb-backup-4.4.6   */5 * * * *            Ready   11s
+```
+
+**Verify CronJob:**
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` crd.
+
+Verify that the CronJob has been created using the following command,
+
+```console
+$ kubectl get cronjob -n demo
+NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+sample-mgo-sh-backup   */5 * * * *   False     0        <none>          13s
+```
+
+**Wait for BackupSession:**
+
+The `sample-mgo-sh-backup` CronJob will trigger a backup on each schedule by creating a `BackupSession` crd.
+
+Wait for the next schedule. Run the following command to watch the `BackupSession` crd,
+
+```console
+$ kubectl get backupsession -n demo -w
+NAME                              INVOKER-TYPE          INVOKER-NAME           PHASE       AGE
+sample-mgo-sh-backup-1563512707   BackupConfiguration   sample-mgo-sh-backup   Running     5m19s
+sample-mgo-sh-backup-1563512707   BackupConfiguration   sample-mgo-sh-backup   Succeeded   5m45s
+```
+
+We can see above that the backup session has succeeded. Now, we are going to verify that the backed up data has been stored in the backend.
+
+**Verify Backup:**
+
+Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo-sharding` has been updated by the following command,
+
+```console
+$ kubectl get repository -n demo gcs-repo-sharding
+NAME                INTEGRITY   SIZE         SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo-sharding   true        66.453 KiB   12               1m                       20m
+```
+
+Now, if we navigate to the GCS bucket, we are going to see that the backed up data has been stored in the `demo/mongodb/sample-mgo-sh` directory as specified by the `spec.backend.gcs.prefix` field of the Repository crd.
+
+> Note: Stash keeps all the backed up data encrypted. So, the data in the backend will not make any sense until it is decrypted.
+
+## Restore MongoDB Sharding
+
+You can restore your data into the same database you have backed up from or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted data from the running database.
+
+#### Stop Taking Backup of the Old Database:
+
+At first, let's stop taking any further backup of the old database so that no backup is taken during the restore process. We are going to pause the `BackupConfiguration` crd that we had created to back up the `sample-mgo-sh` database. Then, Stash will stop taking any further backup for this database.
+
+Let's pause the `sample-mgo-sh-backup` BackupConfiguration,
+
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mgo-sh-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-mgo-sh-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+
+```bash
+$ kubectl stash pause backup -n demo --backupconfig=sample-mgo-sh-backup
+BackupConfiguration demo/sample-mgo-sh-backup has been paused successfully.
+```
+
+Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused,
+
+```console
+$ kubectl get backupconfiguration -n demo sample-mgo-sh-backup
+NAME                   TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-mgo-sh-backup   mongodb-backup-4.4.6   */5 * * * *   true     Ready   26m
+```
+
+Notice the `PAUSED` column. Value `true` for this field means that the BackupConfiguration has been paused.
+
+#### Simulate Disaster:
+
+Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier.
+
+```console
+$ kubectl exec -it -n demo sample-mgo-sh-mongos-9459cfc44-4jthd -- mongo admin -u $USER -p $PASSWORD
+
+mongos> use newdb
+switched to db newdb
+
+mongos> db.dropDatabase()
+{ "dropped" : "newdb", "ok" : 1 }
+
+mongos> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+
+mongos> exit
+bye
+```
+
+#### Create RestoreSession:
+
+Now, we need to create a `RestoreSession` crd pointing to the AppBinding of the `sample-mgo-sh` database.
+
+Below is the YAML for the `RestoreSession` crd that we are going to create to restore the backed up data.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mgo-sh-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo-sharding
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-sh
+  rules:
+  - snapshots: [latest]
+```
+
+Here,
+
+- `spec.repository.name` specifies the `Repository` crd that holds the backend information where our backed up data has been stored.
+- `spec.target.ref` refers to the AppBinding crd of the `sample-mgo-sh` database.
+- `spec.rules` specifies that we are restoring from the latest backup snapshot of the database.
+
+Let's create the `RestoreSession` crd we have shown above,
+
+```console
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/restoresession-sharding.yaml
+restoresession.stash.appscode.com/sample-mgo-sh-restore created
+```
+
+Once you have created the `RestoreSession` crd, Stash will create a restore job. We can watch the `RestoreSession` phase to check whether the restore process has succeeded or not.
+
+Run the following command to watch the `RestoreSession` phase,
+
+```console
+$ kubectl get restoresession -n demo sample-mgo-sh-restore -w
+NAME                    REPOSITORY-NAME     PHASE       AGE
+sample-mgo-sh-restore   gcs-repo-sharding   Running     5s
+sample-mgo-sh-restore   gcs-repo-sharding   Succeeded   43s
+```
+
+So, we can see from the output of the above command that the restore process succeeded.
+
+#### Verify Restored Data:
+
+In this section, we are going to verify that the desired data has been restored successfully. We are going to connect to `mongos` and check whether the data we had created earlier has been restored or not.
+
+Let's exec into the database pod and list the available databases,
+
+```console
+$ kubectl exec -it -n demo sample-mgo-sh-mongos-9459cfc44-4jthd -- mongo admin -u $USER -p $PASSWORD
+
+mongos> show dbs
+admin   0.000GB
+config  0.001GB
+newdb   0.000GB
+
+
+mongos> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("a57cb466-ec66-453b-b795-654169a0f035"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ]
+}
+
+mongos> use newdb
+switched to db newdb
+
+mongos> db.movie.find().pretty()
+{ "_id" : ObjectId("5d3064bf144a1b8fda04cd4f"), "name" : "batman" }
+
+mongos> exit
+bye
+```
+
+So, from the above output, we can see that the database `newdb` we had created earlier is restored.
+
+## Backup MongoDB Sharded Cluster and Restore into a Standalone database
+
+It is possible to take a backup of a MongoDB Sharded Cluster and restore it into a standalone database, but the user needs to create the AppBinding for this process.
+
+### Backup a sharded cluster
+
+Keep all the fields of the AppBinding explained earlier in this guide, except `spec.parameters`. Do not set `spec.parameters.configServer` and `spec.parameters.replicaSets`. By doing this, the job will use `spec.clientConfig.service.name` as the host, which is the `mongos` router DSN. So, the backup will treat this cluster as a standalone and will skip the [`idiomatic way` of taking backups of a sharded cluster](https://docs.mongodb.com/manual/tutorial/backup-sharded-cluster-with-database-dumps/). Then follow the rest of the procedure as described above.
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: sample-mgo-sh-custom
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-mgo-sh
+      port: 27017
+      scheme: mongodb
+  secret:
+    name: sample-mgo-sh-auth
+  type: kubedb.com/mongodb
+
+---
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo-custom
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: demo/mongodb/sample-mgo-sh/standalone
+    storageSecretName: gcs-secret
+
+---
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mgo-sh-backup2
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    name: mongodb-backup-4.4.6
+  repository:
+    name: gcs-repo-custom
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mgo-sh-custom
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+This time, we have to provide the Stash addon information in the `spec.task` section of the `BackupConfiguration` object, as the `AppBinding` we are creating manually does not contain that information.
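+
+If you are unsure which addon task names match your MongoDB version, the `MongoDBVersion` catalog object that KubeDB ships carries them; that is also where the values shown in the AppBinding earlier come from. A hedged sketch, assuming the catalog exposes the addon info under `spec.stash.addon`:
+
+```console
+$ kubectl get mongodbversion 4.4.26 -o jsonpath='{.spec.stash.addon.backupTask.name}'
+mongodb-backup-4.4.6
+```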

```console
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/standalone-backup.yaml
appbinding.appcatalog.appscode.com/sample-mgo-sh-custom created
repository.stash.appscode.com/gcs-repo-custom created
backupconfiguration.stash.appscode.com/sample-mgo-sh-backup2 created


$ kubectl get backupsession -n demo
NAME                                BACKUPCONFIGURATION     PHASE       AGE
sample-mgo-sh-backup2-1563528902    sample-mgo-sh-backup2   Succeeded   35s


$ kubectl get repository -n demo gcs-repo-custom
NAME              INTEGRITY   SIZE         SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
gcs-repo-custom   true        22.160 KiB   4                1m                       2m
```

### Restore to a standalone database

No additional configuration is needed to restore the sharded cluster to a standalone database. Follow the normal procedure of restoring a MongoDB database.

Standalone MongoDB,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MongoDB
metadata:
  name: restored-mongodb
  namespace: demo
spec:
  version: "4.4.26"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  init:
    waitForInitialRestore: true
  terminationPolicy: WipeOut
```

This time, we have to provide the `spec.task` section in the `RestoreSession` object,

```yaml
apiVersion: stash.appscode.com/v1beta1
kind: RestoreSession
metadata:
  name: sample-mongodb-restore
  namespace: demo
spec:
  task:
    name: mongodb-restore-4.4.6
  repository:
    name: gcs-repo-custom
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: restored-mongodb
  rules:
  - snapshots: [latest]
```

```console
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/restored-standalone.yaml
mongodb.kubedb.com/restored-mongodb created

$ kubectl get mg -n demo restored-mongodb
NAME               VERSION   STATUS         AGE
restored-mongodb   4.4.26    Provisioning   56s

$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/restoresession-standalone.yaml
restoresession.stash.appscode.com/sample-mongodb-restore created

$ kubectl get mg -n demo restored-mongodb
NAME               VERSION   STATUS   AGE
restored-mongodb   4.4.26    Ready    56s
```

Now, exec into the database pod and list the available databases,

```console
$ export USER=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d)

$ export PASSWORD=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d)

$ kubectl exec -it -n demo restored-mongodb-0 -- mongo admin -u $USER -p $PASSWORD

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
newdb   0.000GB

> show users
{
	"_id" : "admin.root",
	"userId" : UUID("98fa7511-2ae0-4466-bb2a-f9a7e17631ad"),
	"user" : "root",
	"db" : "admin",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}

> use newdb
switched to db newdb

> db.movie.find().pretty()
{ "_id" : ObjectId("5d3064bf144a1b8fda04cd4f"), "name" : "batman" }

> exit
bye
```

So, from the above output, we can see that the database `newdb`, which we had created in the original cluster `sample-mgo-sh`, has been restored into the new database `restored-mongodb`.
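
At this point, the old `BackupConfiguration` is still paused. If you want scheduled backups of the original cluster to continue after the restore, you can resume it by flipping `spec.paused` back, mirroring the pause command used earlier (a sketch):

```bash
$ kubectl patch backupconfiguration -n demo sample-mgo-sh-backup --type="merge" --patch='{"spec": {"paused": false}}'
backupconfiguration.stash.appscode.com/sample-mgo-sh-backup patched
```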
+ +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```console +kubectl delete -n demo restoresession sample-mgo-sh-restore sample-mongodb-restore +kubectl delete -n demo backupconfiguration sample-mgo-sh-backup sample-mgo-sh-backup2 +kubectl delete -n demo mg sample-mgo-sh restored-mongodb +kubectl delete -n demo repository gcs-repo-sharding gcs-repo-custom +``` diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/backupconfiguration.yaml new file mode 100644 index 0000000000..db8d24530b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/backupconfiguration.yaml @@ -0,0 +1,18 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mongodb-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mongodb + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/mongodb.yaml new file mode 100644 index 0000000000..3515b48a2e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/mongodb.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/repository.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/repository.yaml new file mode 100644 index 0000000000..8f7a87b857 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: demo/mongodb/sample-mongodb + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restored-mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restored-mongodb.yaml new file mode 100644 index 0000000000..43d845dba4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restored-mongodb.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: restored-mongodb + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + waitForInitialRestore: true + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restoresession.yaml b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restoresession.yaml new file mode 100644 index 0000000000..508c88c27a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/examples/restoresession.yaml @@ -0,0 +1,15 @@ +apiVersion: 
stash.appscode.com/v1beta1
kind: RestoreSession
metadata:
  name: sample-mongodb-restore
  namespace: demo
spec:
  repository:
    name: gcs-repo
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: sample-mongodb
  rules:
  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/index.md new file mode 100644 index 0000000000..4211efc656 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/index.md @@ -0,0 +1,527 @@
---
title: Backup & Restore Standalone MongoDB | Stash
description: Backup and restore standalone MongoDB database using Stash
menu:
  docs_v2024.1.31:
    identifier: guides-mongodb-backup-logical-standalone
    name: Standalone MongoDB
    parent: guides-mongodb-backup-logical
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

# Backup and Restore MongoDB database using Stash

Stash 0.9.0+ supports backup and restoration of MongoDB databases. This guide will show you how you can back up and restore your MongoDB database with Stash.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how Stash backs up and restores MongoDB databases, please check the overview guide [here](/docs/v2024.1.31/guides/mongodb/backup/overview/).

You have to be familiar with the following custom resources:

- [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding)
- [Function](https://stash.run/docs/latest/concepts/crds/function/)
- [Task](https://stash.run/docs/latest/concepts/crds/task/)
- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)

To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.

```console
$ kubectl create ns demo
namespace/demo created
```

## Backup MongoDB

This section will demonstrate how to back up a MongoDB database. Here, we are going to deploy a MongoDB database using KubeDB. Then, we are going to back up this database into a GCS bucket. Finally, we are going to restore the backed up data into the same database.

### Deploy Sample MongoDB Database

Let's deploy a sample MongoDB database and insert some data into it.
+ +**Create MongoDB CRD:** + +Below is the YAML of a sample MongoDB crd that we are going to create for this tutorial: + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: sample-mongodb + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Create the above `MongoDB` crd, + +```console +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/mongodb.yaml +mongodb.kubedb.com/sample-mongodb created +``` + +KubeDB will deploy a MongoDB database according to the above specification. It will also create the necessary secrets and services to access the database. + +Let's check if the database is ready to use, + +```console +$ kubectl get mg -n demo sample-mongodb +NAME VERSION STATUS AGE +sample-mongodb 4.4.26 Ready 2m9s +``` + +The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands, + +```console +$ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mongodb +NAME TYPE DATA AGE +sample-mongodb-auth Opaque 2 2m28s + +$ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-mongodb +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +sample-mongodb ClusterIP 10.107.58.222 27017/TCP 2m48s +sample-mongodb-gvr ClusterIP None 27017/TCP 2m48s +``` + +Here, we have to use service `sample-mongodb` and secret `sample-mongodb-auth` to connect with the database. KubeDB creates an [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding) crd that holds the necessary information to connect with the database. 
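
Before moving on, you can optionally sanity-check that the service endpoint is reachable with a throwaway client pod. This is just a sketch; the `mongo:4.4` client image is an assumption, and the unauthenticated `ping` command only verifies reachability, not credentials (output trimmed to the final line):

```console
$ kubectl run -n demo mongo-client --rm -it --restart=Never --image=mongo:4.4 -- \
    mongo "mongodb://sample-mongodb.demo.svc:27017/admin" --quiet --eval "db.runCommand({ ping: 1 })"
{ "ok" : 1 }
```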

**Verify AppBinding:**

Verify that the `AppBinding` has been created successfully using the following command,

```console
$ kubectl get appbindings -n demo
NAME             AGE
sample-mongodb   20m
```

Let's check the YAML of the above `AppBinding`,

```console
$ kubectl get appbindings -n demo sample-mongodb -o yaml
```

```yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"sample-mongodb","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"4.4.26"}}
  creationTimestamp: "2022-10-26T05:13:07Z"
  generation: 1
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: sample-mongodb
    app.kubernetes.io/managed-by: kubedb.com
    app.kubernetes.io/name: mongodbs.kubedb.com
  name: sample-mongodb
  namespace: demo
  ownerReferences:
  - apiVersion: kubedb.com/v1alpha2
    blockOwnerDeletion: true
    controller: true
    kind: MongoDB
    name: sample-mongodb
    uid: 51676df9-682a-40ab-8f99-c6050b35f2f2
  resourceVersion: "580968"
  uid: ca88e369-a15a-4149-9386-24e876c5aa4b
spec:
  appRef:
    apiGroup: kubedb.com
    kind: MongoDB
    name: sample-mongodb
    namespace: demo
  clientConfig:
    service:
      name: sample-mongodb
      port: 27017
      scheme: mongodb
  parameters:
    apiVersion: config.kubedb.com/v1alpha1
    kind: MongoConfiguration
    stash:
      addon:
        backupTask:
          name: mongodb-backup-4.4.6
        restoreTask:
          name: mongodb-restore-4.4.6
  secret:
    name: sample-mongodb-auth
  type: kubedb.com/mongodb
  version: 4.4.26
```

Stash uses the `AppBinding` crd to connect with the target database. The following fields need to be set in the AppBinding's `spec` section:

- `spec.appRef` refers to the underlying application.
- `spec.clientConfig` defines how to communicate with the application.
- `spec.clientConfig.service.name` specifies the name of the service that connects to the database.
- `spec.secret` specifies the name of the secret that holds the necessary credentials to access the database.
- `spec.parameters.stash` contains the Stash addon information that will be used to backup/restore this MongoDB database.
- `spec.type` specifies the type of the app that this AppBinding is pointing to. KubeDB generated AppBindings follow the format `<app group>/<app resource type>`, e.g. `kubedb.com/mongodb`.

**Insert Sample Data:**

Now, we are going to exec into the database pod and create some sample data.
At first, find out the database pod using the following command,

```console
$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mongodb"
NAME               READY   STATUS    RESTARTS   AGE
sample-mongodb-0   1/1     Running   0          12m
```

Now, let's exec into the pod and insert a sample document,

```console
$ export USER=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d)

$ export PASSWORD=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d)

$ kubectl exec -it -n demo sample-mongodb-0 -- mongo admin -u $USER -p $PASSWORD

> show dbs
admin  0.000GB
local  0.000GB
mydb   0.000GB

> show users
{
	"_id" : "admin.root",
	"user" : "root",
	"db" : "admin",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}

> use newdb
switched to db newdb

> db.movie.insert({"name":"batman"});
WriteResult({ "nInserted" : 1 })

> db.movie.find().pretty()
{ "_id" : ObjectId("5d19d1cdc93d828f44e37735"), "name" : "batman" }

> exit
bye
```

Now, we are ready to backup this sample database.

### Prepare Backend

We are going to store our backed up data into a GCS bucket. At first, we need to create a secret with GCS credentials, then we need to create a `Repository` crd. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).

**Create Storage Secret:**

Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,

```console
$ echo -n 'changeit' > RESTIC_PASSWORD
$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
$ kubectl create secret generic -n demo gcs-secret \
    --from-file=./RESTIC_PASSWORD \
    --from-file=./GOOGLE_PROJECT_ID \
    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
secret/gcs-secret created
```

**Create Repository:**

Now, create a `Repository` using this secret. Below is the YAML of the Repository crd we are going to create,

```yaml
apiVersion: stash.appscode.com/v1alpha1
kind: Repository
metadata:
  name: gcs-repo
  namespace: demo
spec:
  backend:
    gcs:
      bucket: stash-testing
      prefix: demo/mongodb/sample-mongodb
    storageSecretName: gcs-secret
```

Let's create the `Repository` we have shown above,

```console
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/repository.yaml
repository.stash.appscode.com/gcs-repo created
```

Now, we are ready to backup our database to our desired backend.

### Backup

We have to create a `BackupConfiguration` targeting the respective AppBinding crd of our desired database. Then Stash will create a CronJob to periodically backup the database.

**Create BackupConfiguration:**

Below is the YAML for the `BackupConfiguration` crd to back up the `sample-mongodb` database we have deployed earlier,

```yaml
apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  name: sample-mongodb-backup
  namespace: demo
spec:
  schedule: "*/5 * * * *"
  repository:
    name: gcs-repo
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: sample-mongodb
  retentionPolicy:
    name: keep-last-5
    keepLast: 5
    prune: true
```

Here,

- `spec.schedule` specifies that we want to back up the database at 5-minute intervals.
- `spec.target.ref` refers to the `AppBinding` crd that was created for the `sample-mongodb` database.

Let's create the `BackupConfiguration` crd we have shown above,

```console
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/backupconfiguration.yaml
backupconfiguration.stash.appscode.com/sample-mongodb-backup created
```

**Verify Backup Setup Successful:**

If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,

```console
$ kubectl get backupconfiguration -n demo
NAME                    TASK                   SCHEDULE      PAUSED   PHASE   AGE
sample-mongodb-backup   mongodb-backup-4.4.6   */5 * * * *            Ready   11s
```

**Verify CronJob:**

Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` crd.

Verify that the CronJob has been created using the following command,

```console
$ kubectl get cronjob -n demo
NAME                    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
sample-mongodb-backup   */5 * * * *   False     0        <none>          61s
```

**Wait for BackupSession:**

The `sample-mongodb-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` crd.

Wait for the next schedule. Run the following command to watch the `BackupSession` crd,

```console
$ kubectl get backupsession -n demo -w
NAME                               INVOKER-TYPE          INVOKER-NAME            PHASE       AGE
sample-mongodb-backup-1561974001   BackupConfiguration   sample-mongodb-backup   Running     5m19s
sample-mongodb-backup-1561974001   BackupConfiguration   sample-mongodb-backup   Succeeded   5m45s
```

We can see above that the backup session has succeeded. Now, we are going to verify that the backed up data has been stored in the backend.

**Verify Backup:**

Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo` has been updated by the following command,

```console
$ kubectl get repository -n demo gcs-repo
NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
gcs-repo   true        1.611 KiB   1                33s                      33m
```

Now, if we navigate to the GCS bucket, we are going to see the backed up data stored in the `demo/mongodb/sample-mongodb` directory, as specified by the `spec.backend.gcs.prefix` field of the Repository crd.

> Note: Stash keeps all the backed up data encrypted. So, the data in the backend will not make any sense until it is decrypted.

## Restore MongoDB

You can restore your data into the same database you have backed up from, or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted data from the running database.

#### Stop Taking Backup of the Old Database:

At first, let's stop taking any further backup of the old database so that no backup is taken during the restore process. We are going to pause the `BackupConfiguration` crd that we had created to back up the `sample-mongodb` database. Then, Stash will stop taking any further backup for this database.

Let's pause the `sample-mongodb-backup` BackupConfiguration,

```bash
$ kubectl patch backupconfiguration -n demo sample-mongodb-backup --type="merge" --patch='{"spec": {"paused": true}}'
backupconfiguration.stash.appscode.com/sample-mongodb-backup patched
```

Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,

```bash
$ kubectl stash pause backup -n demo --backupconfig=sample-mongodb-backup
BackupConfiguration demo/sample-mongodb-backup has been paused successfully.
```

Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused,

```console
$ kubectl get backupconfiguration -n demo sample-mongodb-backup
NAME                    TASK                   SCHEDULE      PAUSED   PHASE   AGE
sample-mongodb-backup   mongodb-backup-4.4.6   */5 * * * *   true     Ready   26m
```

Notice the `PAUSED` column. Value `true` for this field means that the BackupConfiguration has been paused.

#### Simulate Disaster:

Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier.

```console
$ kubectl exec -it -n demo sample-mongodb-0 -- mongo admin -u $USER -p $PASSWORD

> use newdb
switched to db newdb

> db.dropDatabase()
{ "dropped" : "newdb", "ok" : 1 }

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB

> exit
bye
```

#### Create RestoreSession:

Now, we need to create a `RestoreSession` crd pointing to the AppBinding of the `sample-mongodb` database.

Below is the YAML for the `RestoreSession` crd that we are going to create to restore the backed up data.

```yaml
apiVersion: stash.appscode.com/v1beta1
kind: RestoreSession
metadata:
  name: sample-mongodb-restore
  namespace: demo
spec:
  repository:
    name: gcs-repo
  target:
    ref:
      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      name: sample-mongodb
  rules:
  - snapshots: [latest]
```

Here,

- `spec.repository.name` specifies the `Repository` crd that holds the backend information where our backed up data has been stored.
- `spec.target.ref` refers to the AppBinding crd of the `sample-mongodb` database.
- `spec.rules` specifies that we are restoring from the latest backup snapshot of the database.

Let's create the `RestoreSession` crd we have shown above,

```console
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/restoresession.yaml
restoresession.stash.appscode.com/sample-mongodb-restore created
```

Once you have created the `RestoreSession` crd, Stash will create a restore job. We can watch the `RestoreSession` phase to check whether the restore process has succeeded.

Run the following command to watch the `RestoreSession` phase,

```console
$ kubectl get restoresession -n demo sample-mongodb-restore -w
NAME                     REPOSITORY-NAME   PHASE       AGE
sample-mongodb-restore   gcs-repo          Running     5s
sample-mongodb-restore   gcs-repo          Succeeded   43s
```

So, we can see from the output of the above command that the restore process succeeded.

#### Verify Restored Data:

In this section, we are going to verify that the desired data has been restored successfully. We are going to connect to the database and check whether the collection we had created earlier has been restored or not.

Let's exec into the database pod and list the available databases,

```console
$ kubectl exec -it -n demo sample-mongodb-0 -- mongo admin -u $USER -p $PASSWORD

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
newdb   0.000GB

> show users
{
	"_id" : "admin.root",
	"user" : "root",
	"db" : "admin",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}

> use newdb
switched to db newdb

> db.movie.find().pretty()
{ "_id" : ObjectId("5d19d1cdc93d828f44e37735"), "name" : "batman" }

> exit
bye
```

So, from the above output, we can see that the database `newdb` we had created earlier is restored.

## Cleanup

To clean up the Kubernetes resources created by this tutorial, run:

```console
kubectl delete -n demo restoresession sample-mongodb-restore
kubectl delete -n demo backupconfiguration sample-mongodb-backup
kubectl delete -n demo mg sample-mongodb
kubectl delete -n demo repository gcs-repo
```
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/backup_overview.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/backup_overview.svg new file mode 100644 index 0000000000..1c9ec7308d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/backup_overview.svg @@ -0,0 +1,997 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_backup.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_backup.svg new file mode 100644 index 0000000000..206eb76648 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_backup.svg @@ -0,0 +1,673 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_restore.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_restore.svg new file mode 100644 index 0000000000..ad6cd40659 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/replicaset_restore.svg @@ -0,0 +1,673 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/restore_overview.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/restore_overview.svg new file mode 100644 index 0000000000..09dff9b37c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/restore_overview.svg @@ -0,0 +1,867 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_backup.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_backup.svg new file mode 100644 index 0000000000..5f3b4c5436 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_backup.svg @@ -0,0 +1,2107 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_restore.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_restore.svg new file mode 100644 index 0000000000..03884e256f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/sharded_restore.svg @@ -0,0 +1,2107 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_backup.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_backup.svg new file mode 100644 index 0000000000..080787412d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_backup.svg @@ -0,0 +1,673 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_restore.svg b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_restore.svg new file mode 100644 index 0000000000..3306a4d28c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/images/standalone_restore.svg @@ -0,0 +1,673 @@
+ image/svg+xml
diff --git a/content/docs/v2024.1.31/guides/mongodb/backup/overview/index.md b/content/docs/v2024.1.31/guides/mongodb/backup/overview/index.md new file mode 100644 index 0000000000..d05306c505 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/backup/overview/index.md @@ -0,0 +1,178 @@
---
title: MongoDB Backup & Restore Overview
menu:
  docs_v2024.1.31:
    identifier: guides-mongodb-backup-overview
    name: Overview
    parent: guides-mongodb-backup
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}}


# MongoDB Backup & Restore Overview

KubeDB uses [Stash](https://stash.run) to backup and restore databases. Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely back up stateful applications to any cloud or on-prem storage backend (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC etc.).

+
+  KubeDB + Stash +
Fig: Backup KubeDB Databases Using Stash
+
+ +## How Backup Works + +The following diagram shows how Stash takes backup of a MongoDB database. Open the image in a new tab to see the enlarged version. + +
+ MongoDB Backup Overview +
Fig: MongoDB Backup Overview
+
+

The backup process consists of the following steps:

1. At first, a user creates a secret with the access credentials of the backend where the backed up data will be stored.

2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend.

3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/mongodb/concepts/appbinding) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to backup the database.

4. The Stash operator watches for `BackupConfiguration` crds.

5. Once the Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in the `BackupConfiguration` object to trigger backup periodically.

6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd.

7. The Stash operator also watches for `BackupSession` crds.

8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to backup.

9. Then, it creates the Job to backup the targeted database.

10. The backup Job reads the necessary information to connect with the database from the `AppBinding` crd. It also reads the backend information and access credentials from the `Repository` crd and the Storage Secret respectively.

11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash pipes the output of the dump command to the uploading process. Hence, the backup Job does not require a large volume to hold the entire dump output.

12. Finally, when the backup is complete, the Job sends Prometheus metrics to the Pushgateway running inside the Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure.

### Backup Different MongoDB Configurations

This section will show you how backup works for different MongoDB configurations.

#### Standalone MongoDB

For a standalone MongoDB database, the backup job directly dumps the database using `mongodump` and pipes the output to the backup process.

+
+ Standalone MongoDB Backup Overview +
Fig: Standalone MongoDB Backup
+
+

#### MongoDB ReplicaSet Cluster

For a MongoDB ReplicaSet cluster, Stash takes backup from one of the secondary replicas. The backup process consists of the following steps:

1. Identify a secondary replica.
2. Lock the secondary replica.
3. Backup the secondary replica.
4. Unlock the secondary replica.

+
+ MongoDB ReplicaSet Cluster Backup Overview +
Fig: MongoDB ReplicaSet Cluster Backup
+
+

#### MongoDB Sharded Cluster

For a MongoDB sharded cluster, Stash takes backup of the individual shards as well as the config server. Stash takes backup from a secondary replica of each shard and of the config server. If there is no secondary replica, then Stash will take backup from the primary replica. The backup process consists of the following steps:

1. Disable the balancer.
2. Lock the config server.
3. Identify a secondary replica for each shard.
4. Lock the secondary replica.
5. Run backup on the secondary replica.
6. Unlock the secondary replica.
7. Unlock the config server.
8. Enable the balancer.

+
+ MongoDB Sharded Cluster Backup Overview +
Fig: MongoDB Sharded Cluster Backup
+
+ +## How Restore Process Works + +The following diagram shows how Stash restores backed up data into a MongoDB database. Open the image in a new tab to see the enlarged version. + +
+ Database Restore Overview +
Fig: MongoDB Restore Process Overview
+
+

The restore process consists of the following steps:

1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target.

2. The Stash operator watches for `RestoreSession` objects.

3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore.

4. Then, it creates the Job to restore the target.

5. The Job reads the necessary information to connect with the database from the respective `AppBinding` crd. It also reads the backend information and access credentials from the `Repository` crd and the Storage Secret respectively.

6. Then, the job downloads the backed up data from the backend and injects it into the desired database. Stash pipes the downloaded data to the respective database tool to inject into the database. Hence, the restore job does not require a large volume to download the entire backup data inside it.

7. Finally, when the restore process is complete, the Job sends Prometheus metrics to the Pushgateway and updates the `RestoreSession` status to reflect restore completion.

### Restoring Different MongoDB Configurations

This section will show you how the restore process works for different MongoDB configurations.

#### Standalone MongoDB

For a standalone MongoDB database, the restore job downloads the backed up data from the backend and pipes it to the `mongorestore` command, which inserts the data into the desired MongoDB database.

+
+ Standalone MongoDB Restore Overview +
Fig: Standalone MongoDB Restore
+
+

#### MongoDB ReplicaSet Cluster

For a MongoDB ReplicaSet cluster, Stash identifies the primary replica and restores into it.

+
+ MongoDB ReplicaSet Cluster Restore Overview +
Fig: MongoDB ReplicaSet Cluster Restore
+
+

#### MongoDB Sharded Cluster

For a MongoDB sharded cluster, Stash identifies the primary replica of each shard as well as the config server and restores the respective backed up data into them.

+
+ MongoDB Sharded Cluster Restore +
Fig: MongoDB Sharded Cluster Restore
+
+

## Next Steps

- Backup a standalone MongoDB database using Stash following the guide from [here](/docs/v2024.1.31/guides/mongodb/backup/logical/standalone/).
- Backup a MongoDB ReplicaSet cluster using Stash following the guide from [here](/docs/v2024.1.31/guides/mongodb/backup/logical/replicaset/).
- Backup a sharded MongoDB cluster using Stash following the guide from [here](/docs/v2024.1.31/guides/mongodb/backup/logical/sharding/).
diff --git a/content/docs/v2024.1.31/guides/mongodb/cli/_index.md b/content/docs/v2024.1.31/guides/mongodb/cli/_index.md new file mode 100755 index 0000000000..602e85bdd8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/cli/_index.md @@ -0,0 +1,22 @@
---
title: CLI | KubeDB
menu:
  docs_v2024.1.31:
    identifier: mg-cli-mongodb
    name: Cli
    parent: mg-mongodb-guides
    weight: 100
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/mongodb/cli/cli.md b/content/docs/v2024.1.31/guides/mongodb/cli/cli.md new file mode 100644 index 0000000000..00f7d3e05b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/cli/cli.md @@ -0,0 +1,367 @@
---
title: CLI | KubeDB
menu:
  docs_v2024.1.31:
    identifier: mg-cli-cli
    name: Quickstart
    parent: mg-cli-mongodb
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Manage KubeDB objects using CLIs

## KubeDB CLI

KubeDB comes with its own CLI, called the `kubedb` cli. It can be used to manage any KubeDB object, and it performs various validations to improve the user experience. To install the KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README).

### How to Create objects

`kubectl create` creates a database CRD object in the `default` namespace by default. The following command will create a MongoDB object as specified in `mongodb-demo.yaml`.

```bash
$ kubectl create -f mongodb-demo.yaml
mongodb.kubedb.com/mongodb-demo created
```

You can provide a namespace with the `--namespace` flag. The provided namespace should match the namespace specified in the input file.

```bash
$ kubectl create -f mongodb-demo.yaml --namespace=kube-system
mongodb.kubedb.com/mongodb-demo
```

The `kubectl create` command also accepts input from `stdin`.

```bash
cat mongodb-demo.yaml | kubectl create -f -
```

### How to List Objects

The `kubectl get` command allows users to list or find any KubeDB object. To list all MongoDB objects in the `default` namespace, run the following command:

```bash
$ kubectl get mongodb
NAME           VERSION   STATUS   AGE
mongodb-demo   3.4-v3    Ready    13m
mongodb-dev    3.4-v3    Ready    11m
mongodb-prod   3.4-v3    Ready    11m
mongodb-qa     3.4-v3    Ready    10m
```

To get the YAML of an object, use the `--output=yaml` flag.
+ +```yaml +$ kubectl get mongodb mongodb-demo --output=yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + creationTimestamp: "2019-02-06T10:31:04Z" + finalizers: + - kubedb.com + generation: 2 + name: mongodb-demo + namespace: demo + resourceVersion: "94703" + selfLink: /apis/kubedb.com/v1alpha2/namespaces/default/mongodbs/mongodb-demo + uid: 4eaaba0e-29fa-11e9-aebf-080027875192 +spec: + authSecret: + name: mongodb-demo-auth + podTemplate: + controller: {} + metadata: {} + spec: + livenessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - mongo + - --eval + - db.adminCommand('ping') + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: {} + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + dataSource: null + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: Halt + version: 3.4-v3 +status: + observedGeneration: 2$4213139756412538772 + phase: Ready +``` + +To get JSON of an object, use `--output=json` flag. + +```bash +kubectl get mongodb mongodb-demo --output=json +``` + +To list all KubeDB objects, use following command: + +```bash +$ kubectl get kubedb -o wide +NAME VERSION STATUS AGE +mg/mongodb-demo 3.4 Ready 3h +mg/mongodb-dev 3.4 Ready 3h +mg/mongodb-prod 3.4 Ready 3h +mg/mongodb-qa 3.4 Ready 3h + +NAME DATABASE BUCKET STATUS AGE +snap/mongodb-demo-20170605-073557 mg/mongodb-demo gs:bucket-name Succeeded 9m +snap/snapshot-20171212-114700 mg/mongodb-demo gs:bucket-name Succeeded 1h +``` + +Flag `--output=wide` is used to print additional information. + +List command supports short names for each object types. You can use it like `kubectl get `. Below are the short name for KubeDB objects: + +- MongoDB: `mg` +- Snapshot: `snap` +- DormantDatabase: `drmn` + +You can print labels with objects. The following command will list all Snapshots with their corresponding labels. + +```bash +$ kubectl get snap --show-labels +NAME DATABASE STATUS AGE LABELS +mongodb-demo-20170605-073557 mg/mongodb-demo Succeeded 11m app.kubernetes.io/name=mongodbs.kubedb.com,app.kubernetes.io/instance=mongodb-demo +snapshot-20171212-114700 mg/mongodb-demo Succeeded 1h app.kubernetes.io/name=mongodbs.kubedb.com,app.kubernetes.io/instance=mongodb-demo +``` + +You can also filter list using `--selector` flag. + +```bash +$ kubectl get snap --selector='app.kubernetes.io/name=mongodbs.kubedb.com' --show-labels +NAME DATABASE STATUS AGE LABELS +mongodb-demo-20171212-073557 mg/mongodb-demo Succeeded 14m app.kubernetes.io/name=mongodbs.kubedb.com,app.kubernetes.io/instance=mongodb-demo +snapshot-20171212-114700 mg/mongodb-demo Succeeded 2h app.kubernetes.io/name=mongodbs.kubedb.com,app.kubernetes.io/instance=mongodb-demo +``` + +To print only object name, run the following command: + +```bash +$ kubectl get all -o name +mongodb/mongodb-demo +mongodb/mongodb-dev +mongodb/mongodb-prod +mongodb/mongodb-qa +snapshot/mongodb-demo-20170605-073557 +snapshot/snapshot-20170505-114700 +``` + +### How to Describe Objects + +`kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe MongoDB database `mongodb-demo` with relevant information. 
+ +```bash +$ kubectl dba describe mg mongodb-demo +Name: mongodb-demo +Namespace: default +CreationTimestamp: Wed, 06 Feb 2019 16:31:04 +0600 +Labels: +Annotations: +Replicas: 1 total +Status: Ready + StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO + +StatefulSet: + Name: mongodb-demo + CreationTimestamp: Wed, 06 Feb 2019 16:31:05 +0600 + Labels: app.kubernetes.io/name=mongodbs.kubedb.com + app.kubernetes.io/instance=mongodb-demo + Annotations: + Replicas: 824639727120 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mongodb-demo + Labels: app.kubernetes.io/name=mongodbs.kubedb.com + app.kubernetes.io/instance=mongodb-demo + Annotations: + Type: ClusterIP + IP: 10.96.245.200 + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 172.17.0.8:27017 + +Service: + Name: mongodb-demo-gvr + Labels: app.kubernetes.io/name=mongodbs.kubedb.com + app.kubernetes.io/instance=mongodb-demo + Annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints=true + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: 27017/TCP + Endpoints: 172.17.0.8:27017 + +Database Secret: + Name: mongodb-demo-auth + Labels: app.kubernetes.io/name=mongodbs.kubedb.com + app.kubernetes.io/instance=mongodb-demo + Annotations: + +Type: Opaque + +Data +==== + password: 16 bytes + username: 4 bytes + +No Snapshots. + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 2m KubeDB operator Successfully created Service + Normal Successful 2m KubeDB operator Successfully created StatefulSet + Normal Successful 2m KubeDB operator Successfully created MongoDB + Normal Successful 2m KubeDB operator Successfully created appbinding + Normal Successful 2m KubeDB operator Successfully patched StatefulSet + Normal Successful 2m KubeDB operator Successfully patched MongoDB +``` + +`kubectl dba describe` command provides following basic information about a MongoDB database. + +- StatefulSet +- Storage (Persistent Volume) +- Service +- Secret (If available) +- Snapshots (If any) +- Monitoring system (If available) + +To hide events on KubeDB object, use flag `--show-events=false` + +To describe all MongoDB objects in `default` namespace, use following command + +```bash +kubectl dba describe mg +``` + +To describe all MongoDB objects from every namespace, provide `--all-namespaces` flag. + +```bash +kubectl dba describe mg --all-namespaces +``` + +To describe all KubeDB objects from every namespace, use the following command: + +```bash +kubectl dba describe all --all-namespaces +``` + +You can also describe KubeDb objects with matching labels. The following command will describe all MongoDB objects with specified labels from every namespace. + +```bash +kubectl dba describe mg --all-namespaces --selector='group=dev' +``` + +To learn about various options of `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe). + +#### Edit Restrictions + +Various fields of a KubeDb object can't be edited using `edit` command. The following fields are restricted from updates for all KubeDB objects: + +- apiVersion +- kind +- metadata.name +- metadata.namespace + +If StatefulSets exists for a MongoDB database, following fields can't be modified as well. 
+ +- spec.ReplicaSet +- spec.authSecret +- spec.init +- spec.storageType +- spec.storage +- spec.podTemplate.spec.nodeSelector + +For DormantDatabase, `spec.origin` can't be edited using `kubectl edit` + +### How to Delete Objects + +`kubectl delete` command will delete an object in `default` namespace by default unless namespace is provided. The following command will delete a MongoDB `mongodb-dev` in default namespace + +```bash +$ kubectl delete mongodb mongodb-dev +mongodb.kubedb.com "mongodb-dev" deleted +``` + +You can also use YAML files to delete objects. The following command will delete a mongodb using the type and name specified in `mongodb.yaml`. + +```bash +$ kubectl delete -f mongodb-demo.yaml +mongodb.kubedb.com "mongodb-dev" deleted +``` + +`kubectl delete` command also takes input from `stdin`. + +```bash +cat mongodb-demo.yaml | kubectl delete -f - +``` + +To delete database with matching labels, use `--selector` flag. The following command will delete mongodb with label `mongodb.app.kubernetes.io/instance=mongodb-demo`. + +```bash +kubectl delete mongodb -l mongodb.app.kubernetes.io/instance=mongodb-demo +``` + +## Using Kubectl + +You can use Kubectl with KubeDB objects like any other CRDs. Below are some common examples of using Kubectl with KubeDB objects. + +```bash +# Create objects +$ kubectl create -f + +# List objects +$ kubectl get mongodb +$ kubectl get mongodb.kubedb.com + +# Delete objects +$ kubectl delete mongodb +``` + +## Next Steps + +- Learn how to use KubeDB to run a MongoDB database [here](/docs/v2024.1.31/guides/mongodb/README). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/_index.md b/content/docs/v2024.1.31/guides/mongodb/clustering/_index.md new file mode 100755 index 0000000000..3116f2567b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/clustering/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Clustering +menu: + docs_v2024.1.31: + identifier: mg-clustering-mongodb + name: Clustering + parent: mg-mongodb-guides + weight: 25 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/clustering/replicaset.md new file mode 100644 index 0000000000..fcfcfc9e97 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/clustering/replicaset.md @@ -0,0 +1,685 @@ +--- +title: MongoDB ReplicaSet Guide +menu: + docs_v2024.1.31: + identifier: mg-clustering-replicaset + name: ReplicaSet Guide + parent: mg-clustering-mongodb + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - MongoDB ReplicaSet + +This tutorial will show you how to use KubeDB to run a MongoDB ReplicaSet. + +## Before You Begin + +Before proceeding: + +- Read [mongodb replication concept](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept) to learn about MongoDB Replica Set clustering. 


- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:

  ```bash
  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Deploy MongoDB ReplicaSet

To deploy a MongoDB ReplicaSet, the user has to specify the `spec.replicaSet` option in the `MongoDB` CRD.

The following is an example of a `MongoDB` object which creates a MongoDB ReplicaSet of three members.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MongoDB
metadata:
  name: mgo-replicaset
  namespace: demo
spec:
  version: "4.4.26"
  replicas: 3
  replicaSet:
    name: rs0
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/clustering/replicaset.yaml
mongodb.kubedb.com/mgo-replicaset created
```

Here,

- `spec.replicaSet` represents the configuration for the replicaset.
  - `name` denotes the name of the mongodb replicaset.
- `spec.keyFileSecret` (optional) is the name of a secret that contains the keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replicaset. Only mongod instances with the correct keyfile can join the replica set. _You can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to learn how to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate a `keyFileSecret` itself.
- `spec.replicas` denotes the number of members in the `rs0` mongodb replicaset.
- `spec.storage` specifies the StorageClass of the PVCs dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.

The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MongoDB object name. This service will always point to the primary of the replicaset. The KubeDB operator will also create a governing service for the StatefulSet with the name `<mongodb-name>-pods`.
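
Once the members are up, you can cross-check which member the shell currently sees as primary. This is just a quick sketch; the username `root` matches the KubeDB-generated auth secret shown in the output below, and the exact host name printed will vary with your cluster's DNS domain:

```console
$ export PASSWORD=$(kubectl get secrets -n demo mgo-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d)
$ kubectl exec -it -n demo mgo-replicaset-0 -- mongo admin -u root -p $PASSWORD --quiet --eval "rs.isMaster().primary"
mgo-replicaset-0.mgo-replicaset-pods.demo.svc.cluster.local:27017
```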
+ +```bash +$ kubectl dba describe mg -n demo mgo-replicaset +Name: mgo-replicaset +Namespace: demo +CreationTimestamp: Wed, 10 Feb 2021 11:05:06 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-replicaset","namespace":"demo"},"spec":{"replicaSet":{"na... +Replicas: 3 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: Delete + +StatefulSet: + Name: mgo-replicaset + CreationTimestamp: Wed, 10 Feb 2021 11:05:06 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-replicaset + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Replicas: 824637635032 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mgo-replicaset + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-replicaset + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: fd00:10:96::d5f5 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: [fd00:10:244::a]:27017 + +Service: + Name: mgo-replicaset-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-replicaset + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: [fd00:10:244::a]:27017,[fd00:10:244::c]:27017,[fd00:10:244::e]:27017 + +Auth Secret: + Name: mgo-replicaset-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-replicaset + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-replicaset","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + + Creation Timestamp: 2021-02-10T05:07:10Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mgo-replicaset + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mgo-replicaset + Namespace: demo + Spec: + Client Config: + Service: + Name: mgo-replicaset + Port: 27017 + Scheme: mongodb + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: rs0/mgo-replicaset-0.mgo-replicaset-pods.demo.svc,mgo-replicaset-1.mgo-replicaset-pods.demo.svc,mgo-replicaset-2.mgo-replicaset-pods.demo.svc + Secret: + Name: mgo-replicaset-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 12m MongoDB operator Successfully created stats service + Normal Successful 12m MongoDB operator Successfully created Service + Normal Successful 12m MongoDB operator Successfully stats service + Normal Successful 12m MongoDB operator Successfully stats service + Normal Successful 11m MongoDB operator Successfully stats service + Normal Successful 11m MongoDB 
operator Successfully stats service + Normal Successful 11m MongoDB operator Successfully stats service + Normal Successful 11m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully patched StatefulSet demo/mgo-replicaset + Normal Successful 10m MongoDB operator Successfully patched MongoDB + Normal Successful 10m MongoDB operator Successfully created appbinding + Normal Successful 10m MongoDB operator Successfully stats service + Normal Successful 10m MongoDB operator Successfully patched StatefulSet demo/mgo-replicaset + Normal Successful 10m MongoDB operator Successfully patched MongoDB + + +$ kubectl get statefulset -n demo +NAME READY AGE +mgo-replicaset 3/3 105s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +datadir-mgo-replicaset-0 Bound pvc-597784c9-c093-11e8-b4a9-0800272618ed 1Gi RWO standard 1h +datadir-mgo-replicaset-1 Bound pvc-8ca7a9d9-c093-11e8-b4a9-0800272618ed 1Gi RWO standard 1h +datadir-mgo-replicaset-2 Bound pvc-b7d8a624-c093-11e8-b4a9-0800272618ed 1Gi RWO standard 1h + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-597784c9-c093-11e8-b4a9-0800272618ed 1Gi RWO Delete Bound demo/datadir-mgo-replicaset-0 standard 1h +pvc-8ca7a9d9-c093-11e8-b4a9-0800272618ed 1Gi RWO Delete Bound demo/datadir-mgo-replicaset-1 standard 1h +pvc-b7d8a624-c093-11e8-b4a9-0800272618ed 1Gi RWO Delete Bound demo/datadir-mgo-replicaset-2 standard 1h + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +mgo-replicaset ClusterIP 10.97.174.220 27017/TCP 119s +mgo-replicaset-pods ClusterIP None 27017/TCP 119s +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. 
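+
+If you are scripting this flow, you can block until the phase becomes `Ready` instead of polling by hand; a minimal sketch, assuming a kubectl version with JSONPath wait support (v1.23+):
+
+```bash
+# Wait up to 10 minutes for the MongoDB object to report phase Ready.
+kubectl wait mg/mgo-replicaset -n demo \
+  --for=jsonpath='{.status.phase}'=Ready --timeout=10m
+```
+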
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mgo-replicaset -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-replicaset","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + creationTimestamp: "2021-02-11T04:29:29Z" + finalizers: + - kubedb.com + generation: 3 + managedFields: + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: {} + f:kubectl.kubernetes.io/last-applied-configuration: {} + f:spec: + .: {} + f:replicaSet: + .: {} + f:name: {} + f:replicas: {} + f:storage: + .: {} + f:accessModes: {} + f:resources: + .: {} + f:requests: + .: {} + f:storage: {} + f:storageClassName: {} + f:version: {} + manager: kubectl-client-side-apply + operation: Update + time: "2021-02-11T04:29:29Z" + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: {} + f:spec: + f:authSecret: + .: {} + f:name: {} + f:keyFileSecret: + .: {} + f:name: {} + f:status: + .: {} + f:conditions: {} + f:observedGeneration: {} + f:phase: {} + manager: mg-operator + operation: Update + time: "2021-02-11T04:29:29Z" + name: mgo-replicaset + namespace: demo + resourceVersion: "191685" + uid: 1cc92de5-441e-42ac-8321-459d7a955af2 +spec: + authSecret: + name: mgo-replicaset-auth + clusterAuthMode: keyFile + keyFileSecret: + name: mgo-replicaset-key + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-replicaset + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-replicaset + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mgo-replicaset + replicaSet: + name: rs0 + replicas: 3 + sslMode: disabled + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageEngine: wiredTiger + storageType: Durable + terminationPolicy: 
Delete
+  version: 4.4.26
+status:
+  conditions:
+  - lastTransitionTime: "2021-02-11T04:29:29Z"
+    message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mgo-replicaset'
+    reason: DatabaseProvisioningStartedSuccessfully
+    status: "True"
+    type: ProvisioningStarted
+  - lastTransitionTime: "2021-02-11T04:31:22Z"
+    message: All desired replicas are ready.
+    reason: AllReplicasReady
+    status: "True"
+    type: ReplicaReady
+  - lastTransitionTime: "2021-02-11T04:31:11Z"
+    message: 'The MongoDB: demo/mgo-replicaset is accepting client requests.'
+    observedGeneration: 3
+    reason: DatabaseAcceptingConnectionRequest
+    status: "True"
+    type: AcceptingConnection
+  - lastTransitionTime: "2021-02-11T04:31:11Z"
+    message: 'The MongoDB: demo/mgo-replicaset is ready.'
+    observedGeneration: 3
+    reason: ReadinessCheckSucceeded
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2021-02-11T04:31:22Z"
+    message: 'The MongoDB: demo/mgo-replicaset is successfully provisioned.'
+    observedGeneration: 3
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 3
+  phase: Ready
+```
+
+Please note that the KubeDB operator has created a new Secret called `mgo-replicaset-auth` *(format: {mongodb-object-name}-auth)* for storing the password for the `mongodb` superuser. This secret contains a `username` key which holds the *username* for the MongoDB superuser and a `password` key which holds the *password* for the MongoDB superuser.
+
+If you want to use a custom or existing secret, please specify it when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys, `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret).
+
+## Redundancy and Data Availability
+
+Now, you can connect to this database through the [mongo shell](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we will insert a document on the primary member, and we will see if the data becomes available on the secondary members.
+
+At first, insert data inside the primary member `rs0:PRIMARY`.
+
+```bash
+$ kubectl get secrets -n demo mgo-replicaset-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mgo-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d
+5O4R2ze2bWXcWsdP
+
+$ kubectl exec -it mgo-replicaset-0 -n demo bash
+
+mongodb@mgo-replicaset-0:/$ mongo admin -u root -p 5O4R2ze2bWXcWsdP
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:PRIMARY> rs.isMaster().primary
+mgo-replicaset-0.mgo-replicaset-gvr.demo.svc.cluster.local:27017
+
+rs0:PRIMARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+
+rs0:PRIMARY> show users
+{
+  "_id" : "admin.root",
+  "userId" : UUID("6b714456-2914-4ea0-9596-92249e8285a2"),
+  "user" : "root",
+  "db" : "admin",
+  "roles" : [
+    {
+      "role" : "root",
+      "db" : "admin"
+    }
+  ],
+  "mechanisms" : [
+    "SCRAM-SHA-1",
+    "SCRAM-SHA-256"
+  ]
+}
+
+rs0:PRIMARY> use newdb
+switched to db newdb
+
+rs0:PRIMARY> db.movie.insert({"name":"batman"});
+WriteResult({ "nInserted" : 1 })
+
+rs0:PRIMARY> db.movie.find().pretty()
+{ "_id" : ObjectId("5b5efeea9d097ca0600694a3"), "name" : "batman" }
+
+rs0:PRIMARY> exit
+bye
+```
+
+Now, check the redundancy and data availability in secondary members.
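+
+As an aside, checks like the one above can also be run non-interactively, which is convenient in scripts. A small sketch that asks a member for the address of the current primary, reusing the credentials read above:
+
+```bash
+# Ask any member which node is currently primary, without an interactive shell.
+kubectl exec -n demo mgo-replicaset-0 -- \
+  mongo admin -u root -p 5O4R2ze2bWXcWsdP --quiet --eval 'rs.isMaster().primary'
+```
+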
+We will exec into `mgo-replicaset-1` (which is a secondary member right now) to check the data availability.
+
+```bash
+$ kubectl exec -it mgo-replicaset-1 -n demo bash
+mongodb@mgo-replicaset-1:/$ mongo admin -u root -p 5O4R2ze2bWXcWsdP
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:SECONDARY> rs.slaveOk()
+rs0:SECONDARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+newdb   0.000GB
+
+rs0:SECONDARY> show users
+{
+  "_id" : "admin.root",
+  "userId" : UUID("6b714456-2914-4ea0-9596-92249e8285a2"),
+  "user" : "root",
+  "db" : "admin",
+  "roles" : [
+    {
+      "role" : "root",
+      "db" : "admin"
+    }
+  ],
+  "mechanisms" : [
+    "SCRAM-SHA-1",
+    "SCRAM-SHA-256"
+  ]
+}
+
+rs0:SECONDARY> use newdb
+switched to db newdb
+
+rs0:SECONDARY> db.movie.find().pretty()
+{ "_id" : ObjectId("5b5efeea9d097ca0600694a3"), "name" : "batman" }
+
+rs0:SECONDARY> exit
+bye
+```
+
+## Automatic Failover
+
+To test automatic failover, we will force the primary member to restart. As the primary member (`pod`) becomes unavailable, the remaining members will elect a new primary.
+
+```bash
+$ kubectl get pods -n demo
+NAME               READY   STATUS    RESTARTS   AGE
+mgo-replicaset-0   1/1     Running   0          1h
+mgo-replicaset-1   1/1     Running   0          1h
+mgo-replicaset-2   1/1     Running   0          1h
+
+$ kubectl delete pod -n demo mgo-replicaset-0
+pod "mgo-replicaset-0" deleted
+
+$ kubectl get pods -n demo
+NAME               READY   STATUS        RESTARTS   AGE
+mgo-replicaset-0   1/1     Terminating   0          1h
+mgo-replicaset-1   1/1     Running       0          1h
+mgo-replicaset-2   1/1     Running       0          1h
+```
+
+Now verify the automatic failover. Let's exec into the `mgo-replicaset-1` pod,
+
+```bash
+$ kubectl exec -it mgo-replicaset-1 -n demo bash
+mongodb@mgo-replicaset-1:/$ mongo admin -u root -p 5O4R2ze2bWXcWsdP
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+rs0:SECONDARY> rs.isMaster().primary
+mgo-replicaset-2.mgo-replicaset-gvr.demo.svc.cluster.local:27017
+
+# Also verify data persistency
+rs0:SECONDARY> rs.slaveOk()
+rs0:SECONDARY> show dbs
+admin   0.000GB
+config  0.000GB
+local   0.000GB
+newdb   0.000GB
+
+rs0:SECONDARY> show users
+{
+  "_id" : "admin.root",
+  "userId" : UUID("6b714456-2914-4ea0-9596-92249e8285a2"),
+  "user" : "root",
+  "db" : "admin",
+  "roles" : [
+    {
+      "role" : "root",
+      "db" : "admin"
+    }
+  ],
+  "mechanisms" : [
+    "SCRAM-SHA-1",
+    "SCRAM-SHA-256"
+  ]
+}
+
+rs0:SECONDARY> use newdb
+switched to db newdb
+
+rs0:SECONDARY> db.movie.find().pretty()
+{ "_id" : ObjectId("5b5efeea9d097ca0600694a3"), "name" : "batman" }
+```
+
+## Halt Database
+
+When the [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt` and you delete the MongoDB object, the KubeDB operator will delete the StatefulSet and its pods but will leave the PVCs, secrets and database backups (snapshots) intact. Learn the details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the MongoDB object and halt the database to resume it again later. If you halt the database, KubeDB will delete the StatefulSets and Services but will keep the MongoDB object, PVCs, secrets and backups (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database. You can use the below command to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mgo-replicaset -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mgo-replicaset patched
+```
+
+Then, you have to set `spec.halted` to true to put the database in a `Halted` state. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mgo-replicaset -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mgo-replicaset patched
+```
+
+After that, KubeDB will delete the StatefulSets and Services, and you can see the database Phase as `Halted`.
+
+Now, you can run the following command to get all mongodb resources in the demo namespace,
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                                VERSION   STATUS   AGE
+mongodb.kubedb.com/mgo-replicaset   4.4.26    Halted   9m43s
+
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-x2zcl   kubernetes.io/service-account-token   3      47h
+secret/mgo-replicaset-auth   Opaque                                2      23h
+secret/mgo-replicaset-key    Opaque                                1      23h
+
+NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mgo-replicaset-0   Bound    pvc-816daa52-ee40-496f-a148-c75344a1b433   1Gi        RWO            standard       9m43s
+persistentvolumeclaim/datadir-mgo-replicaset-1   Bound    pvc-e818bc86-ab3c-4ec5-901f-630aab6b814b   1Gi        RWO            standard       9m5s
+persistentvolumeclaim/datadir-mgo-replicaset-2   Bound    pvc-5a50bce3-f85f-4157-be22-64dfc26e7517   1Gi        RWO            standard       8m25s
+```
+
+
+## Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` to false. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mgo-replicaset -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mgo-replicaset patched
+```
+
+When the database is resumed successfully, you can see the database Status is set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME             VERSION   STATUS   AGE
+mgo-replicaset   4.4.26    Ready    6m27s
+```
+
+Now, if you again exec into the primary `pod` and look for the previous data, you will see that all the data persists.
+
+```bash
+$ kubectl exec -it mgo-replicaset-1 -n demo bash
+
+mongodb@mgo-replicaset-1:/$ mongo admin -u root -p 7QiqLcuSCmZ8PU5a
+
+rs0:PRIMARY> use newdb
+switched to db newdb
+
+rs0:PRIMARY> db.movie.find()
+{ "_id" : ObjectId("6024b3e47c614cd582c9bb44"), "name" : "batman" }
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mg/mgo-replicaset -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mg/mgo-replicaset
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/replication_concept.md b/content/docs/v2024.1.31/guides/mongodb/clustering/replication_concept.md
new file mode 100644
index 0000000000..a5c9394d01
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/clustering/replication_concept.md
@@ -0,0 +1,127 @@
+---
+title: MongoDB ReplicaSet Concept
+menu:
+  docs_v2024.1.31:
+    identifier: mg-clustering-replicaset-concept
+    name: ReplicaSet Concept
+    parent: mg-clustering-mongodb
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Replication
+
+A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments. This section introduces replication in MongoDB as well as the components and architecture of replica sets.
+
+## Redundancy and Data Availability
+
+Replication provides redundancy and increases data availability. With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
+
+## ReplicaSet Members
+
+A replica set contains several data-bearing nodes and optionally one arbiter node. Of the data-bearing nodes, one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
+
+The members of a replica set are `primary` and `secondaries`. You can also maintain an `arbiter` as part of a replica set. Arbiters do not keep a copy of the data. However, arbiters play a role in the elections that select a primary if the current primary is unavailable.
+
+The minimum recommended configuration for a replica set is a three-member replica set with three data-bearing members: one primary and two secondary members. You may alternatively deploy a three-member replica set with two data-bearing members: a primary, a secondary, and an arbiter, but replica sets with at least three data-bearing members offer better redundancy.
+
+> A replica set can have up to 50 members but only 7 voting members.
+
+### Primary
+
+The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations on the primary’s oplog. Secondary members replicate this log and apply the operations to their data sets.
+
+In the following three-member replica set, the primary accepts all write operations. Then the secondaries replicate the oplog to apply to their data sets.
+

+  *[image: lifecycle]*

+ +All members of the replica set can accept read operations. However, by default, an application directs its read operations to the primary member. + +The replica set can have at most one primary. If the current primary becomes unavailable, an election determines the new primary. See [Replica Set Elections](https://docs.mongodb.com/manual/core/replica-set-elections/) for more details. + +### Secondaries + +A secondary maintains a copy of the primary’s data set. To replicate data, a secondary applies operations from the primary’s oplog to its own data set in an asynchronous process. A replica set can have one or more secondaries. + +The following three-member replica set has two secondary members. The secondaries replicate the primary’s oplog and apply the operations to their data sets. + +

+  *[image: lifecycle]*

+Although clients cannot write data to secondaries, clients can read data from secondary members. See [Read Preference](https://docs.mongodb.com/manual/core/read-preference/) for more information on how clients direct read operations to replica sets.
+
+A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary.
+
+### Arbiter
+
+An arbiter does not have a copy of the data set and cannot become a primary. Replica sets may have arbiters to add a vote in elections for primary. Arbiters always have exactly 1 election vote, and thus allow replica sets to have an uneven number of voting members without the overhead of an additional member that replicates data.
+
+Changed in version 3.6: Starting in MongoDB 3.6, arbiters have priority 0. When you update a replica set to MongoDB 3.6, if the existing configuration has an arbiter with priority 1, MongoDB 3.6 reconfigures the arbiter to have priority 0.
+
+> IMPORTANT: Do not run an arbiter on systems that also host the primary or the secondary members of the replica set. [[reference]](https://docs.mongodb.com/manual/core/replica-set-members/#arbiter).
+
+## Asynchronous Replication
+
+Secondaries apply operations from the primary asynchronously. By applying operations after the primary, sets can continue to function despite the failure of one or more members.
+
+## Automatic Failover
+
+When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary. The cluster attempts to complete the election of a new primary and resume normal operations.
+

+  *[image: lifecycle]*

+ +The replica set cannot process write operations until the election completes successfully. The replica set can continue to serve read queries if such queries are configured to run on secondaries while the primary is offline. + +The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default [replica configuration settings](https://docs.mongodb.com/manual/reference/replica-configuration/#rsconf.settings). This includes time required to mark the primary as unavailable and call and complete an election. You can tune this time period by modifying the [settings.electionTimeoutMillis](https://docs.mongodb.com/manual/reference/replica-configuration/#rsconf.settings.electionTimeoutMillis) replication configuration option. Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary. These factors are dependent on your particular cluster architecture. + +Lowering the electionTimeoutMillis replication configuration option from the default 10000 (10 seconds) can result in faster detection of primary failure. However, the cluster may call elections more frequently due to factors such as temporary network latency even if the primary is otherwise healthy. This can result in increased [rollbacks](https://docs.mongodb.com/manual/core/replica-set-rollbacks/#replica-set-rollback) for [w : 1](https://docs.mongodb.com/manual/reference/write-concern/#wc-w) write operations. + +Your application connection logic should include tolerance for automatic failovers and the subsequent elections. + +## Read Operations + +By default, clients read from the primary; however, clients can specify a read preference to send read operations to secondaries. Asynchronous replication to secondaries means that reads from secondaries may return data that does not reflect the state of the data on the primary. For information on reading from replica sets, see [Read Preference](https://docs.mongodb.com/manual/core/read-preference/). + +[Multi-document transactions](https://docs.mongodb.com/manual/core/transactions/) that contain read operations must use read preference primary. + +All operations in a given transaction must route to the same member. + +## Transactions + +Starting in MongoDB 4.0, multi-document transactions are available for replica sets. + +[Multi-document transactions](https://docs.mongodb.com/manual/core/transactions/) that contain read operations must use read preference primary. + +All operations in a given transaction must route to the same member. + +## Change Streams + +Starting in MongoDB 3.6, [change streams](https://docs.mongodb.com/manual/changeStreams/) are available for replica sets and sharded clusters. Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to subscribe to all data changes on a collection or collections. + +## Next Steps + +- [Deploy MongoDB ReplicaSet](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) using KubeDB. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). + +NB: The images in this page are taken from [MongoDB website](https://docs.mongodb.com/manual/replication/). 
diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/sharding.md b/content/docs/v2024.1.31/guides/mongodb/clustering/sharding.md
new file mode 100644
index 0000000000..19c0fd1616
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/clustering/sharding.md
@@ -0,0 +1,930 @@
+---
+title: MongoDB Sharding Guide
+menu:
+  docs_v2024.1.31:
+    identifier: mg-clustering-sharding
+    name: Sharding Guide
+    parent: mg-clustering-mongodb
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Sharding
+
+This tutorial will show you how to use KubeDB to run a sharded MongoDB cluster.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [mongodb sharding concept](/docs/v2024.1.31/guides/mongodb/clustering/sharding_concept) to learn about MongoDB sharding.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Sharded MongoDB Cluster
+
+To deploy a sharded MongoDB cluster, you have to specify the `spec.shardTopology` option in the `MongoDB` CRD.
+
+The following is an example of a `MongoDB` object which creates a sharded MongoDB cluster of two shards, each a three-member replica set.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-sh
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/clustering/mongo-sharding.yaml
+mongodb.kubedb.com/mongo-sh created
+```
+
+Here,
+
+- `spec.shardTopology` represents the topology configuration for sharding.
+  - `shard` represents the configuration for the Shard component of mongodb.
+    - `shards` represents the number of shards for a mongodb deployment. Each shard is deployed as a [replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept).
+    - `replicas` represents the number of replicas of each shard replicaset.
+    - `prefix` represents the prefix of each shard node.
+    - `configSecret` is an optional field to provide a custom configuration file for shards (i.e. mongod.conf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of the sharding. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `configServer` represents the configuration for the ConfigServer component of mongodb.
+    - `replicas` represents the number of replicas for the configServer replicaset. Here, the configServer is deployed as a replicaset of mongodb.
+    - `prefix` represents the prefix of configServer nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for the configServer (i.e. mongod.conf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of the configServer. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `mongos` represents the configuration for the Mongos component of mongodb. `Mongos` instances run as stateless components (they do not persist any data).
+    - `replicas` represents the number of replicas of the `Mongos` instance. Here, Mongos is not deployed as a replicaset.
+    - `prefix` represents the prefix of mongos nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for mongos (i.e. mongod.conf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+- `spec.keyFileSecret` (optional) is the name of a secret that contains a keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and in `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replica set. Only mongod instances with the correct keyfile can join the replica set. _You can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate a `keyFileSecret` itself.
+
+The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create several new StatefulSets: one for mongos, one for the configServer, and one for each of the shards. It creates a primary Service with the matching MongoDB object name. The KubeDB operator will also create governing services for the StatefulSets with the name `{mongodb-name}-{node-type}-pods`.
+
+MongoDB `mongo-sh` state,
+
+```bash
+$ kubectl get mg -n demo
+NAME       VERSION   STATUS   AGE
+mongo-sh   4.4.26    Ready    9m41s
+```
+
+All the node types, `Shard`, `ConfigServer` & `Mongos`, are deployed as StatefulSets.
+
+```bash
+$ kubectl get statefulset -n demo
+NAME                 READY   AGE
+mongo-sh-configsvr   3/3     11m
+mongo-sh-mongos      3/3     8m41s
+mongo-sh-shard0      3/3     10m
+mongo-sh-shard1      3/3     8m59s
+```
+
+All PVCs and PVs for MongoDB `mongo-sh`,
+
+```bash
+$ kubectl get pvc -n demo
+NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+datadir-mongo-sh-configsvr-0   Bound    pvc-1db4185e-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       16m
+datadir-mongo-sh-configsvr-1   Bound    pvc-330cc6ee-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       16m
+datadir-mongo-sh-configsvr-2   Bound    pvc-3db2d3f5-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       15m
+datadir-mongo-sh-shard0-0      Bound    pvc-49b7cc3b-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       15m
+datadir-mongo-sh-shard0-1      Bound    pvc-5b781770-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       15m
+datadir-mongo-sh-shard0-2      Bound    pvc-6ba3263e-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       14m
+datadir-mongo-sh-shard1-0      Bound    pvc-75feb227-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       14m
+datadir-mongo-sh-shard1-1      Bound    pvc-89bb7bb3-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       13m
+datadir-mongo-sh-shard1-2      Bound    pvc-98c96ae4-6a5f-11e9-a871-080027a851ba   1Gi        RWO            standard       13m
+
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
+pvc-1db4185e-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-configsvr-0   standard                17m
+pvc-330cc6ee-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-configsvr-1   standard                16m
+pvc-3db2d3f5-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-configsvr-2   standard                16m
+pvc-49b7cc3b-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard0-0      standard                16m
+pvc-5b781770-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard0-1      standard                15m
+pvc-6ba3263e-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard0-2      standard                15m
+pvc-75feb227-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard1-0      standard                14m
+pvc-89bb7bb3-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard1-1      standard                14m
+pvc-98c96ae4-6a5f-11e9-a871-080027a851ba   1Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-shard1-2      standard                13m
+```
+
+Services created for MongoDB `mongo-sh`,
+
+```bash
+$ kubectl get svc -n demo
+NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+mongo-sh                  ClusterIP   10.108.188.201   <none>        27017/TCP   18m
+mongo-sh-configsvr-pods   ClusterIP   None             <none>        27017/TCP   18m
+mongo-sh-mongos-pods      ClusterIP   None             <none>        27017/TCP   18m
+mongo-sh-shard0-pods      ClusterIP   None             <none>        27017/TCP   18m
+mongo-sh-shard1-pods      ClusterIP   None             <none>        27017/TCP   18m
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. It has also defaulted some fields of the CRD object.
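+
+Each component's pods also carry a role label (you can see them in the pod anti-affinity rules of the object below), which is handy for inspecting one component at a time. For example:
+
+```bash
+# List only the mongos pods, or only the pods of shard0, using the role labels.
+kubectl get pods -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-mongos
+kubectl get pods -n demo -l mongodb.kubedb.com/node.shard=mongo-sh-shard0
+```
+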
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mongo-sh -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-sh","namespace":"demo"},"spec":{"shardTopology":{"configServer":{"replicas":3,"storage":{"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}},"mongos":{"replicas":2},"shard":{"replicas":3,"shards":2,"storage":{"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}}},"version":"4.4.26"}} + creationTimestamp: "2021-02-10T12:57:03Z" + finalizers: + - kubedb.com + generation: 3 + managedFields: + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: {} + f:kubectl.kubernetes.io/last-applied-configuration: {} + f:spec: + .: {} + f:shardTopology: + .: {} + f:configServer: + .: {} + f:replicas: {} + f:storage: + .: {} + f:resources: + .: {} + f:requests: + .: {} + f:storage: {} + f:storageClassName: {} + f:mongos: + .: {} + f:replicas: {} + f:shard: + .: {} + f:replicas: {} + f:shards: {} + f:storage: + .: {} + f:resources: + .: {} + f:requests: + .: {} + f:storage: {} + f:storageClassName: {} + f:version: {} + manager: kubectl-client-side-apply + operation: Update + time: "2021-02-10T12:57:03Z" + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: {} + f:spec: + f:authSecret: + .: {} + f:name: {} + f:keyFileSecret: + .: {} + f:name: {} + f:status: + .: {} + f:conditions: {} + f:observedGeneration: {} + f:phase: {} + manager: mg-operator + operation: Update + time: "2021-02-10T12:57:03Z" + name: mongo-sh + namespace: demo + resourceVersion: "152268" + uid: 8522c8c1-344b-4824-9061-47031b88f1fa +spec: + authSecret: + name: mongo-sh-auth + clusterAuthMode: keyFile + keyFileSecret: + name: mongo-sh-key + shardTopology: + configServer: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.config: mongo-sh-configsvr + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.config: mongo-sh-configsvr + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + 
successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh + replicas: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-mongos + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-mongos + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + lifecycle: + preStop: + exec: + command: + - bash + - -c + - 'mongo admin --username=$MONGO_INITDB_ROOT_USERNAME --password=$MONGO_INITDB_ROOT_PASSWORD + --quiet --eval "db.adminCommand({ shutdown: 1 })" || true' + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh + replicas: 2 + shard: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD 
--authenticationDatabase=admin
+            --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then
+            \n  exit 0\n  fi\n  exit 1"
+          failureThreshold: 3
+          periodSeconds: 10
+          successThreshold: 1
+          timeoutSeconds: 1
+        resources:
+          limits:
+            cpu: 500m
+            memory: 1Gi
+          requests:
+            cpu: 500m
+            memory: 1Gi
+        serviceAccountName: mongo-sh
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  sslMode: disabled
+  storageEngine: wiredTiger
+  storageType: Durable
+  terminationPolicy: Delete
+  version: 4.4.26
+status:
+  conditions:
+  - lastTransitionTime: "2021-02-10T12:57:03Z"
+    message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mongo-sh'
+    reason: DatabaseProvisioningStartedSuccessfully
+    status: "True"
+    type: ProvisioningStarted
+  - lastTransitionTime: "2021-02-10T13:09:44Z"
+    message: All desired replicas are ready.
+    reason: AllReplicasReady
+    status: "True"
+    type: ReplicaReady
+  - lastTransitionTime: "2021-02-10T12:59:33Z"
+    message: 'The MongoDB: demo/mongo-sh is accepting client requests.'
+    observedGeneration: 3
+    reason: DatabaseAcceptingConnectionRequest
+    status: "True"
+    type: AcceptingConnection
+  - lastTransitionTime: "2021-02-10T12:59:33Z"
+    message: 'The MongoDB: demo/mongo-sh is ready.'
+    observedGeneration: 3
+    reason: ReadinessCheckSucceeded
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2021-02-10T12:59:51Z"
+    message: 'The MongoDB: demo/mongo-sh is successfully provisioned.'
+    observedGeneration: 3
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 3
+  phase: Ready
+```
+
+Please note that the KubeDB operator has created a new Secret called `mongo-sh-auth` _(format: {mongodb-object-name}-auth)_ for storing the password for the `mongodb` superuser. This secret contains a `username` key which holds the _username_ for the MongoDB superuser and a `password` key which holds the _password_ for the MongoDB superuser.
+
+If you want to use a custom or existing secret, please specify it when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys, `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret).
+
+## Connection Information
+
+- Hostname/address: you can use any of these
+  - Service: `mongo-sh.demo`
+  - Pod IP: (`$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-mongos -o yaml | grep podIP`)
+- Port: `27017`
+- Username: Run the following command to get the _username_,
+
+  ```bash
+  $ kubectl get secrets -n demo mongo-sh-auth -o jsonpath='{.data.\username}' | base64 -d
+  root
+  ```
+
+- Password: Run the following command to get the _password_,
+
+  ```bash
+  $ kubectl get secrets -n demo mongo-sh-auth -o jsonpath='{.data.\password}' | base64 -d
+  7QiqLcuSCmZ8PU5a
+  ```
+
+Now, you can connect to this database through the [mongo shell](https://docs.mongodb.com/v4.2/mongo/).
+
+## Sharded Data
+
+In this tutorial, we will insert sharded and unsharded documents, and we will see whether the data is actually sharded across the cluster or not.
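+
+Throughout this section we connect by exec-ing into a mongos pod. If you would rather connect from your workstation, a port-forward of the `mongo-sh` Service works too; a minimal sketch, assuming a local mongo shell is installed:
+
+```bash
+# Forward local port 27017 to the mongos Service, then connect to it locally.
+kubectl port-forward -n demo svc/mongo-sh 27017:27017 &
+mongo admin --host localhost --port 27017 -u root -p 7QiqLcuSCmZ8PU5a
+```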
+ +```bash +$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-mongos +NAME READY STATUS RESTARTS AGE +mongo-sh-mongos-0 1/1 Running 0 49m +mongo-sh-mongos-1 1/1 Running 0 49m + +$ kubectl exec -it mongo-sh-mongos-0 -n demo bash + +mongodb@mongo-sh-mongos-0:/$ mongo admin -u root -p 7QiqLcuSCmZ8PU5a +MongoDB shell version v4.4.26 +connecting to: mongodb://127.0.0.1:27017/admin?gssapiServiceName=mongodb +Implicit session: session { "id" : UUID("8b7abf57-09e4-4e30-b4a0-a37ebf065e8f") } +MongoDB server version: 4.4.26 +Welcome to the MongoDB shell. +For interactive help, type "help". +For more comprehensive documentation, see + http://docs.mongodb.org/ +Questions? Try the support group + http://groups.google.com/group/mongodb-user +mongos> +``` + +To detect if the MongoDB instance that your client is connected to is mongos, use the isMaster command. When a client connects to a mongos, isMaster returns a document with a `msg` field that holds the string `isdbgrid`. + +```bash +mongos> rs.isMaster() +{ + "ismaster" : true, + "msg" : "isdbgrid", + "maxBsonObjectSize" : 16777216, + "maxMessageSizeBytes" : 48000000, + "maxWriteBatchSize" : 100000, + "localTime" : ISODate("2021-02-10T13:37:24.140Z"), + "logicalSessionTimeoutMinutes" : 30, + "connectionId" : 803, + "maxWireVersion" : 8, + "minWireVersion" : 0, + "ok" : 1, + "operationTime" : Timestamp(1612964237, 2), + "$clusterTime" : { + "clusterTime" : Timestamp(1612964237, 2), + "signature" : { + "hash" : BinData(0,"5ugX3jIC+sVDtYjxGWP5SCI7QSE="), + "keyId" : NumberLong("6927618399740624913") + } + } +} +``` + +`mongo-sh` Shard status, + +```bash +mongos> sh.status() +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("6023d83b8df2b687ecfade84") + } + shards: + { "_id" : "shard0", "host" : "shard0/mongo-sh-shard0-0.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-1.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-2.mongo-sh-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mongo-sh-shard1-0.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-1.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-2.mongo-sh-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 0 + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +Shard collection `test.testcoll` and insert document. See [`sh.shardCollection(namespace, key, unique, options)`](https://docs.mongodb.com/manual/reference/method/sh.shardCollection/#sh.shardCollection) for details about `shardCollection` command. 
+
+```bash
+mongos> sh.enableSharding("test");
+{
+  "ok" : 1,
+  "operationTime" : Timestamp(1612964293, 5),
+  "$clusterTime" : {
+    "clusterTime" : Timestamp(1612964293, 5),
+    "signature" : {
+      "hash" : BinData(0,"DJbXhWUbiTQCWvlWgTTW/vlH3LE="),
+      "keyId" : NumberLong("6927618399740624913")
+    }
+  }
+}
+
+mongos> sh.shardCollection("test.testcoll", {"myfield": 1});
+{
+  "collectionsharded" : "test.testcoll",
+  "collectionUUID" : UUID("f2617eb1-8f61-47dd-af58-73f5fe4ea2c0"),
+  "ok" : 1,
+  "operationTime" : Timestamp(1612964314, 14),
+  "$clusterTime" : {
+    "clusterTime" : Timestamp(1612964314, 14),
+    "signature" : {
+      "hash" : BinData(0,"CZzOATrFeADxMkGTWbX85Olkc2Q="),
+      "keyId" : NumberLong("6927618399740624913")
+    }
+  }
+}
+
+mongos> use test;
+switched to db test
+
+mongos> db.testcoll.insert({"myfield": "a", "otherfield": "b"});
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.testcoll.insert({"myfield": "c", "otherfield": "d", "kube" : "db" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.testcoll.find();
+{ "_id" : ObjectId("5cc6d6f656a9ddd30be2c12a"), "myfield" : "a", "otherfield" : "b" }
+{ "_id" : ObjectId("5cc6d71e56a9ddd30be2c12b"), "myfield" : "c", "otherfield" : "d", "kube" : "db" }
+```
+
+Run [`sh.status()`](https://docs.mongodb.com/manual/reference/method/sh.status/) to see whether the `test` database has sharding enabled, and the primary shard for the `test` database.
+
+The Sharded Collection section `sh.status.databases.<collection>` provides information on the sharding details for sharded collection(s) (e.g. `test.testcoll`). For each sharded collection, the section displays the shard key, the number of chunks per shard(s), the distribution of documents across chunks, and the tag information, if any, for shard key range(s).
+
+```bash
+mongos> sh.status();
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("6023d83b8df2b687ecfade84")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-shard0-0.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-1.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-2.mongo-sh-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-shard1-0.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-1.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-2.mongo-sh-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled:  yes
+        Currently running:  no
+        Failed balancer rounds in last 5 attempts:  0
+        Migration Results for the last 24 hours:
+                No recent migrations
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+        {  "_id" : "test",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("2a39d8c7-c731-46af-84c3-bf04ba10ac82"),  "lastMod" : 1 } }
+                test.testcoll
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Now create another database where partitioning is not applied and see how the data is stored.
+
+```bash
+mongos> use demo
+switched to db demo
+
+mongos> db.testcoll2.insert({"myfield": "ccc", "otherfield": "d", "kube" : "db" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.testcoll2.insert({"myfield": "aaa", "otherfield": "d", "kube" : "db" });
+WriteResult({ "nInserted" : 1 })
+
+
+mongos> db.testcoll2.find()
+{ "_id" : ObjectId("5cc6dc831b6d9b3cddc947ec"), "myfield" : "ccc", "otherfield" : "d", "kube" : "db" }
+{ "_id" : ObjectId("5cc6dce71b6d9b3cddc947ed"), "myfield" : "aaa", "otherfield" : "d", "kube" : "db" }
+```
+
+Now, run `sh.status()` again,
+
+```
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("6023d83b8df2b687ecfade84")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-shard0-0.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-1.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-2.mongo-sh-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-shard1-0.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-1.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-2.mongo-sh-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled:  yes
+        Currently running:  no
+        Failed balancer rounds in last 5 attempts:  0
+        Migration Results for the last 24 hours:
+                No recent migrations
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+        {  "_id" : "demo",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("93d077e0-2da0-4b68-a4d4-d23394b22ab2"),  "lastMod" : 1 } }
+        {  "_id" : "test",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("2a39d8c7-c731-46af-84c3-bf04ba10ac82"),  "lastMod" : 1 } }
+                test.testcoll
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Here, the `demo` database is not partitioned, and all collections under the `demo` database are stored in its primary shard, which is `shard1`.
+
+## Halt Database
+
+When the [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt` and you delete the MongoDB object, the KubeDB operator will delete the StatefulSet and its pods but will leave the PVCs, secrets and database backups (snapshots) intact. Learn the details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the MongoDB object and halt the database to resume it again later. If you halt the database, KubeDB will delete the StatefulSets and Services but will keep the MongoDB object, PVCs, secrets and backups (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database. You can use the below command to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mongo-sh patched
+```
+
+Then, you have to set `spec.halted` to true to put the database in a `Halted` state. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mongo-sh patched
+```
+
+After that, KubeDB will delete the StatefulSets and Services, and you can see the database Phase as `Halted`.
+
+Now, you can run the following command to get all mongodb resources in the demo namespace,
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                          VERSION   STATUS   AGE
+mongodb.kubedb.com/mongo-sh   4.4.26    Halted   74m
+
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-x2zcl   kubernetes.io/service-account-token   3      32h
+secret/mongo-sh-auth         Opaque                                2      75m
+secret/mongo-sh-key          Opaque                                1      75m
+
+NAME                                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mongo-sh-configsvr-0   Bound    pvc-9d1b3c01-fdce-45ab-b6f6-fc7bf9462e89   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-configsvr-1   Bound    pvc-8e14fcea-ec15-4614-9ec5-21fdf3eb477c   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-configsvr-2   Bound    pvc-b65665ce-f35b-4c4f-a7ac-410ad2dfa82d   1Gi        RWO            standard       73m
+persistentvolumeclaim/datadir-mongo-sh-shard0-0      Bound    pvc-8fbfdd01-1ed1-4e3b-9a2a-0aa75911cbf0   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-shard0-1      Bound    pvc-71d2b22b-2168-46d3-927c-d3ac92f22ebb   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-shard0-2      Bound    pvc-82f83359-6e31-43e4-88b3-2555cb442ca0   1Gi        RWO            standard       73m
+persistentvolumeclaim/datadir-mongo-sh-shard1-0      Bound    pvc-07ef7cd3-99b2-47de-b1bb-ef6c5606d92e   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-shard1-1      Bound    pvc-ffa4b9a7-2492-4f18-be90-7950004e9efd   1Gi        RWO            standard       74m
+persistentvolumeclaim/datadir-mongo-sh-shard1-2      Bound    pvc-4e75b90e-dac5-4431-a50e-2bc8dfcf481b   1Gi        RWO            standard       73m
+```
+
+From the above output, you can see that the MongoDB object, PVCs, and Secrets are still there.
+
+## Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` to false. You can use the below command.
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mongo-sh patched
+```
+
+When the database is resumed successfully, you can see the database Status is set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME       VERSION   STATUS   AGE
+mongo-sh   4.4.26    Ready    6m27s
+```
+
+Now, if you again exec into a `pod` and look for the previous data, you will see that all the data persists.
+ +```bash +$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-mongos +NAME READY STATUS RESTARTS AGE +mongo-sh-mongos-0 1/1 Running 0 3m52s +mongo-sh-mongos-1 1/1 Running 0 3m52s + + +$ kubectl exec -it mongo-sh-mongos-0 -n demo bash + +mongodb@mongo-sh-mongos-0:/$ mongo admin -u root -p 7QiqLcuSCmZ8PU5a + +mongos> use test; +switched to db test + +mongos> db.testcoll.find(); +{ "_id" : ObjectId("5cc6d6f656a9ddd30be2c12a"), "myfield" : "a", "otherfield" : "b" } +{ "_id" : ObjectId("5cc6d71e56a9ddd30be2c12b"), "myfield" : "c", "otherfield" : "d", "kube" : "db" } + +mongos> sh.status() +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("6023d83b8df2b687ecfade84") + } + shards: + { "_id" : "shard0", "host" : "shard0/mongo-sh-shard0-0.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-1.mongo-sh-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-shard0-2.mongo-sh-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mongo-sh-shard1-0.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-1.mongo-sh-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-shard1-2.mongo-sh-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 1 + Last reported error: Could not find host matching read preference { mode: "primary" } for set shard0 + Time of Reported error: Wed Feb 10 2021 14:16:04 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) + { "_id" : "demo", "primary" : "shard1", "partitioned" : false, "version" : { "uuid" : UUID("93d077e0-2da0-4b68-a4d4-d23394b22ab2"), "lastMod" : 1 } } + { "_id" : "test", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("2a39d8c7-c731-46af-84c3-bf04ba10ac82"), "lastMod" : 1 } } + test.testcoll + shard key: { "myfield" : 1 } + unique: false + balancing: true + chunks: + shard1 1 + { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/mongo-sh -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/mongo-sh + +kubectl delete ns demo +``` + +## Next Steps + +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. 
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/sharding_concept.md b/content/docs/v2024.1.31/guides/mongodb/clustering/sharding_concept.md new file mode 100644 index 0000000000..582f6cf986 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/clustering/sharding_concept.md @@ -0,0 +1,135 @@ +--- +title: MongoDB Sharding Concept +menu: + docs_v2024.1.31: + identifier: mg-clustering-sharding-concept + name: Sharding Concept + parent: mg-clustering-mongodb + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Sharding + +Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations. This section introduces sharding in MongoDB as well as the components and architecture of sharding. + +## Sharded Cluster + +A MongoDB sharded cluster consists of the following components: + +- **_shard:_** Each shard contains a subset of the sharded data. As of MongoDB 3.6, shards must be deployed as a replica set. +- **_mongos:_** The mongos acts as a query router, providing an interface between client applications and the sharded cluster. +- **_config servers:_** Config servers store metadata and configuration settings for the cluster. As of MongoDB 3.4, config servers must be deployed as a replica set (CSRS). + +
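+
+With KubeDB, each of these components is provisioned as its own workload. As a quick sanity check, you can list them with a sketch like the following (assuming a sharded cluster named `mongo-sh` in the `demo` namespace, as in the deployment guide; the names are illustrative):
+
+```bash
+# Shards and config servers run as StatefulSets; mongos pods run alongside them
+$ kubectl get sts,pods -n demo -l app.kubernetes.io/instance=mongo-sh
+```
+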

+*Figure: sharded-cluster (image from the MongoDB documentation)*
+

+
+### Shards
+
+A shard contains a subset of data for a sharded cluster. Together, the shards of a cluster hold the entire data set for the cluster.
+
+As of MongoDB 3.6, shards must be deployed as a replica set to provide redundancy and high availability.
+
+Performing queries on a single shard only returns a subset of data. Connect to the mongos to perform cluster level operations, including read or write operations.
+
+#### Primary Shard
+
+Each database in a sharded cluster has a primary shard that holds all the un-sharded collections for that database. Each database has its own primary shard. The primary shard has no relation to the primary in a replica set.
+
+The mongos selects the primary shard when creating a new database by picking the shard in the cluster that has the least amount of data. `mongos` uses the `totalSize` field returned by the `listDatabases` command as a part of the selection criteria.
+
+A primary shard contains non-sharded collections as well as chunks of documents from sharded collections.
+
+#### Shard Status
+
+Use the `sh.status()` method in the mongo shell to see an overview of the cluster. This report includes which shard is primary for the database and the chunk distribution across the shards. See the `sh.status()` method documentation for more details.
+
+Read more about shards in the [official documentation](https://docs.mongodb.com/manual/core/sharded-cluster-shards/).
+
+#### Shard Keys
+
+To distribute the documents in a collection, MongoDB partitions the collection using the shard key. The shard key consists of an immutable field or fields that exist in every document in the target collection.
+
+You choose the shard key when sharding a collection. The choice of shard key cannot be changed after sharding. A sharded collection can have only one shard key. See [Shard Key Specification](https://docs.mongodb.com/manual/core/sharding-shard-key/#sharding-shard-key-creation).
+
+To shard a non-empty collection, the collection must have an index that starts with the shard key. For empty collections, MongoDB creates the index if the collection does not already have an appropriate index for the specified shard key.
+
+The choice of shard key affects the performance, efficiency, and scalability of a sharded cluster. A cluster with the best possible hardware and infrastructure can be bottlenecked by the choice of shard key. The choice of shard key and its backing index can also affect the sharding strategy that your cluster can use.
+
+See the [shard key](https://docs.mongodb.com/manual/core/sharding-shard-key/) documentation for more information.
+
+### Config Servers
+
+Config servers store the metadata for a sharded cluster. The metadata reflects state and organization for all data and components within the sharded cluster. The metadata includes the list of chunks on every shard and the ranges that define the chunks.
+
+The mongos instances cache this data and use it to route read and write operations to the correct shards. mongos updates the cache when there are metadata changes for the cluster, such as Chunk Splits or adding a shard. Shards also read chunk metadata from the config servers.
+
+The config servers also store authentication configuration information such as Role-Based Access Control or internal authentication settings for the cluster.
+
+MongoDB also uses the config servers to manage distributed locks.
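+
+As a quick illustration, you can inspect this metadata yourself from a mongos by querying the `config` database (a sketch; the exact documents will vary by cluster):
+
+```
+mongos> use config
+switched to db config
+mongos> db.shards.find()                      // the shards registered with the cluster
+mongos> db.chunks.find().limit(1).pretty()    // chunk ranges tracked by the config servers
+```
+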
+ +Read more about config servers from [official document](https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/) + +### Mongos + +MongoDB mongos instances route queries and write operations to shards in a sharded cluster. mongos provide the only interface to a sharded cluster from the perspective of applications. Applications never connect or communicate directly with the shards. + +The mongos tracks what data is on which shard by caching the metadata from the config servers. The mongos uses the metadata to route operations from applications and clients to the mongod instances. A mongos has no persistent state and consumes minimal system resources. + +#### Confirm Connection to mongos Instances + +To detect if the MongoDB instance that your client is connected to is mongos, use the isMaster command. When a client connects to a mongos, isMaster returns a document with a msg field that holds the string isdbgrid. For example: + +```json +{ + "ismaster" : true, + "msg" : "isdbgrid", + "maxBsonObjectSize" : 16777216, + "ok" : 1 +} +``` + +If the application is instead connected to a mongod, the returned document does not include the isdbgrid string. + +## Production Configuration + +In a production cluster, ensure that data is redundant and that your systems are highly available. Consider the following for a production sharded cluster deployment: + +- Deploy Config Servers as a **_3 member replica set_** +- Deploy each Shard as a **_3 member replica set_** +- Deploy **_one or more_** mongos routers + +## Connecting to a Sharded Cluster + +You must connect to a mongos router to interact with any collection in the sharded cluster. This includes sharded and unsharded collections. Clients should never connect to a single shard in order to perform read or write operations. + +
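+
+For example, with a KubeDB-managed sharded cluster such as the `mongo-sh` one from the sharding guide, an application would point its connection string at the mongos Service only (a sketch; the service name and credentials are illustrative):
+
+```bash
+# Clients talk to the mongos service, never to shard pods directly
+mongodb://root:<password>@mongo-sh.demo.svc.cluster.local:27017/admin
+```
+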

+*Figure: lifecycle (image from the MongoDB documentation)*
+

+ +You can connect to a mongos the same way you connect to a mongod, such as via the mongo shell or a MongoDB driver. + +## Next Steps + +- [Deploy MongoDB Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding) using KubeDB. +- Detail concepts of [MongoDB Sharding](https://docs.mongodb.com/manual/sharding/) +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). + +NB: The images in this page are taken from [MongoDB website](https://docs.mongodb.com/manual/sharding/). diff --git a/content/docs/v2024.1.31/guides/mongodb/clustering/standalone.md b/content/docs/v2024.1.31/guides/mongodb/clustering/standalone.md new file mode 100644 index 0000000000..9e30c6bd07 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/clustering/standalone.md @@ -0,0 +1,535 @@ +--- +title: MongoDB Standalone Guide +menu: + docs_v2024.1.31: + identifier: mg-clustering-standalone + name: Standalone Guide + parent: mg-clustering-mongodb + weight: 5 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - MongoDB Standalone + +This tutorial will show you how to use KubeDB to run a MongoDB Standalone. + +## Before You Begin + +Before proceeding: + +- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy MongoDB Standalone + +To deploy a MongoDB Standalone, user must have `spec.replicaSet` & `spec.shardTopology` options in `Mongodb` CRD to be set to nil. Arbiter & Hidden-node are also not supported for standalone mongoDB. + +The following is an example of a `Mongodb` object which creates MongoDB Standalone database. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-alone + namespace: demo +spec: + version: "4.4.26" + podTemplate: + spec: + resources: + requests: + cpu: "300m" + memory: "400Mi" + storage: + resources: + requests: + storage: 500Mi + storageClassName: standard + terminationPolicy: WipeOut +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/clustering/standalone.yaml +mongodb.kubedb.com/mg-alone created +``` + +Here, + +- `spec.version` is the version to be used for MongoDB. 
+- `spec.podTemplate` specifies the resources and other specifications of the pod. Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate) to know the other subfields of the podTemplate. +- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by KubeDB operator to run database pods. So, each members will have a pod of this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests. + +KubeDB operator watches for `MongoDB` objects using Kubernetes api. When a `MongoDB` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching MongoDB object name. KubeDB operator will also create a governing service for StatefulSets with the name `-pods`. + +```bash +$ kubectl dba describe mg -n demo mg-alone +Name: mg-alone +Namespace: demo +CreationTimestamp: Fri, 04 Nov 2022 10:30:07 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mg-alone","namespace":"demo"},"spec":{"podTemplate":{"spec":{... +Replicas: 1 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 500Mi +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: mg-alone + CreationTimestamp: Fri, 04 Nov 2022 10:30:07 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-alone + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Replicas: 824638445048 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mg-alone + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-alone + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.47.157 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.25:27017 + +Service: + Name: mg-alone-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-alone + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.25:27017 + +Auth Secret: + Name: mg-alone-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-alone + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mg-alone","namespace":"demo"},"spec":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"300m","memory":"400Mi"}}}},"storage":{"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"4.4.26"}} + + Creation Timestamp: 2022-11-04T04:30:14Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mg-alone + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mg-alone + Namespace: demo + Spec: + Client Config: + Service: + Name: mg-alone + Port: 27017 + Scheme: mongodb + Parameters: 
+ API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Stash: + Addon: + Backup Task: + Name: mongodb-backup-4.4.6 + Restore Task: + Name: mongodb-restore-4.4.6 + Secret: + Name: mg-alone-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PhaseChanged 21s MongoDB operator Phase changed from to Provisioning. + Normal Successful 21s MongoDB operator Successfully created governing service + Normal Successful 21s MongoDB operator Successfully created Primary Service + Normal Successful 14s MongoDB operator Successfully patched StatefulSet demo/mg-alone + Normal Successful 14s MongoDB operator Successfully patched MongoDB + Normal Successful 14s MongoDB operator Successfully created appbinding + Normal Successful 14s MongoDB operator Successfully patched MongoDB + Normal Successful 14s MongoDB operator Successfully patched StatefulSet demo/mg-alone + Normal Successful 4s MongoDB operator Successfully patched StatefulSet demo/mg-alone + Normal Successful 4s MongoDB operator Successfully patched MongoDB + Normal PhaseChanged 4s MongoDB operator Phase changed from Provisioning to Ready. + Normal Successful 4s MongoDB operator Successfully patched StatefulSet demo/mg-alone + Normal Successful 4s MongoDB operator Successfully patched MongoDB + + + +$ kubectl get sts,svc,pvc,pv -n demo +NAME READY AGE +statefulset.apps/mg-alone 1/1 65s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/mg-alone ClusterIP 10.96.47.157 27017/TCP 65s +service/mg-alone-pods ClusterIP None 27017/TCP 65s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/datadir-mg-alone-0 Bound pvc-78328965-1210-4f7a-a508-2749b328a5ac 500Mi RWO standard 65s + +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +persistentvolume/pvc-78328965-1210-4f7a-a508-2749b328a5ac 500Mi RWO Delete Bound demo/datadir-mg-alone-0 standard 62s + +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. 
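+
+If you prefer to block until the phase becomes `Ready`, you can use a convenience sketch like the following (note: `kubectl wait` with `--for=jsonpath` requires kubectl v1.23 or newer):
+
+```bash
+# Wait up to 5 minutes for the MongoDB object to report phase=Ready
+$ kubectl wait mg/mg-alone -n demo --for=jsonpath='{.status.phase}'=Ready --timeout=5m
+```
+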
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mg-alone -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mg-alone","namespace":"demo"},"spec":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"300m","memory":"400Mi"}}}},"storage":{"resources":{"requests":{"storage":"500Mi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"4.4.26"}} + creationTimestamp: "2022-11-04T04:30:07Z" + finalizers: + - kubedb.com + generation: 2 + name: mg-alone + namespace: demo + resourceVersion: "914996" + uid: 55ece68c-8df6-4055-b463-1fcb119f0fb1 +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: mg-alone-auth + autoOps: {} + coordinator: + resources: {} + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mg-alone + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mg-alone + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + resources: + limits: + memory: 400Mi + requests: + cpu: 300m + memory: 400Mi + serviceAccountName: mg-alone + replicas: 1 + sslMode: disabled + storage: + resources: + requests: + storage: 500Mi + storageClassName: standard + storageEngine: wiredTiger + storageType: Durable + terminationPolicy: WipeOut + version: 4.4.26 +status: + conditions: + - lastTransitionTime: "2022-11-04T04:30:07Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mg-alone' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-11-04T04:30:14Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-11-04T04:30:24Z" + message: 'The MongoDB: demo/mg-alone is accepting client requests.' + observedGeneration: 2 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-11-04T04:30:24Z" + message: 'The MongoDB: demo/mg-alone is ready.' 
+    observedGeneration: 2
+    reason: ReadinessCheckSucceeded
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2022-11-04T04:30:24Z"
+    message: 'The MongoDB: demo/mg-alone is successfully provisioned.'
+    observedGeneration: 2
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 2
+  phase: Ready
+
+```
+
+Please note that the KubeDB operator has created a new Secret called `mg-alone-auth` *(format: {mongodb-object-name}-auth)* for storing the password for the `mongodb` superuser. This secret contains a `username` key which contains the *username* for the MongoDB superuser and a `password` key which contains the *password* for the MongoDB superuser.
+
+If you want to use a custom or existing secret, specify it when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret).
+
+## Data Insertion
+
+Now, you can connect to this database through the [mongo shell](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we will insert some documents and later verify that they survive a halt and resume cycle.
+
+First, exec into the database pod and insert some data.
+
+```bash
+$ kubectl get secrets -n demo mg-alone-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mg-alone-auth -o jsonpath='{.data.\password}' | base64 -d
+5O4R2ze2bWXcWsdP
+
+$ kubectl exec -it mg-alone-0 -n demo bash
+
+mongodb@mg-alone-0:/$ mongo admin -u root -p 5O4R2ze2bWXcWsdP
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+
+> rs.isMaster()
+{
+  "ismaster" : true,
+  "maxBsonObjectSize" : 16777216,
+  "maxMessageSizeBytes" : 48000000,
+  "maxWriteBatchSize" : 100000,
+  "localTime" : ISODate("2022-11-04T04:45:33.151Z"),
+  "logicalSessionTimeoutMinutes" : 30,
+  "connectionId" : 447,
+  "minWireVersion" : 0,
+  "maxWireVersion" : 8,
+  "readOnly" : false,
+  "ok" : 1
+}
+
+> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+
+
+> use admin
+switched to db admin
+> show users
+{
+  "_id" : "admin.root",
+  "userId" : UUID("bd711827-8d7e-4c7c-b9d7-ddb27869b9fb"),
+  "user" : "root",
+  "db" : "admin",
+  "roles" : [
+    {
+      "role" : "root",
+      "db" : "admin"
+    }
+  ],
+  "mechanisms" : [
+    "SCRAM-SHA-1",
+    "SCRAM-SHA-256"
+  ]
+}
+
+
+
+> use newdb
+switched to db newdb
+> db.movie.insert({"name":"batman"});
+WriteResult({ "nInserted" : 1 })
+> db.movie.find().pretty()
+{ "_id" : ObjectId("6364996b3bdf351ff67cc7a8"), "name" : "batman" }
+
+> exit
+bye
+```
+
+## Data Availability
+
+As this is a standalone database without multiple replicas, it offers no redundancy or high availability for the data. All the data is stored in one place, and losing it results in data loss.
+
+## Halt Database
+
+When [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt` and you delete the mongodb object, the KubeDB operator deletes the StatefulSet and its pods but leaves the PVCs, secrets and database backups (snapshots) intact. Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the mongodb object and halt the database to resume it again later.
If you halt the database, KubeDB will delete the StatefulSets and Services but will keep the mongodb object, PVCs, secrets and backups (snapshots).
+
+To halt the database, first you have to set the `terminationPolicy` to `Halt` on the existing database. You can use the command below to set the `terminationPolicy` to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mg-alone -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mg-alone patched
+```
+
+Then, set `spec.halted` to `true` to put the database in a `Halted` state. You can use the command below.
+
+```bash
+$ kubectl patch -n demo mg/mg-alone -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mg-alone patched
+```
+
+After that, KubeDB will delete the StatefulSets and Services, and you will see the database phase as `Halted`.
+
+Now, run the following command to get all mongodb resources in the `demo` namespace:
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                          VERSION   STATUS   AGE
+mongodb.kubedb.com/mg-alone   4.4.26    Halted   2m4s
+
+NAME                   TYPE                       DATA   AGE
+secret/mg-alone-auth   kubernetes.io/basic-auth   2      2m4s
+secret/mongo-ca        kubernetes.io/tls          2      15d
+
+NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mg-alone-0   Bound    pvc-a1a873a6-4f6d-42eb-a38f-83d36fc44e1a   500Mi      RWO            standard       2m4s
+
+```
+
+
+## Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, set `spec.halted` to `false`. You can use the command below.
+
+```bash
+$ kubectl patch -n demo mg/mg-alone -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mg-alone patched
+```
+
+When the database is resumed successfully, you will see the database status set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME       VERSION   STATUS   AGE
+mg-alone   4.4.26    Ready    6m27s
+```
+
+Now, if you exec into the pod again and look for the previous data, you will see that all the data persists.
+
+```bash
+$ kubectl exec -it mg-alone-0 -n demo bash
+
+mongodb@mg-alone-0:/$ mongo admin -u root -p 5O4R2ze2bWXcWsdP
+
+> use newdb
+switched to db newdb
+> show collections
+movie
+> db.movie.find()
+{ "_id" : ObjectId("6364af93b1ae8e7a8467058a"), "name" : "batman" }
+
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mg/mg-alone -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mg/mg-alone
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Deploy MongoDB [ReplicaSet](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept)
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/_index.md b/content/docs/v2024.1.31/guides/mongodb/concepts/_index.md new file mode 100755 index 0000000000..46ba235ef4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Concepts +menu: + docs_v2024.1.31: + identifier: mg-concepts-mongodb + name: Concepts + parent: mg-mongodb-guides + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/appbinding.md b/content/docs/v2024.1.31/guides/mongodb/concepts/appbinding.md new file mode 100644 index 0000000000..c3b05a3382 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/concepts/appbinding.md @@ -0,0 +1,184 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: mg-appbinding-concepts + name: AppBinding + parent: mg-concepts-mongodb + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 
+
+An `AppBinding` object created by KubeDB for a MongoDB database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"sample-mgo-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"4.4.26"}}
+  creationTimestamp: "2022-10-26T04:42:05Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mgo-rs
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mongodbs.kubedb.com
+  name: sample-mgo-rs
+  namespace: demo
+  ownerReferences:
+  - apiVersion: kubedb.com/v1alpha2
+    blockOwnerDeletion: true
+    controller: true
+    kind: MongoDB
+    name: sample-mgo-rs
+    uid: 658bf7d1-3772-4c89-84db-5ac74a6c5851
+  resourceVersion: "577375"
+  uid: b0cd9885-53d9-4a2b-93a9-cf9fa90594fd
+spec:
+  appRef:
+    apiGroup: kubedb.com
+    kind: MongoDB
+    name: sample-mgo-rs
+    namespace: demo
+  clientConfig:
+    service:
+      name: sample-mgo-rs
+      port: 27017
+      scheme: mongodb
+  parameters:
+    apiVersion: config.kubedb.com/v1alpha1
+    kind: MongoConfiguration
+    replicaSets:
+      host-0: rs0/sample-mgo-rs-0.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-1.sample-mgo-rs-pods.demo.svc:27017,sample-mgo-rs-2.sample-mgo-rs-pods.demo.svc:27017
+    stash:
+      addon:
+        backupTask:
+          name: mongodb-backup-4.4.6
+        restoreTask:
+          name: mongodb-restore-4.4.6
+  secret:
+    name: sample-mgo-rs-auth
+  type: kubedb.com/mongodb
+  version: 4.4.26
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the format `<app group>/<app resource type>`. The above AppBinding is pointing to a `mongodb` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+|       Variable        | Usage                                                                                                                             |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                             |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `mongodb`).                             |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/mongodb`).   |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+|         Key         | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+|    Key     | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MongoDB :
+
+|    Key     | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+Elasticsearch:
+
+|       Key        | Usage                   |
+| ---------------- | ----------------------- |
+| `ADMIN_USERNAME` | Admin username          |
+| `ADMIN_PASSWORD` | Password for admin user |
+
+
+#### spec.appRef
+`appRef` refers to the underlying application. It has four fields: `apiGroup`, `kind`, `name` and `namespace`.
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+
+> Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/autoscaler.md b/content/docs/v2024.1.31/guides/mongodb/concepts/autoscaler.md new file mode 100644 index 0000000000..8a9a10f4bb --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/concepts/autoscaler.md @@ -0,0 +1,229 @@ +--- +title: MongoDBAutoscaler CRD +menu: + docs_v2024.1.31: + identifier: mg-autoscaler-concepts + name: MongoDBAutoscaler + parent: mg-concepts-mongodb + weight: 26 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDBAutoscaler + +## What is MongoDBAutoscaler + +`MongoDBAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [MongoDB](https://www.mongodb.com/) compute resources and storage of database components in a Kubernetes native way. + +## MongoDBAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `MongoDBAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `MongoDBAutoscaler` CROs for autoscaling different components of database is given below: + +**Sample `MongoDBAutoscaler` for standalone database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as + namespace: demo +spec: + databaseRef: + name: mg-standalone + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + standalone: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + standalone: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +**Sample `MongoDBAutoscaler` for replicaset database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 200m + memory: 300Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + replicaSet: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +**Sample `MongoDBAutoscaler` for sharded database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: MongoDBAutoscaler +metadata: + name: mg-as-sh + namespace: demo +spec: + databaseRef: + name: mg-sh + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + shard: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + configServer: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+    mongos:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 250m
+        memory: 350Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  storage:
+    shard:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+    configServer:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here, we are going to describe the various sections of a `MongoDBAutoscaler` crd.
+
+A `MongoDBAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) object.
+
+### spec.opsRequestOptions
+These are the options to pass to the internally created OpsRequest CRO. `opsRequestOptions` has three fields. They have been described in detail [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria).
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-fields:
+
+- `spec.compute.standalone` indicates the desired compute autoscaling configuration for a standalone MongoDB database.
+- `spec.compute.replicaSet` indicates the desired compute autoscaling configuration for the replicaSet of a MongoDB database.
+- `spec.compute.configServer` indicates the desired compute autoscaling configuration for the config servers of a sharded MongoDB database.
+- `spec.compute.mongos` indicates the desired compute autoscaling configuration for the mongos nodes of a sharded MongoDB database.
+- `spec.compute.shard` indicates the desired compute autoscaling configuration for the shard nodes of a sharded MongoDB database.
+- `spec.compute.arbiter` indicates the desired compute autoscaling configuration for the arbiter node.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference, in percentage, between the recommended value and the current value. If the difference percentage is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum lifetime that at least one of the pods must have before autoscaling is triggered.
+
+There are two more fields, which are only applicable to the Percona-variant in-memory databases:
+- `inMemoryStorage.UsageThresholdPercentage` If the db uses more than `usageThresholdPercentage` of the total memory, memoryStorage should be increased.
+- `inMemoryStorage.ScalingFactorPercentage` If the db uses more than `usageThresholdPercentage` of the total memory, memoryStorage should be increased by this given scaling percentage.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:
+
+- `spec.storage.standalone` indicates the desired storage autoscaling configuration for a standalone MongoDB database.
+- `spec.storage.replicaSet` indicates the desired storage autoscaling configuration for the replicaSet of a MongoDB database.
+- `spec.storage.configServer` indicates the desired storage autoscaling configuration for the config servers of a sharded MongoDB database.
+- `spec.storage.shard` indicates the desired storage autoscaling configuration for the shard nodes of a sharded MongoDB database.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` indicates the volume expansion mode.
diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/catalog.md b/content/docs/v2024.1.31/guides/mongodb/concepts/catalog.md
new file mode 100644
index 0000000000..404d6a5ae6
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/concepts/catalog.md
@@ -0,0 +1,127 @@
+---
+title: MongoDBVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: mg-catalog-concepts
+    name: MongoDBVersion
+    parent: mg-concepts-mongodb
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDBVersion
+
+## What is MongoDBVersion
+
+`MongoDBVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [MongoDB](https://www.mongodb.com/) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `MongoDBVersion` custom resource will be created automatically for every supported MongoDB version. You have to specify the name of the `MongoDBVersion` crd in the `spec.version` field of the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) crd. Then, KubeDB will use the docker images specified in the `MongoDBVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows users to use a custom image for the database.
+
+## MongoDBVersion Spec
+
+As with all other Kubernetes objects, a MongoDBVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MongoDBVersion
+metadata:
+  name: "4.4.26"
+  labels:
+    app: kubedb
+spec:
+  db:
+    image: mongo:4.4.26
+  distribution: Official
+  exporter:
+    image: kubedb/mongodb_exporter:v0.32.0
+  initContainer:
+    image: kubedb/mongodb-init:4.2-v7
+  podSecurityPolicies:
+    databasePolicyName: mongodb-db
+  replicationModeDetector:
+    image: kubedb/replication-mode-detector:v0.16.0
+  stash:
+    addon:
+      backupTask:
+        name: mongodb-backup-4.4.6
+      restoreTask:
+        name: mongodb-restore-4.4.6
+  updateConstraints:
+    allowlist:
+    - '>= 4.4.0, < 5.0.0'
+  version: 4.4.26
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `MongoDBVersion` crd. You have to specify this name in the `spec.version` field of the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) crd.
+
+We follow this convention for naming MongoDBVersion crd:
+
+- Name format: `{Original MongoDB image version}-{modification tag}`
+
+We modify the original MongoDB docker image to support MongoDB clustering and re-tag the image with a v1, v2, etc. modification tag. An image with a higher modification tag will have more features than the images with lower modification tags. Hence, it is recommended to use the MongoDBVersion crd with the highest modification tag to enjoy the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the MongoDB database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator. For example, we have modified the `kubedb/mongo:3.6` docker image to support MongoDB clustering and re-tagged it as `kubedb/mongo:3.6-v1`. So, we have marked `kubedb/mongo:3.6` as deprecated for KubeDB `0.9.0-rc.0`.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which the KubeDB operator will use in the StatefulSet it creates for the expected MongoDB database.
+
+### spec.initContainer.image
+`spec.initContainer.image` is a required field that specifies the image for the init container.
+
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
+
+### spec.stash
+This holds the Backup & Restore task definitions, where a `TaskRef` has a `Name` & `Params` section. Params specifies a list of parameters to pass to the task.
+
+### spec.updateConstraints
+updateConstraints specifies the constraints that need to be considered during version update. Here `allowList` contains the versions that are allowed for updating from the current version.
+An empty AllowList indicates that all versions are accepted except those in the denyList.
+On the other hand, `DenyList` contains all the rejected versions for the update request. An empty list indicates no version is rejected.
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you use custom pod security policies, pass their names to the KubeDB installer through the `additionalPodSecurityPolicies` values, as shown in the command below.
+ +```bash +helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \ + --namespace kubedb --create-namespace \ + --set additionalPodSecurityPolicies[0]=custom-db-policy \ + --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \ + --set-file global.license=/path/to/the/license.txt \ + --wait --burst-limit=10000 --debug +``` + +## Next Steps + +- Learn about MongoDB crd [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Deploy your first MongoDB database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/mongodb/quickstart/quickstart). diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/mongodb.md b/content/docs/v2024.1.31/guides/mongodb/concepts/mongodb.md new file mode 100644 index 0000000000..c38a4d6206 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/concepts/mongodb.md @@ -0,0 +1,650 @@ +--- +title: MongoDB CRD +menu: + docs_v2024.1.31: + identifier: mg-mongodb-concepts + name: MongoDB + parent: mg-concepts-mongodb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB + +## What is MongoDB + +`MongoDB` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [MongoDB](https://www.mongodb.com/) in a Kubernetes native way. You only need to describe the desired database configuration in a MongoDB object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## MongoDB Spec + +As with all other Kubernetes objects, a MongoDB needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example MongoDB object. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo1
+  namespace: demo
+spec:
+  autoOps:
+    disabled: true
+  version: "4.4.26"
+  replicas: 3
+  authSecret:
+    name: mgo1-auth
+    externallyManaged: false
+  replicaSet:
+    name: rs0
+  shardTopology:
+    configServer:
+      podTemplate: {}
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      podTemplate: {}
+      replicas: 2
+    shard:
+      podTemplate: {}
+      replicas: 3
+      shards: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  sslMode: requireSSL
+  tls:
+    issuerRef:
+      name: mongo-ca-issuer
+      kind: Issuer
+      apiGroup: "cert-manager.io"
+    certificates:
+    - alias: client
+      subject:
+        organizations:
+        - kubedb
+      emailAddresses:
+      - abc@appscode.com
+    - alias: server
+      subject:
+        organizations:
+        - kubedb
+      emailAddresses:
+      - abc@appscode.com
+  clusterAuthMode: x509
+  storageType: "Durable"
+  storageEngine: wiredTiger
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  ephemeralStorage:
+    medium: "Memory"
+    sizeLimit: 500Mi
+  init:
+    script:
+      configMap:
+        name: mg-init-script
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          app: kubedb
+        interval: 10s
+  configSecret:
+    name: mg-custom-config
+  podTemplate:
+    metadata:
+      annotations:
+        passMe: ToDatabasePod
+      labels:
+        thisLabel: willGoToPod
+    controller:
+      annotations:
+        passMe: ToStatefulSet
+      labels:
+        thisLabel: willGoToSts
+    spec:
+      serviceAccountName: my-service-account
+      schedulerName: my-scheduler
+      nodeSelector:
+        disktype: ssd
+      imagePullSecrets:
+      - name: myregistrykey
+      args:
+      - --maxConns=100
+      env:
+      - name: MONGO_INITDB_DATABASE
+        value: myDB
+      resources:
+        requests:
+          memory: "64Mi"
+          cpu: "250m"
+        limits:
+          memory: "128Mi"
+          cpu: "500m"
+  serviceTemplates:
+  - alias: primary
+    spec:
+      type: NodePort
+      ports:
+      - name: primary
+        port: 27017
+        nodePort: 30006
+  terminationPolicy: Halt
+  halted: false
+  arbiter:
+    podTemplate:
+      spec:
+        resources:
+          requests:
+            cpu: "200m"
+            memory: "200Mi"
+    configSecret:
+      name: another-config
+  allowedSchemas:
+    namespaces:
+      from: Selector
+      selector:
+        matchExpressions:
+        - {key: kubernetes.io/metadata.name, operator: In, values: [dev]}
+    selector:
+      matchLabels:
+        "schema.kubedb.com": "mongo"
+  coordinator:
+    resources:
+      requests:
+        cpu: "300m"
+        memory: 500Mi
+    securityContext:
+      runAsUser: 1001
+  healthChecker:
+    periodSeconds: 15
+    timeoutSeconds: 10
+    failureThreshold: 2
+    disableWriteCheck: false
+```
+
+### spec.autoOps
+AutoOps is an optional field to control the generation of versionUpdate & TLS-related recommendations.
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [MongoDBVersion](/docs/v2024.1.31/guides/mongodb/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `MongoDBVersion` resources,
+
+- `3.4.17-v1`, `3.4.22-v1`
+- `3.6.13-v1`
+- `4.0.3-v1`, `4.0.11-v1`
+- `4.1.4-v1`, `4.1.7-v3`
+- `4.4.26`
+- `5.0.2`, `5.0.3`
+- `percona-3.6.18`
+- `percona-4.0.10`, `percona-4.2.7`, `percona-4.4.10`
+
+### spec.replicas
+
+`spec.replicas` specifies the number of members (primary & secondary) in the mongodb replicaset.
+
+If `spec.shardTopology` is set, then `spec.replicas` needs to be empty. Instead use `spec.shardTopology.<shard/configServer/mongos>.replicas`.
+
+If neither `spec.replicaSet` nor `spec.shardTopology` is set, then `spec.replicas` can be `1`.
+
+KubeDB uses `PodDisruptionBudget` to ensure that the majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `mongodb` superuser. If not set, the KubeDB operator creates a new Secret `{mongodb-object-name}-auth` for storing the password for the `mongodb` superuser for each MongoDB object.
+
+We can use this field in 3 modes.
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the MongoDB object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true.
+```yaml
+authSecret:
+  name: <secret-name>
+  externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the MongoDB object using `spec.authSecret.name`. `externallyManaged` is by default false.
+```yaml
+authSecret:
+  name: <secret-name>
+```
+
+3. Let KubeDB do everything for you. In this case, no work for you.
+
+AuthSecret contains a `username` key and a `password` key which contain the username and password respectively for the `mongodb` superuser.
+
+Example:
+
+```bash
+$ kubectl create secret generic mgo1-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "mgo1-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  username: amhvbi1kb2U=
+kind: Secret
+metadata:
+  name: mgo1-auth
+  namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+### spec.replicaSet
+
+`spec.replicaSet` represents the configuration for a replicaset. When `spec.replicaSet` is set, KubeDB will deploy a mongodb replicaset where the number of replicaset members is `spec.replicas`.
+
+- `name` denotes the name of the mongodb replicaset.
+NB. If `spec.shardTopology` is set, then `spec.replicaSet` needs to be empty.
+
+### spec.keyFileSecret
+`keyFileSecret.name` denotes the name of the secret that contains the `key.txt`, which provides the security between replicaset members using internal authentication. See [Keyfile Authentication](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/) for more information.
+It takes effect only if the ClusterAuthMode is `keyFile` or `sendKeyFile`.
+
+### spec.shardTopology
+
+`spec.shardTopology` represents the topology configuration for sharding.
+
+Available configurable fields:
+
+- shard
+- configServer
+- mongos
+
+When `spec.shardTopology` is set, the following fields need to be empty, otherwise the validating webhook will reject the request.
+
+- `spec.replicas`
+- `spec.podTemplate`
+- `spec.configSecret`
+- `spec.storage`
+- `spec.ephemeralStorage`
+
+KubeDB uses `PodDisruptionBudget` to ensure that the majority of the replicas of these shard components are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum and data integrity are maintained.
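+
+For orientation, a minimal sharded deployment can be sketched like this (the name `mg-sharded` is illustrative; the component fields are described in the sub-sections below):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-sharded
+  namespace: demo
+spec:
+  version: "4.4.26"
+  shardTopology:
+    shard:
+      shards: 3
+      replicas: 3
+      storage:
+        storageClassName: standard
+        resources:
+          requests:
+            storage: 1Gi
+    configServer:
+      replicas: 3
+      storage:
+        storageClassName: standard
+        resources:
+          requests:
+            storage: 1Gi
+    mongos:
+      replicas: 2
+```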
+
+#### spec.shardTopology.shard
+
+`shard` represents the configuration for the Shard component of mongodb.
+
+Available configurable fields:
+
+- `shards` represents the number of shards for a mongodb deployment. Each shard is deployed as a [replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept).
+- `replicas` represents the number of replicas of each shard replicaset.
+- `prefix` represents the prefix of each shard node.
+- `configSecret` is an optional field to provide a custom configuration file for shards (i.e. `mongod.conf`). If specified, this file will be used as the configuration file, otherwise a default configuration file will be used. See below to know about [spec.configSecret](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specconfigsecret) in detail.
+- `podTemplate` is an optional configuration for pods. See below to know about [spec.podTemplate](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate) in detail.
+- `storage` to specify the pvc spec for each shard node. You can specify any StorageClass available in your cluster with appropriate resource requests. See below to know about [spec.storage](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specstorage) in detail.
+- `ephemeralStorage` to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.
+
+#### spec.shardTopology.configServer
+
+`configServer` represents the configuration for the ConfigServer component of mongodb.
+
+Available configurable fields:
+
+- `replicas` represents the number of replicas for the configServer replicaset. Here, configServer is deployed as a replicaset of mongodb.
+- `prefix` represents the prefix of configServer nodes.
+- `configSecret` is an optional field to provide a custom configuration file for the config server (i.e. `mongod.conf`). If specified, this file will be used as the configuration file, otherwise a default configuration file will be used. See below to know about [spec.configSecret](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specconfigsecret) in detail.
+- `podTemplate` is an optional configuration for pods. See below to know about [spec.podTemplate](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate) in detail.
+- `storage` to specify the pvc spec for each configServer node. You can specify any StorageClass available in your cluster with appropriate resource requests. See below to know about [spec.storage](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specstorage) in detail.
+- `ephemeralStorage` to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.
+
+#### spec.shardTopology.mongos
+
+`mongos` represents the configuration for the Mongos component of mongodb.
+
+Available configurable fields:
+
+- `replicas` represents the number of replicas of the `Mongos` instance. Here, Mongos is deployed as a stateless (deployment) instance.
+- `prefix` represents the prefix of mongos nodes.
+- `configSecret` is an optional field to provide a custom configuration file for mongos (i.e. `mongod.conf`). If specified, this file will be used as the configuration file, otherwise a default configuration file will be used. See below to know about [spec.configSecret](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specconfigsecret) in detail.
+- `podTemplate` is an optional configuration for pods. See below to know about [spec.podTemplate](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate) in detail.
+
+### spec.sslMode
+
+Enables TLS/SSL or mixed TLS/SSL used for all network connections. The value of the [`sslMode`](https://docs.mongodb.com/manual/reference/program/mongod/#cmdoption-mongod-sslmode) field can be one of the following:
+
+| Value        | Description                                                                                                                      |
+| :----------: | :------------------------------------------------------------------------------------------------------------------------------ |
+| `disabled`   | The server does not use TLS/SSL.                                                                                                 |
+| `allowSSL`   | Connections between servers do not use TLS/SSL. For incoming connections, the server accepts both TLS/SSL and non-TLS/non-SSL.   |
+| `preferSSL`  | Connections between servers use TLS/SSL. For incoming connections, the server accepts both TLS/SSL and non-TLS/non-SSL.          |
+| `requireSSL` | The server uses and accepts only TLS/SSL encrypted connections.                                                                  |
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for MongoDB. KubeDB uses the [cert-manager](https://cert-manager.io/) v1 API to provision and manage TLS certificates.
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+  - `apiGroup` is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+  - `kind` is the type of resource that is being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
+  - `alias` represents the identifier of the certificate. It has the following possible values:
+    - `server` is used for server certificate identification.
+    - `client` is used for client certificate identification.
+    - `metrics-exporter` is used for metrics exporter certificate identification.
+  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
+    > This field is optional. If the user does not specify this field, the default secret name will be created in the following format: `<database-name>-<cert-alias>-cert`.
+
+  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields,
+    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+    - `organizationalUnits` (optional) are the list of different organization unit names to be used on the Certificate.
+    - `countries` (optional) are the list of country names to be used on the Certificate.
+    - `localities` (optional) are the list of locality names to be used on the Certificate.
+    - `provinces` (optional) are the list of province names to be used on the Certificate.
+    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
+    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
+    - `serialNumber` (optional) is a serial number to be used on the Certificate.
+      You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name)
+  - `duration` (optional) is the period during which the certificate is valid.
+  - `renewBefore` (optional) specifies how long before expiry the certificate should be renewed.
+  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+  - `uris` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+  - `emailAddresses` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
+  - `privateKey` (optional) specifies options to control private keys used for the Certificate.
+    - `encoding` (optional) is the private key cryptography standards (PKCS) encoding for this certificate's private key to be encoded in. If provided, allowed values are "pkcs1" and "pkcs8", standing for PKCS#1 and PKCS#8, respectively. It defaults to PKCS#1 if not specified.
+
+### spec.clusterAuthMode
+
+The authentication mode used for cluster authentication. This option can have one of the following values:
+
+| Value         | Description                                                                                                                      |
+| :-----------: | :------------------------------------------------------------------------------------------------------------------------------- |
+| `keyFile`     | Use a keyfile for authentication. Accept only keyfiles.                                                                          |
+| `sendKeyFile` | For rolling update purposes. Send a keyfile for authentication but can accept both keyfiles and x.509 certificates.              |
+| `sendX509`    | For rolling update purposes. Send the x.509 certificate for authentication but can accept both keyfiles and x.509 certificates.  |
+| `x509`        | Recommended. Send the x.509 certificate for authentication and accept only x.509 certificates.                                   |
+
+### spec.storageType
+
+`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MongoDB database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume.
+In this case, you don't have to specify the `spec.storage` field. Specify the `spec.ephemeralStorage` spec instead.
+
+### spec.storageEngine
+
+`spec.storageEngine` is an optional field that specifies the storage engine to be used by mongodb. There are two types of storage engine, `wiredTiger` and `inMemory`. The default storage engine is `wiredTiger`. The `inMemory` storage engine is only supported by the percona variant of mongodb, i.e. the versions that have the `percona-` prefix in the mongodb-version name.
+
+### spec.storage
+
+Since 0.9.0-rc.0, if you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class (no annotation or one set equal to ""), so it can only be bound to PVs with no class. A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+NB. If `spec.shardTopology` is set, then `spec.storage` needs to be empty. Instead use `spec.shardTopology.<shard/configServer>.storage`.
+
+### spec.ephemeralStorage
+Use this field to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.
+- `spec.ephemeralStorage.medium` refers to the name of the storage medium.
+- `spec.ephemeralStorage.sizeLimit` to specify the sizeLimit of the emptyDir volume.
+
+For more details of these two fields, see [EmptyDir struct](https://github.com/kubernetes/api/blob/ed22bb34e3bbae9e2fafba51d66ee3f68ee304b2/core/v1/types.go#L700-L715)
+
+### spec.init
+
+`spec.init` is an optional section that can be used to initialize a newly created MongoDB database. MongoDB databases can be initialized by a script.
+
+`Initialize from Snapshot` is still not supported.
+
+#### Initialize via Script
+
+To initialize a MongoDB database using a script (shell script, js script), set the `spec.init.script` section when creating a MongoDB object. It will execute files alphabetically with extensions `.sh` and `.js` that are found in the repository. The script must have the following information:
+
+- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): Where your script is loaded from.
+
+Below is an example showing how a script from a configMap can be used to initialize a MongoDB database.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo1
+  namespace: demo
+spec:
+  version: 4.4.26
+  init:
+    script:
+      configMap:
+        name: mongodb-init-script
+```
+
+In the above example, the KubeDB operator will launch a Job to execute all js scripts of `mongodb-init-script` in alphabetical order once the StatefulSet pods are running. For a more detailed tutorial on how to initialize from script, please visit [here](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+
+These are the fields of `spec.init` which you can make use of:
+- `spec.init.initialized` indicates whether this database has been initialized or not. `false` by default.
+- `spec.init.script.scriptPath` to specify where all the init scripts should be mounted.
+- `spec.init.script.<volumeSource>` as described in the above example. To see all the volumeSource options go to [VolumeSource](https://github.com/kubernetes/api/blob/ed22bb34e3bbae9e2fafba51d66ee3f68ee304b2/core/v1/types.go#L49).
+- `spec.init.waitForInitialRestore` to tell the operator if it should wait for the initial restore process or not.
+
+### spec.monitor
+
+MongoDB managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,
+
+- [Monitor MongoDB with builtin Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus)
+- [Monitor MongoDB with Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator)
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for MongoDB. You can provide the custom configuration in a secret, then specify the secret name in `spec.configSecret.name`.
+
+> Please note that the secret key needs to be `mongod.conf`.
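+
+As a quick sketch (the secret name `mg-custom-config` matches the example at the top of this page, and the `net.maxIncomingConnections` setting is only an illustration), the custom configuration can be wired up like this:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mg-custom-config
+  namespace: demo
+stringData:
+  mongod.conf: |
+    net:
+      maxIncomingConnections: 20000
+---
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo1
+  namespace: demo
+spec:
+  version: "4.4.26"
+  configSecret:
+    name: mg-custom-config
+```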
+
+To learn more about how to use a custom configuration file see [here](/docs/v2024.1.31/guides/mongodb/configuration/using-config-file).
+
+NB. If `spec.shardTopology` is set, then `spec.configSecret` needs to be empty. Instead use `spec.shardTopology.<shard/configServer/mongos>.configSecret`.
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MongoDB database.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+  - labels (pod's labels)
+- controller:
+  - annotations (statefulset's annotation)
+  - labels (statefulset's labels)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some fields of `spec.podTemplate` are described below,
+
+NB. If `spec.shardTopology` is set, then `spec.podTemplate` needs to be empty. Instead use `spec.shardTopology.<shard/configServer/mongos>.podTemplate`.
+
+#### spec.podTemplate.spec.args
+
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database installation. To learn about available args of `mongod`, visit [here](https://docs.mongodb.com/manual/reference/program/mongod/).
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the MongoDB docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/r/_/mongo/).
+
+Note that KubeDB does not allow the `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` environment variables to be set in `spec.podTemplate.spec.env`. If you want to use a custom superuser and password, please use `spec.authSecret` instead, as described earlier.
+
+If you try to set the `MONGO_INITDB_ROOT_USERNAME` or `MONGO_INITDB_ROOT_PASSWORD` environment variable in the MongoDB crd, the KubeDB operator will reject the request with the following error,
+
+```ini
+Error from server (Forbidden): error when creating "./mongodb.yaml": admission webhook "mongodb.validators.kubedb.com" denied the request: environment variable MONGO_INITDB_ROOT_USERNAME is forbidden to use in MongoDB spec
+```
+
+Also, note that KubeDB does not allow updating the environment variables, as updating them does not have any effect once the database is created. If you try to update environment variables, the KubeDB operator will reject the request with the following error,
+
+```ini
+Error from server (BadRequest): error when applying patch:
+...
+for: "./mongodb.yaml": admission webhook "mongodb.validators.kubedb.com" denied the request: precondition failed for:
+...At least one of the following was changed:
+  apiVersion
+  kind
+  name
+  namespace
+  spec.ReplicaSet
+  spec.authSecret
+  spec.init
+  spec.storageType
+  spec.storage
+  spec.podTemplate.spec.nodeSelector
+  spec.podTemplate.spec.env
+```
+
+#### spec.podTemplate.spec.imagePullSecret
+
+`KubeDB` provides the flexibility of deploying MongoDB database from a private Docker registry. `spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling the docker image if you are using a private docker registry. To learn how to deploy MongoDB from a private registry, please visit [here](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry).
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+`serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine tune role based access control.
+
+If this field is left empty, the KubeDB operator will create a service account with a name matching the MongoDB crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/mongodb/custom-rbac/using-custom-rbac) to grant necessary permissions in this scenario.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the MongoDB database through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to be set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible values:
+  - `primary` is used for the primary service identification.
+  - `standby` is used for the secondary service identification.
+  - `stats` is used for the exporter service identification.
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives the flexibility to decide whether the delete operation of the `MongoDB` crd should be `nullify`-ed (rejected), or which resources KubeDB should keep or delete when you delete the `MongoDB` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the MongoDB crd for different termination policies,
+
+| Behavior                            | DoNotTerminate | Halt     | Delete   | WipeOut  |
+| ----------------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation           | ✓              | ✗        | ✗        | ✗        |
+| 2. Delete StatefulSet               | ✗              | ✓        | ✓        | ✓        |
+| 3. Delete Services                  | ✗              | ✓        | ✓        | ✓        |
+| 4. Delete PVCs                      | ✗              | ✗        | ✓        | ✓        |
+| 5. Delete Secrets                   | ✗              | ✗        | ✗        | ✓        |
+| 6. Delete Snapshots                 | ✗              | ✗        | ✗        | ✓        |
+| 7. Delete Snapshot data from bucket | ✗              | ✗        | ✗        | ✓        |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
+
+### spec.halted
+Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
+
+### spec.arbiter
+If `spec.arbiter` is not null, there will be one arbiter pod in each replicaset structure, including shards. It has two fields.
+- `spec.arbiter.podTemplate` defines the arbiter-pod's template. See the [spec.podTemplate](/docs/v2024.1.31/guides/mongodb/configuration/using-config-file) part for more details.
+- `spec.arbiter.configSecret` is an optional field that allows users to provide custom configurations for the MongoDB arbiter. You just need to refer to the configuration secret in the `spec.arbiter.configSecret.name` field.
+> Please note that the secret key needs to be `mongod.conf`.
+
+N.B. If `spec.replicaSet` & `spec.shardTopology` are both empty, `spec.arbiter` has to be empty too.
+
+### spec.allowedSchemas
+It defines which consumers may refer to a database instance. The double opt-in feature between a database instance and the schema-manager is implemented using this field.
+- `spec.allowedSchemas.namespaces.from` indicates how you want to filter the namespaces from which a schema-manager will be able to communicate with this db instance.
+Possible values are: i) `All` to allow all namespaces, ii) `Same` to allow only if the schema-manager & MongoDB are deployed in the same namespace, & iii) `Selector` to select some namespaces through labels.
+- `spec.allowedSchemas.namespaces.selector`. You need to set this field only if `spec.allowedSchemas.namespaces.from` is set to `Selector`. Here you will give the labels of the namespaces to allow.
+- `spec.allowedSchemas.selector` denotes the labels of the schema-manager instances that you want to allow to use this database.
+
+### spec.coordinator
+We use a dedicated container, named `replication-mode-detector`, to continuously detect the primary pod and label it as primary. By specifying `spec.coordinator.resources` & `spec.coordinator.securityContext`, you can set the resources and securityContext of that mode-detector container.
+
+
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed. +- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not. + +Know details about KubeDB Health checking from this [blog post](https://blog.byte.builders/post/kubedb-health-checker/). + +## Next Steps + +- Learn how to use KubeDB to run a MongoDB database [here](/docs/v2024.1.31/guides/mongodb/README). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase.md b/content/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase.md new file mode 100644 index 0000000000..5dda09637b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase.md @@ -0,0 +1,160 @@ +--- +title: MongoDBDatabase +menu: + docs_v2024.1.31: + identifier: mongodbdatabase-concepts + name: MongoDBDatabase + parent: mg-concepts-mongodb + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDBDatabase + +## What is MongoDBDatabase + +`MongoDBDatabase` is a Kubernetes Custom Resource Definitions (CRD). It provides a declarative way of implementing multitenancy inside KubeDB provisioned MongoDB server. You need to describe the target database, desired database configuration, the vault server reference for managing the user in a `MongoDBDatabase` object, and the KubeDB Schema Manager operator will create Kubernetes objects in the desired state for you. + +## MongoDBDatabase Specification + +As with all other Kubernetes objects, an `MongoDBDatabase` needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `spec` section. + +```yaml +apiVersion: schema.kubedb.com/v1alpha1 +kind: MongoDBDatabase +metadata: + name: demo-schema + namespace: demo +spec: + database: + serverRef: + name: mongodb-server + namespace: dev + config: + name: myDB + vaultRef: + name: vault + namespace: dev + accessPolicy: + subjects: + - kind: ServiceAccount + name: "tester" + namespace: "demo" + defaultTTL: "10m" + maxTTL: "200h" + init: + initialized: false + snapshot: + repository: + name: repository + namespace: demo + script: + scriptPath: "etc/config" + configMap: + name: scripter + podTemplate: + spec: + containers: + - env: + - name: "HAVE_A_TRY" + value: "whoo! It works" + name: cnt + image: nginx + command: + - /bin/sh + - -c + args: + - ls + deletionPolicy: "Delete" +``` + + + +### spec.database + +`spec.database` is a required field specifying the database server reference and the desired database configuration. You need to specify the following fields in `spec.database`, + + - `serverRef` refers to the mongodb instance where the particular schema will be applied. + - `config` defines the initial configuration of the desired database. + +KubeDB accepts the following fields to set in `spec.database`: + + - serverRef: + - name + - namespace + + - config: + - name + + +### spec.vaultRef + +`spec.vaultRef` is a required field that specifies which KubeVault server to use for user management. You need to specify the following fields in `spec.vaultRef`, + +- `name` specifies the name of the Vault server. 
+
+- `namespace` refers to the namespace where the Vault server is running.
+
+
+### spec.accessPolicy
+
+`spec.accessPolicy` is a required field that specifies the access permissions, like which service account or cluster user has access and for how long they can access through it. You need to specify the following fields in `spec.accessPolicy`,
+
+- `subjects` refers to the user or service account which is allowed to access the credentials.
+- `defaultTTL` specifies for how long the credential would be valid.
+- `maxTTL` specifies the maximum time-to-live for the credentials associated with this role.
+
+KubeDB accepts the following fields to set in `spec.accessPolicy`:
+
+- subjects:
+  - kind
+  - name
+  - namespace
+
+- defaultTTL
+
+- maxTTL
+
+
+### spec.init
+
+`spec.init` is an optional field, containing the information of a script or a snapshot using which the database should be initialized during creation. You can specify only one of the `script` or `snapshot` fields in `spec.init`,
+
+- `script` refers to the information regarding the .js file which should be used for initialization.
+- `snapshot` carries information about the repository and snapshot_id to initialize the database by restoring the snapshot.
+
+KubeDB accepts the following fields to set in `spec.init`:
+
+- script:
+  - `scriptPath` accepts a directory location at which the operator should mount the .js file.
+  - `volumeSource` can be `secret`, `configMap`, `emptyDir`, `nfs`, `persistentVolumeClaim`, `hostPath` etc. The referred volume source should carry the .js file in it.
+  - `podTemplate` specifies pod-related details, like environment variables, arguments, images etc.
+
+- snapshot:
+  - `repository` refers to the repository cr which carries the necessary information about the snapshot location.
+  - `snapshotId` refers to the specific snapshot which should be restored.
+
+
+
+### spec.deletionPolicy
+
+`spec.deletionPolicy` is an optional field that gives the flexibility to `nullify` (reject) the delete operation.
+
+
+## Next Steps
+
+- Learn about [MongoDB CRD](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+- Deploy your first MongoDB database with KubeDB by following the guide [here](https://kubedb.com/docs/latest/guides/mongodb/quickstart/quickstart/).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/concepts/opsrequest.md b/content/docs/v2024.1.31/guides/mongodb/concepts/opsrequest.md
new file mode 100644
index 0000000000..27870d3e54
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/concepts/opsrequest.md
@@ -0,0 +1,787 @@
+---
+title: MongoDBOpsRequests CRD
+menu:
+  docs_v2024.1.31:
+    identifier: mg-opsrequest-concepts
+    name: MongoDBOpsRequest
+    parent: mg-concepts-mongodb
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDBOpsRequest
+
+## What is MongoDBOpsRequest
+
+`MongoDBOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [MongoDB](https://www.mongodb.com/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way.
+ +## MongoDBOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `MongoDBOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `MongoDBOpsRequest` CRs for different administrative operations is given below: + +**Sample `MongoDBOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-standalone + updateVersion: + targetVersion: 4.4.26 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MongoDBOpsRequest` Objects for Horizontal Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-configserver + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-sharding + horizontalScaling: + shard: + shards: 3 + replicas: 3 + configServer: + replicas: 3 + mongos: + replicas: 2 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-down-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 3 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MongoDBOpsRequest` Objects for Vertical Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-vscale-configserver + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-sharding + verticalScaling: + configServer: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" + mongos: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" + shard: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-vscale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-standalone + verticalScaling: + standalone: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: 
MongoDBOpsRequest +metadata: + name: mops-vscale-replicaset + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-replicaset + verticalScaling: + replicaSet: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MongoDBOpsRequest` Objects for Reconfiguring different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-data-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + inlineConfig: | + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-data-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + inlineConfig: | + net: + maxIncomingConnections: 30000 + configServer: + inlineConfig: | + net: + maxIncomingConnections: 30000 + mongos: + inlineConfig: | + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-data-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + inlineConfig: | + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + configSecret: + name: new-custom-config + configServer: + configSecret: + name: new-custom-config + mongos: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 
1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-reconfiugre-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MongoDBOpsRequest` Objects for Volume Expansion of different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-replicaset + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-replicaset + volumeExpansion: + replicaSet: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-shard + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-sharding + volumeExpansion: + shard: 2Gi + configServer: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-standalone + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-standalone + volumeExpansion: + standalone: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `MongoDBOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + emailAddresses: + - abc@appscode.com +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + rotateCertificates: true +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + remove: true +``` + +Here, we are going to describe the various sections of a `MongoDBOpsRequest` crd. + +A `MongoDBOpsRequest` object has the following fields in the `spec` section. 
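+
+Before going through the fields one by one, here is a minimal sketch showing only the required parts — the name `mops-restart` is illustrative, and the `Restart` type is used because it needs nothing beyond `type` and `databaseRef`:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-restart
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: mg-standalone
+```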
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) object for which the administrative operations will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name:** specifies the name of the [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) object.
+
+### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `MongoDBOpsRequest`.
+
+- `Upgrade` / `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `Restart`
+
+> You can perform only one type of operation on a single `MongoDBOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `MongoDBOpsRequest` objects. At first, you have to create a `MongoDBOpsRequest` for updating. Once it is completed, then you can create another `MongoDBOpsRequest` for scaling.
+
+> Note: There is an exception to the above statement. It is possible to specify both `spec.configuration` & `spec.verticalScaling` in an OpsRequest of type `VerticalScaling`.
+
+### spec.updateVersion
+
+If you want to update your MongoDB version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [MongoDBVersion](/docs/v2024.1.31/guides/mongodb/concepts/catalog) CR that contains the MongoDB version information to which you want to update.
+
+Have a look at the [`updateConstraints`](/docs/v2024.1.31/guides/mongodb/concepts/catalog#specupdateconstraints) of the mongodbVersion spec to know which versions are supported for updating from the current version.
+```bash
+kubectl get mgversion -o=jsonpath='{.spec.updateConstraints}' | jq
+```
+
+> You can only update between MongoDB versions. KubeDB does not support downgrades for MongoDB.
+
+### spec.horizontalScaling
+
+If you want to scale up or scale down your MongoDB cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-fields:
+
+- `spec.horizontalScaling.replicas` indicates the desired number of nodes for the MongoDB replicaset cluster after scaling. For example, if your cluster currently has 4 replicaset nodes and you want to add 2 more nodes, then you have to specify 6 in the `spec.horizontalScaling.replicas` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.replicas` field.
+- `spec.horizontalScaling.configServer.replicas` indicates the desired number of ConfigServer nodes for the sharded MongoDB cluster after scaling.
+- `spec.horizontalScaling.mongos.replicas` indicates the desired number of Mongos nodes for the sharded MongoDB cluster after scaling.
+- `spec.horizontalScaling.shard` indicates the configuration of shard nodes for the sharded MongoDB cluster after scaling. This field consists of the following sub-fields:
+  - `spec.horizontalScaling.shard.replicas` indicates the number of replicas each shard will have after scaling.
+  - `spec.horizontalScaling.shard.shards` indicates the number of shards after scaling.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `MongoDB` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.standalone` indicates the desired resources for the standalone MongoDB database after scaling.
+- `spec.verticalScaling.replicaSet` indicates the desired resources for the replicaSet of the MongoDB database after scaling.
+- `spec.verticalScaling.mongos` indicates the desired resources for the Mongos nodes of the sharded MongoDB database after scaling.
+- `spec.verticalScaling.configServer` indicates the desired resources for the ConfigServer nodes of the sharded MongoDB database after scaling.
+- `spec.verticalScaling.shard` indicates the desired resources for the Shard nodes of the sharded MongoDB database after scaling.
+- `spec.verticalScaling.exporter` indicates the desired resources for the `exporter` container.
+- `spec.verticalScaling.arbiter` indicates the desired resources for the arbiter node of the MongoDB database after scaling.
+- `spec.verticalScaling.coordinator` indicates the desired resources for the coordinator container.
+
+All of them have the below structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature, the storage class must support volume expansion.
+
+If you want to expand the volume of your MongoDB cluster or different components of it, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.standalone` indicates the desired size for the persistent volume of a standalone MongoDB database.
+- `spec.volumeExpansion.replicaSet` indicates the desired size for the persistent volumes of the replicaSet of a MongoDB database.
+- `spec.volumeExpansion.configServer` indicates the desired size for the persistent volume of the config server of a sharded MongoDB database.
+- `spec.volumeExpansion.shard` indicates the desired size for the persistent volumes of the shards of a sharded MongoDB database.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    shard: "2Gi"
+```
+
+This will expand the volume size of all the shard nodes to 2 GB.
+
+### spec.configuration
+
+If you want to reconfigure your running MongoDB cluster or different components of it with a new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `spec.configuration.standalone` indicates the desired new custom configuration for a standalone MongoDB database.
+- `spec.configuration.replicaSet` indicates the desired new custom configuration for the replicaSet of a MongoDB database.
+- `spec.configuration.configServer` indicates the desired new custom configuration for the config servers of a sharded MongoDB database.
+- `spec.configuration.mongos` indicates the desired new custom configuration for the mongos nodes of a sharded MongoDB database.
+- `spec.configuration.shard` indicates the desired new custom configuration for the shard nodes of a sharded MongoDB database.
+- `spec.configuration.arbiter` indicates the desired new custom configuration for the arbiter node of a MongoDB database.
+
+All of them have the following sub-fields:
+
+- `configSecret` points to a secret in the same namespace of a MongoDB resource, which contains the new custom configurations. If there is any configSecret set before in the database, this secret will replace it.
+- `inlineConfig` contains the new custom config as a string which will be merged with the previous configuration.
+> Note: You can use `inlineConfig` only for `mongod.conf` configurations. This field is deprecated & will be removed in some future KubeDB release.
+
+- `applyConfig` is the replacement of `inlineConfig`. It is a map where the key supports 3 values, namely `mongod.conf`, `replicaset.json`, `configuration.js`. And the value represents the corresponding configurations.
+For your information, `replicaset.json` is used to modify replica set configurations, which we see in the output of `rs.config()`. And `configuration.js` is used to apply a js script to configure mongodb at runtime.
+The KubeDB provisioner operator applies these two directly while reconciling.
+
+```yaml
+  applyConfig:
+    configuration.js: |
+      print("hello world!!!!")
+    replicaset.json: |
+      {
+        "settings" : {
+          "electionTimeoutMillis" : 4000
+        }
+      }
+    mongod.conf: |
+      net:
+        maxIncomingConnections: 30000
+```
+
+- `removeCustomConfig` is a boolean field. Specify this field to true if you want to remove all the custom configuration from the deployed mongodb server.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+- `spec.tls.remove` specifies that we want to remove tls from this database.
+
+### spec.readinessCriteria
+
+`spec.readinessCriteria` is the criteria for checking readiness of a MongoDB pod after restarting it. It has two fields.
+- `spec.readinessCriteria.oplogMaxLagSeconds` defines the maximum allowed lagging time between the primary & secondary.
+- `spec.readinessCriteria.objectsCountDiffPercentage` denotes the maximum allowed object-count-difference between the primary & secondary.
+
+```yaml
+...
+spec:
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+...
+```
+Exceeding these thresholds results in opsRequest failure. One thing to note: the readinessCriteria field takes effect only if pod restarting is associated with the opsRequest type.
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify the timeout for those steps of the ops request (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
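+
+As a sketch (the `VerticalScaling` request shown is illustrative, and the duration-style value assumes the Kubernetes convention — verify against `kubectl explain mongodbopsrequest.spec.timeout` on your cluster):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-vscale-with-timeout
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-replicaset
+  timeout: 5m
+  verticalScaling:
+    replicaSet:
+      resources:
+        requests:
+          memory: "200Mi"
+          cpu: "0.1"
+```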
+ +### spec.apply +This field controls the execution of obsRequest depending on the database state. It has two supported values: `Always` & `IfReady`. +Use IfReady, if you want to process the opsRequest only when the database is Ready. And use Always, if you want to process the execution of opsReq irrespective of the Database state. + + +### MongoDBOpsRequest `Status` + +`.status` describes the current state and progress of a `MongoDBOpsRequest` operation. It has the following fields: + +### status.phase + +`status.phase` indicates the overall phase of the operation for this `MongoDBOpsRequest`. It can have the following three values: + +| Phase | Meaning | +|-------------|------------------------------------------------------------------------------------| +| Successful | KubeDB has successfully performed the operation requested in the MongoDBOpsRequest | +| Progressing | KubeDB has started the execution of the applied MongoDBOpsRequest | +| Failed | KubeDB has failed the operation requested in the MongoDBOpsRequest | +| Denied | KubeDB has denied the operation requested in the MongoDBOpsRequest | +| Skipped | KubeDB has skipped the operation requested in the MongoDBOpsRequest | + +Important: Ops-manager Operator can skip an opsRequest, only if its execution has not been started yet & there is a newer opsRequest applied in the cluster. `spec.type` has to be same as the skipped one, in this case. + +### status.observedGeneration + +`status.observedGeneration` shows the most recent generation observed by the `MongoDBOpsRequest` controller. + +### status.conditions + +`status.conditions` is an array that specifies the conditions of different steps of `MongoDBOpsRequest` processing. Each condition entry has the following fields: + +- `types` specifies the type of the condition. MongoDBOpsRequest has the following types of conditions: + +| Type | Meaning | +| ----------------------------- | ------------------------------------------------------------------------- | +| `Progressing` | Specifies that the operation is now in the progressing state | +| `Successful` | Specifies such a state that the operation on the database was successful. | +| `HaltDatabase` | Specifies such a state that the database is halted by the operator | +| `ResumeDatabase` | Specifies such a state that the database is resumed by the operator | +| `Failed` | Specifies such a state that the operation on the database failed. 
+| `StartingBalancer` | Specifies that the balancer has successfully started |
+| `StoppingBalancer` | Specifies that the balancer has successfully stopped |
+| `UpdateShardImage` | Specifies that the shard images have been updated |
+| `UpdateReplicaSetImage` | Specifies that the replica set image has been updated |
+| `UpdateConfigServerImage` | Specifies that the config server image has been updated |
+| `UpdateMongosImage` | Specifies that the mongos image has been updated |
+| `UpdateStatefulSetResources` | Specifies that the StatefulSet resources have been updated |
+| `UpdateShardResources` | Specifies that the shard resources have been updated |
+| `UpdateReplicaSetResources` | Specifies that the replica set resources have been updated |
+| `UpdateConfigServerResources` | Specifies that the config server resources have been updated |
+| `UpdateMongosResources` | Specifies that the mongos resources have been updated |
+| `ScaleDownReplicaSet` | Specifies the scale-down operation of the replica set |
+| `ScaleUpReplicaSet` | Specifies the scale-up operation of the replica set |
+| `ScaleUpShardReplicas` | Specifies the scale-up operation of the shard replicas |
+| `ScaleDownShardReplicas` | Specifies the scale-down operation of the shard replicas |
+| `ScaleDownConfigServer` | Specifies the scale-down operation of the config server |
+| `ScaleUpConfigServer` | Specifies the scale-up operation of the config server |
+| `ScaleMongos` | Specifies the scale operation of the mongos nodes |
+| `VolumeExpansion` | Specifies the volume expansion operation of the database |
+| `ReconfigureReplicaset` | Specifies the reconfiguration of the replica set nodes |
+| `ReconfigureMongos` | Specifies the reconfiguration of the mongos nodes |
+| `ReconfigureShard` | Specifies the reconfiguration of the shard nodes |
+| `ReconfigureConfigServer` | Specifies the reconfiguration of the config server nodes |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+  - `status` will be `True` if the current transition succeeded.
+  - `status` will be `False` if the current transition failed.
+  - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
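+
+As an illustration, a condition entry for a successfully completed volume expansion might look like the sketch below; all values, including the `reason` string and the timestamp, are hypothetical:
+
+```yaml
+status:
+  phase: Successful
+  observedGeneration: 1
+  conditions:
+  - type: VolumeExpansion
+    status: "True"
+    reason: SuccessfullyVolumeExpanded       # hypothetical reason string
+    message: successfully expanded the volume of the database
+    lastTransitionTime: "2024-01-31T10:00:00Z"
+    observedGeneration: 1
+```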
diff --git a/content/docs/v2024.1.31/guides/mongodb/configuration/_index.md b/content/docs/v2024.1.31/guides/mongodb/configuration/_index.md
new file mode 100755
index 0000000000..af8a6aeac9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/configuration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run MongoDB with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: mg-configuration
+    name: Custom Configuration
+    parent: mg-mongodb-guides
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/configuration/using-config-file.md b/content/docs/v2024.1.31/guides/mongodb/configuration/using-config-file.md
new file mode 100644
index 0000000000..aa3490d15e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/configuration/using-config-file.md
@@ -0,0 +1,216 @@
+---
+title: Run MongoDB with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: mg-using-config-file-configuration
+    name: Config File
+    parent: mg-configuration
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom Configuration File
+
+KubeDB supports providing custom configuration for MongoDB. This tutorial will show you how to use KubeDB to run a MongoDB database with custom configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+MongoDB allows configuring the database via a configuration file. The default configuration file for MongoDB deployed by `KubeDB` can be found in `/data/configdb/mongod.conf`. When MongoDB starts, it will look for a custom configuration file in `/configdb-readonly/mongod.conf`. If this file exists, the custom configuration will overwrite the existing default one.
+
+> To learn about the available configuration options of MongoDB, see [Configuration File Options](https://docs.mongodb.com/manual/reference/configuration-options/).
+
+At first, you have to create a secret with your configuration file's contents as the value of the key `mongod.conf`. Then, you have to specify the name of this secret in the `spec.configSecret.name` field while creating the MongoDB crd.
+KubeDB will mount this secret into the `/configdb-readonly/` directory of the database pod.
+
+One important thing to note here: `spec.configSecret.name` is used only for standard replica set members & standalone mongodb. If you want to configure a specific type of mongo node, you have to set the name in the respective field.
+For example, to configure a shard topology node, set the `spec.shardTopology.<shard/configServer/mongos>.configSecret.name` field.
+Similarly, to configure the arbiter node, set the `spec.arbiter.configSecret.name` field.
+
+In this tutorial, we will configure [net.maxIncomingConnections](https://docs.mongodb.com/manual/reference/configuration-options/#net.maxIncomingConnections) (default value: 65536) via a custom config file.
+
+## Custom Configuration
+
+At first, create a `mongod.conf` file containing the required configuration settings.
+
+```ini
+$ cat mongod.conf
+net:
+  maxIncomingConnections: 10000
+```
+
+Here, `maxIncomingConnections` is set to `10000`, whereas the default value is 65536.
+
+Now, create the secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo mg-configuration --from-file=./mongod.conf
+secret/mg-configuration created
+```
+
+Verify the secret has the configuration file.
+
+```yaml
+$ kubectl get secret -n demo mg-configuration -o yaml
+apiVersion: v1
+data:
+  mongod.conf: bmV0OgogIG1heEluY29taW5nQ29ubmVjdGlvbnM6IDEwMDAw
+kind: Secret
+metadata:
+  creationTimestamp: "2021-02-09T12:59:50Z"
+  name: mg-configuration
+  namespace: demo
+  resourceVersion: "52495"
+  uid: 92ca4191-eb97-4274-980c-9430ab7cc5d1
+type: Opaque
+
+$ echo bmV0OgogIG1heEluY29taW5nQ29ubmVjdGlvbnM6IDEwMDAw | base64 -d
+net:
+  maxIncomingConnections: 10000
+```
+
+Now, create a MongoDB crd specifying the `spec.configSecret` field.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-custom-config
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  configSecret:
+    name: mg-configuration
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/configuration/replicaset.yaml
+mongodb.kubedb.com/mgo-custom-config created
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we will see that a pod with the name `mgo-custom-config-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo mgo-custom-config-0
+NAME                  READY   STATUS    RESTARTS   AGE
+mgo-custom-config-0   1/1     Running   0          1m
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+You can connect to this database through [mongo-shell](https://docs.mongodb.com/v4.2/mongo/). In this tutorial, we are connecting to the MongoDB server from inside the pod.
+ +```bash +$ kubectl get secrets -n demo mgo-custom-config-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mgo-custom-config-auth -o jsonpath='{.data.\password}' | base64 -d +ErialNojWParBFoP + +$ kubectl exec -it mgo-custom-config-0 -n demo sh + +> mongo admin + +> db.auth("root","ErialNojWParBFoP") +1 + +> db._adminCommand( {getCmdLineOpts: 1}) +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "security" : { + "authorization" : "enabled" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1 +} + +> exit +bye +``` + +As we can see from the configuration of running mongodb, the value of `maxIncomingConnections` has been set to 10000 successfully. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/mgo-custom-config -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/mgo-custom-config + +kubectl delete -n demo secret mg-configuration + +kubectl delete ns demo +``` + +## Next Steps + +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/configuration/using-podtemplate.md b/content/docs/v2024.1.31/guides/mongodb/configuration/using-podtemplate.md new file mode 100644 index 0000000000..679c2ceded --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/configuration/using-podtemplate.md @@ -0,0 +1,205 @@ +--- +title: Run MongoDB with Custom Configuration +menu: + docs_v2024.1.31: + identifier: using-podtemplate-configuration + name: Customize PodTemplate + parent: mg-configuration + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Run MongoDB with Custom PodTemplate + +KubeDB supports providing custom configuration for MongoDB via [PodTemplate](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate). 
This tutorial will show you how to use KubeDB to run a MongoDB database with custom configuration using PodTemplate.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MongoDB database.
+
+KubeDB accepts the following fields to be set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+  - labels (pod's labels)
+- controller:
+  - annotations (statefulset's annotation)
+  - labels (statefulset's labels)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+Read about these fields in detail in the [PodTemplate concept](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplate).
+
+## CRD Configuration
+
+Below is the YAML for the MongoDB created in this example. Here, [`spec.podTemplate.spec.env`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplatespecenv) specifies environment variables and [`spec.podTemplate.spec.args`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specpodtemplatespecargs) provides extra arguments for [MongoDB Docker Image](https://hub.docker.com/_/mongodb/).
+
+In this tutorial, `maxIncomingConnections` is set to `100` (default: 65536) through the argument `--maxConns=100`.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-misc-config
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: "Durable"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      args:
+      - --maxConns=100
+      resources:
+        requests:
+          memory: "1Gi"
+          cpu: "250m"
+  terminationPolicy: Halt
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/configuration/mgo-misc-config.yaml
+mongodb.kubedb.com/mgo-misc-config created
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we will see that a pod with the name `mgo-misc-config-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo
+NAME                READY   STATUS    RESTARTS   AGE
+mgo-misc-config-0   1/1     Running   0          14m
+```
+
+Now, check if the database has started with the custom configuration we have provided.
+
+You can connect to this database through [mongo-shell](https://docs.mongodb.com/v3.4/mongo/).
In this tutorial, we are connecting to the MongoDB server from inside the pod. + +```bash +$ kubectl get secrets -n demo mgo-misc-config-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mgo-misc-config-auth -o jsonpath='{.data.\password}' | base64 -d +zyp5hDfRlVOWOyk9 + +$ kubectl exec -it mgo-misc-config-0 -n demo sh + +> mongo admin + +> db.auth("root","zyp5hDfRlVOWOyk9") +1 + +> db._adminCommand( {getCmdLineOpts: 1}) +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 100, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "security" : { + "authorization" : "enabled" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1 +} + +> exit +bye +``` + +You can see the maximum connection is set to `100` in `parsed.net.maxIncomingConnections`. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/mgo-misc-config -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/mgo-misc-config + +kubectl delete ns demo +``` + +If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README). + +## Next Steps + +- [Quickstart MongoDB](/docs/v2024.1.31/guides/mongodb/quickstart/quickstart) with KubeDB Operator. +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/mongodb/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/mongodb/custom-rbac/_index.md new file mode 100755 index 0000000000..ce75744c72 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/custom-rbac/_index.md @@ -0,0 +1,22 @@ +--- +title: Run MongoDB with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: mg-custom-rbac + name: Custom RBAC + parent: mg-mongodb-guides + weight: 31 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/custom-rbac/using-custom-rbac.md b/content/docs/v2024.1.31/guides/mongodb/custom-rbac/using-custom-rbac.md new file mode 100644 index 0000000000..63effbbf52 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/custom-rbac/using-custom-rbac.md @@ -0,0 +1,296 @@ +--- +title: Run MongoDB with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: mg-custom-rbac-quickstart + name: Custom RBAC + parent: mg-custom-rbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom RBAC resources + +KubeDB (version 0.13.0 and higher) supports finer user control over role based access permissions provided to a MongoDB instance. This tutorial will show you how to use KubeDB to run MongoDB instance with custom RBAC resources. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for MongoDB. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in MongoDB crd. If this field is left empty, the KubeDB operator will create a service account name matching MongoDB crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + +If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + +If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. 
Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually.
+
+This guide will show you how to create a custom `ServiceAccount`, `Role`, and `RoleBinding` for a MongoDB instance named `quick-mongodb` to provide the bare minimum access permissions.
+
+## Custom RBAC for MongoDB
+
+At first, let's create a `ServiceAccount` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo my-custom-serviceaccount
+serviceaccount/my-custom-serviceaccount created
+```
+
+It should create a service account.
+
+```yaml
+$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2019-05-30T04:23:39Z"
+  name: my-custom-serviceaccount
+  namespace: demo
+  resourceVersion: "21657"
+  selfLink: /api/v1/namespaces/demo/serviceaccounts/myserviceaccount
+  uid: b2ec2b05-8292-11e9-8d10-080027a8b217
+secrets:
+- name: myserviceaccount-token-t8zxd
+```
+
+Now, we need to create a role that has the necessary access permissions for the MongoDB instance named `quick-mongodb`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/custom-rbac/mg-custom-role.yaml
+role.rbac.authorization.k8s.io/my-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-custom-role
+  namespace: demo
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - mongodb-db
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
+```
+
+This permission is required for MongoDB pods running on PSP enabled clusters.
+
+Now create a `RoleBinding` to bind this `Role` with the already created service account.
+
+```bash
+$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
+rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created
+```
+
+It should bind `my-custom-role` and `my-custom-serviceaccount` successfully.
+
+```yaml
+$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  creationTimestamp: "2019-05-30T04:33:39Z"
+  name: my-custom-rolebinding
+  namespace: demo
+  resourceVersion: "1405"
+  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/rolebindings/my-custom-rolebinding
+  uid: 123afc02-8297-11e9-8d10-080027a8b217
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: my-custom-role
+subjects:
+- kind: ServiceAccount
+  name: my-custom-serviceaccount
+  namespace: demo
+```
+
+Now, create a MongoDB crd setting the `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/custom-rbac/mg-custom-db.yaml
+mongodb.kubedb.com/quick-mongodb created
+```
+
+Below is the YAML for the MongoDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: quick-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we should see that a pod with the name `quick-mongodb-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo quick-mongodb-0
+NAME              READY   STATUS    RESTARTS   AGE
+quick-mongodb-0   1/1     Running   0          28s
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+$ kubectl logs -f -n demo quick-mongodb-0
+about to fork child process, waiting until server is ready for connections.
+forked process: 17
+2019-06-10T08:56:45.259+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
+2019-06-10T08:56:45.263+0000 I CONTROL  [initandlisten] MongoDB starting : pid=17 port=27017 dbpath=/data/db 64-bit host=quick-mongodb-0
+...
+...
+MongoDB init process complete; ready for start up.
+...
+..
+2019-06-10T08:56:49.287+0000 I NETWORK  [thread1] waiting for connections on port 27017
+2019-06-10T08:56:57.179+0000 I NETWORK  [thread1] connection accepted from 127.0.0.1:39214 #1 (1 connection now open)
+```
+
+Once we see `connection accepted` in the log, the database is ready.
+
+## Reusing Service Account
+
+An existing service account can be reused in another MongoDB instance. No new access permission is required to run the new MongoDB instance.
+
+Now, create a MongoDB crd `minute-mongodb` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/custom-rbac/mg-custom-db-two.yaml
+mongodb.kubedb.com/minute-mongodb created
+```
+
+Below is the YAML for the MongoDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: minute-mongodb
+  namespace: demo
+spec:
+  version: "4.4.26"
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we should see that a pod with the name `minute-mongodb-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo minute-mongodb-0
+NAME               READY   STATUS    RESTARTS   AGE
+minute-mongodb-0   1/1     Running   0          50s
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+$ kubectl logs -f -n demo minute-mongodb-0
+about to fork child process, waiting until server is ready for connections.
+forked process: 17
+2019-06-10T08:56:45.259+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
+2019-06-10T08:56:45.263+0000 I CONTROL  [initandlisten] MongoDB starting : pid=17 port=27017 dbpath=/data/db 64-bit host=minute-mongodb-0
+...
+...
+MongoDB init process complete; ready for start up.
+...
+..
+2019-06-10T08:56:49.287+0000 I NETWORK  [thread1] waiting for connections on port 27017
+2019-06-10T08:56:57.179+0000 I NETWORK  [thread1] connection accepted from 127.0.0.1:39214 #1 (1 connection now open)
+```
+
+`connection accepted` in the log signifies that the database is running successfully.
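+
+If you want to double-check that both instances are running under the shared service account, you can inspect the pod specs. Both commands below should print `my-custom-serviceaccount`:
+
+```bash
+$ kubectl get pod -n demo quick-mongodb-0 -o jsonpath='{.spec.serviceAccountName}'
+my-custom-serviceaccount
+
+$ kubectl get pod -n demo minute-mongodb-0 -o jsonpath='{.spec.serviceAccountName}'
+my-custom-serviceaccount
+```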
+ +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/quick-mongodb -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/quick-mongodb + +kubectl patch -n demo mg/minute-mongodb -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/minute-mongodb + +kubectl delete -n demo role my-custom-role +kubectl delete -n demo rolebinding my-custom-rolebinding + +kubectl delete sa -n demo my-custom-serviceaccount + +kubectl delete ns demo +``` + +If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README). + +## Next Steps + +- [Quickstart MongoDB](/docs/v2024.1.31/guides/mongodb/quickstart/quickstart) with KubeDB Operator. +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB instances using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB instance with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB instance with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). + diff --git a/content/docs/v2024.1.31/guides/mongodb/hidden-node/_index.md b/content/docs/v2024.1.31/guides/mongodb/hidden-node/_index.md new file mode 100644 index 0000000000..a85b0aa40d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/hidden-node/_index.md @@ -0,0 +1,22 @@ +--- +title: Run mongodb with Hidden Node +menu: + docs_v2024.1.31: + identifier: mg-hidden + name: Hidden-node + parent: mg-mongodb-guides + weight: 28 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/hidden-node/concept.md b/content/docs/v2024.1.31/guides/mongodb/hidden-node/concept.md new file mode 100644 index 0000000000..e0268a34ea --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/hidden-node/concept.md @@ -0,0 +1,54 @@ +--- +title: MongoDB Hidden-Node Concept +menu: + docs_v2024.1.31: + identifier: mg-hidden-concept + name: Concept + parent: mg-hidden + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Hidden node + +Hidden node is a member of MongoDB ReplicaSet. It maintains a copy of the primary's data set but is invisible to client applications. Hidden members are good for workloads with different usage patterns from the other members in the replica set. 
For example, suppose you are using an inMemory MongoDB database server, but at the same time you want your data to be replicated in persistent storage; in that case, a hidden node is a smart choice.
+
+Hidden members must always be priority 0 members and so cannot become primary. The db.hello() method does not display hidden members. Hidden members, however, may vote in elections.
+
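+In replica set terms, a hidden member is simply a member entry with `hidden: true` and `priority: 0` in the replica set configuration. For illustration (the host name is hypothetical and the output is trimmed), such an entry in the `rs.conf()` output looks like this:
+
+```shell
+replicaset:PRIMARY> rs.conf().members[3]
+{
+	"_id" : 3,
+	"host" : "mongodb-hidden-0.mongodb-pods.demo.svc.cluster.local:27017",
+	"arbiterOnly" : false,
+	"hidden" : true,
+	"priority" : 0,
+	"votes" : 1
+}
+```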

+*(figure: hidden-node)*

+ +# Considerations +There are some important considerations that should be taken care of by the Database administrators when deploying MongoDB. + +## Voting +Hidden members may vote in replica set elections. If you stop a voting hidden member, ensure that the set has an active majority or the primary will step down. [[reference]](https://www.mongodb.com/docs/manual/core/replica-set-hidden-member/#voting) + +## Multiple hosts +Always try to avoid scenarios where hidden-node is deployed on the same host as the primary of the replicaset. + +## Write concern +As non-voting replica set members (i.e. members[n].votes is 0) cannot contribute to acknowledge write operations with majority write concern, hidden-members have to be voting capable in majority write-concern scenario. + + +## Next Steps + +- [Deploy MongoDB ReplicaSet with Hidden-node](/docs/v2024.1.31/guides/mongodb/hidden-node/replicaset) using KubeDB. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/hidden-node/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/hidden-node/replicaset.md new file mode 100644 index 0000000000..145fbc6d3d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/hidden-node/replicaset.md @@ -0,0 +1,996 @@ +--- +title: MongoDB ReplicaSet with Hidden-node +menu: + docs_v2024.1.31: + identifier: mg-hidden-replicaset + name: ReplicaSet with Hidden node + parent: mg-hidden + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - MongoDB ReplicaSet with Hidden-node + +This tutorial will show you how to use KubeDB to run a MongoDB ReplicaSet with hidden-node. + +## Before You Begin + +Before proceeding: + +- Read [mongodb hidden-node concept](/docs/v2024.1.31/guides/mongodb/hidden-node/concept) to get the concept about MongoDB Hidden node. + +- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy MongoDB ReplicaSet with Hidden-node + +To deploy a MongoDB ReplicaSet, user have to specify `spec.replicaSet` option in `Mongodb` CRD. + +The following is an example of a `Mongodb` object which creates MongoDB ReplicaSet of three members. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-rs-hid
+  namespace: demo
+spec:
+  version: "percona-4.4.10"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "600m"
+          memory: "600Mi"
+  replicas: 3
+  storageEngine: inMemory
+  storageType: Ephemeral
+  ephemeralStorage:
+    sizeLimit: "900Mi"
+  hidden:
+    podTemplate:
+      spec:
+        resources:
+          requests:
+            cpu: "400m"
+            memory: "400Mi"
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+      - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+  terminationPolicy: WipeOut
+```
+> Note: inMemory databases are only supported for the Percona variants of MongoDB.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/hidden-node/replicaset.yaml
+mongodb.kubedb.com/mongo-rs-hid created
+```
+
+Here,
+
+- `spec.replicaSet` represents the configuration for the replicaset.
+  - `name` denotes the name of the mongodb replicaset.
+- `spec.keyFileSecret` (optional) is the name of a secret that contains a keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replicaset. Only mongod instances with the correct keyfile can join the replica set. _Users can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to learn how to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate one itself.
+- `spec.replicas` denotes the number of general members in the `replicaset` mongodb replica set.
+- `spec.podTemplate` denotes the specifications of all the 3 general replicaset members.
+- `spec.storageEngine` is set to `inMemory`, & `spec.storageType` to `Ephemeral`.
+- `spec.ephemeralStorage` holds the emptyDir volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this ephemeral storage configuration.
+- `spec.hidden` denotes the hidden-node spec of the deployed MongoDB CRD. There are four fields under it:
+  - `spec.hidden.podTemplate` holds the hidden-node podSpec. A `null` value instructs the kubedb operator to use the default hidden-node podTemplate.
+  - `spec.hidden.configSecret` is an optional field to provide a custom configuration file for the database (i.e. mongod.conf). If specified, this file will be used as the configuration file; otherwise, the default configuration file will be used.
+  - `spec.hidden.replicas` holds the number of hidden-nodes in the replica set.
+  - `spec.hidden.storage` specifies the StorageClass of the PVC dynamically allocated to store data for these hidden-nodes. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+
+The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create two new StatefulSets (one for replicas & one for hidden-nodes) and a Service with the matching MongoDB object name. This service will always point to the primary of the replicaset.
KubeDB operator will also create a governing service for the pods of those two StatefulSets with the name `-pods`. + +```bash +$ kubectl dba describe mg -n demo mongo-rs-hid +Name: mongo-rs-hid +Namespace: demo +CreationTimestamp: Mon, 31 Oct 2022 11:03:50 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-rs-hid","namespace":"demo"},"spec":{"ephemeralStorage":... +Replicas: 3 total +Status: Ready +StorageType: Ephemeral +No volumes. +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: mongo-rs-hid + CreationTimestamp: Mon, 31 Oct 2022 11:03:50 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-rs-hid + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + mongodb.kubedb.com/node.type=replica + Annotations: + Replicas: 824644499032 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +StatefulSet: + Name: mongo-rs-hid-hidden + CreationTimestamp: Mon, 31 Oct 2022 11:04:50 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-rs-hid + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + mongodb.kubedb.com/node.type=hidden + Annotations: + Replicas: 824646223576 desired | 2 total + Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mongo-rs-hid + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-rs-hid + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.197.33 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.53:27017 + +Service: + Name: mongo-rs-hid-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-rs-hid + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.53:27017,10.244.0.54:27017,10.244.0.55:27017 + 2 more... 
+ +Auth Secret: + Name: mongo-rs-hid-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mongo-rs-hid + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-rs-hid","namespace":"demo"},"spec":{"ephemeralStorage":{"sizeLimit":"900Mi"},"hidden":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"400m","memory":"400Mi"}}}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"standard"}},"podTemplate":{"spec":{"resources":{"requests":{"cpu":"600m","memory":"600Mi"}}}},"replicaSet":{"name":"replicaset"},"replicas":3,"storageEngine":"inMemory","storageType":"Ephemeral","terminationPolicy":"WipeOut","version":"percona-4.4.10"}} + + Creation Timestamp: 2022-10-31T05:05:38Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mongo-rs-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mongo-rs-hid + Namespace: demo + Spec: + Client Config: + Service: + Name: mongo-rs-hid + Port: 27017 + Scheme: mongodb + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: replicaset/mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc:27017,mongo-rs-hid-1.mongo-rs-hid-pods.demo.svc:27017,mongo-rs-hid-2.mongo-rs-hid-pods.demo.svc:27017,mongo-rs-hid-hidden-0.mongo-rs-hid-pods.demo.svc:27017,mongo-rs-hid-hidden-1.mongo-rs-hid-pods.demo.svc:27017 + Stash: + Addon: + Backup Task: + Name: mongodb-backup-4.4.6 + Restore Task: + Name: mongodb-restore-4.4.6 + Secret: + Name: mongo-rs-hid-auth + Type: kubedb.com/mongodb + Version: 4.4.10 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PhaseChanged 12m MongoDB operator Phase changed from to Provisioning. + Normal Successful 12m MongoDB operator Successfully created governing service + Normal Successful 12m MongoDB operator Successfully created Primary Service + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched MongoDB + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 11m MongoDB operator Successfully patched MongoDB + Normal Successful 11m MongoDB operator Successfully created appbinding + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal PhaseChanged 11m MongoDB operator Phase changed from Provisioning to Ready. 
+ Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 11m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 11m MongoDB operator Successfully patched MongoDB + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 7m MongoDB operator Successfully patched MongoDB + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 7m MongoDB operator Successfully patched MongoDB + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid + Normal Successful 7m MongoDB operator Successfully patched StatefulSet demo/mongo-rs-hid-hidden + Normal Successful 7m MongoDB operator Successfully patched MongoDB + + + +$ kubectl get statefulset -n demo +NAME READY AGE +mongo-rs-hid 3/3 13m +mongo-rs-hid-hidden 2/2 12m + + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +datadir-mongo-rs-hid-hidden-0 Bound pvc-e8c2a3b3-0c47-453f-8a5a-40d7dcb5b4d7 2Gi RWO standard 13m +datadir-mongo-rs-hid-hidden-1 Bound pvc-7b752799-b6b9-43cf-9aa7-d39a2577216c 2Gi RWO standard 13m + + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-7b752799-b6b9-43cf-9aa7-d39a2577216c 2Gi RWO Delete Bound demo/datadir-mongo-rs-hid-hidden-1 standard 13m +pvc-e8c2a3b3-0c47-453f-8a5a-40d7dcb5b4d7 2Gi RWO Delete Bound demo/datadir-mongo-rs-hid-hidden-0 standard 13m + + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +mongo-rs-hid ClusterIP 10.96.197.33 27017/TCP 14m +mongo-rs-hid-pods ClusterIP None 27017/TCP 14m +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. 
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mongo-rs-hid -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-rs-hid","namespace":"demo"},"spec":{"ephemeralStorage":{"sizeLimit":"900Mi"},"hidden":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"400m","memory":"400Mi"}}}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"standard"}},"podTemplate":{"spec":{"resources":{"requests":{"cpu":"600m","memory":"600Mi"}}}},"replicaSet":{"name":"replicaset"},"replicas":3,"storageEngine":"inMemory","storageType":"Ephemeral","terminationPolicy":"WipeOut","version":"percona-4.4.10"}} + creationTimestamp: "2022-10-31T05:03:50Z" + finalizers: + - kubedb.com + generation: 3 + name: mongo-rs-hid + namespace: demo + resourceVersion: "716264" + uid: 428fa2bd-db5a-4bf5-a4ad-174fd0d7ade2 +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: mongo-rs-hid-auth + autoOps: {} + clusterAuthMode: keyFile + coordinator: + resources: {} + ephemeralStorage: + sizeLimit: 900Mi + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + hidden: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-rs-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-rs-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 400Mi + requests: + cpu: 400m + memory: 400Mi + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi + storageClassName: standard + keyFileSecret: + name: mongo-rs-hid-key + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-rs-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + 
app.kubernetes.io/instance: mongo-rs-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 600Mi + requests: + cpu: 600m + memory: 600Mi + serviceAccountName: mongo-rs-hid + replicaSet: + name: replicaset + replicas: 3 + sslMode: disabled + storageEngine: inMemory + storageType: Ephemeral + terminationPolicy: WipeOut + version: percona-4.4.10 +status: + conditions: + - lastTransitionTime: "2022-10-31T05:03:50Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mongo-rs-hid' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-10-31T05:05:38Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-10-31T05:05:00Z" + message: 'The MongoDB: demo/mongo-rs-hid is accepting client requests.' + observedGeneration: 3 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-10-31T05:05:00Z" + message: 'The MongoDB: demo/mongo-rs-hid is ready.' + observedGeneration: 3 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-10-31T05:05:38Z" + message: 'The MongoDB: demo/mongo-rs-hid is successfully provisioned.' + observedGeneration: 3 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 3 + phase: Ready +``` + +Please note that KubeDB operator has created a new Secret called `mongo-rs-hid-auth` *(format: {mongodb-object-name}-auth)* for storing the password for `mongodb` superuser. This secret contains a `username` key which contains the *username* for MongoDB superuser and a `password` key which contains the *password* for MongoDB superuser. + +If you want to use custom or existing secret please specify that when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys containing data `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret). + +## Redundancy and Data Availability + +Now, you can connect to this database through [mongo-rs-hid](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we will insert document on the primary member, and we will see if the data becomes available on secondary members. + +At first, insert data inside primary member `rs0:PRIMARY`. 
+ +```bash +$ kubectl get secrets -n demo mongo-rs-hid-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mongo-rs-hid-auth -o jsonpath='{.data.\password}' | base64 -d +OX4yb!IFm;~yAHkD + +$ kubectl exec -it mongo-rs-hid-0 -n demo bash + +bash-4.4$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD' +Percona Server for MongoDB shell version v4.4.10-11 +connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb +Implicit session: session { "id" : UUID("11890d64-37da-43dd-acb6-0f36a3678875") } +Percona Server for MongoDB server version: v4.4.10-11 +Welcome to the Percona Server for MongoDB shell. +For interactive help, type "help". +For more comprehensive documentation, see + https://www.percona.com/doc/percona-server-for-mongodb +Questions? Try the support group + https://www.percona.com/forums/questions-discussions/percona-server-for-mongodb +replicaset:PRIMARY> +replicaset:PRIMARY> +replicaset:PRIMARY> rs.status() +{ + "set" : "replicaset", + "date" : ISODate("2022-10-31T05:25:19.148Z"), + "myState" : 1, + "term" : NumberLong(1), + "syncSourceHost" : "", + "syncSourceId" : -1, + "heartbeatIntervalMillis" : NumberLong(2000), + "majorityVoteCount" : 3, + "writeMajorityCount" : 3, + "votingMembersCount" : 5, + "writableVotingMembersCount" : 5, + "optimes" : { + "lastCommittedOpTime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "lastCommittedWallTime" : ISODate("2022-10-31T05:25:12.590Z"), + "readConcernMajorityOpTime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "readConcernMajorityWallTime" : ISODate("2022-10-31T05:25:12.590Z"), + "appliedOpTime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "durableOpTime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "lastAppliedWallTime" : ISODate("2022-10-31T05:25:12.590Z"), + "lastDurableWallTime" : ISODate("2022-10-31T05:25:12.590Z") + }, + "lastStableRecoveryTimestamp" : Timestamp(1667193912, 1), + "electionCandidateMetrics" : { + "lastElectionReason" : "electionTimeout", + "lastElectionDate" : ISODate("2022-10-31T05:04:02.548Z"), + "electionTerm" : NumberLong(1), + "lastCommittedOpTimeAtElection" : { + "ts" : Timestamp(0, 0), + "t" : NumberLong(-1) + }, + "lastSeenOpTimeAtElection" : { + "ts" : Timestamp(1667192642, 1), + "t" : NumberLong(-1) + }, + "numVotesNeeded" : 1, + "priorityAtElection" : 1, + "electionTimeoutMillis" : NumberLong(10000), + "newTermStartDate" : ISODate("2022-10-31T05:04:02.552Z"), + "wMajorityWriteAvailabilityDate" : ISODate("2022-10-31T05:04:02.553Z") + }, + "members" : [ + { + "_id" : 0, + "name" : "mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 1287, + "optime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2022-10-31T05:25:12Z"), + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1667192642, 2), + "electionDate" : ISODate("2022-10-31T05:04:02Z"), + "configVersion" : 5, + "configTerm" : 1, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mongo-rs-hid-1.mongo-rs-hid-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1257, + "optime" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1667193912, 1), + "t" : NumberLong(1) + }, + "optimeDate" : 
ISODate("2022-10-31T05:25:12Z"),
+            "optimeDurableDate" : ISODate("2022-10-31T05:25:12Z"),
+            "lastHeartbeat" : ISODate("2022-10-31T05:25:18.122Z"),
+            "lastHeartbeatRecv" : ISODate("2022-10-31T05:25:18.120Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "syncSourceId" : 0,
+            "infoMessage" : "",
+            "configVersion" : 5,
+            "configTerm" : 1
+        },
+        {
+            "_id" : 2,
+            "name" : "mongo-rs-hid-2.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 2,
+            "stateStr" : "SECONDARY",
+            "uptime" : 1237,
+            "optime" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDurable" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDate" : ISODate("2022-10-31T05:25:12Z"),
+            "optimeDurableDate" : ISODate("2022-10-31T05:25:12Z"),
+            "lastHeartbeat" : ISODate("2022-10-31T05:25:18.118Z"),
+            "lastHeartbeatRecv" : ISODate("2022-10-31T05:25:18.119Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "syncSourceId" : 0,
+            "infoMessage" : "",
+            "configVersion" : 5,
+            "configTerm" : 1
+        },
+        {
+            "_id" : 3,
+            "name" : "mongo-rs-hid-hidden-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 2,
+            "stateStr" : "SECONDARY",
+            "uptime" : 1213,
+            "optime" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDurable" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDate" : ISODate("2022-10-31T05:25:12Z"),
+            "optimeDurableDate" : ISODate("2022-10-31T05:25:12Z"),
+            "lastHeartbeat" : ISODate("2022-10-31T05:25:18.118Z"),
+            "lastHeartbeatRecv" : ISODate("2022-10-31T05:25:18.119Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "mongo-rs-hid-2.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "syncSourceId" : 2,
+            "infoMessage" : "",
+            "configVersion" : 5,
+            "configTerm" : 1
+        },
+        {
+            "_id" : 4,
+            "name" : "mongo-rs-hid-hidden-1.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "health" : 1,
+            "state" : 2,
+            "stateStr" : "SECONDARY",
+            "uptime" : 1187,
+            "optime" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDurable" : {
+                "ts" : Timestamp(1667193912, 1),
+                "t" : NumberLong(1)
+            },
+            "optimeDate" : ISODate("2022-10-31T05:25:12Z"),
+            "optimeDurableDate" : ISODate("2022-10-31T05:25:12Z"),
+            "lastHeartbeat" : ISODate("2022-10-31T05:25:18.119Z"),
+            "lastHeartbeatRecv" : ISODate("2022-10-31T05:25:18.008Z"),
+            "pingMs" : NumberLong(0),
+            "lastHeartbeatMessage" : "",
+            "syncSourceHost" : "mongo-rs-hid-2.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "syncSourceId" : 2,
+            "infoMessage" : "",
+            "configVersion" : 5,
+            "configTerm" : 1
+        }
+    ],
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1667193912, 1),
+        "signature" : {
+            "hash" : BinData(0,"EPi/BjSrT3iN3lqSAFcCynAqlP0="),
+            "keyId" : NumberLong("7160537873521836037")
+        }
+    },
+    "operationTime" : Timestamp(1667193912, 1)
+}
+```
+
+Here, the hidden nodes' `stateStr` shows `SECONDARY`. To verify that they have really been added as hidden members, run the `rs.conf()` command and look for the `"hidden" : true` setting.
+
+```shell
+replicaset:PRIMARY> rs.conf()
+{
+    "_id" : "replicaset",
+    "version" : 5,
+    "term" : 1,
+    "protocolVersion" : NumberLong(1),
+    "writeConcernMajorityJournalDefault" : false,
+    "members" : [
+        {
+            "_id" : 0,
+            "host" : "mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "arbiterOnly" : false,
+            "buildIndexes" : true,
+            "hidden" : false,
+            "priority" : 1,
+            "tags" : {
+
+            },
+            "slaveDelay" : NumberLong(0),
+            "votes" : 1
+        },
+        {
+            "_id" : 1,
+            "host" : "mongo-rs-hid-1.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "arbiterOnly" : false,
+            "buildIndexes" : true,
+            "hidden" : false,
+            "priority" : 1,
+            "tags" : {
+
+            },
+            "slaveDelay" : NumberLong(0),
+            "votes" : 1
+        },
+        {
+            "_id" : 2,
+            "host" : "mongo-rs-hid-2.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "arbiterOnly" : false,
+            "buildIndexes" : true,
+            "hidden" : false,
+            "priority" : 1,
+            "tags" : {
+
+            },
+            "slaveDelay" : NumberLong(0),
+            "votes" : 1
+        },
+        {
+            "_id" : 3,
+            "host" : "mongo-rs-hid-hidden-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "arbiterOnly" : false,
+            "buildIndexes" : true,
+            "hidden" : true,
+            "priority" : 0,
+            "tags" : {
+
+            },
+            "slaveDelay" : NumberLong(0),
+            "votes" : 1
+        },
+        {
+            "_id" : 4,
+            "host" : "mongo-rs-hid-hidden-1.mongo-rs-hid-pods.demo.svc.cluster.local:27017",
+            "arbiterOnly" : false,
+            "buildIndexes" : true,
+            "hidden" : true,
+            "priority" : 0,
+            "tags" : {
+
+            },
+            "slaveDelay" : NumberLong(0),
+            "votes" : 1
+        }
+    ],
+    "settings" : {
+        "chainingAllowed" : true,
+        "heartbeatIntervalMillis" : 2000,
+        "heartbeatTimeoutSecs" : 10,
+        "electionTimeoutMillis" : 10000,
+        "catchUpTimeoutMillis" : -1,
+        "catchUpTakeoverDelayMillis" : 30000,
+        "getLastErrorModes" : {
+
+        },
+        "getLastErrorDefaults" : {
+            "w" : 1,
+            "wtimeout" : 0
+        },
+        "replicaSetId" : ObjectId("635f574270b72a363804832f")
+    }
+}
+```
+
+```bash
+replicaset:PRIMARY> rs.isMaster().primary
+mongo-rs-hid-0.mongo-rs-hid-pods.demo.svc.cluster.local:27017
+
+replicaset:PRIMARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+
+replicaset:PRIMARY> use admin
+switched to db admin
+replicaset:PRIMARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("5473e955-a97d-4c8f-a4fe-a82cbe7183f4"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ],
+    "mechanisms" : [
+        "SCRAM-SHA-1",
+        "SCRAM-SHA-256"
+    ]
+}
+
+replicaset:PRIMARY> use mydb
+switched to db mydb
+replicaset:PRIMARY> db.songs.insert({"pink floyd": "shine on you crazy diamond"})
+WriteResult({ "nInserted" : 1 })
+replicaset:PRIMARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("635f5df01804db954f81276e"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+
+replicaset:PRIMARY> exit
+bye
+```
+
+Now, check the redundancy and data availability on the secondary members.
+We will exec into `mongo-rs-hid-hidden-0` (which is a hidden node right now) to check the data availability.
+
+```bash
+$ kubectl exec -it mongo-rs-hid-hidden-0 -n demo bash
+bash-4.4$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+Percona Server for MongoDB server version: v4.4.10-11
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.10
+Welcome to the MongoDB shell.
+
+replicaset:SECONDARY> rs.slaveOk()
+WARNING: slaveOk() is deprecated and may be removed in the next major release. Please use secondaryOk() instead.
+
+replicaset:SECONDARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+mydb           0.000GB
+
+replicaset:SECONDARY> use admin
+switched to db admin
+replicaset:SECONDARY> show users
+{
+    "_id" : "admin.root",
+    "userId" : UUID("5473e955-a97d-4c8f-a4fe-a82cbe7183f4"),
+    "user" : "root",
+    "db" : "admin",
+    "roles" : [
+        {
+            "role" : "root",
+            "db" : "admin"
+        }
+    ],
+    "mechanisms" : [
+        "SCRAM-SHA-1",
+        "SCRAM-SHA-256"
+    ]
+}
+
+replicaset:SECONDARY> use mydb
+switched to db mydb
+
+replicaset:SECONDARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("635f5df01804db954f81276e"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+
+replicaset:SECONDARY> exit
+bye
+```
+
+## Automatic Failover
+
+To test automatic failover, we will force the primary member to restart. As the primary member (`pod`) becomes unavailable, the remaining members will elect a new primary through an election.
+
+```bash
+$ kubectl get pods -n demo
+NAME                    READY   STATUS    RESTARTS   AGE
+mongo-rs-hid-0          2/2     Running   0          34m
+mongo-rs-hid-1          2/2     Running   0          33m
+mongo-rs-hid-2          2/2     Running   0          33m
+mongo-rs-hid-hidden-0   1/1     Running   0          33m
+mongo-rs-hid-hidden-1   1/1     Running   0          32m
+
+$ kubectl delete pod -n demo mongo-rs-hid-0
+pod "mongo-rs-hid-0" deleted
+
+$ kubectl get pods -n demo
+NAME                    READY   STATUS        RESTARTS   AGE
+mongo-rs-hid-0          2/2     Terminating   0          34m
+mongo-rs-hid-1          2/2     Running       0          33m
+mongo-rs-hid-2          2/2     Running       0          33m
+mongo-rs-hid-hidden-0   1/1     Running       0          33m
+mongo-rs-hid-hidden-1   1/1     Running       0          32m
+```
+
+Now verify the automatic failover. Let's exec into the `mongo-rs-hid-0` pod once it is back:
+
+```bash
+$ kubectl exec -it mongo-rs-hid-0 -n demo bash
+bash-4.4:/$ mongo admin -u root -p 'OX4yb!IFm;~yAHkD'
+Percona Server for MongoDB server version: v4.4.10-11
+connecting to: mongodb://127.0.0.1:27017/admin
+MongoDB server version: 4.4.10
+Welcome to the MongoDB shell.
+
+replicaset:SECONDARY> rs.isMaster().primary
+mongo-rs-hid-1.mongo-rs-hid-pods.demo.svc.cluster.local:27017
+
+# Also verify data persistence
+replicaset:SECONDARY> rs.slaveOk()
+replicaset:SECONDARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+mydb           0.000GB
+
+replicaset:SECONDARY> use mydb
+switched to db mydb
+
+replicaset:SECONDARY> db.songs.find().pretty()
+{
+    "_id" : ObjectId("635f5df01804db954f81276e"),
+    "pink floyd" : "shine on you crazy diamond"
+}
+```
+We could also terminate the hidden nodes in a similar fashion and verify the automatic failover.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo mg/mongo-rs-hid
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Deploy MongoDB shard [with Hidden-node](/docs/v2024.1.31/guides/mongodb/hidden-node/sharding).
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/hidden-node/sharding.md b/content/docs/v2024.1.31/guides/mongodb/hidden-node/sharding.md
new file mode 100644
index 0000000000..e26207503b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/hidden-node/sharding.md
@@ -0,0 +1,972 @@
+---
+title: MongoDB Sharding Guide with Hidden node
+menu:
+  docs_v2024.1.31:
+    identifier: mg-hidden-sharding
+    name: Sharding with Hidden node
+    parent: mg-hidden
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Sharding with Hidden-node
+
+This tutorial will show you how to use KubeDB to run a sharded MongoDB cluster with hidden nodes.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [mongodb hidden-node concept](/docs/v2024.1.31/guides/mongodb/hidden-node/concept) to learn about MongoDB Replica Set Hidden-node.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Sharded MongoDB Cluster
+
+To deploy MongoDB Sharding, you have to specify the `spec.shardTopology` option in the `MongoDB` CRD.
+
+The following is an example of a `MongoDB` object which creates a MongoDB Sharding cluster with three types of members.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-sh-hid
+  namespace: demo
+spec:
+  version: "percona-4.4.10"
+  shardTopology:
+    configServer:
+      replicas: 3
+      ephemeralStorage: {}
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      ephemeralStorage: {}
+  storageEngine: inMemory
+  storageType: Ephemeral
+  hidden:
+    podTemplate:
+      spec:
+        resources:
+          requests:
+            cpu: "400m"
+            memory: "400Mi"
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+      - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/hidden-node/sharding.yaml
+mongodb.kubedb.com/mongo-sh-hid created
+```
+
+Here,
+
+- `spec.shardTopology` represents the topology configuration for sharding.
+  - `shard` represents configuration for the Shard component of mongodb.
+    - `shards` represents the number of shards for a mongodb deployment.
Each shard is deployed as a [replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replication_concept).
+    - `replicas` represents the number of replicas of each shard replicaset.
+    - `prefix` represents the prefix of each shard node.
+    - `configSecret` is an optional field to provide a custom configuration file for shards (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of sharding. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `configServer` represents configuration for the ConfigServer component of mongodb.
+    - `replicas` represents the number of replicas for the configServer replicaset. Here, configServer is deployed as a replicaset of mongodb.
+    - `prefix` represents the prefix of configServer nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for the configServer (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+    - `storage` specifies the PVC spec for each node of the configServer. You can specify any StorageClass available in your cluster with appropriate resource requests.
+  - `mongos` represents configuration for the Mongos component of mongodb. `Mongos` instances run as stateless components (deployment).
+    - `replicas` represents the number of replicas of the `Mongos` instance. Here, Mongos is not deployed as a replicaset.
+    - `prefix` represents the prefix of mongos nodes.
+    - `configSecret` is an optional field to provide a custom configuration file for mongos (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, a default configuration file will be used.
+    - `podTemplate` is an optional configuration for pods.
+- `spec.keyFileSecret` (optional) is a secret name that contains a keyfile (a random string) against the `key.txt` key. Each mongod instance in the replica set and `shardTopology` uses the contents of the keyfile as the shared password for authenticating the other members in the replicaset. Only mongod instances with the correct keyfile can join the replica set. _You can provide the `keyFileSecret` by creating a secret with the key `key.txt`. See [here](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/#create-a-keyfile) to create the string for `keyFileSecret`._ If `keyFileSecret` is not given, the KubeDB operator will generate a `keyFileSecret` itself.
+- `spec.storageEngine` is set to `inMemory`, and `spec.storageType` to `Ephemeral`.
+- `spec.shardTopology.(configServer/shard).ephemeralStorage` holds the emptyDir volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this ephemeral storage configuration.
+- `spec.hidden` denotes the hidden-node spec of the deployed MongoDB CRD. There are four fields under it:
+  - `spec.hidden.podTemplate` holds the hidden-node podSpec. A `null` value of it instructs the KubeDB operator to use the default hidden-node podTemplate.
+  - `spec.hidden.configSecret` is an optional field to provide a custom configuration file for the database (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise, the default configuration file will be used.
+  - `spec.hidden.replicas` holds the number of hidden nodes in the replica set.
+  - `spec.hidden.storage` specifies the StorageClass of the PVC dynamically allocated to store data for these hidden nodes. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator creates some new StatefulSets: one for mongos, one for the configServer, and one each for every shard and its hidden nodes. It creates a primary Service with the matching MongoDB object name. The KubeDB operator will also create governing services for the StatefulSets with the name `{statefulset-name}-pods`.
+
+Current state of the MongoDB `mongo-sh-hid` — all the node types (`Shard`, `ConfigServer` & `Mongos`) are deployed as StatefulSets:
+
+```bash
+$ kubectl get mg,sts,svc,pvc,pv -n demo
+NAME                              VERSION          STATUS   AGE
+mongodb.kubedb.com/mongo-sh-hid   percona-4.4.10   Ready    4m46s
+
+NAME                                          READY   AGE
+statefulset.apps/mongo-sh-hid-configsvr       3/3     4m46s
+statefulset.apps/mongo-sh-hid-mongos          2/2     2m52s
+statefulset.apps/mongo-sh-hid-shard0          3/3     4m46s
+statefulset.apps/mongo-sh-hid-shard0-hidden   2/2     3m45s
+statefulset.apps/mongo-sh-hid-shard1          3/3     4m46s
+statefulset.apps/mongo-sh-hid-shard1-hidden   2/2     3m36s
+
+NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
+service/mongo-sh-hid                  ClusterIP   10.96.57.155   <none>        27017/TCP   4m46s
+service/mongo-sh-hid-configsvr-pods   ClusterIP   None           <none>        27017/TCP   4m46s
+service/mongo-sh-hid-mongos-pods      ClusterIP   None           <none>        27017/TCP   4m46s
+service/mongo-sh-hid-shard0-pods      ClusterIP   None           <none>        27017/TCP   4m46s
+service/mongo-sh-hid-shard1-pods      ClusterIP   None           <none>        27017/TCP   4m46s
+
+NAME                                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mongo-sh-hid-shard0-hidden-0   Bound    pvc-9a4fd907-8225-4ed2-90e3-8ca43c0521d2   2Gi        RWO            standard       3m45s
+persistentvolumeclaim/datadir-mongo-sh-hid-shard0-hidden-1   Bound    pvc-b77cd5d1-d5c1-433b-90dd-3784c5207cd6   2Gi        RWO            standard       3m23s
+persistentvolumeclaim/datadir-mongo-sh-hid-shard1-hidden-0   Bound    pvc-61712454-2038-4692-a6ea-88685d7f34e1   2Gi        RWO            standard       3m36s
+persistentvolumeclaim/datadir-mongo-sh-hid-shard1-hidden-1   Bound    pvc-489fb5c9-edee-4cf9-985f-48e04f14f695   2Gi        RWO            standard       3m14s
+
+NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                        STORAGECLASS   REASON   AGE
+persistentvolume/pvc-489fb5c9-edee-4cf9-985f-48e04f14f695   2Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-hid-shard1-hidden-1    standard                3m11s
+persistentvolume/pvc-61712454-2038-4692-a6ea-88685d7f34e1   2Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-hid-shard1-hidden-0    standard                3m33s
+persistentvolume/pvc-9a4fd907-8225-4ed2-90e3-8ca43c0521d2   2Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-hid-shard0-hidden-0    standard                3m42s
+persistentvolume/pvc-b77cd5d1-d5c1-433b-90dd-3784c5207cd6   2Gi        RWO            Delete           Bound    demo/datadir-mongo-sh-hid-shard0-hidden-1    standard                3m20s
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. It has also defaulted some fields of the CRD object.
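+
+If you just want to check or wait for readiness rather than inspect the whole object, a sketch like the following should work (it relies on the `status.phase` field and the `Ready` condition that appear in the status section shown below):
+
+```bash
+# Print only the phase of the MongoDB object
+$ kubectl get mg -n demo mongo-sh-hid -o jsonpath='{.status.phase}'
+Ready
+
+# Or block until the Ready condition is met
+$ kubectl wait --for=condition=Ready mg/mongo-sh-hid -n demo --timeout=10m
+mongodb.kubedb.com/mongo-sh-hid condition met
+```
+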
Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mongo-sh-hid -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mongo-sh-hid","namespace":"demo"},"spec":{"hidden":{"podTemplate":{"spec":{"resources":{"requests":{"cpu":"400m","memory":"400Mi"}}}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"standard"}},"shardTopology":{"configServer":{"ephemeralStorage":{},"replicas":3},"mongos":{"replicas":2},"shard":{"ephemeralStorage":{},"replicas":3,"shards":2}},"storageEngine":"inMemory","storageType":"Ephemeral","terminationPolicy":"WipeOut","version":"percona-4.4.10"}} + creationTimestamp: "2022-10-31T05:59:43Z" + finalizers: + - kubedb.com + generation: 3 + name: mongo-sh-hid + namespace: demo + resourceVersion: "721561" + uid: 20f66240-669d-4556-b729-f6d0956a9241 +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: mongo-sh-hid-auth + autoOps: {} + clusterAuthMode: keyFile + coordinator: + resources: {} + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + hidden: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-hid-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-hid-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 400Mi + requests: + cpu: 400m + memory: 400Mi + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 2Gi + storageClassName: standard + keyFileSecret: + name: mongo-sh-hid-key + shardTopology: + configServer: + ephemeralStorage: {} + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + 
mongodb.kubedb.com/node.config: mongo-sh-hid-configsvr + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.config: mongo-sh-hid-configsvr + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh-hid + replicas: 3 + mongos: + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-hid-mongos + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.mongos: mongo-sh-hid-mongos + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + lifecycle: + preStop: + exec: + command: + - bash + - -c + - 'mongo admin --username=$MONGO_INITDB_ROOT_USERNAME --password=$MONGO_INITDB_ROOT_PASSWORD + --quiet --eval "db.adminCommand({ shutdown: 1 })" || true' + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh-hid + replicas: 2 + shard: + ephemeralStorage: {} + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid 
+ app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-hid-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mongo-sh-hid + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + mongodb.kubedb.com/node.shard: mongo-sh-hid-shard${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then + \n exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mongo-sh-hid + replicas: 3 + shards: 2 + sslMode: disabled + storageEngine: inMemory + storageType: Ephemeral + terminationPolicy: WipeOut + version: percona-4.4.10 +status: + conditions: + - lastTransitionTime: "2022-10-31T05:59:43Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mongo-sh-hid' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-10-31T06:02:05Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-10-31T06:01:47Z" + message: 'The MongoDB: demo/mongo-sh-hid is accepting client requests.' + observedGeneration: 3 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-10-31T06:01:47Z" + message: 'The MongoDB: demo/mongo-sh-hid is ready.' + observedGeneration: 3 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-10-31T06:02:05Z" + message: 'The MongoDB: demo/mongo-sh-hid is successfully provisioned.' + observedGeneration: 3 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 3 + phase: Ready + +``` + +Please note that KubeDB operator has created a new Secret called `mongo-sh-hid-auth` _(format: {mongodb-object-name}-auth)_ for storing the password for `mongodb` superuser. This secret contains a `username` key which contains the _username_ for MongoDB superuser and a `password` key which contains the _password_ for MongoDB superuser. + +If you want to use custom or existing secret please specify that when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys containing data `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret). 
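+
+For example, a minimal sketch of creating such a secret manually might look like the following (the secret name `my-mongo-auth` and the password are placeholders, not values used elsewhere in this tutorial):
+
+```bash
+# The secret must contain exactly these two keys: username and password
+$ kubectl create secret generic my-mongo-auth -n demo \
+    --from-literal=username=root \
+    --from-literal=password='MyS3cretP@ssw0rd'
+secret/my-mongo-auth created
+```
+
+The secret should exist before the MongoDB object that references it is created; otherwise the operator generates one itself.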
+ +## Connection Information + +- Hostname/address: you can use any of these + - Service: `mongo-sh-hid.demo` + - Pod IP: (`$ kubectl get po -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-hid-mongos -o yaml | grep podIP`) +- Port: `27017` +- Username: Run following command to get _username_, + + ```bash + $ kubectl get secrets -n demo mongo-sh-hid-auth -o jsonpath='{.data.\username}' | base64 -d + root + ``` + +- Password: Run the following command to get _password_, + + ```bash + $ kubectl get secrets -n demo mongo-sh-hid-auth -o jsonpath='{.data.\password}' | base64 -d + 6&UiN5;qq)Tnai=7 + ``` + +Now, you can connect to this database through [mongo-shell](https://docs.mongodb.com/v4.2/mongo/). + +## Sharded Data + +In this tutorial, we will insert sharded and unsharded document, and we will see if the data actually sharded across cluster or not. + +```bash +$ kubectl get pod -n demo -l mongodb.kubedb.com/node.mongos=mongo-sh-hid-mongos +NAME READY STATUS RESTARTS AGE +mongo-sh-hid-mongos-0 1/1 Running 0 6m38s +mongo-sh-hid-mongos-1 1/1 Running 0 6m20s + +$ kubectl exec -it mongo-sh-hid-mongos-0 -n demo bash + +mongodb@mongo-sh-mongos-0:/$ mongo admin -u root -p '6&UiN5;qq)Tnai=7' +Percona Server for MongoDB shell version v4.4.10-11 +connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb +Implicit session: session { "id" : UUID("e6979884-81b0-41c9-9745-50654f6fb39b") } +Percona Server for MongoDB server version: v4.4.10-11 +Welcome to the Percona Server for MongoDB shell. +For interactive help, type "help". +For more comprehensive documentation, see + https://www.percona.com/doc/percona-server-for-mongodb +Questions? Try the support group + https://www.percona.com/forums/questions-discussions/percona-server-for-mongodb +mongos> +``` + +To detect if the MongoDB instance that your client is connected to is mongos, use the isMaster command. When a client connects to a mongos, isMaster returns a document with a `msg` field that holds the string `isdbgrid`. 
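+
+The same check can also be scripted. Before walking through the interactive session below, here is a one-shot sketch (assuming the `root` credentials retrieved above):
+
+```bash
+# A mongos answers isMaster with msg: "isdbgrid"
+$ kubectl exec -n demo mongo-sh-hid-mongos-0 -- \
+    mongo admin -u root -p '6&UiN5;qq)Tnai=7' --quiet --eval 'db.isMaster().msg'
+isdbgrid
+```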
+
+```bash
+mongos> rs.isMaster()
+{
+    "ismaster" : true,
+    "msg" : "isdbgrid",
+    "maxBsonObjectSize" : 16777216,
+    "maxMessageSizeBytes" : 48000000,
+    "maxWriteBatchSize" : 100000,
+    "localTime" : ISODate("2022-10-31T06:11:39.882Z"),
+    "logicalSessionTimeoutMinutes" : 30,
+    "connectionId" : 310,
+    "maxWireVersion" : 9,
+    "minWireVersion" : 0,
+    "topologyVersion" : {
+        "processId" : ObjectId("635f64d1716935915500369b"),
+        "counter" : NumberLong(0)
+    },
+    "ok" : 1,
+    "operationTime" : Timestamp(1667196696, 31),
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1667196696, 31),
+        "signature" : {
+            "hash" : BinData(0,"q30+hpYp5vn4t5HCvUiw1LDfbTg="),
+            "keyId" : NumberLong("7160552274547179543")
+        }
+    }
+}
+```
+
+Check the shard status of `mongo-sh-hid`:
+
+```bash
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("635f645bf391eaa4fdef2fba")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-hid-shard0-0.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-1.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-2.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard0" ] }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-hid-shard1-0.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-1.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-2.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard1" ] }
+  active mongoses:
+        "4.4.10-11" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: no
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                407 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  618
+                                shard1  406
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true, "version" : { "uuid" : UUID("987e328c-5675-49d7-81a1-25d99142cad1"), "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  3
+                                shard1  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : 0 } on : shard0 Timestamp(2, 1)
+                        { "id" : 0 } -->> { "id" : 1 } on : shard0 Timestamp(1, 2)
+                        { "id" : 1 } -->> { "id" : 2 } on : shard1 Timestamp(2, 0)
+                        { "id" : 2 } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 4)
+                         tag: shard0  { "id" : 0 } -->> { "id" : 1 }
+                         tag: shard1  { "id" : 1 } -->> { "id" : 2 }
+```
+
+As the `sh.status()` command only shows the general members, if we want to make sure that the hidden nodes have been added correctly, we need to exec into any shard pod and run the `rs.conf()` command against the admin database. Open another terminal:
+
+```bash
+kubectl exec -it -n demo pod/mongo-sh-hid-shard1-0 -- bash
+
+root@mongo-sh-hid-shard1-0:/ mongo admin -u root -p '6&UiN5;qq)Tnai=7'
+Defaulted container "mongodb" out of: mongodb, copy-config (init)
+Percona Server for MongoDB shell version v4.4.10-11
+connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
+Implicit session: session { "id" : UUID("86dadf16-fff2-4483-b3ee-1ca7fc94229f") }
+Percona Server for MongoDB server version: v4.4.10-11
+Welcome to the Percona Server for MongoDB shell.
+For interactive help, type "help". +For more comprehensive documentation, see + https://www.percona.com/doc/percona-server-for-mongodb +Questions? Try the support group + https://www.percona.com/forums/questions-discussions/percona-server-for-mongodb + +shard1:PRIMARY> rs.conf() +{ + "_id" : "shard1", + "version" : 6, + "term" : 1, + "protocolVersion" : NumberLong(1), + "writeConcernMajorityJournalDefault" : false, + "members" : [ + { + "_id" : 0, + "host" : "mongo-sh-hid-shard1-0.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017", + "arbiterOnly" : false, + "buildIndexes" : true, + "hidden" : false, + "priority" : 1, + "tags" : { + + }, + "slaveDelay" : NumberLong(0), + "votes" : 1 + }, + { + "_id" : 1, + "host" : "mongo-sh-hid-shard1-1.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017", + "arbiterOnly" : false, + "buildIndexes" : true, + "hidden" : false, + "priority" : 1, + "tags" : { + + }, + "slaveDelay" : NumberLong(0), + "votes" : 1 + }, + { + "_id" : 2, + "host" : "mongo-sh-hid-shard1-2.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017", + "arbiterOnly" : false, + "buildIndexes" : true, + "hidden" : false, + "priority" : 1, + "tags" : { + + }, + "slaveDelay" : NumberLong(0), + "votes" : 1 + }, + { + "_id" : 3, + "host" : "mongo-sh-hid-shard1-hidden-0.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017", + "arbiterOnly" : false, + "buildIndexes" : true, + "hidden" : true, + "priority" : 0, + "tags" : { + + }, + "slaveDelay" : NumberLong(0), + "votes" : 1 + }, + { + "_id" : 4, + "host" : "mongo-sh-hid-shard1-hidden-1.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017", + "arbiterOnly" : false, + "buildIndexes" : true, + "hidden" : true, + "priority" : 0, + "tags" : { + + }, + "slaveDelay" : NumberLong(0), + "votes" : 1 + } + ], + "settings" : { + "chainingAllowed" : true, + "heartbeatIntervalMillis" : 2000, + "heartbeatTimeoutSecs" : 10, + "electionTimeoutMillis" : 10000, + "catchUpTimeoutMillis" : -1, + "catchUpTakeoverDelayMillis" : 30000, + "getLastErrorModes" : { + + }, + "getLastErrorDefaults" : { + "w" : 1, + "wtimeout" : 0 + }, + "replicaSetId" : ObjectId("635f645c4883e315f55b07b4") + } +} +``` + +Enable sharding to collection `songs.list` and insert document. See [`sh.shardCollection(namespace, key, unique, options)`](https://docs.mongodb.com/manual/reference/method/sh.shardCollection/#sh.shardCollection) for details about `shardCollection` command. 
+
+```bash
+mongos> sh.enableSharding("songs");
+{
+    "ok" : 1,
+    "operationTime" : Timestamp(1667197117, 5),
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1667197117, 5),
+        "signature" : {
+            "hash" : BinData(0,"PqbGBYWJBwAexJoFMwUEQ1Z+ezc="),
+            "keyId" : NumberLong("7160552274547179543")
+        }
+    }
+}
+
+mongos> sh.shardCollection("songs.list", {"myfield": 1});
+{
+    "collectionsharded" : "songs.list",
+    "collectionUUID" : UUID("ed9c0fec-d488-4a2f-b5ce-8b244676a5b4"),
+    "ok" : 1,
+    "operationTime" : Timestamp(1667197139, 14),
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1667197139, 14),
+        "signature" : {
+            "hash" : BinData(0,"Cons7FRJzPPeysmanMLyNgJlwNk="),
+            "keyId" : NumberLong("7160552274547179543")
+        }
+    }
+}
+
+mongos> use songs
+switched to db songs
+
+mongos> db.list.insert({"led zeppelin": "stairway to heaven", "slipknot": "psychosocial"});
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.list.insert({"pink floyd": "us and them", "nirvana": "smells like teen spirit", "john lennon" : "imagine" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.list.find()
+{ "_id" : ObjectId("635f68e774b0bd92060ebeb6"), "led zeppelin" : "stairway to heaven", "slipknot" : "psychosocial" }
+{ "_id" : ObjectId("635f692074b0bd92060ebeb7"), "pink floyd" : "us and them", "nirvana" : "smells like teen spirit", "john lennon" : "imagine" }
+```
+
+Run [`sh.status()`](https://docs.mongodb.com/manual/reference/method/sh.status/) to see whether the `songs` database has sharding enabled, and to find the primary shard for the `songs` database.
+
+The Sharded Collection section `sh.status.databases.<collection>` provides information on the sharding details for sharded collection(s) (e.g. `songs.list`). For each sharded collection, the section displays the shard key, the number of chunks per shard(s), the distribution of documents across chunks, and the tag information, if any, for shard key range(s).
+
+```bash
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("635f645bf391eaa4fdef2fba")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-hid-shard0-0.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-1.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-2.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard0" ] }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-hid-shard1-0.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-1.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-2.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard1" ] }
+  active mongoses:
+        "4.4.10-11" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: no
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                513 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  512
+                                shard1  512
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true, "version" : { "uuid" : UUID("987e328c-5675-49d7-81a1-25d99142cad1"), "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  3
+                                shard1  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : 0 } on : shard0 Timestamp(2, 1)
+                        { "id" : 0 } -->> { "id" : 1 } on : shard0 Timestamp(1, 2)
+                        { "id" : 1 } -->> { "id" : 2 } on : shard1 Timestamp(2, 0)
+                        { "id" : 2 } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 4)
+                         tag: shard0  { "id" : 0 } -->> { "id" : 1 }
+                         tag: shard1  { "id" : 1 } -->> { "id" : 2 }
+        {  "_id" : "songs",  "primary" : "shard1",  "partitioned" : true, "version" : { "uuid" : UUID("03c7f9c8-f30f-42a4-8505-7f58fb95d3f3"), "lastMod" : 1 } }
+                songs.list
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Now, create another database where partitioning is not applied and see how the data is stored.
+
+```bash
+mongos> use demo
+switched to db demo
+
+mongos> db.anothercollection.insert({"myfield": "ccc", "otherfield": "this is non sharded", "kube" : "db" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.anothercollection.insert({"myfield": "aaa", "more": "field" });
+WriteResult({ "nInserted" : 1 })
+
+mongos> db.anothercollection.find()
+{ "_id" : ObjectId("635f69c674b0bd92060ebeb8"), "myfield" : "ccc", "otherfield" : "this is non sharded", "kube" : "db" }
+{ "_id" : ObjectId("635f69d574b0bd92060ebeb9"), "myfield" : "aaa", "more" : "field" }
+```
+
+Now, run `sh.status()` again:
+
+```bash
+mongos> sh.status()
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("635f645bf391eaa4fdef2fba")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mongo-sh-hid-shard0-0.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-1.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard0-2.mongo-sh-hid-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard0" ] }
+        {  "_id" : "shard1",  "host" : "shard1/mongo-sh-hid-shard1-0.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-1.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017,mongo-sh-hid-shard1-2.mongo-sh-hid-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1,  "tags" : [ "shard1" ] }
+  active mongoses:
+        "4.4.10-11" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled: yes
+        Currently running: no
+        Failed balancer rounds in last 5 attempts: 0
+        Migration Results for the last 24 hours:
+                513 : Success
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  512
+                                shard1  512
+                        too many chunks to print, use verbose if you want to force print
+        {  "_id" : "demo",  "primary" : "shard1",  "partitioned" : false, "version" : { "uuid" : UUID("040d4dc2-232f-4cc8-bae0-11c79244a9a7"), "lastMod" : 1 } }
+        {  "_id" : "kubedb-system",  "primary" : "shard0",  "partitioned" : true, "version" : { "uuid" : UUID("987e328c-5675-49d7-81a1-25d99142cad1"), "lastMod" : 1 } }
+                kubedb-system.health-check
+                        shard key: { "id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  3
+                                shard1  1
+                        { "id" : { "$minKey" : 1 } } -->> { "id" : 0 } on : shard0 Timestamp(2, 1)
+                        { "id" : 0 } -->> { "id" : 1 } on : shard0 Timestamp(1, 2)
+                        { "id" : 1 } -->> { "id" : 2 } on : shard1 Timestamp(2, 0)
+                        { "id" : 2 } -->> { "id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 4)
+                         tag: shard0  { "id" : 0 } -->> { "id" : 1 }
+                         tag: shard1  { "id" : 1 } -->> { "id" : 2 }
+        {  "_id" : "songs",  "primary" : "shard1",  "partitioned" : true, "version" : { "uuid" : UUID("03c7f9c8-f30f-42a4-8505-7f58fb95d3f3"), "lastMod" : 1 } }
+                songs.list
+                        shard key: { "myfield" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard1  1
+                        { "myfield" : { "$minKey" : 1 } } -->> { "myfield" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
+```
+
+Here, the `demo` database is not partitioned and all collections under the `demo` database are stored in its primary shard, which is `shard1`.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo mg/mongo-sh-hid
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/initialization/_index.md b/content/docs/v2024.1.31/guides/mongodb/initialization/_index.md new file mode 100755 index 0000000000..8729839a51 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/initialization/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Initialization +menu: + docs_v2024.1.31: + identifier: mg-initialization-mongodb + name: Initialization + parent: mg-mongodb-guides + weight: 41 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/initialization/using-script.md b/content/docs/v2024.1.31/guides/mongodb/initialization/using-script.md new file mode 100644 index 0000000000..543795da57 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/initialization/using-script.md @@ -0,0 +1,495 @@ +--- +title: Initialize MongoDB using Script +menu: + docs_v2024.1.31: + identifier: mg-using-script-initialization + name: Using Script + parent: mg-initialization-mongodb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Initialize MongoDB using Script + +This tutorial will show you how to use KubeDB to initialize a MongoDB database with .js and/or .sh script. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + + In this tutorial we will use .js script stored in GitHub repository [kubedb/mongodb-init-scripts](https://github.com/kubedb/mongodb-init-scripts). 
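+
+To give a feel for what such a script contains: judging from the verification output at the end of this tutorial, the `init.js` used here is roughly equivalent to the sketch below (a hypothetical reconstruction; the authoritative version lives in the linked repository):
+
+```bash
+# Recreate an init.js that inserts one document into the "people"
+# collection of a "kubedb" database.
+$ cat <<'EOF' > init.js
+db = db.getSiblingDB('kubedb');
+db.people.insert({"firstname": "kubernetes", "lastname": "database"});
+EOF
+```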
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Prepare Initialization Scripts
+
+MongoDB supports initialization with `.sh` and `.js` files. In this tutorial, we will use the `init.js` script from the [mongodb-init-scripts](https://github.com/kubedb/mongodb-init-scripts) git repository to insert data inside the `kubedb` DB.
+
+We will use a ConfigMap as the script source. You can use any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes) as the script source.
+
+At first, we will create a ConfigMap from the `init.js` file. Then, we will provide this ConfigMap as the script source in `spec.init.script` of the MongoDB CRD spec.
+
+Let's create a ConfigMap with the initialization script,
+
+```bash
+$ kubectl create configmap -n demo mg-init-script \
+--from-literal=init.js="$(curl -fsSL https://github.com/kubedb/mongodb-init-scripts/raw/master/init.js)"
+configmap/mg-init-script created
+```
+
+## Create a MongoDB database with Init-Script
+
+Below is the `MongoDB` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-init-script
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: mg-init-script
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/Initialization/replicaset.yaml
+mongodb.kubedb.com/mgo-init-script created
+```
+
+Here,
+
+- `spec.init.script` specifies a script source used to initialize the database before the database server starts. In this tutorial, a sample .js script from the git repository `https://github.com/kubedb/mongodb-init-scripts.git` is used to create a test database. You can use other [volume sources](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes). The \*.js and/or \*.sh scripts that are stored inside the root folder will be executed alphabetically. The scripts inside child folders will be skipped.
+
+KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MongoDB object name. The KubeDB operator will also create a governing service for the StatefulSet with the name `{mongodb-name}-pods` (here, `mgo-init-script-pods`), if one is not already present. No MongoDB specific RBAC roles are required for [RBAC enabled clusters](/docs/v2024.1.31/setup/README#using-yaml).
+
+```bash
+$ kubectl dba describe mg -n demo mgo-init-script
+Name:               mgo-init-script
+Namespace:          demo
+CreationTimestamp:  Thu, 11 Feb 2021 10:58:22 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-init-script","namespace":"demo"},"spec":{"init":{"script"...
+Replicas: 1 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: Delete + +StatefulSet: + Name: mgo-init-script + CreationTimestamp: Thu, 11 Feb 2021 10:58:23 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-init-script + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Replicas: 824638316568 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mgo-init-script + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-init-script + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.107.34.91 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: [10.107.34.91]:27017 + +Service: + Name: mgo-init-script-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-init-script + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: [10.107.34.91]:27017 + +Auth Secret: + Name: mgo-init-script-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-init-script + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +Init: + Script Source: + Volume: + Type: ConfigMap (a volume populated by a ConfigMap) + Name: mg-init-script + Optional: false + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-init-script","namespace":"demo"},"spec":{"init":{"script":{"configMap":{"name":"mg-init-script"}}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + + Creation Timestamp: 2021-02-11T04:58:42Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mgo-init-script + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mgo-init-script + Namespace: demo + Spec: + Client Config: + Service: + Name: mgo-init-script + Port: 27017 + Scheme: mongodb + Secret: + Name: mgo-init-script-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 47s MongoDB operator Successfully created stats service + Normal Successful 47s MongoDB operator Successfully created Service + Normal Successful 46s MongoDB operator Successfully stats service + Normal Successful 46s MongoDB operator Successfully stats service + Normal Successful 27s MongoDB operator Successfully created appbinding + Normal Successful 27s MongoDB operator Successfully patched StatefulSet demo/mgo-init-script + Normal Successful 27s MongoDB operator Successfully patched MongoDB + +$ kubectl get statefulset -n demo +NAME READY AGE +mgo-init-script 1/1 30s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +datadir-mgo-init-script-0 Bound pvc-a10d636b-c08c-11e8-b4a9-0800272618ed 1Gi RWO standard 11m + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 
+pvc-a10d636b-c08c-11e8-b4a9-0800272618ed 1Gi RWO Delete Bound demo/datadir-mgo-init-script-0 standard 12m + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +mgo-init-script ClusterIP 10.107.34.91 27017/TCP 52s +mgo-init-script-pods ClusterIP None 27017/TCP 52s +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mgo-init-script -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-init-script","namespace":"demo"},"spec":{"init":{"script":{"configMap":{"name":"mg-init-script"}}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + creationTimestamp: "2021-02-10T04:38:52Z" + finalizers: + - kubedb.com + generation: 3 + managedFields: + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: {} + f:kubectl.kubernetes.io/last-applied-configuration: {} + f:spec: + .: {} + f:init: + .: {} + f:script: + .: {} + f:configMap: + .: {} + f:name: {} + f:storage: + .: {} + f:accessModes: {} + f:resources: + .: {} + f:requests: + .: {} + f:storage: {} + f:storageClassName: {} + f:version: {} + manager: kubectl-client-side-apply + operation: Update + time: "2021-02-10T04:38:52Z" + - apiVersion: kubedb.com/v1alpha2 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: {} + f:spec: + f:authSecret: + .: {} + f:name: {} + f:init: + f:initialized: {} + f:status: + .: {} + f:conditions: {} + f:observedGeneration: {} + f:phase: {} + manager: mg-operator + operation: Update + time: "2021-02-10T04:39:16Z" + name: mgo-init-script + namespace: demo + resourceVersion: "98944" + uid: 5f13be2a-9a47-4b7e-9b83-a00b9bc89438 +spec: + authSecret: + name: mgo-init-script-auth + init: + initialized: true + script: + configMap: + name: mg-init-script + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-init-script + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-init-script + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + 
periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mgo-init-script + replicas: 1 + sslMode: disabled + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageEngine: wiredTiger + storageType: Durable + terminationPolicy: Delete + version: 4.4.26 +status: + conditions: + - lastTransitionTime: "2021-02-10T04:38:53Z" + message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mgo-init-script' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2021-02-10T04:39:16Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2021-02-10T04:39:33Z" + message: 'The MongoDB: demo/mgo-init-script is accepting client requests.' + observedGeneration: 3 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2021-02-10T04:39:33Z" + message: 'The MongoDB: demo/mgo-init-script is ready.' + observedGeneration: 3 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2021-02-10T04:39:16Z" + message: 'The MongoDB: demo/mgo-init-script is successfully provisioned.' + observedGeneration: 2 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 3 + phase: Ready +``` + +Please note that KubeDB operator has created a new Secret called `mgo-init-script-auth` *(format: {mongodb-object-name}-auth)* for storing the password for MongoDB superuser. This secret contains a `username` key which contains the *username* for MongoDB superuser and a `password` key which contains the *password* for MongoDB superuser. +If you want to use an existing secret please specify that when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains these two keys containing data `username` and `password`. + +```bash +$ kubectl get secrets -n demo mgo-init-script-auth -o yaml +apiVersion: v1 +data: + password: eGtBaTRmRVpmSVFrNmczVw== + user: cm9vdA== +kind: Secret +metadata: + creationTimestamp: "2019-02-06T09:43:54Z" + labels: + app.kubernetes.io/name: mongodbs.kubedb.com + app.kubernetes.io/instance: mgo-init-script + name: mgo-init-script-auth + namespace: demo + resourceVersion: "89594" + selfLink: /api/v1/namespaces/demo/secrets/mgo-init-script-auth + uid: b7cf2369-29f3-11e9-aebf-080027875192 +type: Opaque +``` + +Now, you can connect to this database through [mongo-shell](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we are connecting to the MongoDB server from inside the pod. + +```bash +$ kubectl get secrets -n demo mgo-init-script-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mgo-init-script-auth -o jsonpath='{.data.\password}' | base64 -d +oEwk7IGxCPM5OWo5 + +$ kubectl exec -it mgo-init-script-0 -n demo sh + +> mongo admin +MongoDB shell version v3.4.10 +connecting to: mongodb://127.0.0.1:27017/admin +MongoDB server version: 3.4.10 +Welcome to the MongoDB shell. +For interactive help, type "help". +For more comprehensive documentation, see + http://docs.mongodb.org/ +Questions? 
Try the support group + http://groups.google.com/group/mongodb-user + +> db.auth("root","oEwk7IGxCPM5OWo5") +1 + +> show dbs +admin 0.000GB +config 0.000GB +kubedb 0.000GB +local 0.000GB + +> use kubedb +switched to db kubedb + +> db.people.find() +{ "_id" : ObjectId("5ba9d667981f02e927b6788e"), "firstname" : "kubernetes", "lastname" : "database" } + +> exit +bye +``` + +As you can see here, the initial script has successfully created a database named `kubedb` and inserted data into that database successfully. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo mg/mgo-init-script -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo mg/mgo-init-script + +kubectl delete ns demo +``` + +## Next Steps + +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/monitoring/_index.md b/content/docs/v2024.1.31/guides/mongodb/monitoring/_index.md new file mode 100755 index 0000000000..0669dd78d6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Monitoring MongoDB +menu: + docs_v2024.1.31: + identifier: mg-monitoring-mongodb + name: Monitoring + parent: mg-mongodb-guides + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/monitoring/overview.md b/content/docs/v2024.1.31/guides/mongodb/monitoring/overview.md new file mode 100644 index 0000000000..e557a9927f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/monitoring/overview.md @@ -0,0 +1,116 @@ +--- +title: MongoDB Monitoring Overview +description: MongoDB Monitoring Overview +menu: + docs_v2024.1.31: + identifier: mg-monitoring-overview + name: Overview + parent: mg-monitoring-mongodb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MongoDB with KubeDB + +KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). 
You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+*Fig: Database Monitoring Flow*
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                               | Type       | Uses                                                                                                                                     |
+| --------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                                | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`             | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`             | `Optional` | Arguments to pass to the exporter sidecar.                                                                                              |
+| `spec.monitor.prometheus.exporter.env`              | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
+| `spec.monitor.prometheus.exporter.resources`        | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
+| `spec.monitor.prometheus.exporter.securityContext`  | `Optional` | Security options the exporter should run with.                                                                                          |
+| `spec.monitor.prometheus.serviceMonitor.labels`     | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
+| `spec.monitor.prometheus.serviceMonitor.interval`   | `Optional` | Interval at which metrics should be scraped.                                                                                            |
+
+## Sample Configuration
+
+A sample YAML for a MongoDB crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: sample-mongo
+  namespace: databases
+spec:
+  version: "4.4.26"
+  terminationPolicy: WipeOut
+  configSecret:
+    name: config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+        - --collect.database
+        env:
+        - name: ENV_VARIABLE
+          valueFrom:
+            secretKeyRef:
+              name: env_name
+              key: env_value
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `databases` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
+
+One thing to note: if the MongoDB exporter version is >= v0.31.0, KubeDB internally uses the `--collect-all` argument. You can check the exporter version by getting the mgversion object, like this:
+`kubectl get mgversion -o=jsonpath='{.spec.exporter.image}' 4.4.26`
+In that case, specifying arguments to collect something specific (as we used `--collect.database` above) will not have any effect.
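+
+If you want to verify what KubeDB created for the sample above, a minimal sketch (assuming the object was applied as-is and the operator has reconciled it; the `sample-mongo-stats` name simply follows the `{database-crd-name}-stats` convention described earlier):
+
+```bash
+# The dedicated stats service created by the operator
+kubectl get svc -n databases sample-mongo-stats
+
+# The ServiceMonitor created for the prometheus.io/operator agent
+kubectl get servicemonitor -n databases sample-mongo-stats
+```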
+ +## Next Steps + +- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) +- Learn how to monitor MongoDB database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). + diff --git a/content/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..7a3da036ae --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus.md @@ -0,0 +1,370 @@ +--- +title: Monitor MongoDB using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: mg-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: mg-monitoring-mongodb + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MongoDB with builtin Prometheus + +This tutorial will show you how to monitor MongoDB database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mongodb/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy MongoDB with Monitoring Enabled + +At first, let's deploy an MongoDB database with monitoring enabled. Below is the MongoDB object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: builtin-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper. + +Let's create the MongoDB crd we have shown above. 
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/monitoring/builtin-prom-mgo.yaml
+mongodb.kubedb.com/builtin-prom-mgo created
+```
+
+Now, wait for the database to go into the `Ready` state.
+
+```bash
+$ kubectl get mg -n demo builtin-prom-mgo
+NAME               VERSION   STATUS   AGE
+builtin-prom-mgo   4.4.26    Ready    2m34s
+```
+
+KubeDB will create a separate stats service with the name `{MongoDB crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-mgo"
+NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-mgo         ClusterIP   10.99.28.40    <none>        27017/TCP   55s
+builtin-prom-mgo-pods    ClusterIP   None           <none>        27017/TCP   55s
+builtin-prom-mgo-stats   ClusterIP   10.98.202.26   <none>        56790/TCP   36s
+```
+
+Here, the `builtin-prom-mgo-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-mgo-stats
+Name:              builtin-prom-mgo-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=mongodbs.kubedb.com
+                   app.kubernetes.io/instance=builtin-prom-mgo
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/name=mongodbs.kubedb.com,app.kubernetes.io/instance=builtin-prom-mgo
+Type:              ClusterIP
+IP:                10.98.202.26
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.7:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
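+
+You can also pull these annotations directly with `kubectl`; a minimal sketch using only the objects created above:
+
+```bash
+# Print the scrape annotations that Prometheus service discovery will read
+kubectl get svc -n demo builtin-prom-mgo-stats -o jsonpath='{.metadata.annotations}'
+```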
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_config is used to filter only the desired endpoints.
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics, so drop any service that uses the "https" scheme
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path: <path>" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add the service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add the service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add the stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only the desired endpoints.
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics, so drop any service that uses the "https" scheme
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus:
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in the `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-7bd56c6865-8dlpv   1/1     Running   0          28s
+```
+
+Now, run the following command on a separate terminal to forward port 9090 of the `prometheus-7bd56c6865-8dlpv` pod:
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-7bd56c6865-8dlpv 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `builtin-prom-mgo-stats` service as one of the targets.
+
+*Fig: Prometheus Target*
+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `MongoDB` database `builtin-prom-mgo` through stats service `builtin-prom-mgo-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo mg/builtin-prom-mgo + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` + +## Next Steps + +- Learn about [backup and restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB database using Stash. +- Learn how to configure [MongoDB Topology](/docs/v2024.1.31/guides/mongodb/clustering/sharding). +- Monitor your MongoDB database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator.md new file mode 100644 index 0000000000..46871dcff5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator.md @@ -0,0 +1,334 @@ +--- +title: Monitor MongoDB using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: mg-using-prometheus-operator-monitoring + name: Prometheus Operator + parent: mg-monitoring-mongodb + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MongoDB Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor MongoDB database deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mongodb/monitoring/overview). + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, you can deploy one using this helm chart [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack). 
+ +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the prometheus operator helm chart. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + + + +> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find out required labels for ServiceMonitor + +We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in `spec.monitor.prometheus.serviceMonitor.labels` field of MongoDB crd so that KubeDB creates `ServiceMonitor` object accordingly. + +At first, let's find out the available Prometheus server in our cluster. + +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME VERSION REPLICAS AGE +monitoring prometheus-kube-prometheus-prometheus v2.39.0 1 13d +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `monitoring` namespace. + +```yaml +$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + meta.helm.sh/release-name: prometheus + meta.helm.sh/release-namespace: monitoring + creationTimestamp: "2022-10-11T07:12:20Z" + generation: 1 + labels: + app: kube-prometheus-stack-prometheus + app.kubernetes.io/instance: prometheus + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/part-of: kube-prometheus-stack + app.kubernetes.io/version: 40.5.0 + chart: kube-prometheus-stack-40.5.0 + heritage: Helm + release: prometheus + name: prometheus-kube-prometheus-prometheus + namespace: monitoring + resourceVersion: "490475" + uid: 7e36caf3-228a-40f3-bff9-a1c0c78dedb0 +spec: + alerting: + alertmanagers: + - apiVersion: v2 + name: prometheus-kube-prometheus-alertmanager + namespace: monitoring + pathPrefix: / + port: http-web + enableAdminAPI: false + evaluationInterval: 30s + externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090 + image: quay.io/prometheus/prometheus:v2.39.0 + listenLocal: false + logFormat: logfmt + logLevel: info + paused: false + podMonitorNamespaceSelector: {} + podMonitorSelector: + matchLabels: + release: prometheus + portName: http-web + probeNamespaceSelector: {} + probeSelector: + matchLabels: + release: prometheus + replicas: 1 + retention: 10d + routePrefix: / + ruleNamespaceSelector: {} + ruleSelector: + matchLabels: + release: prometheus + scrapeInterval: 30s + securityContext: + fsGroup: 2000 + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 1000 + serviceAccountName: prometheus-kube-prometheus-prometheus + serviceMonitorNamespaceSelector: {} + serviceMonitorSelector: + matchLabels: + release: prometheus + shards: 1 + version: v2.39.0 + walCompression: true +``` + +Notice the `spec.serviceMonitorSelector` section. Here, `release: prometheus` label is used to select `ServiceMonitor` crd. So, we are going to use this label in `spec.monitor.prometheus.serviceMonitor.labels` field of MongoDB crd. + +## Deploy MongoDB with Monitoring Enabled + +At first, let's deploy an MongoDB database with monitoring enabled. 
Below is the MongoDB object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: coreos-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s +``` + +Here, + +- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator. +- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create `ServiceMonitor` with these labels. +- `monitor.prometheus.interval` indicates that the Prometheus server should scrape metrics from this database with 10 seconds interval. + +Let's create the MongoDB object that we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/monitoring/coreos-prom-mgo.yaml +mongodb.kubedb.com/coreos-prom-mgo created +``` + +Now, wait for the database to go into `Running` state. + +```bash +$ kubectl get mg -n demo coreos-prom-mgo +NAME VERSION STATUS AGE +coreos-prom-mgo 4.4.26 Ready 34s +``` + +KubeDB will create a separate stats service with name `{MongoDB crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-mgo" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +coreos-prom-mgo ClusterIP 10.96.150.171 27017/TCP 84s +coreos-prom-mgo-pods ClusterIP None 27017/TCP 84s +coreos-prom-mgo-stats ClusterIP 10.96.218.41 56790/TCP 64s +``` + +Here, `coreos-prom-mgo-stats` service has been created for monitoring purpose. + +Let's describe this stats service. + +```yaml +$ kubectl describe svc -n demo coreos-prom-mgo-stats +Name: coreos-prom-mgo-stats +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=coreos-prom-mgo + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/operator +Selector: app.kubernetes.io/instance=coreos-prom-mgo,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mongodbs.kubedb.com +Type: ClusterIP +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.96.240.52 +IPs: 10.96.240.52 +Port: metrics 56790/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.149:56790 +Session Affinity: None +Events: + +``` + +Notice the `Labels` and `Port` fields. `ServiceMonitor` will use this information to target its endpoints. + +KubeDB will also create a `ServiceMonitor` crd in `demo` namespace that select the endpoints of `coreos-prom-mgo-stats` service. Verify that the `ServiceMonitor` crd has been created. + +```bash +$ kubectl get servicemonitor -n demo +NAME AGE +coreos-prom-mgo-stats 2m40s +``` + +Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of MongoDB crd. 
+
+```yaml
+$ kubectl get servicemonitor -n demo coreos-prom-mgo-stats -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2022-10-24T11:51:08Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: coreos-prom-mgo
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mongodbs.kubedb.com
+    release: prometheus
+  name: coreos-prom-mgo-stats
+  namespace: demo
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: Service
+    name: coreos-prom-mgo-stats
+    uid: 68b0e8c4-cba4-4dcb-9016-4e1901ca1fd0
+  resourceVersion: "528373"
+  uid: 56eb596b-d2cf-4d2c-a204-c43dbe8fe896
+spec:
+  endpoints:
+  - bearerTokenSecret:
+      key: ""
+    honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: metrics
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/component: database
+      app.kubernetes.io/instance: coreos-prom-mgo
+      app.kubernetes.io/managed-by: kubedb.com
+      app.kubernetes.io/name: mongodbs.kubedb.com
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the MongoDB crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-mgo-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app.kubernetes.io/name=prometheus
+NAME                                                  READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-kube-prometheus-prometheus-0    2/2     Running   1          13d
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command on a separate terminal to forward port 9090 of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod:
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
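+
+Before opening the dashboard, you can optionally confirm the target through Prometheus's HTTP API. A minimal sketch, assuming the port-forward above is running and `jq` is installed:
+
+```bash
+# List the "service" label of every active target; the output should include coreos-prom-mgo-stats
+curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[].labels.service'
+```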
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `coreos-prom-mgo-stats` service as one of the targets.
+
+*Fig: Prometheus Target*
+ +Check the `endpoint` and `service` labels marked by the red rectangles. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo mg/coreos-prom-mgo +kubectl delete ns demo +``` + +## Next Steps + +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) process of MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/pitr/_index.md b/content/docs/v2024.1.31/guides/mongodb/pitr/_index.md new file mode 100644 index 0000000000..2fd5668cd1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/pitr/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Archiver & PITR +menu: + docs_v2024.1.31: + identifier: mg-archiver-pitr + name: Point-in-time Recovery + parent: mg-mongodb-guides + weight: 41 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/pitr/pitr.md b/content/docs/v2024.1.31/guides/mongodb/pitr/pitr.md new file mode 100644 index 0000000000..4f3e042425 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/pitr/pitr.md @@ -0,0 +1,492 @@ +--- +title: Continuous Archiving and Point-in-time Recovery +menu: + docs_v2024.1.31: + identifier: pitr-mongo + name: Overview + parent: mg-archiver-pitr + weight: 42 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB MongoDB - Continuous Archiving and Point-in-time Recovery + +Here, this doc will show you how to use KubeDB to provision a MongoDB to Archive continuously and Restore point-in-time. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, +- Install `KubeDB` operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+- Install `KubeStash` operator in your cluster following the steps [here](https://github.com/kubestash/installer/tree/master/charts/kubestash).
+- Install `SideKick` in your cluster following the steps [here](https://github.com/kubeops/installer/tree/master/charts/sidekick).
+- Install `External-snapshotter` in your cluster following the steps [here](https://github.com/kubernetes-csi/external-snapshotter/tree/release-5.0), if you don't already have a CSI driver available in the cluster.
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+> Note: The yaml files used in this tutorial are stored in [mg-archiver-demo](https://github.com/kubedb/mg-archiver-demo).
+
+## Continuous archiving
+Continuous archiving involves making regular copies (or "archives") of the MongoDB transaction log files. To ensure continuous archiving to a remote location, we need to prepare a `BackupStorage`, a `RetentionPolicy`, and a `MongoDBArchiver` for the KubeDB managed MongoDB databases.
+
+### BackupStorage
+BackupStorage is a CR provided by KubeStash that can manage storage from various providers like GCS, S3, and more.
+
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: BackupStorage
+metadata:
+  name: gcs-storage
+  namespace: demo
+spec:
+  storage:
+    provider: gcs
+    gcs:
+      bucket: kubestash-qa
+      prefix: mg
+      secret: gcs-secret
+  usagePolicy:
+    allowedNamespaces:
+      from: All
+  deletionPolicy: WipeOut # One of: WipeOut, Delete
+```
+
+For S3 compatible buckets, the `.spec.storage` section will look like this:
+```yaml
+provider: s3
+s3:
+  endpoint: us-east-1.linodeobjects.com
+  bucket: arnob
+  region: us-east-1
+  prefix: ya
+  secret: linode-secret
+```
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/gke/backupstorage.yaml
+backupstorage.storage.kubestash.com/gcs-storage created
+```
+
+### Secret for BackupStorage
+
+You need to create a Secret that holds the credentials for the cloud bucket. Here are examples.
+
+For GCS:
+```bash
+kubectl create secret generic -n demo gcs-secret \
+    --from-literal=GOOGLE_PROJECT_ID=<your-project-id> \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+```
+
+For S3:
+```bash
+kubectl create secret generic -n demo s3-secret \
+    --from-file=./AWS_ACCESS_KEY_ID \
+    --from-file=./AWS_SECRET_ACCESS_KEY
+```
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/gke/storage-secret.yaml
+secret/gcs-secret created
+```
+
+### Retention policy
+RetentionPolicy is a CR provided by KubeStash that allows you to set how long you'd like to retain the backup data.
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: RetentionPolicy
+metadata:
+  name: mongodb-retention-policy
+  namespace: demo
+spec:
+  maxRetentionPeriod: "30d"
+  successfulSnapshots:
+    last: 5
+  failedSnapshots:
+    last: 2
+```
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/common/retention-policy.yaml
+retentionpolicy.storage.kubestash.com/mongodb-retention-policy created
+```
+
+## Ensure volumeSnapshotClass
+
+```bash
+kubectl get volumesnapshotclasses
+NAME                    DRIVER               DELETIONPOLICY   AGE
+longhorn-snapshot-vsc   driver.longhorn.io   Delete           7d22h
+```
+
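+One thing worth checking: the `driver` of the VolumeSnapshotClass must match the CSI provisioner of the StorageClass that backs the database PVCs. A quick way to list the provisioners in your cluster:
+
+```bash
+# The VolumeSnapshotClass "driver" field must match one of these provisioners
+kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
+```
+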
+If no VolumeSnapshotClass is available, try using `longhorn` or any other [volumeSnapshotClass](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/).
+
+```bash
+$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
+ ...
+ ...
+ kubectl get pod -n longhorn-system
+```
+
+```yaml
+kind: VolumeSnapshotClass
+apiVersion: snapshot.storage.k8s.io/v1
+metadata:
+  name: longhorn-snapshot-vsc
+driver: driver.longhorn.io
+deletionPolicy: Delete
+parameters:
+  type: snap
+```
+
+If you already have a CSI driver installed in your cluster, you need to refer to it in the `.driver` section. Here is an example for GKE:
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: gke-vsc
+driver: pd.csi.storage.gke.io
+deletionPolicy: Delete
+```
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/gke/volume-snapshot-class.yaml
+volumesnapshotclass.snapshot.storage.k8s.io/gke-vsc unchanged
+```
+
+### MongoDBArchiver
+MongoDBArchiver is a CR provided by KubeDB for managing the archiving of MongoDB oplog files and performing volume-level backups.
+
+```yaml
+apiVersion: archiver.kubedb.com/v1alpha1
+kind: MongoDBArchiver
+metadata:
+  name: mongodbarchiver-sample
+  namespace: demo
+spec:
+  pause: false
+  databases:
+    namespaces:
+      from: "Same"
+    selector:
+      matchLabels:
+        archiver: "true"
+  retentionPolicy:
+    name: mongodb-retention-policy
+    namespace: demo
+  encryptionSecret:
+    name: encrypt-secret
+    namespace: demo
+  fullBackup:
+    driver: VolumeSnapshotter
+    task:
+      params:
+        volumeSnapshotClassName: gke-vsc # change it accordingly
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "*/50 * * * *"
+    sessionHistoryLimit: 2
+  manifestBackup:
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "*/5 * * * *"
+    sessionHistoryLimit: 2
+  backupStorage:
+    ref:
+      name: gcs-storage
+      namespace: demo
+```
+
+### EncryptionSecret
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: encrypt-secret
+  namespace: demo
+stringData:
+  RESTIC_PASSWORD: "changeit"
+```
+
+```bash
+$ kubectl create -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/common/encrypt-secret.yaml
+$ kubectl create -f https://raw.githubusercontent.com/kubedb/mg-archiver-demo/master/common/archiver.yaml
+```
+
+## Deploy MongoDB
+So far, everything is ready for continuously archiving MongoDB. Now we deploy a MongoDB database that the archiver will select via its label selector.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-rs
+  namespace: demo
+  labels:
+    archiver: "true"
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "rs"
+  replicas: 3
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "500m"
+          memory: "500Mi"
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+The `archiver: "true"` label is important here, because that's how we specify that continuous archiving will be done for this database.
+
+```bash
+$ kubectl get pod -n demo
+NAME                                                  READY   STATUS      RESTARTS   AGE
+mg-rs-0                                               2/2     Running     0          8m30s
+mg-rs-1                                               2/2     Running     0          7m32s
+mg-rs-2                                               2/2     Running     0          6m34s
+mg-rs-backup-full-backup-1702457252-lvcbn             0/1     Completed   0          65s
+mg-rs-backup-manifest-backup-1702457110-fjpw5         0/1     Completed   0          3m28s
+mg-rs-backup-manifest-backup-1702457253-f4chq         0/1     Completed   0          65s
+mg-rs-sidekick                                        1/1     Running     0          5m29s
+trigger-mg-rs-backup-manifest-backup-28374285-rdcfq   0/1     Completed   0          3m38s
+```
+
+`mg-rs-sidekick` is responsible for uploading the oplog files.
+`mg-rs-full-backup-*****` pods are the volume-level backups for MongoDB.
+`mg-rs-manifest-backup-*****` pods are the backups of the manifests related to the MongoDB object.
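+
+The archiver also records its backups in KubeStash `Repository` objects, which the restore step below refers to as `mg-rs-full` and `mg-rs-manifest`. A quick sanity check, assuming KubeStash registers the Repository CRD under the `storage.kubestash.com` group:
+
+```bash
+# List the repositories holding the full (volume) and manifest backups
+kubectl get repositories.storage.kubestash.com -n demo
+```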
+
+### Validate BackupConfiguration and VolumeSnapshot
+
+```bash
+$ kubectl get backupstorage,backupconfigurations,backupsession,volumesnapshots -A
+
+NAMESPACE   NAME                                              PROVIDER   DEFAULT   DELETION-POLICY   TOTAL-SIZE   PHASE   AGE
+demo        backupstorage.storage.kubestash.com/gcs-storage   gcs                  WipeOut           3.292 KiB    Ready   11m
+
+NAMESPACE   NAME                                                  PHASE   PAUSED   AGE
+demo        backupconfiguration.core.kubestash.com/mg-rs-backup   Ready            6m45s
+
+NAMESPACE   NAME                                                                        INVOKER-TYPE          INVOKER-NAME   PHASE       DURATION   AGE
+demo        backupsession.core.kubestash.com/mg-rs-backup-full-backup-1702457252       BackupConfiguration   mg-rs-backup   Succeeded              2m20s
+demo        backupsession.core.kubestash.com/mg-rs-backup-manifest-backup-1702457110   BackupConfiguration   mg-rs-backup   Succeeded              4m43s
+demo        backupsession.core.kubestash.com/mg-rs-backup-manifest-backup-1702457253   BackupConfiguration   mg-rs-backup   Succeeded              2m20s
+
+NAMESPACE   NAME                                                      READYTOUSE   SOURCEPVC         SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
+demo        volumesnapshot.snapshot.storage.k8s.io/mg-rs-1702457262   true         datadir-mg-rs-1                           1Gi           gke-vsc         snapcontent-87f1013f-cd7e-4153-b245-da9552d2e44f   2m7s           2m11s
+```
+
+## Data insert and oplog switch
+After every oplog switch, the oplog files are uploaded to the backup storage.
+```bash
+$ kubectl exec -it -n demo mg-rs-0 bash
+kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
+Defaulted container "mongodb" out of: mongodb, replication-mode-detector, copy-config (init)
+mongodb@mg-rs-0:/$
+mongodb@mg-rs-0:/$ mongo -u root -p $MONGO_INITDB_ROOT_PASSWORD
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
+Implicit session: session { "id" : UUID("4a51b9fc-a26c-487b-848d-341cf5512c86") }
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+For interactive help, type "help".
+For more comprehensive documentation, see
+	https://docs.mongodb.com/
+Questions? Try the MongoDB Developer Community Forums
+	https://community.mongodb.com
+---
+The server generated these startup warnings when booting:
+        2023-12-13T08:40:40.423+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
+---
+rs:PRIMARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+rs:PRIMARY> use pink_floyd
+switched to db pink_floyd
+rs:PRIMARY> db.songs.insert({"name":"shine on you crazy diamond"})
+WriteResult({ "nInserted" : 1 })
+rs:PRIMARY> show collections
+songs
+rs:PRIMARY> db.songs.find()
+{ "_id" : ObjectId("657970c1f965be0513c7f4d7"), "name" : "shine on you crazy diamond" }
+rs:PRIMARY>
+```
+> At this point, we have a document in our newly created collection `songs` in the database `pink_floyd`.
+## Point-in-time Recovery
+Point-in-time recovery allows you to restore a MongoDB database to a specific point in time using the archived transaction logs. This is particularly useful in scenarios where you need to recover to a state just before a specific error or data corruption occurred.
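+
+The `recoveryTimestamp` used in the restore spec below must be given in RFC3339 form, while the oplog reports Unix cluster timestamps, as in the `dropDatabase` output that follows. A small sketch for converting between the two, assuming GNU `date` is available:
+
+```bash
+# Convert a Unix cluster timestamp from the oplog into RFC3339 (UTC)
+date -u -d @1702457742 +"%Y-%m-%dT%H:%M:%SZ"
+# 2023-12-13T08:55:42Z
+```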
+Let's say our DBA accidentally drops the `pink_floyd` database and we want to restore it.
+```bash
+rs:PRIMARY> use pink_floyd
+switched to db pink_floyd
+
+rs:PRIMARY> db.dropDatabase()
+{
+	"dropped" : "pink_floyd",
+	"ok" : 1,
+	"$clusterTime" : {
+		"clusterTime" : Timestamp(1702457742, 2),
+		"signature" : {
+			"hash" : BinData(0,"QFpwWOtec/NdQ0iKKyFCx9Jz8/A="),
+			"keyId" : NumberLong("7311996497896144901")
+		}
+	},
+	"operationTime" : Timestamp(1702457742, 2)
+}
+```
+
+The time `1702457742` is a Unix timestamp, which is `Wed Dec 13 2023 08:55:42 GMT+0000` in human-readable form.
+We can't restore from a full backup, since no full backup was performed at this point, so we choose a specific time just before this timestamp (for example, `08:55:30`) to restore to.
+
+### Restore MongoDB
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-rs-restored
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "rs"
+  replicas: 3
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "500m"
+          memory: "500Mi"
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    archiver:
+      recoveryTimestamp: "2023-12-13T08:55:30Z"
+      encryptionSecret:
+        name: encrypt-secret
+        namespace: demo
+      fullDBRepository:
+        name: mg-rs-full
+        namespace: demo
+      manifestRepository:
+        name: mg-rs-manifest
+        namespace: demo
+  terminationPolicy: WipeOut
+```
+```bash
+$ kubectl apply -f restore.yaml
+mongodb.kubedb.com/mg-rs-restored created
+```
+**Check the restored MongoDB:**
+```bash
+$ kubectl get pods -n demo | grep restore
+mg-rs-restored-0                         2/2   Running     0   4m43s
+mg-rs-restored-1                         2/2   Running     0   3m52s
+mg-rs-restored-2                         2/2   Running     0   2m59s
+mg-rs-restored-manifest-restorer-2qb46   0/1   Completed   0   4m58s
+mg-rs-restored-wal-restorer-nkxfl        0/1   Completed   0   41s
+```
+```bash
+$ kubectl get mg -n demo
+NAME             VERSION   STATUS   AGE
+mg-rs-restored   4.4.26    Ready    5m47s
+```
+**Validate the data on the restored MongoDB:**
+```bash
+$ kubectl exec -it -n demo mg-rs-restored-0 bash
+mongodb@mg-rs-restored-0:/$ mongo -u root -p $MONGO_INITDB_ROOT_PASSWORD
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
+Implicit session: session { "id" : UUID("50d3fc74-bffc-4c97-a1e6-a2ea63cb88e1") }
+MongoDB server version: 4.4.26
+Welcome to the MongoDB shell.
+For interactive help, type "help".
+For more comprehensive documentation, see
+	https://docs.mongodb.com/
+Questions? Try the MongoDB Developer Community Forums
+	https://community.mongodb.com
+---
+The server generated these startup warnings when booting:
+        2023-12-13T09:05:42.205+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
+---
+rs:PRIMARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+pink_floyd     0.000GB
+rs:PRIMARY> use pink_floyd
+switched to db pink_floyd
+rs:PRIMARY> show collections
+songs
+rs:PRIMARY> db.songs.find()
+{ "_id" : ObjectId("657970c1f965be0513c7f4d7"), "name" : "shine on you crazy diamond" }
+```
+**So we were able to successfully recover from the disaster.**
+## Cleaning up
+To clean up the Kubernetes resources created by this tutorial, run:
+```bash
+kubectl delete -n demo mg/mg-rs
+kubectl delete -n demo mg/mg-rs-restored
+kubectl delete -n demo backupstorage/gcs-storage
+kubectl delete ns demo
+```
+## Next Steps
+- Learn about [backup and restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) of MongoDB databases using Stash.
+- Learn about initializing [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Learn about [custom mongoVersions](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to setup MongoDB cluster? Check how to [configure Highly Available MongoDB Cluster](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) +- Monitor your MongoDB database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Monitor your MongoDB database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Detail concepts of [mongo object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/private-registry/_index.md b/content/docs/v2024.1.31/guides/mongodb/private-registry/_index.md new file mode 100755 index 0000000000..54ba9445cf --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/private-registry/_index.md @@ -0,0 +1,22 @@ +--- +title: Run MongoDB using Private Registry +menu: + docs_v2024.1.31: + identifier: mg-private-registry-mongodb + name: Private Registry + parent: mg-mongodb-guides + weight: 35 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry.md new file mode 100644 index 0000000000..4845f67092 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry.md @@ -0,0 +1,197 @@ +--- +title: Run MongoDB using Private Registry +menu: + docs_v2024.1.31: + identifier: mg-using-private-registry-private-registry + name: Quickstart + parent: mg-private-registry-mongodb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using private Docker registry + +KubeDB operator supports using private Docker registry. This tutorial will show you how to use KubeDB to run MongoDB database using private Docker images. + +## Before You Begin + +- Read [concept of MongoDB Version Catalog](/docs/v2024.1.31/guides/mongodb/concepts/catalog) to learn detail concepts of `MongoDBVersion` object. + +- you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- You will also need a docker private [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial we will use private repository of [docker hub](https://hub.docker.com/). 
+
+- You have to push the required images into your private registry. For MongoDB, push the `DB_IMAGE`, `TOOLS_IMAGE`, and `EXPORTER_IMAGE` of the following MongoDBVersions, where `deprecated` is not true, to your private registry (a sketch of mirroring one image is shown after the repository list below).
+
+  ```bash
+  $ kubectl get mongodbversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,INITCONTAINER_IMAGE:.spec.initContainer.image,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image
+  NAME             VERSION   INITCONTAINER_IMAGE            DB_IMAGE                                 EXPORTER_IMAGE
+  3.4.17-v1        3.4.17    kubedb/mongodb-init:4.1-v7     mongo:3.4.17                             kubedb/mongodb_exporter:v0.20.4
+  3.4.22-v1        3.4.22    kubedb/mongodb-init:4.1-v7     mongo:3.4.22                             kubedb/mongodb_exporter:v0.32.0
+  3.6.13-v1        3.6.13    kubedb/mongodb-init:4.1-v7     mongo:3.6.13                             kubedb/mongodb_exporter:v0.32.0
+  3.6.8-v1         3.6.8     kubedb/mongodb-init:4.1-v7     mongo:3.6.8                              kubedb/mongodb_exporter:v0.32.0
+  4.0.11-v1        4.0.11    kubedb/mongodb-init:4.1-v7     mongo:4.0.11                             kubedb/mongodb_exporter:v0.32.0
+  4.0.3-v1         4.0.3     kubedb/mongodb-init:4.1-v7     mongo:4.0.3                              kubedb/mongodb_exporter:v0.32.0
+  4.0.5-v1         4.0.5     kubedb/mongodb-init:4.1-v7     mongo:4.0.5                              kubedb/mongodb_exporter:v0.32.0
+  4.1.13-v1        4.1.13    kubedb/mongodb-init:4.2-v7     mongo:4.1.13                             kubedb/mongodb_exporter:v0.32.0
+  4.1.4-v1         4.1.4     kubedb/mongodb-init:4.1.4-v7   mongo:4.1.4                              kubedb/mongodb_exporter:v0.32.0
+  4.1.7-v3         4.1.7     kubedb/mongodb-init:4.2-v7     mongo:4.1.7                              kubedb/mongodb_exporter:v0.32.0
+  4.4.26           4.4.26    kubedb/mongodb-init:4.2-v7     mongo:4.4.26                             kubedb/mongodb_exporter:v0.32.0
+  5.0.2            5.0.2     kubedb/mongodb-init:4.2-v7     mongo:5.0.2                              kubedb/mongodb_exporter:v0.32.0
+  5.0.3            5.0.3     kubedb/mongodb-init:4.2-v7     mongo:5.0.3                              kubedb/mongodb_exporter:v0.32.0
+  percona-3.6.18   3.6.18    kubedb/mongodb-init:4.1-v7     percona/percona-server-mongodb:3.6.18    kubedb/mongodb_exporter:v0.32.0
+  percona-4.0.10   4.0.10    kubedb/mongodb-init:4.1-v7     percona/percona-server-mongodb:4.0.10    kubedb/mongodb_exporter:v0.32.0
+  percona-4.2.7    4.2.7     kubedb/mongodb-init:4.2-v7     percona/percona-server-mongodb:4.2.7-7   kubedb/mongodb_exporter:v0.32.0
+  percona-4.4.10   4.4.10    kubedb/mongodb-init:4.2-v7     percona/percona-server-mongodb:4.4.10    kubedb/mongodb_exporter:v0.32.0
+  ```
+
+  Docker Hub repositories:
+
+  - [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+  - [kubedb/mongo](https://hub.docker.com/r/kubedb/mongo)
+  - [kubedb/mongo-tools](https://hub.docker.com/r/kubedb/mongo-tools)
+  - [kubedb/mongodb_exporter](https://hub.docker.com/r/kubedb/mongodb_exporter)
+
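+As a quick, hedged illustration of the push step above, here is a minimal sketch of mirroring one catalog image into a private registry. The registry address in `REGISTRY` is a placeholder assumption, and `mongo:4.4.26` is just one of the `DB_IMAGE` values from the list above:
+
+```bash
+# Hypothetical registry address -- replace with your own registry/repository.
+REGISTRY=registry.example.com/kubedb
+
+# Pull the upstream image, retag it for the private registry, and push it.
+docker pull mongo:4.4.26
+docker tag mongo:4.4.26 $REGISTRY/mongo:4.4.26
+docker push $REGISTRY/mongo:4.4.26
+```
+
+Repeat the same pull/tag/push cycle for every non-deprecated image listed above.
+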
+## Install KubeDB operator from Private Registry
+
+If you want to install the KubeDB operator with private registry images, set the `--registry` and `--imagePullSecret` flags to appropriate values when installing the operator.
+Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly. The list of configuration arguments for the Helm installation command can be found [here](https://github.com/kubedb/installer/tree/v2022.10.18/charts/kubedb#configuration).
+
+
+## Use DB related images from Private Registry
+
+- Update the KubeDB catalog for the private Docker registry. For example:
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MongoDBVersion
+metadata:
+  name: "4.4.26"
+  labels:
+    app: kubedb
+spec:
+  version: "4.4.26"
+  db:
+    image: "PRIVATE_DOCKER_REGISTRY/mongo:4.4.26"
+  exporter:
+    image: "PRIVATE_DOCKER_REGISTRY/percona-mongodb-exporter:v0.8.0"
+  initContainer:
+    image: "PRIVATE_DOCKER_REGISTRY/mongodb-init:4.2"
+  podSecurityPolicies:
+    databasePolicyName: mongodb-db
+  replicationModeDetector:
+    image: "PRIVATE_DOCKER_REGISTRY/replication-mode-detector:v0.3.2"
+```
+
+### Create ImagePullSecret
+
+An ImagePullSecret is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo myregistrykey \
+  --docker-server=DOCKER_REGISTRY_SERVER \
+  --docker-username=DOCKER_USER \
+  --docker-email=DOCKER_EMAIL \
+  --docker-password=DOCKER_PASSWORD
+secret/myregistrykey created
+```
+
+The DOCKER_REGISTRY_SERVER value will be `docker.io` for Docker Hub.
+
+If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
+
+NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
+
+### Create Demo namespace
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+### Deploy MongoDB
+
+While deploying `MongoDB` from a private repository, you have to add the `myregistrykey` secret to the `MongoDB` object's `spec.podTemplate.spec.imagePullSecrets` field.
+Below is the `MongoDB` object we will create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-pvt-reg
+  namespace: demo
+spec:
+  version: 4.4.26
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      imagePullSecrets:
+        - name: myregistrykey
+```
+
+Now run the command to deploy this `MongoDB` object:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/private-registry/replicaset.yaml
+mongodb.kubedb.com/mgo-pvt-reg created
+```
+
+To check whether the images were pulled successfully from the repository, see if the `MongoDB` object reaches the `Ready` state:
+
+```bash
+$ kubectl get pods -n demo
+NAME            READY     STATUS    RESTARTS   AGE
+mgo-pvt-reg-0   1/1       Running   0          5m
+
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mgo-pvt-reg   4.4.26    Ready    38s
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mg/mgo-pvt-reg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mg/mgo-pvt-reg
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mongodb/quickstart/_index.md b/content/docs/v2024.1.31/guides/mongodb/quickstart/_index.md new file mode 100755 index 0000000000..15384c3faf --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Quickstart +menu: + docs_v2024.1.31: + identifier: mg-quickstart-mongodb + name: Quickstart + parent: mg-mongodb-guides + weight: 15 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/quickstart/quickstart.md b/content/docs/v2024.1.31/guides/mongodb/quickstart/quickstart.md new file mode 100644 index 0000000000..7404b1868a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/quickstart/quickstart.md @@ -0,0 +1,605 @@ +--- +title: MongoDB Quickstart +menu: + docs_v2024.1.31: + identifier: mg-quickstart-quickstart + name: Overview + parent: mg-quickstart-mongodb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB QuickStart + +This tutorial will show you how to use KubeDB to run a MongoDB database. + +

+  lifecycle +

+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in your cluster:
+
+  ```bash
+  $ kubectl get storageclasses
+  NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+  standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m5s
+  ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The YAML files used in this tutorial are stored in the [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Available MongoDBVersion
+
+When you install KubeDB, it creates a `MongoDBVersion` CRD object for every supported MongoDB version. Check the available versions by running the following command:
+
+```bash
+$ kubectl get mongodbversions
+NAME             VERSION   DISTRIBUTION   DB_IMAGE                                 DEPRECATED   AGE
+3.4.17-v1        3.4.17    Official       mongo:3.4.17                                          68s
+3.4.22-v1        3.4.22    Official       mongo:3.4.22                                          68s
+3.6.13-v1        3.6.13    Official       mongo:3.6.13                                          68s
+3.6.8-v1         3.6.8     Official       mongo:3.6.8                                           68s
+4.0.11-v1        4.0.11    Official       mongo:4.0.11                                          68s
+4.0.3-v1         4.0.3     Official       mongo:4.0.3                                           68s
+4.0.5-v1         4.0.5     Official       mongo:4.0.5                                           68s
+4.1.13-v1        4.1.13    Official       mongo:4.1.13                                          68s
+4.1.4-v1         4.1.4     Official       mongo:4.1.4                                           68s
+4.1.7-v3         4.1.7     Official       mongo:4.1.7                                           68s
+4.4.26           4.4.26    Official       mongo:4.4.26                                          68s
+5.0.2            5.0.2     Official       mongo:5.0.2                                           68s
+5.0.3            5.0.3     Official       mongo:5.0.3                                           68s
+percona-3.6.18   3.6.18    Percona        percona/percona-server-mongodb:3.6.18                 68s
+percona-4.0.10   4.0.10    Percona        percona/percona-server-mongodb:4.0.10                 68s
+percona-4.2.7    4.2.7     Percona        percona/percona-server-mongodb:4.2.7-7                68s
+percona-4.4.10   4.4.10    Percona        percona/percona-server-mongodb:4.4.10                 68s
+```
+
+## Create a MongoDB database
+
+KubeDB implements a `MongoDB` CRD to define the specification of a MongoDB database. Below is the `MongoDB` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-quickstart
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "rs1"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/quickstart/replicaset.yaml
+mongodb.kubedb.com/mgo-quickstart created
+```
+
+Here,
+
+- `spec.version` is the name of the `MongoDBVersion` CRD object where the Docker images are specified. In this tutorial, a MongoDB 4.4.26 database is created.
+- `spec.storageType` specifies the type of storage that will be used for the MongoDB database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MongoDB database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the PVC spec that will be dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `MongoDB` CRD, or to control which resources KubeDB should keep or delete when you delete the `MongoDB` CRD. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`. Learn the details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+- `spec.replicaSet` denotes the name of the MongoDB replica set.
+- `spec.replicas` denotes the number of replicas in the replica set.
+
+> Note: The `spec.storage` section is used to create a PVC for each database pod. It will create a PVC with the storage size specified in the `storage.resources.requests` field. Don't specify limits here. PVCs do not get resized automatically.
+
+The KubeDB operator watches for `MongoDB` objects using the Kubernetes API. When a `MongoDB` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MongoDB object name. The KubeDB operator will also create a governing service for the StatefulSet with the name `{MongoDB name}-pods`.
+
+```bash
+$ kubectl dba describe mg -n demo mgo-quickstart
+Name:               mgo-quickstart
+Namespace:          demo
+CreationTimestamp:  Mon, 13 Jun 2022 18:01:55 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-quickstart","namespace":"demo"},"spec":{"replicaSet":{"na...
+Replicas: 3 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: DoNotTerminate + +StatefulSet: + Name: mgo-quickstart + CreationTimestamp: Mon, 13 Jun 2022 18:01:55 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Replicas: 824645483384 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mgo-quickstart + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.20.114 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.12:27017 + +Service: + Name: mgo-quickstart-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.12:27017,10.244.0.14:27017,10.244.0.16:27017 + +Auth Secret: + Name: mgo-quickstart-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mgo-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-quickstart","namespace":"demo"},"spec":{"replicaSet":{"name":"rs1"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","version":"4.4.26"}} + + Creation Timestamp: 2022-06-13T12:01:55Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mgo-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mgo-quickstart + Namespace: demo + Spec: + Client Config: + Service: + Name: mgo-quickstart + Port: 27017 + Scheme: mongodb + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: rs1/mgo-quickstart-0.mgo-quickstart-pods.demo.svc:27017,mgo-quickstart-1.mgo-quickstart-pods.demo.svc:27017,mgo-quickstart-2.mgo-quickstart-pods.demo.svc:27017 + Stash: + Addon: + Backup Task: + Name: mongodb-backup-4.4.6 + Restore Task: + Name: mongodb-restore-4.4.6 + Secret: + Name: mgo-quickstart-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 3m KubeDB Operator Successfully created governing service + Normal Successful 3m KubeDB Operator Successfully created Primary Service + Normal Successful 3m KubeDB Operator Successfully created appbinding +``` + +```bash +$ kubectl get statefulset -n demo +NAME READY AGE +mgo-quickstart 3/3 3m36s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +datadir-mgo-quickstart-0 Bound pvc-18c3c456-c9a9-40b2-bec8-4302cc0aeccc 1Gi RWO standard 3m56s +datadir-mgo-quickstart-1 Bound pvc-7ac4c470-8fa7-47a9-b118-2ac20f01186d 1Gi RWO standard 
104s +datadir-mgo-quickstart-2 Bound pvc-2e6dfb71-056b-4186-927d-855db35d0014 1Gi RWO standard 77s + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-18c3c456-c9a9-40b2-bec8-4302cc0aeccc 1Gi RWO Delete Bound demo/datadir-mgo-quickstart-0 standard 4m8s +pvc-2e6dfb71-056b-4186-927d-855db35d0014 1Gi RWO Delete Bound demo/datadir-mgo-quickstart-2 standard 90s +pvc-7ac4c470-8fa7-47a9-b118-2ac20f01186d 1Gi RWO Delete Bound demo/datadir-mgo-quickstart-1 standard 117s + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +mgo-quickstart ClusterIP 10.96.20.114 27017/TCP 4m25s +mgo-quickstart-pods ClusterIP None 27017/TCP 4m25s + +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified MongoDB object: + +```yaml +$ kubectl get mg -n demo mgo-quickstart -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mgo-quickstart","namespace":"demo"},"spec":{"replicaSet":{"name":"rs1"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","version":"4.4.26"}} + creationTimestamp: "2022-06-13T12:01:55Z" + finalizers: + - kubedb.com + generation: 3 + name: mgo-quickstart + namespace: demo + resourceVersion: "2069" + uid: 197705bd-1558-4c01-aaac-c452d6972433 +spec: + allowedSchemas: + namespaces: + from: Same + arbiter: null + authSecret: + name: mgo-quickstart-auth + clusterAuthMode: keyFile + coordinator: + resources: {} + keyFileSecret: + name: mgo-quickstart-key + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: mgo-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + livenessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - bash + - -c + - "set -x; if [[ $(mongo admin --host=localhost --username=$MONGO_INITDB_ROOT_USERNAME + --password=$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin + --quiet --eval \"db.adminCommand('ping').ok\" ) -eq \"1\" ]]; then \n + \ exit 0\n fi\n exit 1" + failureThreshold: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: mgo-quickstart + replicaSet: + name: rs1 + replicas: 3 + sslMode: disabled + storage: + accessModes: + - ReadWriteOnce + resources: + 
requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageEngine: wiredTiger
+  storageType: Durable
+  terminationPolicy: Delete
+  version: 4.4.26
+status:
+  conditions:
+  - lastTransitionTime: "2022-06-13T12:01:55Z"
+    message: 'The KubeDB operator has started the provisioning of MongoDB: demo/mgo-quickstart'
+    reason: DatabaseProvisioningStartedSuccessfully
+    status: "True"
+    type: ProvisioningStarted
+  - lastTransitionTime: "2022-06-13T12:04:58Z"
+    message: All desired replicas are ready.
+    reason: AllReplicasReady
+    status: "True"
+    type: ReplicaReady
+  - lastTransitionTime: "2022-06-13T12:03:35Z"
+    message: 'The MongoDB: demo/mgo-quickstart is accepting client requests.'
+    observedGeneration: 3
+    reason: DatabaseAcceptingConnectionRequest
+    status: "True"
+    type: AcceptingConnection
+  - lastTransitionTime: "2022-06-13T12:03:35Z"
+    message: 'The MongoDB: demo/mgo-quickstart is ready.'
+    observedGeneration: 3
+    reason: ReadinessCheckSucceeded
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2022-06-13T12:04:58Z"
+    message: 'The MongoDB: demo/mgo-quickstart is successfully provisioned.'
+    observedGeneration: 3
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 3
+  phase: Ready
+```
+
+Please note that the KubeDB operator has created a new Secret called `mgo-quickstart-auth` *(format: {mongodb-object-name}-auth)* for storing the credentials of the `mongodb` superuser. This secret contains a `username` key which holds the *username* of the MongoDB superuser and a `password` key which holds the *password* of the MongoDB superuser.
+
+If you want to use a custom or existing secret, please specify it when creating the MongoDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`. For more details, please see [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specauthsecret). A minimal sketch of creating such a secret is shown at the end of this section.
+
+Now, you can connect to this database through [mongo-shell](https://docs.mongodb.com/v3.4/mongo/). In this tutorial, we are connecting to the MongoDB server from inside the pod.
+
+```bash
+$ kubectl get secrets -n demo mgo-quickstart-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mgo-quickstart-auth -o jsonpath='{.data.\password}' | base64 -d
+CaM8v9LmmSGB~&hj
+
+$ kubectl exec -it mgo-quickstart-0 -n demo sh
+
+> mongo admin
+
+rs1:PRIMARY> db.auth("root","CaM8v9LmmSGB~&hj")
+1
+
+rs1:PRIMARY> show dbs
+admin          0.000GB
+config         0.000GB
+kubedb-system  0.000GB
+local          0.000GB
+
+rs1:PRIMARY> show users
+{
+	"_id" : "admin.root",
+	"userId" : UUID("1e460a23-705d-47a4-b80a-9d2fb947e915"),
+	"user" : "root",
+	"db" : "admin",
+	"roles" : [
+		{
+			"role" : "root",
+			"db" : "admin"
+		}
+	],
+	"mechanisms" : [
+		"SCRAM-SHA-1",
+		"SCRAM-SHA-256"
+	]
+}
+
+rs1:PRIMARY> use mydb
+switched to db mydb
+
+rs1:PRIMARY> db.movies.insertOne({"top gun": "maverick"})
+{
+	"acknowledged" : true,
+	"insertedId" : ObjectId("62a72949198bad2c983d6611")
+}
+
+rs1:PRIMARY> db.movies.find()
+{ "_id" : ObjectId("62a72949198bad2c983d6611"), "top gun" : "maverick" }
+
+> exit
+bye
+```
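+
+As a hedged aside, here is a minimal sketch of creating such a custom authentication secret manually; the secret name `my-mongo-auth` and the credential values are illustrative assumptions, not values KubeDB requires:
+
+```bash
+# Hypothetical secret name and credentials -- only the two keys matter.
+# Create the secret before the MongoDB object that references it
+# via spec.authSecret.name.
+kubectl create secret generic -n demo my-mongo-auth \
+  --from-literal=username=root \
+  --from-literal=password='MyS3curePassw0rd'
+```
+
+You would then point `spec.authSecret.name` at `my-mongo-auth` in the `MongoDB` manifest.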
+
+## Database TerminationPolicy
+
+This field is used to regulate the deletion process of the related resources when the MongoDB object is deleted. Users can set the value of this field according to their needs. The available options and their use cases are described below:
+
+### DoNotTerminate Property
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete mg mgo-quickstart -n demo
+Error from server (BadRequest): admission webhook "mongodbwebhook.validators.kubedb.com" denied the request: mongodb "demo/mgo-quickstart" can't be terminated. To delete, change spec.terminationPolicy
+```
+
+### Halt Database
+
+When the [TerminationPolicy](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy) is set to `Halt` and you delete the MongoDB object, the KubeDB operator will delete the StatefulSet and its pods but leave the PVCs, secrets and database backups (snapshots) intact. Learn the details of all `TerminationPolicy` options [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
+
+You can also keep the MongoDB object and halt the database to resume it again later. If you halt the database, the KubeDB operator will delete the StatefulSets and services but will keep the MongoDB object, PVCs, secrets and backups (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database. You can use the command below to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo mg/mgo-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+mongodb.kubedb.com/mgo-quickstart patched
+```
+
+Then, you have to set `spec.halted` to `true` to put the database in a `Halted` state. You can use the command below.
+
+```bash
+$ kubectl patch -n demo mg/mgo-quickstart -p '{"spec":{"halted":true}}' --type="merge"
+mongodb.kubedb.com/mgo-quickstart patched
+```
+
+After that, KubeDB will delete the StatefulSets and services, and you can see the database Phase as `Halted`.
+
+Now, you can run the following command to get all mongodb resources in the demo namespace,
+
+```bash
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                                VERSION   STATUS   AGE
+mongodb.kubedb.com/mgo-quickstart   4.4.26    Halted   12m
+
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-swg6h   kubernetes.io/service-account-token   3      12m
+secret/mgo-quickstart-auth   Opaque                                2      12m
+secret/mgo-quickstart-key    Opaque                                1      12m
+
+NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/datadir-mgo-quickstart-0   Bound    pvc-18c3c456-c9a9-40b2-bec8-4302cc0aeccc   1Gi        RWO            standard       12m
+persistentvolumeclaim/datadir-mgo-quickstart-1   Bound    pvc-7ac4c470-8fa7-47a9-b118-2ac20f01186d   1Gi        RWO            standard       9m57s
+persistentvolumeclaim/datadir-mgo-quickstart-2   Bound    pvc-2e6dfb71-056b-4186-927d-855db35d0014   1Gi        RWO            standard       9m30s
+```
+
+
+### Resume Halted Database
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` to `false`. You can use the command below.
+
+```bash
+$ kubectl patch -n demo mg/mgo-quickstart -p '{"spec":{"halted":false}}' --type="merge"
+mongodb.kubedb.com/mgo-quickstart patched
+```
+
+When the database is resumed successfully, you can see the database Status is set to `Ready`.
+
+```bash
+$ kubectl get mg -n demo
+NAME             VERSION   STATUS   AGE
+mgo-quickstart   4.4.26    Ready    13m
+```
+
+Now, if you again exec into the pod and look for the previous data, you will see that all the data persists.
+
+```bash
+$ kubectl exec -it mgo-quickstart-0 -n demo bash
+
+mongodb@mgo-quickstart-0:/$ mongo admin -u root -p 'CaM8v9LmmSGB~&hj'
+rs1:SECONDARY> use mydb
+switched to db mydb
+
+rs1:SECONDARY> rs.slaveOk()
+WARNING: slaveOk() is deprecated and may be removed in the next major release. Please use secondaryOk() instead.
+
+rs1:SECONDARY> db.movies.find()
+{ "_id" : ObjectId("62a72949198bad2c983d6611"), "top gun" : "maverick" }
+```
+
+
+## Cleaning up
+
+If you don't set the terminationPolicy, then KubeDB sets the terminationPolicy to `Delete` by default.
+
+### Delete
+
+If you want to delete the existing database along with the volumes used, but want to be able to restore the database later from previously taken snapshots and secrets, then you might want to set the MongoDB object's terminationPolicy to `Delete`. In this setting, the StatefulSet and the volumes will be deleted. If you decide to restore the database, you can do so using the snapshots and the credentials.
+
+When the terminationPolicy is set to `Delete` and the MongoDB object is deleted, the KubeDB operator will delete the StatefulSet and its pods along with the PVCs, but leaves the secrets and database backup data (snapshots) intact.
+
+```bash
+$ kubectl patch -n demo mg/mgo-quickstart -p '{"spec":{"terminationPolicy":"Delete"}}' --type="merge"
+kubectl delete -n demo mg/mgo-quickstart
+
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                         TYPE                                  DATA   AGE
+secret/default-token-swg6h   kubernetes.io/service-account-token   3      27m
+secret/mgo-quickstart-auth   Opaque                                2      27m
+secret/mgo-quickstart-key    Opaque                                1      27m
+
+$ kubectl delete ns demo
+```
+
+### WipeOut
+
+But if you want to clean up each of the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo mg/mgo-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo mg/mgo-quickstart
+
+$ kubectl get mg,sts,svc,secret,pvc -n demo
+NAME                         TYPE                                  DATA   AGE
+
+$ kubectl delete ns demo
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionality, you might want to avoid the additional hassle caused by some safety features that are great for a production environment. You can follow these tips to avoid them; a sketch of such a throwaway test instance is shown after this list.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume a database. So, we have the `Halt` option, which preserves all your `PVCs`, `Secrets`, `Snapshots`, etc. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will delete everything created by KubeDB for a particular MongoDB CRD when you delete the MongoDB object. For more details about termination policy, please visit [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specterminationpolicy).
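+
+Combining both tips, here is a minimal, hedged sketch of a throwaway test instance; the name `mgo-test` is an assumption for the example, and the fields used are the ones discussed above:
+
+```bash
+# Everything here is discarded when the object is deleted:
+# Ephemeral storage uses an emptyDir volume, and WipeOut removes
+# all resources KubeDB created for this instance.
+kubectl apply -f - <<EOF
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-test        # hypothetical name
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Ephemeral
+  terminationPolicy: WipeOut
+EOF
+```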
+
+## Next Steps
+
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use a [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Detail concepts of the [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of the [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/_index.md
new file mode 100644
index 0000000000..e18ff7449a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/_index.md
@@ -0,0 +1,22 @@
+---
+title: Reconfigure MongoDB TLS/SSL
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure-tls
+    name: Reconfigure TLS/SSL
+    parent: mg-mongodb-guides
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/overview.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/overview.md
new file mode 100644
index 0000000000..12807f1dc0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure-tls/overview.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring TLS of MongoDB Database
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure-tls-overview
+    name: Overview
+    parent: mg-reconfigure-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring TLS of MongoDB Database
+
+This guide will give an overview of how the KubeDB Ops-manager operator reconfigures the TLS configuration of a `MongoDB` database, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, or rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+
+## How Reconfiguring MongoDB TLS Configuration Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator reconfigures TLS of a `MongoDB` database. Open the image in a new tab to see the enlarged version.
+
+*Fig: Reconfiguring TLS process of MongoDB*
+
+ + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Add TLS to a MongoDB database + +Here, We are going to create a MongoDB database without TLS and then reconfigure the database to use TLS. + +### Deploy MongoDB without TLS + +In this section, we are going to deploy a MongoDB Replicaset database without TLS. In the next few sections we will reconfigure TLS using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/mg-replicaset.yaml +mongodb.kubedb.com/mg-rs created +``` + +Now, wait until `mg-replicaset` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-rs 4.4.26 Ready 10m + +$ kubectl dba describe mongodb mg-rs -n demo +Name: mg-rs +Namespace: demo +CreationTimestamp: Thu, 11 Mar 2021 13:25:05 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mg-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"... +Replicas: 3 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: Delete + +StatefulSet: + Name: mg-rs + CreationTimestamp: Thu, 11 Mar 2021 13:25:05 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Replicas: 824639275080 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mg-rs + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.70.27 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.63:27017 + +Service: + Name: mg-rs-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.63:27017,10.244.0.65:27017,10.244.0.67:27017 + +Auth Secret: + Name: mg-rs-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mongodbs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: 
{"apiVersion":"kubedb.com/v1alpha2","kind":"MongoDB","metadata":{"annotations":{},"name":"mg-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + + Creation Timestamp: 2021-03-11T07:26:44Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mg-rs + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mongodbs.kubedb.com + Name: mg-rs + Namespace: demo + Spec: + Client Config: + Service: + Name: mg-rs + Port: 27017 + Scheme: mongodb + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: rs0/mg-rs-0.mg-rs-pods.demo.svc,mg-rs-1.mg-rs-pods.demo.svc,mg-rs-2.mg-rs-pods.demo.svc + Stash: + Addon: + Backup Task: + Name: mongodb-backup-4.4.6-v6 + Restore Task: + Name: mongodb-restore-4.4.6-v6 + Secret: + Name: mg-rs-auth + Type: kubedb.com/mongodb + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 14m MongoDB operator Successfully created stats service + Normal Successful 14m MongoDB operator Successfully created Service + Normal Successful 14m MongoDB operator Successfully stats service + Normal Successful 14m MongoDB operator Successfully stats service + Normal Successful 13m MongoDB operator Successfully stats service + Normal Successful 13m MongoDB operator Successfully stats service + Normal Successful 13m MongoDB operator Successfully stats service + Normal Successful 13m MongoDB operator Successfully stats service + Normal Successful 13m MongoDB operator Successfully stats service + Normal Successful 12m MongoDB operator Successfully stats service + Normal Successful 12m MongoDB operator Successfully patched StatefulSet demo/mg-rs +``` + +Now, we can connect to this database through [mongo-shell](https://docs.mongodb.com/v4.2/mongo/) and verify that the TLS is disabled. + + +```bash +$ kubectl get secrets -n demo mg-rs-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-rs-auth -o jsonpath='{.data.\password}' | base64 -d +U6(h_pYrekLZ2OOd + +$ kubectl exec -it mg-rs-0 -n demo -- mongo admin -u root -p 'U6(h_pYrekLZ2OOd' +rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 }) +{ + "sslMode" : "disabled", + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1615468344, 1), + "signature" : { + "hash" : BinData(0,"Xdclj9Y67WKZ/oTDGT/E1XzOY28="), + "keyId" : NumberLong("6938294279689207810") + } + }, + "operationTime" : Timestamp(1615468344, 1) +} +``` + +We can verify from the above output that TLS is disabled for this database. + +### Create Issuer/ ClusterIssuer + +Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in MongoDB. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating a ca certificates using openssl. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb" +Generating a RSA private key +................+++++ +........................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a ca-secret using the certificate files that we have just generated. 
+ +```bash +$ kubectl create secret tls mongo-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/mongo-ca created +``` + +Now, Let's create an `Issuer` using the `mongo-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/issuer.yaml +issuer.cert-manager.io/mg-issuer created +``` + +### Create MongoDBOpsRequest + +In order to add TLS to the database, we have to create a `MongoDBOpsRequest` CRO with our created issuer. Below is the YAML of the `MongoDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - mongo + organizationalUnits: + - client + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. +- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#spectls). + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/mops-add-tls.yaml +mongodbopsrequest.ops.kubedb.com/mops-add-tls created +``` + +#### Verify TLS Enabled Successfully + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CRO, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-add-tls ReconfigureTLS Successful 91s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-add-tls +Name: mops-add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T13:32:18Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:certificates: + f:issuerRef: + .: + f:apiGroup: + f:kind: + f:name: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T13:32:18Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T13:32:19Z + Resource Version: 488264 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-add-tls + UID: 0024ec16-0d43-4686-a2d7-1cdeb96e41a5 +Spec: + Database Ref: + Name: mg-rs + Tls: + Certificates: + Alias: client + Subject: + Organizational Units: + client + Organizations: + mongo + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: mg-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T13:32:19Z + Message: MongoDB ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T13:32:25Z + Message: Successfully Updated StatefulSets + Observed Generation: 1 + Reason: TLSAdded + Status: True + Type: TLSAdded + Last Transition Time: 2021-03-11T13:34:25Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T13:34:25Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m10s KubeDB Ops-manager operator Pausing MongoDB demo/mg-rs + Normal PauseDatabase 2m10s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-rs + Normal TLSAdded 2m10s KubeDB Ops-manager operator Successfully Updated StatefulSets + Normal RestartReplicaSet 10s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal ResumeDatabase 10s KubeDB Ops-manager operator Resuming MongoDB demo/mg-rs + Normal ResumeDatabase 10s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-rs + Normal Successful 10s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into a database primary node and find out the username to connect in a mongo shell, + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mgo-rs-tls-2:/$ ls /var/run/mongodb/tls +ca.crt client.pem mongo.pem +root@mgo-rs-tls-2:/$ openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=CN=root,OU=client,O=mongo +``` + +Now, we can connect using `CN=root,OU=client,O=mongo` as root to connect to the mongo shell of the master pod, + +```bash +root@mgo-rs-tls-2:/$ mongo --tls --tlsCAFile /var/run/mongodb/tls/ca.crt --tlsCertificateKeyFile /var/run/mongodb/tls/client.pem admin --host localhost --authenticationMechanism MONGODB-X509 --authenticationDatabase='$external' -u "CN=root,OU=client,O=mongo" 
--quiet
+rs0:PRIMARY>
+```
+
+We are connected to the mongo shell. Let's run some commands to verify the sslMode and the user:
+
+```bash
+rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 })
+{
+	"sslMode" : "requireSSL",
+	"ok" : 1,
+	"$clusterTime" : {
+		"clusterTime" : Timestamp(1615472249, 1),
+		"signature" : {
+			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
+			"keyId" : NumberLong(0)
+		}
+	},
+	"operationTime" : Timestamp(1615472249, 1)
+}
+```
+
+We can see from the above output that `sslMode` is set to `requireSSL`. So, TLS has been enabled successfully for this database.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it mg-rs-2 -n demo bash
+root@mg-rs-2:/# openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Jun  9 13:32:20 2021 GMT
+```
+
+So, the certificate will expire at `Jun 9 13:32:20 2021 GMT`.
+
+### Create MongoDBOpsRequest
+
+Now we are going to rotate it using a `MongoDBOpsRequest`. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: mg-rs
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `mg-rs` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/mops-rotate.yaml
+mongodbopsrequest.ops.kubedb.com/mops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for the `MongoDBOpsRequest` to be `Successful`. Run the following command to watch the `MongoDBOpsRequest` CRO,
+
+```bash
+$ kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME          TYPE             STATUS       AGE
+mops-rotate   ReconfigureTLS   Successful   112s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest`, we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-rotate +Name: mops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:17:55Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:rotateCertificates: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:17:55Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:17:55Z + Resource Version: 521643 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-rotate + UID: 6d96ead2-a868-47d8-85fb-77eecc9a96b4 +Spec: + Database Ref: + Name: mg-rs + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:17:55Z + Message: MongoDB ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:17:55Z + Message: Successfully Added Issuing Condition in Certificates + Observed Generation: 1 + Reason: IssuingConditionUpdated + Status: True + Type: IssuingConditionUpdated + Last Transition Time: 2021-03-11T16:18:00Z + Message: Successfully Issued New Certificates + Observed Generation: 1 + Reason: CertificateIssuingSuccessful + Status: True + Type: CertificateIssuingSuccessful + Last Transition Time: 2021-03-11T16:19:45Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:19:45Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal CertificateIssuingSuccessful 2m10s KubeDB Ops-manager operator Successfully Issued New Certificates + Normal RestartReplicaSet 25s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal Successful 25s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, let's check the expiration date of the certificate. + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mg-rs-2:/# openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Jun 9 16:17:55 2021 GMT +``` + +As we can see from the above output, the certificate has been rotated successfully. + +## Change Issuer/ClusterIssuer + +Now, we are going to change the issuer of this database. + +- Let's create a new ca certificate and key using a different subject `CN=ca-update,O=kubedb-updated`. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +Generating a RSA private key +..............................................................+++++ +......................................................................................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a new ca-secret using the certificate files that we have just generated. 
+ +```bash +$ kubectl create secret tls mongo-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/mongo-new-ca created +``` + +Now, Let's create a new `Issuer` using the `mongo-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-new-issuer + namespace: demo +spec: + ca: + secretName: mongo-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/new-issuer.yaml +issuer.cert-manager.io/mg-new-issuer created +``` + +### Create MongoDBOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `MongoDBOpsRequest` CRO with the newly created issuer. Below is the YAML of the `MongoDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/mops-change-issuer.yaml +mongodbopsrequest.ops.kubedb.com/mops-change-issuer created +``` + +#### Verify Issuer is changed successfully + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CRO, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-change-issuer ReconfigureTLS Successful 105s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-change-issuer +Name: mops-change-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:27:47Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:issuerRef: + .: + f:apiGroup: + f:kind: + f:name: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:27:47Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:27:47Z + Resource Version: 523903 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-change-issuer + UID: cdfe8a7d-52ef-466c-a5dd-97e74ad598ca +Spec: + Database Ref: + Name: mg-rs + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: mg-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:27:47Z + Message: MongoDB ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:27:52Z + Message: Successfully Issued New Certificates + Observed Generation: 1 + Reason: CertificateIssuingSuccessful + Status: True + Type: CertificateIssuingSuccessful + Last Transition Time: 2021-03-11T16:29:37Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:29:37Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal CertificateIssuingSuccessful 2m27s KubeDB Ops-manager operator Successfully Issued New Certificates + Normal RestartReplicaSet 42s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal Successful 42s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into a database node and find out the ca subject to see if it matches the one we have provided. + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mgo-rs-tls-2:/$ openssl x509 -in /var/run/mongodb/tls/ca.crt -inform PEM -subject -nameopt RFC2253 -noout +subject=O=kubedb-updated,CN=ca-updated +``` + +We can see from the above output that, the subject name matches the subject name of the new ca certificate that we have created. So, the issuer is changed successfully. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a MongoDBOpsRequest. + +### Create MongoDBOpsRequest + +Below is the YAML of the `MongoDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. 
+- `spec.tls.remove` specifies that we want to remove tls from this database. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure-tls/mops-remove.yaml +mongodbopsrequest.ops.kubedb.com/mops-remove created +``` + +#### Verify TLS Removed Successfully + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CRO, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-remove ReconfigureTLS Successful 105s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-remove +Name: mops-remove +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:35:32Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:remove: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:35:32Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:35:32Z + Resource Version: 525550 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-remove + UID: 99184cc4-1595-4f0f-b8eb-b65c5d0e86a6 +Spec: + Database Ref: + Name: mg-rs + Tls: + Remove: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:35:32Z + Message: MongoDB ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:35:37Z + Message: Successfully Updated StatefulSets + Observed Generation: 1 + Reason: TLSRemoved + Status: True + Type: TLSRemoved + Last Transition Time: 2021-03-11T16:37:07Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:37:07Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m5s KubeDB Ops-manager operator Pausing MongoDB demo/mg-rs + Normal PauseDatabase 2m5s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-rs + Normal TLSRemoved 2m5s KubeDB Ops-manager operator Successfully Updated StatefulSets + Normal RestartReplicaSet 35s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal ResumeDatabase 35s KubeDB Ops-manager operator Resuming MongoDB demo/mg-rs + Normal ResumeDatabase 35s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-rs + Normal Successful 35s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into the database primary node and find out that TLS is disabled or not. 
+
+```bash
+$ kubectl exec -it -n demo mg-rs-1 -- mongo admin -u root -p 'U6(h_pYrekLZ2OOd'
+rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 })
+{
+    "sslMode" : "disabled",
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1615480817, 1),
+        "signature" : {
+            "hash" : BinData(0,"CWJngDTQqDhKXyx7WMFJqqUfvhY="),
+            "keyId" : NumberLong("6938294279689207810")
+        }
+    },
+    "operationTime" : Timestamp(1615480817, 1)
+}
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mongodb -n demo mg-rs
+kubectl delete issuer -n demo mg-issuer mg-new-issuer
+kubectl delete mongodbopsrequest -n demo mops-add-tls mops-remove mops-rotate mops-change-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure/_index.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure/_index.md
new file mode 100644
index 0000000000..4f2aaf9c41
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure/_index.md
@@ -0,0 +1,22 @@
+---
+title: Reconfigure
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure
+    name: Reconfigure
+    parent: mg-mongodb-guides
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure/overview.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure/overview.md
new file mode 100644
index 0000000000..20cbf2d253
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure/overview.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure-overview
+    name: Overview
+    parent: mg-reconfigure
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring MongoDB
+
+This guide will give an overview of how KubeDB Ops-manager operator reconfigures `MongoDB` database components such as ReplicaSet, Shard, ConfigServer, Mongos, etc.
+ +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + +## How Reconfiguring MongoDB Process Works + +The following diagram shows how KubeDB Ops-manager operator reconfigures `MongoDB` database components. Open the image in a new tab to see the enlarged version. + +
+Fig: Reconfiguring process of MongoDB
+
+The Reconfiguring MongoDB process consists of the following steps:
+
+1. At first, a user creates a `MongoDB` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `MongoDB` CR.
+
+3. When the operator finds a `MongoDB` CR, it creates the required number of `StatefulSets` and related resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the various components (i.e. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `MongoDB` database, the user creates a `MongoDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CR.
+
+6. When it finds a `MongoDBOpsRequest` CR, it halts the `MongoDB` object which is referred to by the `MongoDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MongoDB` object during the reconfiguring process.
+
+7. Then the `KubeDB` Ops-manager operator will replace the existing configuration with the new configuration provided, or merge the new configuration with the existing configuration, according to the `MongoDBOpsRequest` CR.
+
+8. Then the `KubeDB` Ops-manager operator will restart the related StatefulSet Pods so that they restart with the new configuration defined in the `MongoDBOpsRequest` CR.
+
+9. After the successful reconfiguring of the `MongoDB` components, the `KubeDB` Ops-manager operator resumes the `MongoDB` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring MongoDB database components using the `MongoDBOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure/replicaset.md
new file mode 100644
index 0000000000..f1fe82a184
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure/replicaset.md
@@ -0,0 +1,655 @@
+---
+title: Reconfigure MongoDB Replicaset
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure-replicaset
+    name: Replicaset
+    parent: mg-reconfigure
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure MongoDB Replicaset Database
+
+This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a MongoDB Replicaset.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [ReplicaSet](/docs/v2024.1.31/guides/mongodb/clustering/replicaset)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Reconfigure Overview](/docs/v2024.1.31/guides/mongodb/reconfigure/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `MongoDB` Replicaset using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to reconfigure its configuration. + +### Prepare MongoDB Replicaset + +Now, we are going to deploy a `MongoDB` Replicaset database with version `4.4.26`. + +### Deploy MongoDB + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a MongoDB object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mg-replicaset-config.yaml +mongodb.kubedb.com/mg-replicaset created +``` + +Now, wait until `mg-replicaset` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-replicaset 4.4.26 Ready 19m +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a mongodb instance, +```bash +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d +nrKuxni0wDSMrgwy +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the configuration we have provided. 
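+
+Before looking at the full output, note that `getCmdLineOpts` returns a document, so the `--eval` expression can be narrowed to a single field (a convenience variant of the same command; the expected output assumes the custom configuration above took effect):
+
+```bash
+# Query only the value we set, instead of the whole parsed configuration.
+$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy \
+    --eval "db._adminCommand({getCmdLineOpts: 1}).parsed.net.maxIncomingConnections" --quiet
+10000
+```
+
+The full output of the command looks like this,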
+
+```bash
+$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+    "argv" : [
+        "mongod",
+        "--dbpath=/data/db",
+        "--auth",
+        "--ipv6",
+        "--bind_ip_all",
+        "--port=27017",
+        "--tlsMode=disabled",
+        "--replSet=rs0",
+        "--keyFile=/data/configdb/key.txt",
+        "--clusterAuthMode=keyFile",
+        "--config=/data/configdb/mongod.conf"
+    ],
+    "parsed" : {
+        "config" : "/data/configdb/mongod.conf",
+        "net" : {
+            "bindIp" : "*",
+            "ipv6" : true,
+            "maxIncomingConnections" : 10000,
+            "port" : 27017,
+            "tls" : {
+                "mode" : "disabled"
+            }
+        },
+        "replication" : {
+            "replSet" : "rs0"
+        },
+        "security" : {
+            "authorization" : "enabled",
+            "clusterAuthMode" : "keyFile",
+            "keyFile" : "/data/configdb/key.txt"
+        },
+        "storage" : {
+            "dbPath" : "/data/db"
+        }
+    },
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1614668500, 1),
+        "signature" : {
+            "hash" : BinData(0,"7sh886HhsNYajGxYGp5Jxi52IzA="),
+            "keyId" : NumberLong("6934943333319966722")
+        }
+    },
+    "operationTime" : Timestamp(1614668500, 1)
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been set to `10000`.
+
+### Reconfigure using new config secret
+
+Now we will reconfigure this database to set `maxIncomingConnections` to `20000`.
+
+Now, we will edit the `mongod.conf` file containing required configuration settings.
+
+```ini
+$ cat mongod.conf
+net:
+  maxIncomingConnections: 20000
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf
+secret/new-custom-config created
+```
+
+#### Create MongoDBOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-replicaset
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-replicaset
+  configuration:
+    replicaSet:
+      configSecret:
+        name: new-custom-config
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.replicaSet.configSecret.name` specifies the name of the new secret.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-replicaset.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-replicaset created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `MongoDB` object.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`.
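+
+Alternatively, `kubectl wait` can block until the request is done; a sketch, assuming the `Successful` condition type that appears in the describe output below:
+
+```bash
+# Block until the ops request reports the Successful condition (generous timeout).
+$ kubectl wait --for=condition=Successful --timeout=10m \
+    mongodbopsrequest/mops-reconfigure-replicaset -n demo
+```
+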
Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-replicaset Reconfigure Successful 113s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-replicaset +Name: mops-reconfigure-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T07:04:31Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:replicaSet: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T07:04:31Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:replicaSet: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T07:04:31Z + Resource Version: 29869 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-reconfigure-replicaset + UID: 064733d6-19db-4153-82f7-bc0580116ee6 +Spec: + Apply: IfReady + Configuration: + Replica Set: + Config Secret: + Name: new-custom-config + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T07:04:31Z + Message: MongoDB ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T07:06:21Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureReplicaset + Status: True + Type: ReconfigureReplicaset + Last Transition Time: 2021-03-02T07:06:21Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m55s KubeDB Ops-manager operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 2m55s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-replicaset + Normal ReconfigureReplicaset 65s KubeDB Ops-manager operator Successfully Reconfigured MongoDB + Normal ResumeDatabase 65s KubeDB Ops-manager operator Resuming MongoDB demo/mg-replicaset + Normal ResumeDatabase 65s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-replicaset + Normal Successful 65s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the new configuration we have provided. 
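+
+In addition to checking inside the server (shown next), we can confirm from outside that the operator has re-pointed the `MongoDB` object at the new secret. This is an optional sanity check; the `jsonpath` follows the `MongoDB` CR schema shown earlier:
+
+```bash
+# The MongoDB CR should now reference the new config secret.
+$ kubectl get mg -n demo mg-replicaset -o jsonpath='{.spec.configSecret.name}'
+new-custom-config
+```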
+
+```bash
+$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+    "argv" : [
+        "mongod",
+        "--dbpath=/data/db",
+        "--auth",
+        "--ipv6",
+        "--bind_ip_all",
+        "--port=27017",
+        "--tlsMode=disabled",
+        "--replSet=rs0",
+        "--keyFile=/data/configdb/key.txt",
+        "--clusterAuthMode=keyFile",
+        "--config=/data/configdb/mongod.conf"
+    ],
+    "parsed" : {
+        "config" : "/data/configdb/mongod.conf",
+        "net" : {
+            "bindIp" : "*",
+            "ipv6" : true,
+            "maxIncomingConnections" : 20000,
+            "port" : 27017,
+            "tls" : {
+                "mode" : "disabled"
+            }
+        },
+        "replication" : {
+            "replSet" : "rs0"
+        },
+        "security" : {
+            "authorization" : "enabled",
+            "clusterAuthMode" : "keyFile",
+            "keyFile" : "/data/configdb/key.txt"
+        },
+        "storage" : {
+            "dbPath" : "/data/db"
+        }
+    },
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1614668887, 1),
+        "signature" : {
+            "hash" : BinData(0,"5q35Y51+YpbVHFKoaU7lUWi38oY="),
+            "keyId" : NumberLong("6934943333319966722")
+        }
+    },
+    "operationTime" : Timestamp(1614668887, 1)
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been changed from `10000` to `20000`. So the reconfiguration of the database is successful.
+
+
+### Reconfigure using inline config
+
+Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `inlineConfig` field of the `MongoDBOpsRequest`. This will merge the new config into the existing secret.
+
+#### Create MongoDBOpsRequest
+
+Now, we will use the new configuration in the `inlineConfig` field in the `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-inline-replicaset
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-replicaset
+  configuration:
+    replicaSet:
+      inlineConfig: |
+        net:
+          maxIncomingConnections: 30000
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.replicaSet.inlineConfig` specifies the new configuration that will be merged into the existing secret.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-inline-replicaset.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-inline-replicaset created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`.
Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-inline-replicaset Reconfigure Successful 109s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-inline-replicaset +Name: mops-reconfigure-inline-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T07:09:39Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:replicaSet: + .: + f:inlineConfig: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T07:09:39Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:replicaSet: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T07:09:39Z + Resource Version: 31005 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-reconfigure-inline-replicaset + UID: 0137442b-1b04-43ed-8de7-ecd913b44065 +Spec: + Apply: IfReady + Configuration: + Replica Set: + Inline Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T07:09:39Z + Message: MongoDB ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T07:11:14Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureReplicaset + Status: True + Type: ReconfigureReplicaset + Last Transition Time: 2021-03-02T07:11:14Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 9m20s KubeDB Ops-manager operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 9m20s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-replicaset + Normal ReconfigureReplicaset 7m45s KubeDB Ops-manager operator Successfully Reconfigured MongoDB + Normal ResumeDatabase 7m45s KubeDB Ops-manager operator Resuming MongoDB demo/mg-replicaset + Normal ResumeDatabase 7m45s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-replicaset + Normal Successful 7m45s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the new configuration we have provided. 
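+
+Since `inlineConfig` merges into the secret that the database already references, the merge result can also be inspected directly from that secret. This is an optional check; the `mongod.conf` key name comes from the file we created earlier, and the exact merged layout is operator-managed:
+
+```bash
+# Decode the merged configuration from the referenced secret.
+$ kubectl get secret -n demo new-custom-config -o jsonpath='{.data.mongod\.conf}' | base64 -d
+```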
+ +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--replSet=rs0", + "--keyFile=/data/configdb/key.txt", + "--clusterAuthMode=keyFile", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 30000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "replication" : { + "replSet" : "rs0" + }, + "security" : { + "authorization" : "enabled", + "clusterAuthMode" : "keyFile", + "keyFile" : "/data/configdb/key.txt" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1614669580, 1), + "signature" : { + "hash" : BinData(0,"u/xTAa4aW/8bsRvBYPffwQCeTF0="), + "keyId" : NumberLong("6934943333319966722") + } + }, + "operationTime" : Timestamp(1614669580, 1) +} +``` + +As we can see from the configuration of ready mongodb, the value of `maxIncomingConnections` has been changed from `20000` to `30000`. So the reconfiguration of the database using the `inlineConfig` field is successful. + + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete mongodbopsrequest -n demo mops-reconfigure-replicaset mops-reconfigure-inline-replicaset +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure/sharding.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure/sharding.md new file mode 100644 index 0000000000..e99080df29 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure/sharding.md @@ -0,0 +1,579 @@ +--- +title: Reconfigure MongoDB Sharded Cluster +menu: + docs_v2024.1.31: + identifier: mg-reconfigure-shard + name: Sharding + parent: mg-reconfigure + weight: 40 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfigure MongoDB Shard + +This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a MongoDB shard. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Reconfigure Overview](/docs/v2024.1.31/guides/mongodb/reconfigure/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `MongoDB` sharded database using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to reconfigure its configuration. + +### Prepare MongoDB Shard + +Now, we are going to deploy a `MongoDB` sharded database with version `4.4.26`. + +### Deploy MongoDB database + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a MongoDB object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 3 + configSecret: + name: mg-custom-config + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + configSecret: + name: mg-custom-config + shard: + replicas: 3 + shards: 2 + configSecret: + name: mg-custom-config + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mg-shard-config.yaml +mongodb.kubedb.com/mg-sharding created +``` + +Now, wait until `mg-sharding` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-sharding 4.4.26 Ready 3m23s +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a mongodb instance, +```bash +$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\password}' | base64 -d +Dv8F55zVNiEkhHM6 +``` + +Now let's connect to a mongodb instance from each type of nodes and run a mongodb internal command to check the configuration we have provided. 
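+
+The three checks below can also be scripted as a single loop over one pod of each node type; a convenience sketch using the pod names from this tutorial:
+
+```bash
+# Check maxIncomingConnections on a mongos, a config server, and a shard pod.
+$ for pod in mg-sharding-mongos-0 mg-sharding-configsvr-0 mg-sharding-shard0-0; do
+    echo "== $pod =="
+    kubectl exec -n demo $pod -- mongo admin -u root -p Dv8F55zVNiEkhHM6 \
+      --eval "db._adminCommand({getCmdLineOpts: 1}).parsed.net.maxIncomingConnections" --quiet
+  done
+```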
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 10000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 10000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 10000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been set to `10000` in all nodes.
+
+### Reconfigure using new secret
+
+Now we will reconfigure this database to set `maxIncomingConnections` to `20000`.
+
+Now, we will edit the `mongod.conf` file containing required configuration settings.
+
+```ini
+$ cat mongod.conf
+net:
+  maxIncomingConnections: 20000
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf
+secret/new-custom-config created
+```
+
+#### Create MongoDBOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-shard
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    shard:
+      configSecret:
+        name: new-custom-config
+    configServer:
+      configSecret:
+        name: new-custom-config
+    mongos:
+      configSecret:
+        name: new-custom-config
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-sharding` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.shard.configSecret.name` specifies the name of the new secret for shard nodes.
+- `spec.configuration.configServer.configSecret.name` specifies the name of the new secret for configServer nodes.
+- `spec.configuration.mongos.configSecret.name` specifies the name of the new secret for mongos nodes.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to reconfigure all the components together, you can only specify the components (shard, configServer and mongos) that you want to reconfigure.
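+
+For example, a request that reconfigures only the mongos nodes, leaving the shard and configServer sections out, could look like the sketch below. This is illustrative only and not part of this tutorial's flow; the name `mops-reconfigure-mongos-only` is hypothetical:
+
+```bash
+# Illustrative only: reconfigure just the mongos component; shard and
+# configServer pods keep the configuration they already have.
+$ kubectl apply -f - <<EOF
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-mongos-only   # hypothetical name
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    mongos:
+      configSecret:
+        name: new-custom-config
+EOF
+```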
+
+Let's create the `MongoDBOpsRequest` CR `mops-reconfigure-shard` that we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-shard created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `MongoDB` object.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                     TYPE          STATUS       AGE
+mops-reconfigure-shard   Reconfigure   Successful   3m8s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-shard

+```
+
+Now let's connect to a mongodb instance from each type of node and run a mongodb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+  {
+    "bindIp" : "0.0.0.0",
+    "maxIncomingConnections" : 20000,
+    "port" : 27017,
+    "ssl" : {
+        "mode" : "disabled"
+    }
+  }
+
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+  {
+    "bindIp" : "0.0.0.0",
+    "maxIncomingConnections" : 20000,
+    "port" : 27017,
+    "ssl" : {
+        "mode" : "disabled"
+    }
+  }
+
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+  {
+    "bindIp" : "0.0.0.0",
+    "maxIncomingConnections" : 20000,
+    "port" : 27017,
+    "ssl" : {
+        "mode" : "disabled"
+    }
+  }
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been changed from `10000` to `20000` in all types of nodes. So the reconfiguration of the database is successful.
+
+### Reconfigure using inline config
+
+Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `inlineConfig` field of the `MongoDBOpsRequest`. This will merge the new config into the existing secret.
+
+#### Create MongoDBOpsRequest
+
+Now, we will use the new configuration in the `inlineConfig` field in the `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-inline-shard
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    shard:
+      inlineConfig: |
+        net:
+          maxIncomingConnections: 30000
+    configServer:
+      inlineConfig: |
+        net:
+          maxIncomingConnections: 30000
+    mongos:
+      inlineConfig: |
+        net:
+          maxIncomingConnections: 30000
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-sharding` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.shard.inlineConfig` specifies the new configuration that will be merged into the existing secret for shard nodes.
+- `spec.configuration.configServer.inlineConfig` specifies the new configuration that will be merged into the existing secret for configServer nodes.
+- `spec.configuration.mongos.inlineConfig` specifies the new configuration that will be merged into the existing secret for mongos nodes.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to reconfigure all the components together, you can only specify the components (shard, configServer and mongos) that you want to reconfigure.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-inline-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-inline-shard created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                            TYPE          STATUS       AGE
+mops-reconfigure-inline-shard   Reconfigure   Successful   3m24s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-inline-shard +Name: mops-reconfigure-inline-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T13:08:25Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:configServer: + .: + f:configSecret: + .: + f:name: + f:mongos: + .: + f:configSecret: + .: + f:name: + f:shard: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T13:08:25Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:configServer: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:mongos: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:shard: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T13:08:25Z + Resource Version: 103635 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-reconfigure-inline-shard + UID: ab454bcb-164c-4fa2-9eaa-dd47c60fe874 +Spec: + Apply: IfReady + Configuration: + Config Server: + Inline Config: net: + maxIncomingConnections: 30000 + + Mongos: + Inline Config: net: + maxIncomingConnections: 30000 + + Shard: + Inline Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-sharding + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T13:08:25Z + Message: MongoDB ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T13:10:10Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureConfigServer + Status: True + Type: ReconfigureConfigServer + Last Transition Time: 2021-03-02T13:13:15Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureShard + Status: True + Type: ReconfigureShard + Last Transition Time: 2021-03-02T13:14:10Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureMongos + Status: True + Type: ReconfigureMongos + Last Transition Time: 2021-03-02T13:14:10Z + Message: Successfully completed the modification process. 
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                   Age    From                         Message
+  ----    ------                   ----   ----                         -------
+  Normal  PauseDatabase            13m    KubeDB Ops-manager operator  Pausing MongoDB demo/mg-sharding
+  Normal  PauseDatabase            13m    KubeDB Ops-manager operator  Successfully paused MongoDB demo/mg-sharding
+  Normal  ReconfigureConfigServer  12m    KubeDB Ops-manager operator  Successfully Reconfigured MongoDB
+  Normal  ReconfigureShard         9m7s   KubeDB Ops-manager operator  Successfully Reconfigured MongoDB
+  Normal  ReconfigureMongos        8m12s  KubeDB Ops-manager operator  Successfully Reconfigured MongoDB
+  Normal  ResumeDatabase           8m12s  KubeDB Ops-manager operator  Resuming MongoDB demo/mg-sharding
+  Normal  ResumeDatabase           8m12s  KubeDB Ops-manager operator  Successfully resumed MongoDB demo/mg-sharding
+  Normal  Successful               8m12s  KubeDB Ops-manager operator  Successfully Reconfigured Database
+```
+
+Now let's connect to a mongodb instance from each type of node and run a mongodb internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been changed from `20000` to `30000` in all nodes. So the reconfiguration of the database using the `inlineConfig` field is successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete mongodbopsrequest -n demo mops-reconfigure-shard mops-reconfigure-inline-shard
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/reconfigure/standalone.md b/content/docs/v2024.1.31/guides/mongodb/reconfigure/standalone.md
new file mode 100644
index 0000000000..6c648ecd8d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reconfigure/standalone.md
@@ -0,0 +1,600 @@
+---
+title: Reconfigure Standalone MongoDB Database
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reconfigure-standalone
+    name: Standalone
+    parent: mg-reconfigure
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure MongoDB Standalone Database
+
+This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a MongoDB standalone database.
+ +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Reconfigure Overview](/docs/v2024.1.31/guides/mongodb/reconfigure/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `MongoDB` standalone using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to reconfigure its configuration. + +### Prepare MongoDB Standalone Database + +Now, we are going to deploy a `MongoDB` standalone database with version `4.4.26`. + +### Deploy MongoDB standalone + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a MongoDB object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mg-standalone-config.yaml +mongodb.kubedb.com/mg-standalone created +``` + +Now, wait until `mg-standalone` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-standalone 4.4.26 Ready 23s +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a mongodb instance, +```bash +$ kubectl get secrets -n demo mg-standalone-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-standalone-auth -o jsonpath='{.data.\password}' | base64 -d +m6lXjZugrC4VEpB8 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the configuration we have provided. 
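+
+Optionally, you can first confirm what the pod will load by decoding the config secret we created; the `mongod.conf` key name comes from the `--from-file` flag above:
+
+```bash
+# Decode the custom configuration stored in the secret.
+$ kubectl get secret -n demo mg-custom-config -o jsonpath='{.data.mongod\.conf}' | base64 -d
+net:
+  maxIncomingConnections: 10000
+```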
+
+```bash
+$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+  "argv" : [
+    "mongod",
+    "--dbpath=/data/db",
+    "--auth",
+    "--ipv6",
+    "--bind_ip_all",
+    "--port=27017",
+    "--tlsMode=disabled",
+    "--config=/data/configdb/mongod.conf"
+  ],
+  "parsed" : {
+    "config" : "/data/configdb/mongod.conf",
+    "net" : {
+      "bindIp" : "*",
+      "ipv6" : true,
+      "maxIncomingConnections" : 10000,
+      "port" : 27017,
+      "tls" : {
+        "mode" : "disabled"
+      }
+    },
+    "security" : {
+      "authorization" : "enabled"
+    },
+    "storage" : {
+      "dbPath" : "/data/db"
+    }
+  },
+  "ok" : 1
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been set to `10000`.
+
+### Reconfigure using new secret
+
+Now we will reconfigure this database to set `maxIncomingConnections` to `20000`.
+
+To do so, we will edit the `mongod.conf` file with the desired configuration settings.
+
+```ini
+$ cat mongod.conf
+net:
+  maxIncomingConnections: 20000
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf
+secret/new-custom-config created
+```
+
+#### Create MongoDBOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-standalone
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-standalone
+  configuration:
+    standalone:
+      configSecret:
+        name: new-custom-config
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-standalone` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.standalone.configSecret.name` specifies the name of the new secret.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-standalone.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-standalone created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `MongoDB` object.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                          TYPE          STATUS       AGE
+mops-reconfigure-standalone   Reconfigure   Successful   10m
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
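+
+Before describing it, note that instead of watching you can also block until the request completes (a sketch; `--for=jsonpath` needs a reasonably recent `kubectl`, and `.status.phase` is the field shown in the output below):
+
+```bash
+# Wait up to 5 minutes for the ops request to reach the Successful phase
+$ kubectl wait mongodbopsrequest -n demo mops-reconfigure-standalone \
+    --for=jsonpath='{.status.phase}'=Successful --timeout=5m
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-standalone condition met
+```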
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-standalone +Name: mops-reconfigure-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:04:45Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:standalone: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:04:45Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:standalone: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:04:45Z + Resource Version: 125826 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-reconfigure-standalone + UID: f63bb606-9df5-4516-9901-97dfe5b46b15 +Spec: + Apply: IfReady + Configuration: + Standalone: + Config Secret: + Name: new-custom-config + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T15:04:45Z + Message: MongoDB ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T15:05:10Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureStandalone + Status: True + Type: ReconfigureStandalone + Last Transition Time: 2021-03-02T15:05:10Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 60s KubeDB Ops-manager operator Pausing MongoDB demo/mg-standalone + Normal PauseDatabase 60s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-standalone + Normal ReconfigureStandalone 35s KubeDB Ops-manager operator Successfully Reconfigured MongoDB + Normal ResumeDatabase 35s KubeDB Ops-manager operator Resuming MongoDB demo/mg-standalone + Normal ResumeDatabase 35s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-standalone + Normal Successful 35s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the new configuration we have provided. 
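+
+If you only care about the `net` section, the same query can be narrowed the way the sharded-cluster guide does (a sketch reusing the credentials fetched earlier):
+
+```bash
+# Print only the parsed net settings
+$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 \
+    --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+```
+
+The full output looks like this: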
+
+```bash
+$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+  "argv" : [
+    "mongod",
+    "--dbpath=/data/db",
+    "--auth",
+    "--ipv6",
+    "--bind_ip_all",
+    "--port=27017",
+    "--tlsMode=disabled",
+    "--config=/data/configdb/mongod.conf"
+  ],
+  "parsed" : {
+    "config" : "/data/configdb/mongod.conf",
+    "net" : {
+      "bindIp" : "*",
+      "ipv6" : true,
+      "maxIncomingConnections" : 20000,
+      "port" : 27017,
+      "tls" : {
+        "mode" : "disabled"
+      }
+    },
+    "security" : {
+      "authorization" : "enabled"
+    },
+    "storage" : {
+      "dbPath" : "/data/db"
+    }
+  },
+  "ok" : 1
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been changed from `10000` to `20000`. So the reconfiguration of the database is successful.
+
+
+### Reconfigure using inline config
+
+Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `inlineConfig` field of the `MongoDBOpsRequest`. This will merge the new config into the existing secret.
+
+#### Create MongoDBOpsRequest
+
+Now, we will provide the new configuration in the `inlineConfig` field of the `MongoDBOpsRequest` CR. The `MongoDBOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-reconfigure-inline-standalone
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-standalone
+  configuration:
+    standalone:
+      inlineConfig: |
+        net:
+          maxIncomingConnections: 30000
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-standalone` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.standalone.inlineConfig` specifies the new configuration that will be merged into the existing secret.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reconfigure/mops-reconfigure-inline-standalone.yaml
+mongodbopsrequest.ops.kubedb.com/mops-reconfigure-inline-standalone created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                                 TYPE          STATUS       AGE
+mops-reconfigure-inline-standalone   Reconfigure   Successful   38s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-reconfigure-inline-standalone +Name: mops-reconfigure-inline-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:09:12Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:standalone: + .: + f:inlineConfig: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:09:12Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:standalone: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:09:13Z + Resource Version: 126782 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-reconfigure-inline-standalone + UID: 33eea32f-e2af-4e36-b612-c528549e3d65 +Spec: + Apply: IfReady + Configuration: + Standalone: + Inline Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T15:09:13Z + Message: MongoDB ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T15:09:38Z + Message: Successfully Reconfigured MongoDB + Observed Generation: 1 + Reason: ReconfigureStandalone + Status: True + Type: ReconfigureStandalone + Last Transition Time: 2021-03-02T15:09:38Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 118s KubeDB Ops-manager operator Pausing MongoDB demo/mg-standalone + Normal PauseDatabase 118s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-standalone + Normal ReconfigureStandalone 93s KubeDB Ops-manager operator Successfully Reconfigured MongoDB + Normal ResumeDatabase 93s KubeDB Ops-manager operator Resuming MongoDB demo/mg-standalone + Normal ResumeDatabase 93s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-standalone + Normal Successful 93s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the new configuration we have provided. 
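+
+Since `inlineConfig` merges the change into the secret the database already references, you can first confirm which secret that is (a quick sketch; whether the operator rewrites that secret in place or swaps in a generated one is an operator-side detail):
+
+```bash
+# Show the config secret currently referenced by the database
+$ kubectl get mg -n demo mg-standalone -o jsonpath='{.spec.configSecret.name}'
+```
+
+Then, checking the running configuration: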
+
+```bash
+$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+  "argv" : [
+    "mongod",
+    "--dbpath=/data/db",
+    "--auth",
+    "--ipv6",
+    "--bind_ip_all",
+    "--port=27017",
+    "--tlsMode=disabled",
+    "--config=/data/configdb/mongod.conf"
+  ],
+  "parsed" : {
+    "config" : "/data/configdb/mongod.conf",
+    "net" : {
+      "bindIp" : "*",
+      "ipv6" : true,
+      "maxIncomingConnections" : 30000,
+      "port" : 27017,
+      "tls" : {
+        "mode" : "disabled"
+      }
+    },
+    "security" : {
+      "authorization" : "enabled"
+    },
+    "storage" : {
+      "dbPath" : "/data/db"
+    }
+  },
+  "ok" : 1
+}
+```
+
+As we can see from the configuration of the running mongodb, the value of `maxIncomingConnections` has been changed from `20000` to `30000`. So the reconfiguration of the database using the `inlineConfig` field is successful.
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-standalone
+kubectl delete mongodbopsrequest -n demo mops-reconfigure-standalone mops-reconfigure-inline-standalone
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/reprovision/_index.md b/content/docs/v2024.1.31/guides/mongodb/reprovision/_index.md
new file mode 100644
index 0000000000..2ef0d3ff16
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reprovision/_index.md
@@ -0,0 +1,22 @@
+---
+title: Reprovision MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reprovision
+    name: Reprovision
+    parent: mg-mongodb-guides
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/reprovision/reprovision.md b/content/docs/v2024.1.31/guides/mongodb/reprovision/reprovision.md
new file mode 100644
index 0000000000..544a09ca65
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/reprovision/reprovision.md
@@ -0,0 +1,211 @@
+---
+title: Reprovision MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-reprovision-details
+    name: Reprovision MongoDB
+    parent: mg-reprovision
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reprovision MongoDB
+
+KubeDB supports reprovisioning the MongoDB database via a MongoDBOpsRequest. Reprovisioning is useful if, for some reason, you need to deploy a fresh MongoDB with the same specifications. This tutorial will show you how to do that with a `MongoDBOpsRequest`.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, we use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MongoDB
+
+In this section, we are going to deploy a MongoDB database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "300Mi"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+  arbiter: {}
+  hidden:
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+```
+
+- `spec.replicaSet` represents the configuration for replicaset.
+  - `name` denotes the name of mongodb replicaset.
+- `spec.replicas` denotes the number of general members in the `replicaset` mongodb replicaset.
+- `spec.podTemplate` denotes specifications of the general replicaset members.
+- `spec.storage` holds the persistent volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a persistent volume with this configuration.
+- `spec.arbiter` denotes arbiter-node spec of the deployed MongoDB CRD.
+- `spec.hidden` denotes hidden-node spec of the deployed MongoDB CRD.
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reprovision/mongo.yaml
+mongodb.kubedb.com/mongo created
+```
+
+## Apply Reprovision opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: repro
+  namespace: demo
+spec:
+  type: Reprovision
+  databaseRef:
+    name: mongo
+  apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the MongoDB database. The db should be available in the same namespace as the opsRequest.
+- `spec.apply` is set to `Always` to denote that we want reprovisioning even if the db was not Ready.
+
+> Note: The method of reprovisioning the standalone & sharded db is exactly the same as above. All you need is to specify the corresponding MongoDB name in the `spec.databaseRef.name` section.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/reprovision/ops.yaml
+mongodbopsrequest.ops.kubedb.com/repro created
+```
+
+Now the Ops-manager operator will:
+1) Pause the DB
+2) Delete all statefulsets
+3) Remove `Provisioned` condition from db
+4) Reconcile the db for start
+5) Wait for DB to be Ready.
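+
+While those steps run, you can follow the request and the pods from another terminal (a minimal sketch; `mgops` is the short name for `mongodbopsrequest`, as used below):
+
+```bash
+# Watch the ops request phase and the pods being recreated
+$ watch kubectl get mgops,pods -n demo
+```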
+
+```shell
+$ kubectl get mgops -n demo
+NAME    TYPE          STATUS       AGE
+repro   Reprovision   Successful   2m
+
+
+$ kubectl get mgops -n demo -oyaml repro
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"MongoDBOpsRequest","metadata":{"annotations":{},"name":"repro","namespace":"demo"},"spec":{"databaseRef":{"name":"mongo"},"type":"Reprovision"}}
+  creationTimestamp: "2022-10-31T09:50:35Z"
+  generation: 1
+  name: repro
+  namespace: demo
+  resourceVersion: "743676"
+  uid: b3444d38-bef3-4043-925f-551fe6c86123
+spec:
+  apply: Always
+  databaseRef:
+    name: mongo
+  type: Reprovision
+status:
+  conditions:
+  - lastTransitionTime: "2022-10-31T09:50:35Z"
+    message: MongoDB ops request is reprovisioning the database
+    observedGeneration: 1
+    reason: Reprovision
+    status: "True"
+    type: Reprovision
+  - lastTransitionTime: "2022-10-31T09:50:45Z"
+    message: Successfully Deleted All the StatefulSets
+    observedGeneration: 1
+    reason: DeleteStatefulSets
+    status: "True"
+    type: DeleteStatefulSets
+  - lastTransitionTime: "2022-10-31T09:52:05Z"
+    message: Database Phase is Ready
+    observedGeneration: 1
+    reason: DatabaseReady
+    status: "True"
+    type: DatabaseReady
+  - lastTransitionTime: "2022-10-31T09:52:05Z"
+    message: Successfully Reprovisioned the database
+    observedGeneration: 1
+    reason: Successful
+    status: "True"
+    type: Successful
+  observedGeneration: 1
+  phase: Successful
+```
+
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mongodbopsrequest -n demo repro
+kubectl delete mongodb -n demo mongo
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/restart/_index.md b/content/docs/v2024.1.31/guides/mongodb/restart/_index.md
new file mode 100644
index 0000000000..bed2170e49
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/restart/_index.md
@@ -0,0 +1,22 @@
+---
+title: Restart MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-restart
+    name: Restart
+    parent: mg-mongodb-guides
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/restart/restart.md b/content/docs/v2024.1.31/guides/mongodb/restart/restart.md
new file mode 100644
index 0000000000..b31c38e4ff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/restart/restart.md
@@ -0,0 +1,207 @@
+---
+title: Restart MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-restart-details
+    name: Restart MongoDB
+    parent: mg-restart
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Restart MongoDB
+
+KubeDB supports restarting the MongoDB database via a MongoDBOpsRequest. Restarting is useful if some pods get stuck in a phase, or are not working correctly. This tutorial will show you how to do that with a `MongoDBOpsRequest`.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, we use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MongoDB
+
+In this section, we are going to deploy a MongoDB database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "300Mi"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+  arbiter: {}
+  hidden:
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+```
+
+- `spec.replicaSet` represents the configuration for replicaset.
+  - `name` denotes the name of mongodb replicaset.
+- `spec.replicas` denotes the number of general members in the `replicaset` mongodb replicaset.
+- `spec.podTemplate` denotes specifications of the general replicaset members.
+- `spec.storage` holds the persistent volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a persistent volume with this configuration.
+- `spec.arbiter` denotes arbiter-node spec of the deployed MongoDB CRD.
+- `spec.hidden` denotes hidden-node spec of the deployed MongoDB CRD.
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/restart/mongo.yaml
+mongodb.kubedb.com/mongo created
+```
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: restart
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: mongo
+  readinessCriteria:
+    oplogMaxLagSeconds: 10
+    objectsCountDiffPercentage: 15
+  timeout: 3m
+  apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the MongoDB database. The db should be available in the same namespace as the opsRequest.
+- The meaning of the `spec.readinessCriteria`, `spec.timeout` & `spec.apply` fields can be found [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria).
+
+> Note: The method of restarting the standalone & sharded db is exactly the same as above. All you need is to specify the corresponding MongoDB name in the `spec.databaseRef.name` section.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/restart/ops.yaml
+mongodbopsrequest.ops.kubedb.com/restart created
+```
+
+Now the Ops-manager operator will first restart the general secondary pods, then serially the arbiters, the hidden nodes, and lastly the primary of the database.
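+
+You can observe that restart order live while the request runs (a quick sketch; pod names follow the `<db-name>-<ordinal>` pattern used throughout this tutorial):
+
+```bash
+# Watch the pods restart one by one in the demo namespace
+$ kubectl get pods -n demo -w
+```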
+
+```shell
+$ kubectl get mgops -n demo
+NAME      TYPE      STATUS       AGE
+restart   Restart   Successful   10m
+
+$ kubectl get mgops -n demo -oyaml restart
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"MongoDBOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"mongo"},"readinessCriteria":{"objectsCountDiffPercentage":15,"oplogMaxLagSeconds":10},"timeout":"3m","type":"Restart"}}
+  creationTimestamp: "2022-10-31T08:54:45Z"
+  generation: 1
+  name: restart
+  namespace: demo
+  resourceVersion: "738625"
+  uid: 32f6c52f-6114-4e25-b3a1-877223cf7145
+spec:
+  apply: Always
+  databaseRef:
+    name: mongo
+  readinessCriteria:
+    objectsCountDiffPercentage: 15
+    oplogMaxLagSeconds: 10
+  timeout: 3m
+  type: Restart
+status:
+  conditions:
+  - lastTransitionTime: "2022-10-31T08:54:45Z"
+    message: MongoDB ops request is restarting the database nodes
+    observedGeneration: 1
+    reason: Restart
+    status: "True"
+    type: Restart
+  - lastTransitionTime: "2022-10-31T08:57:05Z"
+    message: Successfully Restarted ReplicaSet nodes
+    observedGeneration: 1
+    reason: RestartReplicaSet
+    status: "True"
+    type: RestartReplicaSet
+  - lastTransitionTime: "2022-10-31T08:57:05Z"
+    message: Successfully restarted all nodes of MongoDB
+    observedGeneration: 1
+    reason: Successful
+    status: "True"
+    type: Successful
+  observedGeneration: 1
+  phase: Successful
+```
+
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mongodbopsrequest -n demo restart
+kubectl delete mongodb -n demo mongo
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/_index.md b/content/docs/v2024.1.31/guides/mongodb/scaling/_index.md
new file mode 100644
index 0000000000..6720bbb437
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Scaling MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-scaling
+    name: Scaling
+    parent: mg-mongodb-guides
+    weight: 43
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..7f3c3e6f3a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Horizontal Scaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-horizontal-scaling
+    name: Horizontal Scaling
+    parent: mg-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/overview.md b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/overview.md
new file mode 100644
index 0000000000..390541016d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/overview.md
@@ -0,0 +1,65 @@
+---
+title: MongoDB Horizontal Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mg-horizontal-scaling-overview
+    name: Overview
+    parent: mg-horizontal-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Horizontal Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator scales up or down `MongoDB` database replicas of various components, such as ReplicaSet, Shard, ConfigServer, Mongos, etc.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator scales up or down `MongoDB` database components. Open the image in a new tab to see the enlarged version.
+
+Fig: Horizontal scaling process of MongoDB
+
+The Horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `MongoDB` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `MongoDB` CR.
+
+3. When the operator finds a `MongoDB` CR, it creates the required number of `StatefulSets` and related resources such as secrets, services, etc.
+
+4. Then, in order to scale the various components (i.e. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `MongoDB` database, the user creates a `MongoDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CR.
+
+6. When it finds a `MongoDBOpsRequest` CR, it halts the `MongoDB` object referred to by the `MongoDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MongoDB` object during the horizontal scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will scale the related StatefulSet Pods to reach the expected number of replicas defined in the `MongoDBOpsRequest` CR.
+
+8. After successfully scaling the replicas of the related StatefulSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `MongoDB` object to reflect the updated state.
+
+9. After the successful scaling of the `MongoDB` replicas, the `KubeDB` Ops-manager operator resumes the `MongoDB` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on horizontal scaling of a MongoDB database using `MongoDBOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/replicaset.md
new file mode 100644
index 0000000000..25965df628
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/replicaset.md
@@ -0,0 +1,703 @@
+---
+title: Horizontal Scaling MongoDB Replicaset
+menu:
+  docs_v2024.1.31:
+    identifier: mg-horizontal-scaling-replicaset
+    name: Replicaset
+    parent: mg-horizontal-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Horizontal Scale MongoDB Replicaset
+
+This guide will show you how to use `KubeDB` Ops-manager operator to scale the replicaset of a MongoDB database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [Replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replicaset)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Replicaset
+
+Here, we are going to deploy a `MongoDB` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare MongoDB Replicaset Database
+
+Now, we are going to deploy a `MongoDB` replicaset database with version `4.4.26`.
+
+### Deploy MongoDB replicaset
+
+In this section, we are going to deploy a MongoDB replicaset database. Then, in the next section we will scale the database using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-replicaset
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/mg-replicaset.yaml
+mongodb.kubedb.com/mg-replicaset created
+```
+
+Now, wait until `mg-replicaset` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-replicaset   4.4.26    Ready    2m36s
+```
+
+Let's check the number of replicas this database has from the MongoDB object, and the number of pods the statefulset has,
+
+```bash
+$ kubectl get mongodb -n demo mg-replicaset -o json | jq '.spec.replicas'
+3
+
+$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both commands that the database has 3 replicas in the replicaset.
+
+Also, we can verify the replicas of the replicaset from an internal mongodb command by exec-ing into a replica.
+ +First we need to get the username and password to connect to a mongodb instance, +```bash +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d +nrKuxni0wDSMrgwy +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 171, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : ISODate("2021-03-02T15:19:53Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 128, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:22:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:22:32.411Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:22:31.543Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 83, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:22:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:22:30.615Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:22:31.543Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the replicaset has 3 nodes. + +We are now ready to apply the `MongoDBOpsRequest` CR to scale this database. + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the replicaset to meet the desired number of replicas after scaling. + +#### Create MongoDBOpsRequest + +In order to scale up the replicas of the replicaset of the database, we have to create a `MongoDBOpsRequest` CR with our desired replicas. 
Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-hscale-up-replicaset
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mg-replicaset
+  horizontalScaling:
+    replicas: 4
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.replicas` specifies the desired replicas after scaling.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-replicaset.yaml
+mongodbopsrequest.ops.kubedb.com/mops-hscale-up-replicaset created
+```
+
+#### Verify Replicaset replicas scaled up successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the replicas of the `MongoDB` object and related `StatefulSets` and `Pods`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                        TYPE                STATUS       AGE
+mops-hscale-up-replicaset   HorizontalScaling   Successful   106s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-hscale-up-replicaset
+Name:         mops-hscale-up-replicaset
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2021-03-02T15:23:14Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:horizontalScaling:
+          .:
+          f:replicas:
+        f:type:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2021-03-02T15:23:14Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:         kubedb-enterprise
+    Operation:       Update
+    Time:            2021-03-02T15:23:14Z
+  Resource Version:  129882
+  Self Link:         /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-hscale-up-replicaset
+  UID:               e97dac5c-5e3a-4153-9b31-8ba02af54bcb
+Spec:
+  Database Ref:
+    Name:  mg-replicaset
+  Horizontal Scaling:
+    Replicas:  4
+  Type:        HorizontalScaling
+Status:
+  Conditions:
+    Last Transition Time:  2021-03-02T15:23:14Z
+    Message:               MongoDB ops request is horizontally scaling database
+    Observed Generation:   1
+    Reason:                HorizontalScaling
+    Status:                True
+    Type:                  HorizontalScaling
+    Last Transition Time:  2021-03-02T15:24:00Z
+    Message:               Successfully Horizontally Scaled Up ReplicaSet
+    Observed Generation:   1
+    Reason:                ScaleUpReplicaSet
+    Status:                True
+    Type:                  ScaleUpReplicaSet
+    Last Transition Time:  2021-03-02T15:24:00Z
+    Message:               Successfully Horizontally Scaled MongoDB
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason             Age   From                         Message
+  ----    ------             ----  ----                         -------
+  Normal
PauseDatabase 91s KubeDB Ops-manager operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 91s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-replicaset + Normal ScaleUpReplicaSet 45s KubeDB Ops-manager operator Successfully Horizontally Scaled Up ReplicaSet + Normal ResumeDatabase 45s KubeDB Ops-manager operator Resuming MongoDB demo/mg-replicaset + Normal ResumeDatabase 45s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-replicaset + Normal Successful 45s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-replicaset -o json | jq '.spec.replicas' +4 + +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 344, + "optime" : { + "ts" : Timestamp(1614698724, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : ISODate("2021-03-02T15:19:53Z"), + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 301, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.889Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:25.179Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 256, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.888Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:25.136Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-replicaset-3.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + 
"state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 93, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.926Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:24.089Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of the replicaset is `4`. That means we have successfully scaled up the replicas of the MongoDB replicaset. + + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the replicaset to meet the desired number of replicas after scaling. + +#### Create MongoDBOpsRequest + +In order to scale down the replicas of the replicaset of the database, we have to create a `MongoDBOpsRequest` CR with our desired replicas. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-hscale-down-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `mops-hscale-down-replicaset` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.replicas` specifies the desired replicas after scaling. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-replicaset.yaml +mongodbopsrequest.ops.kubedb.com/mops-hscale-down-replicaset created +``` + +#### Verify Replicaset replicas scaled down successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `MongoDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-hscale-down-replicaset HorizontalScaling Successful 2m32s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-hscale-down-replicaset +Name: mops-hscale-down-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:25:57Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:replicas: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:25:57Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:25:57Z + Resource Version: 130393 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-hscale-down-replicaset + UID: fbfee7f8-1dd5-4f58-aad7-ad2e2d66b295 +Spec: + Database Ref: + Name: mg-replicaset + Horizontal Scaling: + Replicas: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T15:25:57Z + Message: MongoDB ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T15:26:17Z + Message: Successfully Horizontally Scaled Down ReplicaSet + Observed Generation: 1 + Reason: ScaleDownReplicaSet + Status: True + Type: ScaleDownReplicaSet + Last Transition Time: 2021-03-02T15:26:17Z + Message: Successfully Horizontally Scaled MongoDB + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 50s KubeDB Ops-manager operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 50s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-replicaset + Normal ScaleDownReplicaSet 30s KubeDB Ops-manager operator Successfully Horizontally Scaled Down ReplicaSet + Normal ResumeDatabase 30s KubeDB Ops-manager operator Resuming MongoDB demo/mg-replicaset + Normal ResumeDatabase 30s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-replicaset + Normal Successful 30s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-replicaset -o json | jq '.spec.replicas' +3 + +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 410, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : ISODate("2021-03-02T15:19:53Z"), + 
"configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 367, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:26:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:26:29.423Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:26:29.330Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 322, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:26:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:26:31.022Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:26:31.224Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of the replicaset is `3`. That means we have successfully scaled down the replicas of the MongoDB replicaset. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete mongodbopsrequest -n demo mops-vscale-replicaset +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/sharding.md b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/sharding.md new file mode 100644 index 0000000000..4bd868bfaf --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/sharding.md @@ -0,0 +1,1447 @@ +--- +title: Horizontal Scaling MongoDB Shard +menu: + docs_v2024.1.31: + identifier: mg-horizontal-scaling-shard + name: Sharding + parent: mg-horizontal-scaling + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Horizontal Scale MongoDB Shard + +This guide will show you how to use `KubeDB` Ops-manager operator to scale the shard of a MongoDB database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). 
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/mongodb/scaling/horizontal-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Sharded Database
+
+Here, we are going to deploy a `MongoDB` sharded database using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare MongoDB Sharded Database
+
+Now, we are going to deploy a `MongoDB` sharded database with version `4.4.26`.
+
+### Deploy MongoDB Sharded Database
+
+In this section, we are going to deploy a MongoDB sharded database. Then, in the next sections, we will scale the shards of the database using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-sharding
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/mg-shard.yaml
+mongodb.kubedb.com/mg-sharding created
+```
+
+Now, wait until `mg-sharding` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mg-sharding   4.4.26    Ready    10m
+```
+
+##### Verify Number of Shard and Shard Replicas
+
+Let's check the number of shards this database has from the MongoDB object and the number of statefulsets it has,
+
+```bash
+$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards'
+2
+
+$ kubectl get sts -n demo
+NAME                    READY   AGE
+mg-sharding-configsvr   3/3     23m
+mg-sharding-mongos      2/2     22m
+mg-sharding-shard0      3/3     23m
+mg-sharding-shard1      3/3     23m
+```
+
+So, we can see from both outputs that the database has 2 shards.
+
+Now, let's check the number of replicas each shard has from the MongoDB object and the number of pods the statefulsets have,
+
+```bash
+$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas'
+3
+
+$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both outputs that the database has 3 replicas in each shard.
+
+Also, we can verify the number of shards from an internal mongodb command by exec-ing into a mongos node.
+
+First, we need to get the username and password to connect to a mongos instance,
+```bash
+$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\password}' | base64 -d
+xBC-EwMFivFCgUlK
+```
+
+Now let's connect to a mongos instance and run a mongodb internal command to check the number of shards,
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("603e5a4bec470e6b4197e10b")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled:  yes
+        Currently running:  no
+        Failed balancer rounds in last 5 attempts:  0
+        Migration Results for the last 24 hours:
+                No recent migrations
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+```
+
+We can see from the above output that the number of shards is 2.
+
+Also, we can verify the number of replicas each shard has from an internal mongodb command by exec-ing into a shard node; a count-only variant is sketched below.
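+
+If you only need the member count rather than the full status document, the `--eval` expression can index into the result directly. A minimal sketch, reusing the credentials retrieved above (the expression itself is an illustrative assumption, not a KubeDB-specific command):
+
+```bash
+# Print only the number of replica-set members reported by shard0
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK \
+    --eval "db.adminCommand( { replSetGetStatus : 1 } ).members.length" --quiet
+3
+```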
+ +Now let's connect to a shard instance and run a mongodb internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 338, + "optime" : { + "ts" : Timestamp(1614699416, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:56Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614699092, 1), + "electionDate" : ISODate("2021-03-02T15:31:32Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 291, + "optime" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:53Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:36:53Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:36:56.692Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:36:56.015Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 259, + "optime" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:53Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:36:53Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:36:56.732Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:36:57.773Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the number of replica is 3. + +##### Verify Number of ConfigServer + +Let's check the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +3 +``` + +We can see from both command that the database has `3` replicas in the configServer. 
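+
+Since this desired-versus-observed comparison repeats for every component, it can be wrapped in a small shell helper. A sketch (the `check_replicas` function is hypothetical, built only from the kubectl and jq calls already shown in this guide):
+
+```bash
+# Compare the desired replica count (MongoDB object) with the observed one (StatefulSet)
+check_replicas() {
+  local jq_path=$1 sts=$2
+  echo "desired:  $(kubectl get mongodb -n demo mg-sharding -o json | jq "$jq_path")"
+  echo "observed: $(kubectl get sts -n demo "$sts" -o json | jq '.spec.replicas')"
+}
+
+check_replicas '.spec.shardTopology.configServer.replicas' mg-sharding-configsvr
+```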
+ +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 423, + "optime" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:12Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614699081, 2), + "electionDate" : ISODate("2021-03-02T15:31:21Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 385, + "optime" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:38:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:38:13.573Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:38:12.725Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 340, + "optime" : { + "ts" : Timestamp(1614699490, 8), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699490, 8), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:10Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:38:10Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:38:11.665Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:38:11.827Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the configServer has 3 nodes. + +##### Verify Number of Mongos +Let's check the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas' +2 + +$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas' +2 +``` + +We can see from both command that the database has `2` replicas in the mongos. 
+
+Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("603e5a4bec470e6b4197e10b")
+  }
+  shards:
+        {  "_id" : "shard0",  "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+        {  "_id" : "shard1",  "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+        "4.4.26" : 2
+  autosplit:
+        Currently enabled: yes
+  balancer:
+        Currently enabled:  yes
+        Currently running:  no
+        Failed balancer rounds in last 5 attempts:  0
+        Migration Results for the last 24 hours:
+                No recent migrations
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+```
+
+We can see from the above output that the mongos has 2 active nodes.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to scale up and down all the components of the database.
+
+### Scale Up
+
+Here, we are going to scale up all the components of the database to meet the desired number of replicas after scaling.
+
+#### Create MongoDBOpsRequest
+
+In order to scale up, we have to create a `MongoDBOpsRequest` CR with our configuration. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-hscale-up-shard
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mg-sharding
+  horizontalScaling:
+    shard:
+      shards: 3
+      replicas: 4
+    mongos:
+      replicas: 3
+    configServer:
+      replicas: 4
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.shard.shards` specifies the desired number of shards after scaling.
+- `spec.horizontalScaling.shard.replicas` specifies the desired number of replicas of each shard after scaling.
+- `spec.horizontalScaling.mongos.replicas` specifies the desired number of mongos replicas after scaling.
+- `spec.horizontalScaling.configServer.replicas` specifies the desired number of configServer replicas after scaling.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/horizontal-scaling/mops-hscale-up-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-hscale-up-shard created
+```
+
+#### Verify scaling up is successful
+
+If everything goes well, `KubeDB` Ops-manager operator will update the shards and replicas of the `MongoDB` object and related `StatefulSets` and `Pods`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. 
Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-hscale-up-shard HorizontalScaling Successful 9m57s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-hscale-up-shard +Name: mops-hscale-up-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T16:23:16Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:configServer: + .: + f:replicas: + f:mongos: + .: + f:replicas: + f:shard: + .: + f:replicas: + f:shards: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T16:23:16Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T16:23:16Z + Resource Version: 147313 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-hscale-up-shard + UID: 982014fc-1655-44e7-946c-859626ae0247 +Spec: + Database Ref: + Name: mg-sharding + Horizontal Scaling: + Config Server: + Replicas: 4 + Mongos: + Replicas: 3 + Shard: + Replicas: 4 + Shards: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T16:23:16Z + Message: MongoDB ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T16:25:31Z + Message: Successfully Horizontally Scaled Up Shard Replicas + Observed Generation: 1 + Reason: ScaleUpShardReplicas + Status: True + Type: ScaleUpShardReplicas + Last Transition Time: 2021-03-02T16:33:07Z + Message: Successfully Horizontally Scaled Up Shard + Observed Generation: 1 + Reason: ScaleUpShard + Status: True + Type: ScaleUpShard + Last Transition Time: 2021-03-02T16:34:35Z + Message: Successfully Horizontally Scaled Up ConfigServer + Observed Generation: 1 + Reason: ScaleUpConfigServer + Status: True + Type: ScaleUpConfigServer + Last Transition Time: 2021-03-02T16:36:30Z + Message: Successfully Horizontally Scaled Mongos + Observed Generation: 1 + Reason: ScaleMongos + Status: True + Type: ScaleMongos + Last Transition Time: 2021-03-02T16:36:30Z + Message: Successfully Horizontally Scaled MongoDB + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 13m KubeDB Ops-manager operator Pausing MongoDB demo/mg-sharding + Normal PauseDatabase 13m KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-sharding + Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal ResumeDatabase 11m KubeDB Ops-manager operator Resuming MongoDB demo/mg-sharding + Normal ResumeDatabase 11m KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-sharding + 
Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal Progressing 8m20s KubeDB Ops-manager operator Successfully updated StatefulSets Resources + Normal Progressing 4m5s KubeDB Ops-manager operator Successfully updated StatefulSets Resources + Normal ScaleUpShard 3m59s KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard + Normal PauseDatabase 3m59s KubeDB Ops-manager operator Pausing MongoDB demo/mg-sharding + Normal PauseDatabase 3m59s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-sharding + Normal ScaleUpConfigServer 2m31s KubeDB Ops-manager operator Successfully Horizontally Scaled Up ConfigServer + Normal ScaleMongos 36s KubeDB Ops-manager operator Successfully Horizontally Scaled Mongos + Normal ResumeDatabase 36s KubeDB Ops-manager operator Resuming MongoDB demo/mg-sharding + Normal ResumeDatabase 36s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-sharding + Normal Successful 36s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +#### Verify Number of Shard and Shard Replicas + +Now, we are going to verify the number of shards this database has from the MongoDB object, number of statefulsets it has, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards' +3 + +$ kubectl get sts -n demo +NAME READY AGE +mg-sharding-configsvr 4/4 66m +mg-sharding-mongos 3/3 64m +mg-sharding-shard0 4/4 66m +mg-sharding-shard1 4/4 66m +mg-sharding-shard2 4/4 12m +``` + +Now let's connect to a mongos instance and run a mongodb internal command to check the number of shards, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-3.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard2", "host" : "shard2/mg-sharding-shard2-0.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-1.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-2.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-3.mg-sharding-shard2-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 3 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" 
: "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +From all the above outputs we can see that the number of shards are `3`. + +Now, we are going to verify the number of replicas each shard has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas' +4 + +$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a shard instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1464, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 1433, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:07.800Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:08.087Z"), + "pingMs" : NumberLong(6), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701678, 2), + "electionDate" : ISODate("2021-03-02T16:14:38Z"), + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1433, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:08.575Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:08.580Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 905, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + 
}, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:06.683Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:07.980Z"), + "pingMs" : NumberLong(10), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of each shard has is `4`. + +#### Verify Number of ConfigServer Replicas +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +4 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1639, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "syncingTo" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 2, + "infoMessage" : "", + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 1623, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.979Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:38:59.291Z"), + "pingMs" : NumberLong(3), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701497, 2), + "electionDate" : ISODate("2021-03-02T16:11:37Z"), + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1623, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.885Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:00.188Z"), + "pingMs" : NumberLong(3), + "lastHeartbeatMessage" : "", + "syncingTo" : 
"mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-sharding-configsvr-3.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 296, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.977Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:00.276Z"), + "pingMs" : NumberLong(1), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of the configServer is `3`. That means we have successfully scaled up the replicas of the MongoDB configServer replicas. + +#### Verify Number of Mongos Replicas +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-3.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard2", "host" : "shard2/mg-sharding-shard2-0.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-1.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-2.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-3.mg-sharding-shard2-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 3 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No 
recent migrations
+  databases:
+        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+                config.system.sessions
+                        shard key: { "_id" : 1 }
+                        unique: false
+                        balancing: true
+                        chunks:
+                                shard0  1
+                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+```
+
+From all the above outputs we can see that the number of mongos replicas is `3`. That means we have successfully scaled up the replicas of the MongoDB mongos.
+
+So, we have successfully scaled up all the components of the MongoDB database.
+
+### Scale Down
+
+Here, we are going to scale down all the components of the database to meet the desired number of replicas after scaling.
+
+#### Create MongoDBOpsRequest
+
+In order to scale down, we have to create a `MongoDBOpsRequest` CR with our configuration. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-hscale-down-shard
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mg-sharding
+  horizontalScaling:
+    shard:
+      shards: 2
+      replicas: 3
+    mongos:
+      replicas: 2
+    configServer:
+      replicas: 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.shard.shards` specifies the desired number of shards after scaling.
+- `spec.horizontalScaling.shard.replicas` specifies the desired number of replicas of each shard after scaling.
+- `spec.horizontalScaling.configServer.replicas` specifies the desired number of configServer replicas after scaling.
+- `spec.horizontalScaling.mongos.replicas` specifies the desired number of mongos replicas after scaling.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/horizontal-scaling/mops-hscale-down-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-hscale-down-shard created
+```
+
+#### Verify scaling down is successful
+
+If everything goes well, `KubeDB` Ops-manager operator will update the shards and replicas of the `MongoDB` object and related `StatefulSets` and `Pods`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                     TYPE                STATUS       AGE
+mops-hscale-down-shard   HorizontalScaling   Successful   81s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale down the database.
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-hscale-down-shard +Name: mops-hscale-down-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T16:41:11Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:configServer: + .: + f:replicas: + f:mongos: + .: + f:replicas: + f:shard: + .: + f:replicas: + f:shards: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T16:41:11Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T16:41:11Z + Resource Version: 149077 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-hscale-down-shard + UID: 0f83c457-9498-4144-a397-226141851751 +Spec: + Database Ref: + Name: mg-sharding + Horizontal Scaling: + Config Server: + Replicas: 3 + Mongos: + Replicas: 2 + Shard: + Replicas: 3 + Shards: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T16:41:11Z + Message: MongoDB ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T16:42:11Z + Message: Successfully Horizontally Scaled Down Shard Replicas + Observed Generation: 1 + Reason: ScaleDownShardReplicas + Status: True + Type: ScaleDownShardReplicas + Last Transition Time: 2021-03-02T16:42:12Z + Message: Successfully started mongodb load balancer + Observed Generation: 1 + Reason: StartingBalancer + Status: True + Type: StartingBalancer + Last Transition Time: 2021-03-02T16:43:03Z + Message: Successfully Horizontally Scaled Down Shard + Observed Generation: 1 + Reason: ScaleDownShard + Status: True + Type: ScaleDownShard + Last Transition Time: 2021-03-02T16:43:24Z + Message: Successfully Horizontally Scaled Down ConfigServer + Observed Generation: 1 + Reason: ScaleDownConfigServer + Status: True + Type: ScaleDownConfigServer + Last Transition Time: 2021-03-02T16:43:34Z + Message: Successfully Horizontally Scaled Mongos + Observed Generation: 1 + Reason: ScaleMongos + Status: True + Type: ScaleMongos + Last Transition Time: 2021-03-02T16:43:34Z + Message: Successfully Horizontally Scaled MongoDB + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 6m29s KubeDB Ops-manager operator Pausing MongoDB demo/mg-sharding + Normal PauseDatabase 6m29s KubeDB Ops-manager operator Successfully paused MongoDB demo/mg-sharding + Normal ScaleDownShardReplicas 5m29s KubeDB Ops-manager operator Successfully Horizontally Scaled Down Shard Replicas + Normal StartingBalancer 5m29s KubeDB Ops-manager operator Starting Balancer + Normal StartingBalancer 5m28s KubeDB Ops-manager operator Successfully Started Balancer + Normal ScaleDownShard 4m37s KubeDB Ops-manager operator Successfully Horizontally Scaled Down Shard + Normal ScaleDownConfigServer 4m16s KubeDB Ops-manager operator Successfully Horizontally Scaled Down ConfigServer + Normal ScaleMongos 4m6s KubeDB 
Ops-manager operator Successfully Horizontally Scaled Mongos + Normal ResumeDatabase 4m6s KubeDB Ops-manager operator Resuming MongoDB demo/mg-sharding + Normal ResumeDatabase 4m6s KubeDB Ops-manager operator Successfully resumed MongoDB demo/mg-sharding + Normal Successful 4m6s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +##### Verify Number of Shard and Shard Replicas + +Now, we are going to verify the number of shards this database has from the MongoDB object, number of statefulsets it has, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards' +2 + +$ kubectl get sts -n demo +NAME READY AGE +mg-sharding-configsvr 3/3 77m +mg-sharding-mongos 2/2 75m +mg-sharding-shard0 3/3 77m +mg-sharding-shard1 3/3 77m +``` + +Now let's connect to a mongos instance and run a mongodb internal command to check the number of shards, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +From all the above outputs we can see that the number of shards are `2`. 
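+
+If you only need the shard list rather than the whole report, `sh.status()` can be replaced by the `listShards` admin command. A minimal sketch (the arrow-function `--eval` expression is an illustrative assumption):
+
+```bash
+# Print just the ids of the registered shards via the mongos
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK \
+    --eval "db.adminCommand( { listShards: 1 } ).shards.map(s => s._id)" --quiet
+[ "shard0", "shard1" ]
+```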
+ +Now, we are going to verify the number of replicas each shard has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a shard instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2096, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "syncingTo" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 2, + "infoMessage" : "", + "configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 2065, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:49:31Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:49:39.092Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:49:40.074Z"), + "pingMs" : NumberLong(18), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701678, 2), + "electionDate" : ISODate("2021-03-02T16:14:38Z"), + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2065, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:49:31Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:49:38.712Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:49:39.885Z"), + "pingMs" : NumberLong(4), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of each shard has is `3`. 
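+
+To repeat the StatefulSet check for every shard at once, a small shell loop works too. A sketch (the shard StatefulSet names follow the `mg-sharding-shard<N>` pattern shown above):
+
+```bash
+# Report ready/desired replicas for each shard StatefulSet
+for sts in mg-sharding-shard0 mg-sharding-shard1; do
+  kubectl get sts -n demo "$sts" -o jsonpath='{.metadata.name}: {.status.readyReplicas}/{.spec.replicas}{"\n"}'
+done
+```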
+ +##### Verify Number of ConfigServer Replicas + +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2345, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 2329, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:50:41Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:50:45.874Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:50:44.194Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701497, 2), + "electionDate" : ISODate("2021-03-02T16:11:37Z"), + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2329, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:50:41Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:50:45.778Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:50:46.091Z"), + "pingMs" : NumberLong(1), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of the configServer is `3`. That means we have successfully scaled down the replicas of the MongoDB configServer replicas. 
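+
+As an optional extra check, you can confirm that the trimmed config server replica set still has a healthy primary. A sketch (the arrow-function `--eval` expression is an illustrative assumption; per the output above, `mg-sharding-configsvr-1` is currently primary):
+
+```bash
+# Print the name of the current config server PRIMARY
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK \
+    --eval "db.adminCommand( { replSetGetStatus : 1 } ).members.filter(m => m.stateStr == 'PRIMARY').map(m => m.name)" --quiet
+[ "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017" ]
+```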
+ +##### Verify Number of Mongos Replicas + +Now, we are going to verify the number of replicas this database has from the MongoDB object, number of pods the statefulset have, + +```bash +$ kubectl get mongodb -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas' +2 + +$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas' +2 +``` + +Now let's connect to a mongodb instance and run a mongodb internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +From all the above outputs we can see that the replicas of the mongos is `2`. That means we have successfully scaled down the replicas of the MongoDB mongos replicas. + +So, we have successfully scaled down all the components of the MongoDB database. 
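+
+As a final recap, the whole desired topology can be printed in one shot from the `MongoDB` object. A sketch using the same jq paths as the individual checks above:
+
+```bash
+# Summarize the scaled-down topology in a single jq query
+$ kubectl get mongodb -n demo mg-sharding -o json | jq '{
+    shards: .spec.shardTopology.shard.shards,
+    shardReplicas: .spec.shardTopology.shard.replicas,
+    configServerReplicas: .spec.shardTopology.configServer.replicas,
+    mongosReplicas: .spec.shardTopology.mongos.replicas
+  }'
+{
+  "shards": 2,
+  "shardReplicas": 3,
+  "configServerReplicas": 3,
+  "mongosReplicas": 2
+}
+```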
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete mongodbopsrequest -n demo mops-hscale-up-shard mops-hscale-down-shard
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..a73eb85832
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Vertical Scaling
+menu:
+  docs_v2024.1.31:
+    identifier: mg-vertical-scaling
+    name: Vertical Scaling
+    parent: mg-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview.md b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview.md
new file mode 100644
index 0000000000..5a8e73501c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview.md
@@ -0,0 +1,65 @@
+---
+title: MongoDB Vertical Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mg-vertical-scaling-overview
+    name: Overview
+    parent: mg-vertical-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Vertical Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator updates the resources (for example, CPU and memory) of the `MongoDB` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator updates the resources of the `MongoDB` database. Open the image in a new tab to see the enlarged version.
+
+[Image: Vertical scaling process of MongoDB]
+
+Fig: Vertical scaling process of MongoDB
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `MongoDB` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `MongoDB` CR. + +3. When the operator finds a `MongoDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the resources(for example `CPU`, `Memory` etc.) of the `MongoDB` database the user creates a `MongoDBOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CR. + +6. When it finds a `MongoDBOpsRequest` CR, it halts the `MongoDB` object which is referred from the `MongoDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MongoDB` object during the vertical scaling process. + +7. Then the `KubeDB` Ops-manager operator will update resources of the StatefulSet Pods to reach desired state. + +8. After the successful update of the resources of the StatefulSet's replica, the `KubeDB` Ops-manager operator updates the `MongoDB` object to reflect the updated state. + +9. After the successful update of the `MongoDB` resources, the `KubeDB` Ops-manager operator resumes the `MongoDB` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on updating resources of MongoDB database using `MongoDBOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/replicaset.md new file mode 100644 index 0000000000..dae56a10b8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/replicaset.md @@ -0,0 +1,321 @@ +--- +title: Vertical Scaling MongoDB Replicaset +menu: + docs_v2024.1.31: + identifier: mg-vertical-scaling-replicaset + name: Replicaset + parent: mg-vertical-scaling + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale MongoDB Replicaset + +This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a MongoDB replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Replicaset
+
+Here, we are going to deploy a `MongoDB` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare MongoDB Replicaset Database
+
+Now, we are going to deploy a `MongoDB` replicaset database with version `4.4.26`.
+
+### Deploy MongoDB replicaset
+
+In this section, we are going to deploy a MongoDB replicaset database. Then, in the next section, we will update the resources of the database using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-replicaset
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/mg-replicaset.yaml
+mongodb.kubedb.com/mg-replicaset created
+```
+
+Now, wait until `mg-replicaset` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION    STATUS    AGE
+mg-replicaset   4.4.26     Ready     3m46s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo mg-replicaset-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see the Pod has the default resources, which are assigned by the KubeDB operator.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the replicaset database to meet the desired resources after scaling.
+
+#### Create MongoDBOpsRequest
+
+In order to update the resources of the database, we have to create a `MongoDBOpsRequest` CR with our desired resources. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-vscale-replicaset
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-replicaset
+  verticalScaling:
+    replicaSet:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.replicaSet` specifies the desired resources after scaling.
+- `spec.verticalScaling.arbiter` could also be specified in a similar fashion to get the desired resources for the arbiter pod.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
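+
+Before applying the ops request, it can be handy to record the current container resources so you can diff them once scaling completes. A small optional sketch (the output file path is arbitrary):
+
+```bash
+# Save the pre-scaling resources of the first replica for later comparison
+$ kubectl get pod -n demo mg-replicaset-0 -o json \
+    | jq '.spec.containers[].resources' > /tmp/mg-replicaset-resources-before.json
+```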
+ +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/vertical-scaling/mops-vscale-replicaset.yaml +mongodbopsrequest.ops.kubedb.com/mops-vscale-replicaset created +``` + +#### Verify MongoDB Replicaset resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `MongoDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-vscale-replicaset VerticalScaling Successful 3m56s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-vscale-replicaset +Name: mops-vscale-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:41:56Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:replicaSet: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:41:56Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:44:33Z + Resource Version: 611468 + UID: 474053a7-90a8-49fd-9b27-c9bf7b4660e7 +Spec: + Apply: IfReady + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Replica Set: + Limits: + Cpu: 0.6 + Memory: 1.2Gi + Requests: + Cpu: 0.6 + Memory: 1.2Gi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:43:21Z + Message: MongoDB ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:44:33Z + Message: Successfully Vertically Scaled Replicaset Resources + Observed Generation: 1 + Reason: UpdateReplicaSetResources + Status: True + Type: UpdateReplicaSetResources + Last Transition Time: 2022-10-26T10:44:33Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 82s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 82s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-replicaset + Normal Starting 82s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-replicaset + Normal UpdateReplicaSetResources 82s KubeDB Ops-manager Operator Successfully 
updated replicaset Resources
+  Normal  Starting                    82s   KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-replicaset
+  Normal  UpdateReplicaSetResources   82s   KubeDB Ops-manager Operator  Successfully updated replicaset Resources
+  Normal  UpdateReplicaSetResources   10s   KubeDB Ops-manager Operator  Successfully Vertically Scaled Replicaset Resources
+  Normal  ResumeDatabase              10s   KubeDB Ops-manager Operator  Resuming MongoDB demo/mg-replicaset
+  Normal  ResumeDatabase              10s   KubeDB Ops-manager Operator  Successfully resumed MongoDB demo/mg-replicaset
+  Normal  Successful                  10s   KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+
+```
+
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the replicaset database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-replicaset-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the MongoDB replicaset database. Note that `1288490188800m` is simply the canonical form Kubernetes uses for `1.2Gi` (1.2 × 1024³ bytes expressed in milli-byte units).
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-replicaset
+kubectl delete mongodbopsrequest -n demo mops-vscale-replicaset
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/sharding.md b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/sharding.md
new file mode 100644
index 0000000000..a0b9d39c27
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/sharding.md
@@ -0,0 +1,449 @@
+---
+title: Vertical Scaling Sharded MongoDB Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: mg-vertical-scaling-shard
+    name: Sharding
+    parent: mg-vertical-scaling
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Vertical Scale MongoDB Sharded Database
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to update the resources of a MongoDB sharded database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Sharded Database
+
+Here, we are going to deploy a `MongoDB` sharded database using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare MongoDB Sharded Database
+
+Now, we are going to deploy a `MongoDB` sharded database with version `4.4.26`.
+
+### Deploy MongoDB Sharded Database
+
+In this section, we are going to deploy a MongoDB sharded database. Then, in the next sections we will update the resources of the various components (mongos, shard, configserver etc.) of the database using the `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-sharding
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/mg-shard.yaml
+mongodb.kubedb.com/mg-sharding created
+```
+
+Now, wait until `mg-sharding` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mg-sharding   4.4.26    Ready    8m51s
+```
+
+Let's check the Pod containers resources of the various components (mongos, shard, configserver etc.) of the database,
+
+```bash
+$ kubectl get pod -n demo mg-sharding-mongos-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-configsvr-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see that the mongos, configserver and shard Pods all have the default resources assigned by the KubeDB operator.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to update the resources of the mongos, configserver and shard nodes of this database.
+
+## Vertical Scaling of Shard
+
+Here, we are going to update the resources of the shard (along with the configServer and mongos) of the database to meet the desired resources after scaling.
+
+#### Create MongoDBOpsRequest for shard
+
+In order to update the resources of the shard nodes, we have to create a `MongoDBOpsRequest` CR with our desired resources. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-vscale-shard
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-sharding
+  verticalScaling:
+    shard:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+    configServer:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+    mongos:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.shard` specifies the desired resources after scaling for the shard nodes.
+- `spec.verticalScaling.configServer` specifies the desired resources after scaling for the configServer nodes.
+- `spec.verticalScaling.mongos` specifies the desired resources after scaling for the mongos nodes.
+- `spec.verticalScaling.arbiter` could also be specified in a similar fashion to set the desired resources for the arbiter pod.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/vertical-scaling/mops-vscale-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-vscale-shard created
+```
+
+#### Verify MongoDB Shard resources updated successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the resources of the `MongoDB` object and the related `StatefulSets` and `Pods` of the shard nodes.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch the `MongoDBOpsRequest` CR,
+
+```bash
+$ kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                TYPE              STATUS       AGE
+mops-vscale-shard   VerticalScaling   Successful   8m21s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database.
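+
+Since this ops request touches three component types, it can be handy to pull just the condition summary first. A minimal sketch, using the condition types visible in the full `describe` output below:
+
+```bash
+# List each condition type and its status, one per line
+$ kubectl get mongodbopsrequest -n demo mops-vscale-shard \
+    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
+VerticalScaling=True
+UpdateConfigServerResources=True
+UpdateMongosResources=True
+UpdateShardResources=True
+Successful=True
+```
+
+The full `kubectl describe` output shows the same conditions with timestamps, along with the operator events: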
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-vscale-shard +Name: mops-vscale-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:45:56Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:configServer: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:mongos: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:shard: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:45:56Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:52:28Z + Resource Version: 613274 + UID: a186cc72-3629-4034-bbf8-988839f6ec23 +Spec: + Apply: IfReady + Database Ref: + Name: mg-sharding + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Config Server: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi + Mongos: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi + Shard: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:48:06Z + Message: MongoDB ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:49:37Z + Message: Successfully Vertically Scaled ConfigServer Resources + Observed Generation: 1 + Reason: UpdateConfigServerResources + Status: True + Type: UpdateConfigServerResources + Last Transition Time: 2022-10-26T10:50:07Z + Message: Successfully Vertically Scaled Mongos Resources + Observed Generation: 1 + Reason: UpdateMongosResources + Status: True + Type: UpdateMongosResources + Last Transition Time: 2022-10-26T10:52:28Z + Message: Successfully Vertically Scaled Shard Resources + Observed Generation: 1 + Reason: UpdateShardResources + Status: True + Type: UpdateShardResources + Last Transition Time: 2022-10-26T10:52:28Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m51s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-sharding + Normal Starting 4m51s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sharding-configsvr + Normal UpdateConfigServerResources 4m51s KubeDB Ops-manager Operator Successfully updated configServer Resources + Normal Starting 4m51s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sharding-configsvr + Normal UpdateConfigServerResources 4m51s KubeDB Ops-manager Operator Successfully updated configServer Resources + Normal PauseDatabase 4m51s KubeDB Ops-manager 
Operator  Pausing MongoDB demo/mg-sharding
+  Normal  UpdateConfigServerResources  3m20s  KubeDB Ops-manager Operator  Successfully Vertically Scaled ConfigServer Resources
+  Normal  Starting                     3m20s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-mongos
+  Normal  UpdateMongosResources        3m20s  KubeDB Ops-manager Operator  Successfully updated Mongos Resources
+  Normal  UpdateShardResources         2m50s  KubeDB Ops-manager Operator  Successfully updated Shard Resources
+  Normal  Starting                     2m50s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-shard0
+  Normal  Starting                     2m50s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-shard1
+  Normal  UpdateMongosResources        2m50s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Mongos Resources
+  Normal  UpdateShardResources         29s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Shard Resources
+  Normal  ResumeDatabase               29s    KubeDB Ops-manager Operator  Resuming MongoDB demo/mg-sharding
+  Normal  ResumeDatabase               29s    KubeDB Ops-manager Operator  Successfully resumed MongoDB demo/mg-sharding
+  Normal  Successful                   29s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+  Normal  UpdateShardResources         28s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Shard Resources
+```
+
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the shard nodes have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-sharding-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-configsvr-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-mongos-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled the resources of all components of the MongoDB sharded database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete mongodbopsrequest -n demo mops-vscale-shard
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/standalone.md b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/standalone.md
new file mode 100644
index 0000000000..76b9fe3450
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/standalone.md
@@ -0,0 +1,317 @@
+---
+title: Vertical Scaling Standalone MongoDB
+menu:
+  docs_v2024.1.31:
+    identifier: mg-vertical-scaling-standalone
+    name: Standalone
+    parent: mg-vertical-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Vertical Scale MongoDB Standalone
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to update the resources of a MongoDB standalone database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mongodb/scaling/vertical-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Standalone
+
+Here, we are going to deploy a `MongoDB` standalone using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare MongoDB Standalone Database
+
+Now, we are going to deploy a `MongoDB` standalone database with version `4.4.26`.
+
+### Deploy MongoDB standalone
+
+In this section, we are going to deploy a MongoDB standalone database. Then, in the next section we will update the resources of the database using the `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/mg-standalone.yaml
+mongodb.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    5m56s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see the Pod has the default resources assigned by the KubeDB operator.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the standalone database to meet the desired resources after scaling.
+
+#### Create MongoDBOpsRequest
+
+In order to update the resources of the database, we have to create a `MongoDBOpsRequest` CR with our desired resources. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-vscale-standalone
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-standalone
+  verticalScaling:
+    standalone:
+      resources:
+        requests:
+          memory: "2Gi"
+          cpu: "1"
+        limits:
+          memory: "2Gi"
+          cpu: "1"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-standalone` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.standalone` specifies the desired resources after scaling.
+- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/scaling/vertical-scaling/mops-vscale-standalone.yaml
+mongodbopsrequest.ops.kubedb.com/mops-vscale-standalone created
+```
+
+#### Verify MongoDB Standalone resources updated successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the resources of the `MongoDB` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch the `MongoDBOpsRequest` CR,
+
+```bash
+$ kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+mops-vscale-standalone   VerticalScaling   Successful   108s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to scale the database.
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-vscale-standalone +Name: mops-vscale-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:54:01Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:standalone: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:54:01Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:54:52Z + Resource Version: 613933 + UID: c3bf9c3d-cf96-49ae-877f-a895e0b1d280 +Spec: + Apply: IfReady + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Standalone: + Limits: + Cpu: 1 + Memory: 2Gi + Requests: + Cpu: 1 + Memory: 2Gi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:54:21Z + Message: MongoDB ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:54:51Z + Message: Successfully Vertically Scaled Standalone Resources + Observed Generation: 1 + Reason: UpdateStandaloneResources + Status: True + Type: UpdateStandaloneResources + Last Transition Time: 2022-10-26T10:54:52Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 34s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-standalone + Normal PauseDatabase 34s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-standalone + Normal Starting 34s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 34s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal Starting 34s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 34s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal UpdateStandaloneResources 4s KubeDB Ops-manager Operator Successfully Vertically Scaled Standalone Resources + Normal UpdateStandaloneResources 4s KubeDB Ops-manager Operator Successfully Vertically Scaled Standalone Resources + Normal ResumeDatabase 4s KubeDB Ops-manager Operator Resuming MongoDB demo/mg-standalone + Normal ResumeDatabase 3s KubeDB Ops-manager Operator Successfully resumed MongoDB demo/mg-standalone + Normal Successful 3s KubeDB Ops-manager Operator Successfully Vertically Scaled Database + +``` + +Now, we are going to verify from the Pod yaml whether the resources of the standalone database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources' +{ + 
"limits": { + "cpu": "1", + "memory": "2Gi" + }, + "requests": { + "cpu": "1", + "memory": "2Gi" + } +} +``` + +The above output verifies that we have successfully scaled up the resources of the MongoDB standalone database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-standalone +kubectl delete mongodbopsrequest -n demo mops-vscale-standalone +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/_index.md b/content/docs/v2024.1.31/guides/mongodb/schema-manager/_index.md new file mode 100644 index 0000000000..25fc514ff7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/_index.md @@ -0,0 +1,22 @@ +--- +title: MongoDB Schema Manager +menu: + docs_v2024.1.31: + identifier: mg-schema-manager + name: Schema Manager + parent: mg-mongodb-guides + weight: 49 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/index.md b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/index.md new file mode 100644 index 0000000000..f628d855c1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/index.md @@ -0,0 +1,326 @@ +--- +title: Deploy MongoDBDatabase +menu: + docs_v2024.1.31: + identifier: deploy-mongodbdatabase + name: Deploy MongoDBDatabase + parent: mg-schema-manager + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Create Database with MongoDB Schema Manager + +This guide will show you how to create database with MongoDB Schema Manager using `Schema Manager Operator`. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](https://kubedb.com/docs/latest/setup/install/kubedb/). +- Install `KubeVault` in your cluster following the steps [here](https://kubevault.com/docs/latest/setup/install/kubevault/). + +- You should be familiar with the following concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBDatabase](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase) + - [Schema Manager Overview](/docs/v2024.1.31/guides/mongodb/schema-manager/overview/) + - [Stash Overview](https://stash.run/docs/latest/concepts/what-is-stash/overview/) + - [KubeVault Overview](https://kubevault.com/docs/latest/concepts/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Deploy MongoDB Server and Vault Server
+
+First, we are going to deploy a `MongoDB` Server by using the `KubeDB` operator. We are also deploying a `Vault` Server using the `KubeVault` operator.
+
+### Deploy MongoDB Server
+
+In this section, we are going to deploy a MongoDB Server. Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongodb
+  namespace: demo
+spec:
+  allowedSchemas:
+    namespaces:
+      from: Selector
+      selector:
+        matchExpressions:
+        - {key: kubernetes.io/metadata.name, operator: In, values: [dev]}
+    selector:
+      matchLabels:
+        "schema.kubedb.com": "mongo"
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "100m"
+          memory: "100Mi"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 100Mi
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is the name of the MongoDBVersion CR. Here, we are using MongoDB version `4.4.26`.
+- `spec.storageType` specifies the type of storage that will be used for MongoDB. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create the MongoDB using an `EmptyDir` volume.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will get a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.allowedSchemas` specifies the namespaces and selectors of the allowed `Schema Manager` objects.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MongoDB CR. `WipeOut` means that the database will be deleted without restrictions. It can also be `Halt`, `Delete` or `DoNotTerminate`. Learn more about these [here](https://kubedb.com/docs/latest/guides/mongodb/concepts/mongodb/#specterminationpolicy).
+
+
+Let’s save this YAML configuration into `mongodb.yaml`. Then create the above `MongoDB` CR,
+
+```bash
+$ kubectl apply -f mongodb.yaml
+mongodb.kubedb.com/mongodb created
+```
+
+### Deploy Vault Server
+
+In this section, we are going to deploy a Vault Server.
+
+Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.8.2
+  replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+      - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is a required field that specifies the original version of Vault that has been used to build the docker image specified in the `spec.vault.image` field.
+- `spec.replicas` specifies the number of Vault nodes to deploy. It has to be a positive number.
+- `spec.allowedSecretEngines` defines the types of Secret Engines & the allowed namespaces from where a `SecretEngine` can be attached to the `VaultServer`.
+- `spec.unsealer` is an optional field that specifies the `Unsealer` configuration. `Unsealer` handles automatic initialization and unsealing of Vault.
+- `spec.backend` is a required field that specifies the Vault backend storage configuration. The KubeVault operator generates storage configuration according to this `spec.backend`.
+- `spec.authMethods` is an optional field that specifies the list of auth methods to enable in Vault.
+- `spec.terminationPolicy` is an optional field that gives flexibility whether to nullify (reject) the delete operation of the VaultServer CRD, or which resources the KubeVault operator should keep or delete when you delete the VaultServer CRD.
+
+
+Let’s save this YAML configuration into `vault.yaml`. Then create the above `VaultServer` CR,
+
+```bash
+$ kubectl apply -f vault.yaml
+vaultserver.kubevault.com/vault created
+```
+
+### Create Separate Namespace For Schema Manager
+
+In this section, we are going to create a new `Namespace` and we will only allow this namespace for our `Schema Manager`. Below is the YAML of the `Namespace` that we are going to create,
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: dev
+  labels:
+    kubernetes.io/metadata.name: dev
+```
+
+Let’s save this YAML configuration into `namespace.yaml`. Then create the above `Namespace`,
+
+```bash
+$ kubectl apply -f namespace.yaml
+namespace/dev created
+```
+
+
+### Deploy Schema Manager
+
+Here, we are going to deploy `Schema Manager` with the new Namespace that we have created above. Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MongoDBDatabase
+metadata:
+  name: mongodb-schema
+  namespace: dev
+  labels:
+    "schema.kubedb.com": "mongo"
+spec:
+  database:
+    serverRef:
+      name: mongodb
+      namespace: demo
+    config:
+      name: emptydb
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+      - name: "saname"
+        namespace: dev
+        kind: "ServiceAccount"
+        apiGroup: ""
+    defaultTTL: "5m"
+    maxTTL: "200h"
+  deletionPolicy: Delete
+```
+Here,
+
+- `spec.database` is a required field specifying the database server reference and the desired database configuration.
+- `spec.vaultRef` is a required field that specifies which KubeVault server to use for user management.
+- `spec.accessPolicy` is a required field that specifies the access permissions, i.e., which service account or cluster user has access, and for how long they can access through it.
+
+- `spec.deletionPolicy` is an optional field that gives flexibility whether to `nullify` (reject) the delete operation.
+
+Let’s save this YAML configuration into `mongodb-schema.yaml` and apply it,
+
+```bash
+$ kubectl apply -f mongodb-schema.yaml
+mongodbdatabase.schema.kubedb.com/mongodb-schema created
+```
+
+Let's check the `STATUS` of the `Schema Manager`,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME             DB_SERVER   DB_NAME   STATUS    AGE
+dev         mongodb-schema   mongodb     emptydb   Current   54s
+
+```
+
+> In the `STATUS` section, `Current` means that the current `Secret` of the `Schema Manager` is valid, and it will automatically become `Expired` after it reaches the `defaultTTL` limit that we've defined in the above YAML.
+
+Now, let's get the secret name from the `Schema Manager`, and the login credentials for connecting to the database,
+
+```bash
+$ kubectl get mongodbdatabase mongodb-schema -n dev -o=jsonpath='{.status.authSecret.name}'
+mongodb-schema-mongo-req-fybh8z
+
+$ kubectl view-secret -n dev mongodb-schema-mongo-req-fybh8z -a
+password=u-kDmBcMITz9dLrZ7cAL
+username=v-kubernetes-demo-k8s-f7695915-1e-0NV83LXHuGMiittiObYE-1662635657
+```
+
+### Insert Sample Data
+
+Here, we are going to connect to the database with the login credentials and insert some sample data into it.
+
+```bash
+$ kubectl exec -it -n demo mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=emptydb --username='v-kubernetes-demo-k8s-f7695915-1e-0NV83LXHuGMiittiObYE-1662635657' --password='u-kDmBcMITz9dLrZ7cAL' emptydb
+MongoDB shell version v4.4.26
+...
+
+replicaset:PRIMARY> use emptydb
+switched to db emptydb
+
+replicaset:PRIMARY> db.product.insert({"name":"KubeDB"});
+WriteResult({ "nInserted" : 1 })
+
+replicaset:PRIMARY> db.product.find().pretty()
+{ "_id" : ObjectId("6319cffeb0d19a8d717b4aee"), "name" : "KubeDB" }
+
+replicaset:PRIMARY> exit
+bye
+
+```
+
+
+Now, let's check the `STATUS` of the `Schema Manager` again,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME             DB_SERVER   DB_NAME   STATUS    AGE
+dev         mongodb-schema   mongodb     emptydb   Expired   6m
+```
+
+Here, we can see that the `STATUS` of the `Schema Manager` is `Expired` because it has exceeded `defaultTTL: "5m"`, which means the current `Secret` of the `Schema Manager` isn't valid anymore. Now, if we try to connect and log in with the credentials that we acquired before from the `Schema Manager`, it won't work.
+
+```bash
+$ kubectl exec -it -n demo mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=emptydb --username='v-kubernetes-demo-k8s-f7695915-1e-0NV83LXHuGMiittiObYE-1662635657' --password='u-kDmBcMITz9dLrZ7cAL' emptydb
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/emptydb?authSource=emptydb&compressors=disabled&gssapiServiceName=mongodb
+Error: Authentication failed. :
+connect@src/mongo/shell/mongo.js:374:17
+@(connect):2:6
+exception: connect failed
+exiting with code 1
+root@mongodb-0:/# exit
+exit
+```
+> Note: We can't connect to the database with the expired login credentials. We will not be able to access the database even if we're in the middle of a connected session. And when the `Schema Manager` is deleted, the associated database and user will also be deleted.
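+
+If you script around this lifecycle, you can read the phase directly instead of scanning the table view. A minimal sketch, assuming the `STATUS` column shown above is backed by the `.status.phase` field of the `MongoDBDatabase` object:
+
+```bash
+# Print just the schema's phase, e.g. Current or Expired
+$ kubectl get mongodbdatabase mongodb-schema -n dev -o jsonpath='{.status.phase}'
+Expired
+```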
+ + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete ns dev +$ kubectl delete ns demo +``` + + +## Next Steps + +- Detail concepts of [MongoDBDatabase object](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase). +- Go through the concepts of [KubeVault](https://kubevault.com/docs/latest/guides). +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog). \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb-schema.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb-schema.yaml new file mode 100644 index 0000000000..5f57c03b36 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb-schema.yaml @@ -0,0 +1,26 @@ +apiVersion: schema.kubedb.com/v1alpha1 +kind: MongoDBDatabase +metadata: + name: mongodb-schema + namespace: dev + labels: + "schema.kubedb.com": "mongo" +spec: + database: + serverRef: + name: mongodb + namespace: demo + config: + name: emptydb + vaultRef: + name: vault + namespace: demo + accessPolicy: + subjects: + - name: "saname" + namespace: dev + kind: "ServiceAccount" + apiGroup: "" + defaultTTL: "5m" + maxTTL: "200h" + deletionPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb.yaml new file mode 100644 index 0000000000..e05eaef8ab --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/mongodb.yaml @@ -0,0 +1,34 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongodb + namespace: demo +spec: + allowedSchemas: + namespaces: + from: Selector + selector: + matchExpressions: + - {key: kubernetes.io/metadata.name, operator: In, values: [dev]} + selector: + matchLabels: + "schema.kubedb.com": "mongo" + version: "4.4.26" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Mi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/namespace.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/namespace.yaml new file mode 100644 index 0000000000..bcddbc61b1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/namespace.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: dev + labels: + kubernetes.io/metadata.name: dev \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/vault.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/vault.yaml new file mode 100644 index 0000000000..07a115fbb5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/deploy-mongodbdatabase/yamls/vault.yaml @@ -0,0 +1,31 @@ +apiVersion: kubevault.com/v1alpha1 +kind: VaultServer +metadata: + name: vault + namespace: demo +spec: + version: 1.8.2 + 
replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+      - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/index.md b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/index.md
new file mode 100644
index 0000000000..2f780917b7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/index.md
@@ -0,0 +1,367 @@
+---
+title: Initializing with Script
+menu:
+  docs_v2024.1.31:
+    identifier: mg-initializing-with-script
+    name: Initializing with Script
+    parent: mg-schema-manager
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Initializing with Script
+
+This guide will show you how to create a database and initialize it with a script using MongoDB `Schema Manager` and the `Schema Manager Operator`.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` in your cluster following the steps [here](https://kubedb.com/docs/latest/setup/install/kubedb/).
+- Install `KubeVault` in your cluster following the steps [here](https://kubevault.com/docs/latest/setup/install/kubevault/).
+
+- You should be familiar with the following concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBDatabase](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase)
+  - [Schema Manager Overview](/docs/v2024.1.31/guides/mongodb/schema-manager/overview/)
+  - [Stash Overview](https://stash.run/docs/latest/concepts/what-is-stash/overview/)
+  - [KubeVault Overview](https://kubevault.com/docs/latest/concepts/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mongodb/schema-manager/initializing-with-script/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mongodb/schema-manager/initializing-with-script/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Deploy MongoDB Server and Vault Server
+
+Here, we are going to deploy a `MongoDB` Server by using the `KubeDB` operator. We are also deploying a `Vault` Server using the `KubeVault` operator.
+
+### Deploy MongoDB Server
+
+In this section, we are going to deploy a MongoDB Server. Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongodb
+  namespace: demo
+spec:
+  allowedSchemas:
+    namespaces:
+      from: Selector
+      selector:
+        matchExpressions:
+        - {key: kubernetes.io/metadata.name, operator: In, values: [dev]}
+    selector:
+      matchLabels:
+        "schema.kubedb.com": "mongo"
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "100m"
+          memory: "100Mi"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 100Mi
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is the name of the MongoDBVersion CR. Here, we are using MongoDB version `4.4.26`.
+- `spec.storageType` specifies the type of storage that will be used for MongoDB. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create the MongoDB using an `EmptyDir` volume.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will get a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.allowedSchemas` specifies the namespaces and selectors of the allowed `Schema Manager` objects.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MongoDB CR. `WipeOut` means that the database will be deleted without restrictions. It can also be `Halt`, `Delete` or `DoNotTerminate`. Learn more about these [here](https://kubedb.com/docs/latest/guides/mongodb/concepts/mongodb/#specterminationpolicy).
+
+
+Let’s save this YAML configuration into `mongodb.yaml`. Then create the above `MongoDB` CR,
+
+```bash
+$ kubectl apply -f mongodb.yaml
+mongodb.kubedb.com/mongodb created
+```
+
+### Deploy Vault Server
+
+In this section, we are going to deploy a Vault Server. Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.8.2
+  replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+      - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is a required field that specifies the original version of Vault that has been used to build the docker image specified in the `spec.vault.image` field.
+- `spec.replicas` specifies the number of Vault nodes to deploy. It has to be a positive number.
+- `spec.allowedSecretEngines` defines the types of Secret Engines & the allowed namespaces from where a `SecretEngine` can be attached to the `VaultServer`.
+- `spec.unsealer` is an optional field that specifies the `Unsealer` configuration. `Unsealer` handles automatic initialization and unsealing of Vault.
+- `spec.backend` is a required field that specifies the Vault backend storage configuration. The KubeVault operator generates storage configuration according to this `spec.backend`.
+- `spec.authMethods` is an optional field that specifies the list of auth methods to enable in Vault.
+- `spec.terminationPolicy` is an optional field that gives flexibility whether to nullify (reject) the delete operation of the VaultServer CRD, or which resources the KubeVault operator should keep or delete when you delete the VaultServer CRD.
+
+
+Let’s save this YAML configuration into `vault.yaml`. Then create the above `VaultServer` CR,
+
+```bash
+$ kubectl apply -f vault.yaml
+vaultserver.kubevault.com/vault created
+```
+
+### Create Separate Namespace For Schema Manager
+
+In this section, we are going to create a new `Namespace` and we will only allow this namespace for our `Schema Manager`. Below is the YAML of the `Namespace` that we are going to create,
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: dev
+  labels:
+    kubernetes.io/metadata.name: dev
+```
+
+Let’s save this YAML configuration into `namespace.yaml`. Then create the above `Namespace`,
+
+```bash
+$ kubectl apply -f namespace.yaml
+namespace/dev created
+```
+
+
+### Script with ConfigMap
+
+Here is the `ConfigMap` containing the `init.js` script that will be used to initialize the database. Let’s create it,
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: test-script
+  namespace: dev
+data:
+  init.js: |-
+    use initdb;
+    db.product.insert({"name" : "KubeDB"});
+```
+
+```bash
+$ kubectl apply -f test-script.yaml
+configmap/test-script created
+```
+
+
+### Deploy Schema Manager with Initialization Script
+
+Here, we are going to deploy `Schema Manager` with the new Namespace that we have created above. Let’s deploy it using the following YAML,
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MongoDBDatabase
+metadata:
+  name: sample-script
+  namespace: dev
+  labels:
+    "schema.kubedb.com": "mongo"
+spec:
+  database:
+    serverRef:
+      name: mongodb
+      namespace: demo
+    config:
+      name: initdb
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+      - name: "saname"
+        namespace: dev
+        kind: "ServiceAccount"
+        apiGroup: ""
+    defaultTTL: "5m"
+    maxTTL: "200h"
+  init:
+    initialized: false
+    script:
+      scriptPath: "/etc/config"
+      configMap:
+        name: "test-script"
+    podTemplate:
+      spec:
+        containers:
+          - env:
+              - name: "HAVE_A_TRY"
+                value: "whoo! It works"
+            name: cnt
+            image: nginx
+            command:
+              - /bin/sh
+              - -c
+            args:
+              - ls
+  deletionPolicy: Delete
+```
+Here,
+
+- `spec.database` is a required field specifying the database server reference and the desired database configuration.
+- `spec.vaultRef` is a required field that specifies which KubeVault server to use for user management.
+- `spec.accessPolicy` is a required field that specifies the access permissions, i.e., which service account or cluster user has access, and for how long they can access through it.
+- `spec.init` is an optional field, containing the information of a script or a snapshot with which the database should be initialized during creation.
+- `spec.init.script` refers to the information regarding the .js file which should be used for initialization.
+- `spec.init.script.scriptPath` accepts a directory location at which the operator should mount the .js file.
+- `spec.init.podTemplate` specifies pod-related details, like environment variables, arguments, images, etc.
+- `spec.deletionPolicy` is an optional field that gives flexibility whether to `nullify` (reject) the delete operation.
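+
+Since the operator mounts the referenced `ConfigMap` at `spec.init.script.scriptPath`, the script above will be visible inside the pod as `/etc/config/init.js`. As a quick sanity check before applying the schema, you can confirm the `ConfigMap` holds the script you expect (a minimal sketch; note the escaped dot in the jsonpath key):
+
+```bash
+# Print the init.js payload of the ConfigMap
+$ kubectl get configmap test-script -n dev -o jsonpath='{.data.init\.js}'
+use initdb;
+db.product.insert({"name" : "KubeDB"});
+```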
+
+Let’s save this YAML configuration into `sample-script.yaml` and apply it,
+
+```bash
+$ kubectl apply -f sample-script.yaml
+mongodbdatabase.schema.kubedb.com/sample-script created
+```
+
+Let's check the `STATUS` of the `Schema Manager`,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME            DB_SERVER   DB_NAME   STATUS    AGE
+dev         sample-script   mongodb     initdb    Current   56s
+```
+
+> In the `STATUS` section, `Current` means that the current `Secret` of the `Schema Manager` is valid, and it will automatically become `Expired` after it reaches the `defaultTTL` limit that we've defined in the above YAML.
+
+Now, let's get the secret name from the `Schema Manager`, and the login credentials for connecting to the database,
+
+```bash
+$ kubectl get mongodbdatabase sample-script -n dev -o=jsonpath='{.status.authSecret.name}'
+sample-script-mongo-req-98k0ch
+
+$ kubectl view-secret -n dev sample-script-mongo-req-98k0ch -a
+password=-e4v396GFjjjMgPPuU7q
+username=v-kubernetes-demo-k8s-f7695915-1e-6sXNTvVpPDtueRQWvoyH-1662641233
+```
+
+### Verify Initialization
+
+Here, we are going to connect to the database with the login credentials and verify the database initialization,
+
+```bash
+$ kubectl exec -it -n demo mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=initdb --username='v-kubernetes-demo-k8s-f7695915-1e-6sXNTvVpPDtueRQWvoyH-1662641233' --password='-e4v396GFjjjMgPPuU7q' initdb
+MongoDB shell version v4.4.26
+...
+
+replicaset:PRIMARY> show dbs
+initdb  0.000GB
+
+replicaset:PRIMARY> show collections
+product
+
+replicaset:PRIMARY> db.product.find()
+{ "_id" : ObjectId("6319e46f950868e7b3476cdf"), "name" : "KubeDB" }
+
+replicaset:PRIMARY> exit
+bye
+```
+
+Now, let's check the `STATUS` of the `Schema Manager` again,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME            DB_SERVER   DB_NAME   STATUS    AGE
+dev         sample-script   mongodb     initdb    Expired   6m
+```
+
+Here, we can see that the `STATUS` of the `Schema Manager` is `Expired` because it has exceeded `defaultTTL: "5m"`, which means the current `Secret` of the `Schema Manager` isn't valid anymore. Now, if we try to connect and log in with the credentials that we acquired before from the `Schema Manager`, it won't work.
+
+```bash
+$ kubectl exec -it -n demo mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=initdb --username='v-kubernetes-demo-k8s-f7695915-1e-6sXNTvVpPDtueRQWvoyH-1662641233' --password='-e4v396GFjjjMgPPuU7q' initdb
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/initdb?authSource=initdb&compressors=disabled&gssapiServiceName=mongodb
+Error: Authentication failed. :
+connect@src/mongo/shell/mongo.js:374:17
+@(connect):2:6
+exception: connect failed
+exiting with code 1
+root@mongodb-0:/# exit
+exit
+```
+> We can't connect to the database with the expired login credentials. We will not be able to access the database even if we're in the middle of a connected session.
+
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete ns dev
+$ kubectl delete ns demo
+```
+
+
+## Next Steps
+
+- Detail concepts of [MongoDBDatabase object](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase).
+- Go through the concepts of [KubeVault](https://kubevault.com/docs/latest/guides).
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/mongodb.yaml new file mode 100644 index 0000000000..e05eaef8ab --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/mongodb.yaml @@ -0,0 +1,34 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongodb + namespace: demo +spec: + allowedSchemas: + namespaces: + from: Selector + selector: + matchExpressions: + - {key: kubernetes.io/metadata.name, operator: In, values: [dev]} + selector: + matchLabels: + "schema.kubedb.com": "mongo" + version: "4.4.26" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Mi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/namespace.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/namespace.yaml new file mode 100644 index 0000000000..bcddbc61b1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/namespace.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: dev + labels: + kubernetes.io/metadata.name: dev \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/sample-script.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/sample-script.yaml new file mode 100644 index 0000000000..53d0018845 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/sample-script.yaml @@ -0,0 +1,45 @@ +apiVersion: schema.kubedb.com/v1alpha1 +kind: MongoDBDatabase +metadata: + name: sample-script + namespace: dev + labels: + "schema.kubedb.com": "mongo" +spec: + database: + serverRef: + name: mongodb + namespace: demo + config: + name: initdb + vaultRef: + name: vault + namespace: demo + accessPolicy: + subjects: + - name: "saname" + namespace: dev + kind: "ServiceAccount" + apiGroup: "" + defaultTTL: "5m" + maxTTL: "200h" + init: + initialized: false + script: + scriptPath: "/etc/config" + configMap: + name: "test-script" + podTemplate: + spec: + containers: + - env: + - name: "HAVE_A_TRY" + value: "whoo! 
It works"
+            name: cnt
+            image: nginx
+            command:
+            - /bin/sh
+            - -c
+            args:
+            - ls
+  deletionPolicy: Delete
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/test-script.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/test-script.yaml
new file mode 100644
index 0000000000..23271e8d3c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/test-script.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: test-script
+  namespace: dev
+data:
+  init.js: |-
+    use initdb;
+    db.product.insert({"name" : "KubeDB"});
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/vault.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/vault.yaml
new file mode 100644
index 0000000000..07a115fbb5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-script/yamls/vault.yaml
@@ -0,0 +1,31 @@
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.8.2
+  replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+    - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+  - type: kubernetes
+    path: kubernetes
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/index.md b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/index.md
new file mode 100644
index 0000000000..9241081b48
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/index.md
@@ -0,0 +1,392 @@
+---
+title: Initializing with Snapshot
+menu:
+  docs_v2024.1.31:
+    identifier: initializing-with-snapshot
+    name: Initializing with Snapshot
+    parent: mg-schema-manager
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Initializing with Snapshot
+
+This guide will show you how to create a database and initialize it from a snapshot with MongoDB `Schema Manager`, using the `Schema Manager` operator.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` in your cluster following the steps [here](https://kubedb.com/docs/latest/setup/install/kubedb/).
+- Install `KubeVault` in your cluster following the steps [here](https://kubevault.com/docs/latest/setup/install/kubevault/).
+
+- You should be familiar with the following concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBDatabase](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase)
+  - [Schema Manager Overview](/docs/v2024.1.31/guides/mongodb/schema-manager/overview/)
+  - [Stash Overview](https://stash.run/docs/latest/concepts/what-is-stash/overview/)
+  - [KubeVault Overview](https://kubevault.com/docs/latest/concepts/overview/)
+
+> **Note:** YAML files used in this tutorial are stored in [docs/guides/mongodb/schema-manager/initializing-with-snapshot/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mongodb/schema-manager/initializing-with-snapshot/yamls) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+### Create Namespace
+
+We are going to create two different namespaces: in the `db` namespace we will deploy the MongoDB and Vault servers, and in the `demo` namespace we will deploy the `Schema Manager`. Let’s create those namespaces using the following yaml,
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: db
+  labels:
+    kubernetes.io/metadata.name: db
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: demo
+  labels:
+    kubernetes.io/metadata.name: demo
+```
+Let’s save this yaml configuration into `namespace.yaml`. Then create the above namespaces,
+
+```bash
+$ kubectl apply -f namespace.yaml
+namespace/db created
+namespace/demo created
+```
+
+## Deploy MongoDB Server and Vault Server
+
+Here, we are going to deploy a `MongoDB` server by using the `KubeDB` operator. Also, we are deploying a `Vault` server using the `KubeVault` operator.
+
+### Deploy MongoDB Server
+
+In this section, we are going to deploy a MongoDB server. Let’s deploy it using the following yaml,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongodb
+  namespace: db
+spec:
+  allowedSchemas:
+    namespaces:
+      from: All
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "100m"
+          memory: "100Mi"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 100Mi
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is the name of the MongoDBVersion CR. Here, we are using MongoDB version `4.4.26`.
+- `spec.storageType` specifies the type of storage that will be used for MongoDB. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MongoDB using an `EmptyDir` volume.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.allowedSchemas` specifies the namespaces and selectors of the allowed `Schema Manager` objects (see the selector-based sketch after this list).
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MongoDB CR. `WipeOut` means that the database will be deleted without restrictions. It can also be `Halt`, `Delete` or `DoNotTerminate`. Learn more about these [here](https://kubedb.com/docs/latest/guides/mongodb/concepts/mongodb/#specterminationpolicy).
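+
+Here `from: All` allows `Schema Manager` objects from any namespace. If you want to restrict which namespaces may attach a schema (as the script guide above does), a selector-based variant is a reasonable sketch; the label values below are assumptions for this setup:
+
+```yaml
+  allowedSchemas:
+    namespaces:
+      from: Selector
+      selector:
+        matchExpressions:
+        - {key: kubernetes.io/metadata.name, operator: In, values: [demo]}
+    selector:
+      matchLabels:
+        "schema.kubedb.com": "mongo"
+```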
+
+Let’s save this yaml configuration into `mongodb.yaml`. Then create the above `MongoDB` CR,
+
+```bash
+$ kubectl apply -f mongodb.yaml
+mongodb.kubedb.com/mongodb created
+```
+
+### Deploy Vault Server
+
+In this section, we are going to deploy a Vault server. Let’s deploy it using the following yaml,
+
+```yaml
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.8.2
+  replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+    - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+  - type: kubernetes
+    path: kubernetes
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is a required field that specifies the original version of Vault that has been used to build the docker image specified in the `spec.vault.image` field.
+- `spec.replicas` specifies the number of Vault nodes to deploy. It has to be a positive number.
+- `spec.allowedSecretEngines` defines the types of secret engines & the allowed namespaces from where a `SecretEngine` can be attached to the `VaultServer`.
+- `spec.unsealer` is an optional field that specifies the `Unsealer` configuration. `Unsealer` handles automatic initializing and unsealing of Vault.
+- `spec.backend` is a required field that specifies the Vault backend storage configuration. The KubeVault operator generates storage configuration according to this `spec.backend`.
+- `spec.authMethods` is an optional field that specifies the list of auth methods to enable in Vault.
+- `spec.terminationPolicy` is an optional field that gives you the flexibility to nullify (reject) the delete operation of the VaultServer CR, or to choose which resources the KubeVault operator should keep or delete when you delete the VaultServer CR.
+
+Let’s save this yaml configuration into `vault.yaml`. Then create the above `VaultServer` CR,
+
+```bash
+$ kubectl apply -f vault.yaml
+vaultserver.kubevault.com/vault created
+```
+
+### Create Repository Secret
+
+Here, we are using a local backend for storing the data snapshots. It can be a cloud storage like a GCS bucket, AWS S3, Azure Blob Storage, NFS etc. or a Kubernetes native resource like HostPath, PersistentVolumeClaim etc. For more information check [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+Let's create a Secret for our Repository,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ kubectl create secret generic -n demo repo-secret --from-file=./RESTIC_PASSWORD
+secret/repo-secret created
+```
+
+### Create Repository
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: repo
+  namespace: demo
+spec:
+  backend:
+    local:
+      mountPath: /hello
+      persistentVolumeClaim:
+        claimName: snapshot-pvc
+    storageSecretName: repo-secret
+  usagePolicy:
+    allowedNamespaces:
+      from: All
+```
+This `Repository` CR references the `repo-secret` that we've created before, and specifies the name and path of the local storage `PVC`.
+
+> Note: Here, we are using a local storage `PVC` named `snapshot-pvc`. Don’t forget to change `backend.local.persistentVolumeClaim.claimName` to your `PVC` name.
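+
+The `Repository` above assumes a PVC named `snapshot-pvc` already exists in the `demo` namespace. If yours doesn't, here is a minimal sketch to create one; the `standard` StorageClass and the 1Gi size are assumptions, so adjust them to your cluster:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: snapshot-pvc
+  namespace: demo
+spec:
+  accessModes:
+  - ReadWriteOnce
+  storageClassName: standard
+  resources:
+    requests:
+      storage: 1Gi
+```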
+
+Let’s save this yaml configuration into `repo.yaml`. Then create the repository,
+
+```bash
+$ kubectl apply -f repo.yaml
+repository.stash.appscode.com/repo created
+```
+
+After creating the repository, we backed up one of our MongoDB databases with some sample data via Stash. So, our repository now contains some sample data inside it.
+
+### Configure Snapshot Restore
+
+Now, we are going to create a ServiceAccount, ClusterRole and ClusterRoleBinding. Stash does not grant the necessary RBAC permissions to the restore job when restoring from a different namespace; in this case, we have to provide the RBAC permissions manually. This helps prevent unauthorized namespaces from getting access to a database via Stash. You can configure this process through this [documentation](https://stash.run/docs/latest/guides/managed-backup/dedicated-backup-namespace/#configure-restore).
+
+### Deploy Schema Manager Initialize with Snapshot
+
+Here, we are going to deploy `Schema Manager` in the `demo` namespace that we have created above. Let’s deploy it using the following yaml,
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MongoDBDatabase
+metadata:
+  name: schema-restore
+  namespace: demo
+spec:
+  database:
+    serverRef:
+      name: mongodb
+      namespace: db
+    config:
+      name: products
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+    - name: "saname"
+      namespace: db
+      kind: "ServiceAccount"
+      apiGroup: ""
+    defaultTTL: "5m"
+    maxTTL: "200h"
+  init:
+    initialized: false
+    snapshot:
+      repository:
+        name: repo
+        namespace: demo
+  deletionPolicy: Delete
+```
+
+Here,
+
+- `spec.database` is a required field specifying the database server reference and the desired database configuration.
+- `spec.vaultRef` is a required field that specifies which KubeVault server to use for user management.
+- `spec.accessPolicy` is a required field that specifies the access permissions: which service account or cluster user has access, and for how long.
+- `spec.init` is an optional field, containing the information of a script or a snapshot using which the database should be initialized during creation.
+- `spec.deletionPolicy` is an optional field that gives flexibility whether to `nullify` (reject) the delete operation.
+
+Let’s save this yaml configuration into `schema-restore.yaml` and apply it,
+
+```bash
+$ kubectl apply -f schema-restore.yaml
+mongodbdatabase.schema.kubedb.com/schema-restore created
+```
+
+Let's check the `STATUS` of `Schema Manager`,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME             DB_SERVER   DB_NAME    STATUS    AGE
+demo        schema-restore   mongodb     products   Current   56s
+```
+
+> In the `STATUS` section, `Current` means that the current `Secret` of `Schema Manager` is valid, and it will automatically become `Expired` after it reaches the limit of the `defaultTTL` that we've defined in the above yaml.
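+
+In the next steps we read the generated credentials with `kubectl view-secret`, which is a kubectl plugin (installable e.g. via krew). If you don't have it, stock kubectl can decode the secret too; a sketch, using the secret name fetched below:
+
+```bash
+# Decode every key in the auth secret with plain kubectl
+$ kubectl get secret -n demo schema-restore-mongo-req-98k0ch \
+    -o go-template='{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{"\n"}}{{end}}'
+```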
+
+Also, check the `STATUS` of the `RestoreSession`,
+
+```bash
+$ kubectl get restoresession -n demo
+NAME                      REPOSITORY   PHASE       DURATION   AGE
+schema-restore-mongo-rs   repo         Succeeded   5s         21s
+```
+
+Now, let's get the secret name from the `Schema Manager` status, and then the login credentials for connecting to the database,
+
+```bash
+$ kubectl get mongodbdatabase schema-restore -n demo -o=jsonpath='{.status.authSecret.name}'
+schema-restore-mongo-req-98k0ch
+
+$ kubectl view-secret -n demo schema-restore-mongo-req-98k0ch -a
+password=6ykdBljJ7D8agXeoSp-f
+username=v-kubernetes-demo-k8s-f7695915-1e-2zXmduPS89LfvW6tr5Bw-1662639843
+```
+
+### Verify Initialization
+
+Here, we are going to connect to the database with the login credentials and verify the database initialization,
+
+```bash
+$ kubectl exec -it -n db mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=products --username='v-kubernetes-demo-k8s-f7695915-1e-2zXmduPS89LfvW6tr5Bw-1662639843' --password='6ykdBljJ7D8agXeoSp-f' products
+MongoDB shell version v4.4.26
+...
+
+replicaset:PRIMARY> show dbs
+products  0.000GB
+
+replicaset:PRIMARY> show collections
+products
+
+replicaset:PRIMARY> db.products.find()
+{ "_id" : ObjectId("631b3139187d1588626fb80b"), "name" : "kubedb" }
+
+replicaset:PRIMARY> exit
+bye
+```
+
+Now, let's check the `STATUS` of `Schema Manager` again,
+
+```bash
+$ kubectl get mongodbdatabase -A
+NAMESPACE   NAME             DB_SERVER   DB_NAME    STATUS    AGE
+demo        schema-restore   mongodb     products   Expired   7m
+```
+
+Here, we can see that the `STATUS` of the `Schema Manager` is `Expired` because it has exceeded the `defaultTTL: "5m"`, which means the current `Secret` of `Schema Manager` isn't valid anymore. Now, if we try to connect and log in with the credentials that we acquired before from `Schema Manager`, it won't work.
+
+```bash
+$ kubectl exec -it -n db mongodb-0 -c mongodb -- bash
+root@mongodb-0:/# mongo --authenticationDatabase=products --username='v-kubernetes-demo-k8s-f7695915-1e-2zXmduPS89LfvW6tr5Bw-1662639843' --password='6ykdBljJ7D8agXeoSp-f' products
+MongoDB shell version v4.4.26
+connecting to: mongodb://127.0.0.1:27017/products?authSource=products&compressors=disabled&gssapiServiceName=mongodb
+Error: Authentication failed. :
+connect@src/mongo/shell/mongo.js:374:17
+@(connect):2:6
+exception: connect failed
+exiting with code 1
+root@mongodb-0:/# exit
+exit
+```
+> We can't connect to the database with the expired login credentials. Once the credentials expire, we lose access even in the middle of a connected session.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete ns db
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDBDatabase object](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase).
+- Go through the concepts of [KubeVault](https://kubevault.com/docs/latest/guides).
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- Detail concepts of [MongoDBVersion object](/docs/v2024.1.31/guides/mongodb/concepts/catalog).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/mongodb.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/mongodb.yaml new file mode 100644 index 0000000000..34868bca70 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/mongodb.yaml @@ -0,0 +1,28 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mongodb + namespace: db +spec: + allowedSchemas: + namespaces: + from: All + version: "4.4.26" + replicaSet: + name: "replicaset" + podTemplate: + spec: + resources: + requests: + cpu: "100m" + memory: "100Mi" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Mi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/namespace.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/namespace.yaml new file mode 100644 index 0000000000..0dce1601c4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/namespace.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: db + labels: + kubernetes.io/metadata.name: db +--- +apiVersion: v1 +kind: Namespace +metadata: + name: demo + labels: + kubernetes.io/metadata.name: demo \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/repo.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/repo.yaml new file mode 100644 index 0000000000..2f3b9cde57 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/repo.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: repo + namespace: demo +spec: + backend: + local: + mountPath: /hello + persistentVolumeClaim: + claimName: snapshot-pvc + storageSecretName: repo-secret + usagePolicy: + allowedNamespaces: + from: All \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/schema-restore.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/schema-restore.yaml new file mode 100644 index 0000000000..24fd50316f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/schema-restore.yaml @@ -0,0 +1,30 @@ +apiVersion: schema.kubedb.com/v1alpha1 +kind: MongoDBDatabase +metadata: + name: schema-restore + namespace: demo +spec: + database: + serverRef: + name: mongodb + namespace: db + config: + name: products + vaultRef: + name: vault + namespace: demo + accessPolicy: + subjects: + - name: "saname" + namespace: db + kind: "ServiceAccount" + apiGroup: "" + defaultTTL: "5m" + maxTTL: "200h" + init: + initialized: false + snapshot: + repository: + name: repo + namespace: demo + deletionPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/vault.yaml b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/vault.yaml new file mode 100644 index 0000000000..07a115fbb5 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/mongodb/schema-manager/initializing-with-snapshot/yamls/vault.yaml
@@ -0,0 +1,31 @@
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.8.2
+  replicas: 3
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+    - mongodb
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+  - type: kubernetes
+    path: kubernetes
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/images/mongodb-schema-manager-diagram.svg b/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/images/mongodb-schema-manager-diagram.svg
new file mode 100644
index 0000000000..410678ddc5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/images/mongodb-schema-manager-diagram.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/index.md b/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/index.md
new file mode 100644
index 0000000000..007420541b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/schema-manager/overview/index.md
@@ -0,0 +1,72 @@
+---
+title: MongoDB Schema Manager Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mg-schema-manager-overview
+    name: Overview
+    parent: mg-schema-manager
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBDatabase](/docs/v2024.1.31/guides/mongodb/concepts/mongodbdatabase)
+
+## What is Schema Manager
+
+`Schema Manager` is a Kubernetes operator developed by AppsCode that implements multi-tenancy inside KubeDB provisioned database servers like MySQL, MariaDB, PostgreSQL, MongoDB etc. With `Schema Manager`, one can create a database in a specific database server. A user will also be created with KubeVault and assigned to that database. Using the newly created user's credentials, one can access the database and run operations in it. One may pass the database server reference, configuration and user access policy through a single yaml, and `Schema Manager` will perform all the tasks mentioned above. `Schema Manager` also allows initializing the database and restoring a snapshot while bootstrapping.
+
+## How MongoDB Schema Manager Process Works
+
+The following diagram shows how the MongoDB Schema Manager process works. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="MongoDB Schema Manager Diagram" src="images/mongodb-schema-manager-diagram.svg">
+  <figcaption align="center">Fig: Process of MongoDB Schema Manager</figcaption>
+</figure>
+
+The process consists of the following steps:
+
+1. At first, the user will deploy a `MongoDBDatabase` object.
+
+2. Once a `MongoDBDatabase` object is deployed to the cluster, the `Schema Manager` operator first verifies whether it has the required permission to interact with the referred database server by checking the `Double-OptIn`. After the `Double-OptIn` verification, the `Schema Manager` operator checks in the `MongoDB` server whether the target database is already present. If the database is already present there, the `MongoDBDatabase` object will be immediately denied.
+
+3. Once everything is ok on the `MongoDB` server side, the target database will be created and an entry for it will be added to the `kubedb_system` database.
+
+4. Then the `Schema Manager` operator creates a `MongoDB Role`.
+
+5. The `Vault` operator always watches for a database `Role`.
+
+6. Once the `Vault` operator finds a database `Role`, it creates a `Secret` for that `Role`.
+
+7. After this process, the `Vault` operator creates a `User` in the `MongoDB` server. The user gets all the privileges on our target database and its credentials are served with the `Secret`. The user credentials secret reference is patched into the `MongoDBDatabase` object yaml in the `.status.authSecret.name` field.
+
+8. If there is any `init script` associated with the `MongoDBDatabase` object, it will be executed in this step by the `Schema Manager` operator.
+
+9. The user can also provide a `snapshot` reference for initialization. In that case the `Schema Manager` operator fetches the necessary `AppBinding`, `Secrets` and `Repository`.
+
+10. The `Stash` operator watches for a `RestoreSession`.
+
+11. Once the `Stash` operator finds a `RestoreSession`, it restores the targeted database from the `Snapshot`.
+
+In the next doc, we are going to show a step-by-step guide on using MongoDB Schema Manager with KubeDB.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/tls/_index.md b/content/docs/v2024.1.31/guides/mongodb/tls/_index.md
new file mode 100755
index 0000000000..bced801072
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/tls/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run MongoDB with TLS
+menu:
+  docs_v2024.1.31:
+    identifier: mg-tls
+    name: TLS/SSL Encryption
+    parent: mg-mongodb-guides
+    weight: 45
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mongodb/tls/overview.md b/content/docs/v2024.1.31/guides/mongodb/tls/overview.md
new file mode 100644
index 0000000000..6f43ffc951
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/tls/overview.md
@@ -0,0 +1,81 @@
+---
+title: MongoDB TLS/SSL Encryption Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mg-tls-overview
+    name: Overview
+    parent: mg-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `MongoDB`, `KubeDB` uses `cert-manager` to issue certificates.
So first make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following CRDs of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define a desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**MongoDB CRD Specification:**
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in `MongoDB`:
+
+- `spec:`
+  - `sslMode`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+  - `clusterAuthMode`
+
+Read about the fields in detail from the [mongodb concept](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+
+When `sslMode` is set to `requireSSL`, users must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt` and `tls.key`, are used to configure the `MongoDB` server, exporter etc.
+
+## How TLS/SSL configures in MongoDB
+
+The following figure shows how `KubeDB` configures TLS/SSL in MongoDB. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <figcaption align="center">Fig: Deploy MongoDB with TLS/SSL</figcaption>
+</figure>
+
+Deploying MongoDB with TLS/SSL configuration consists of the following steps:
+
+1. At first, a user creates an `Issuer/ClusterIssuer` CR.
+
+2. Then the user creates a `MongoDB` CR which refers to the `Issuer/ClusterIssuer` CR created in the previous step.
+
+3. `KubeDB` Provisioner operator watches for the `MongoDB` CR.
+
+4. When it finds one, it creates the `Secret`, `Service`, etc. for the `MongoDB` database.
+
+5. `KubeDB` Ops-manager operator watches for `MongoDB`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a).
+
+6. When it finds all the resources (`MongoDB`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using the `tls.issuerRef` and `tls.certificates` field specifications from the `MongoDB` CR.
+
+7. `cert-manager` watches for certificates.
+
+8. When it finds one, it creates certificate secrets `tls-secrets` (server, client, exporter secrets etc.) that hold the actual certificates signed by the CA.
+
+9. `KubeDB` Provisioner operator watches for the certificate secrets `tls-secrets`.
+
+10. When it finds all the tls-secrets, it creates the related `StatefulSets` so that the MongoDB database can be configured with TLS/SSL.
+
+In the next doc, we are going to show a step-by-step guide on how to configure a `MongoDB` database with TLS/SSL.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mongodb/tls/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/tls/replicaset.md
new file mode 100644
index 0000000000..0f5d287e00
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/tls/replicaset.md
@@ -0,0 +1,277 @@
+---
+title: MongoDB ReplicaSet TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: mg-tls-replicaset
+    name: Replicaset
+    parent: mg-tls
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Run MongoDB with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption (via `sslMode` and `clusterAuthMode`) for MongoDB. This tutorial will show you how to use KubeDB to run a MongoDB database with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in MongoDB:
+
+- `spec:`
+  - `sslMode`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+  - `clusterAuthMode`
+
+Read about the fields in detail in the [mongodb concept](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+
+`sslMode` and `tls` are applicable for all types of MongoDB (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides the [ClusterAuthMode](https://docs.mongodb.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for MongoDB clusters (i.e., `replicaset` and `sharding`).
+
+When `sslMode` is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate`, to generate certificate secrets. These certificate secrets are then used to generate the required certificates, including `ca.crt`, `mongo.pem` and `client.pem`.
+
+The subject of the `client.pem` certificate is added as the `root` user in the `$external` mongodb database. So, users can use this client certificate for the `MONGODB-X509` `authenticationMechanism`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in MongoDB. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificates using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls mongo-ca \
+  --cert=ca.crt \
+  --key=ca.key \
+  --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: mongo-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: mongo-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/issuer.yaml
+issuer.cert-manager.io/mongo-ca-issuer created
+```
+
+## TLS/SSL encryption in MongoDB Replicaset
+
+Below is the YAML for a MongoDB replica set. Here, [`spec.sslMode`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specsslMode) specifies the `sslMode` for the `replicaset` (which is `requireSSL`) and [`spec.clusterAuthMode`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specclusterAuthMode) provides the `clusterAuthMode` for the mongodb replicaset nodes (which is `x509`).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mgo-rs-tls
+  namespace: demo
+spec:
+  version: "4.4.26"
+  sslMode: requireSSL
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: mongo-ca-issuer
+  clusterAuthMode: x509
+  replicas: 4
+  replicaSet:
+    name: rs0
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+### Deploy MongoDB Replicaset
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/mg-replicaset-ssl.yaml
+mongodb.kubedb.com/mgo-rs-tls created
+```
+
+Now, wait until `mgo-rs-tls` has status `Ready`,
i.e.,
+
+```bash
+$ watch kubectl get mg -n demo
+Every 2.0s: kubectl get mongodb -n demo
+NAME         VERSION   STATUS   AGE
+mgo-rs-tls   4.4.26    Ready    4m10s
+```
+
+### Verify TLS/SSL in MongoDB Replicaset
+
+Now, connect to this database through [mongo-shell](https://docs.mongodb.com/v4.0/mongo/) and verify that `SSLMode` and `ClusterAuthMode` have been set up as intended.
+
+```bash
+$ kubectl describe secret -n demo mgo-rs-tls-client-cert
+Name:         mgo-rs-tls-client-cert
+Namespace:    demo
+Labels:       <none>
+Annotations:  cert-manager.io/alt-names:
+              cert-manager.io/certificate-name: mgo-rs-tls-client-cert
+              cert-manager.io/common-name: root
+              cert-manager.io/ip-sans:
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: mongo-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:   1147 bytes
+tls.crt:  1172 bytes
+tls.key:  1679 bytes
+```
+
+Now, let's exec into a mongodb container and find out the username needed to connect in a mongo shell,
+
+```bash
+$ kubectl exec -it mgo-rs-tls-0 -n demo bash
+root@mgo-rs-tls-0:/$ ls /var/run/mongodb/tls
+ca.crt  client.pem  mongo.pem
+root@mgo-rs-tls-0:/$ openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout
+subject=CN=root,O=kubedb
+```
+
+Now, we can use `CN=root,O=kubedb` as the root user to connect to the mongo shell,
+
+```bash
+root@mgo-rs-tls-0:/$ mongo --tls --tlsCAFile /var/run/mongodb/tls/ca.crt --tlsCertificateKeyFile /var/run/mongodb/tls/client.pem admin --host localhost --authenticationMechanism MONGODB-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet
+Welcome to the MongoDB shell.
+For interactive help, type "help".
+For more comprehensive documentation, see
+	http://docs.mongodb.org/
+Questions? Try the support group
+	http://groups.google.com/group/mongodb-user
+rs0:PRIMARY>
+```
+
+We are connected to the mongo shell. Let's run some commands to verify the sslMode and the user,
+
+```bash
+rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 })
+{
+	"sslMode" : "requireSSL",
+	"ok" : 1,
+	"$clusterTime" : {
+		"clusterTime" : Timestamp(1599490676, 1),
+		"signature" : {
+			"hash" : BinData(0,"/wQ4pf4HVi1T7SOyaB3pXO56j64="),
+			"keyId" : NumberLong("6869759546676477954")
+		}
+	},
+	"operationTime" : Timestamp(1599490676, 1)
+}
+
+rs0:PRIMARY> use $external
+switched to db $external
+
+rs0:PRIMARY> show users
+{
+	"_id" : "$external.CN=root,O=kubedb",
+	"userId" : UUID("9cebbcf4-74bf-47dd-a485-1604125058da"),
+	"user" : "CN=root,O=kubedb",
+	"db" : "$external",
+	"roles" : [
+		{
+			"role" : "root",
+			"db" : "admin"
+		}
+	],
+	"mechanisms" : [
+		"external"
+	]
+}
+> exit
+bye
+```
+
+You can see here that `sslMode` is set to `requireSSL` and a user is created in `$external` with the name `"CN=root,O=kubedb"`.
+
+## Changing the SSLMode & ClusterAuthMode
+
+Users can update `sslMode` & `clusterAuthMode` if needed. Some combinations may be invalid on the mongodb side, like using `sslMode: disabled` with `clusterAuthMode: x509`.
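+
+Conversely, a valid transition is made in stages rather than in one jump. A minimal sketch of such staged patches (the intermediate values follow MongoDB's official keyfile-to-x.509 procedure linked below, and assume the replica set starts below these modes):
+
+```bash
+# Stage 1: allow TLS and start sending x.509 membership certificates
+$ kubectl patch -n demo mg/mgo-rs-tls -p '{"spec":{"sslMode": "preferSSL","clusterAuthMode": "sendX509"}}' --type="merge"
+# Stage 2: enforce TLS and x.509 cluster auth everywhere
+$ kubectl patch -n demo mg/mgo-rs-tls -p '{"spec":{"sslMode": "requireSSL","clusterAuthMode": "x509"}}' --type="merge"
+```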
+
+The good thing is, **the KubeDB operator will throw an error for invalid SSL specs while creating/updating the MongoDB object**, i.e.,
+
+```bash
+$ kubectl patch -n demo mg/mgo-rs-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge"
+Error from server (Forbidden): admission webhook "mongodb.validators.kubedb.com" denied the request: can't have disabled set to mongodb.spec.sslMode when mongodb.spec.clusterAuthMode is set to x509
+```
+
+To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in the recommended sequence (as sketched above) suggested in the [official documentation](https://docs.mongodb.com/manual/tutorial/update-keyfile-to-x509/). Each time after changing the specs, follow the procedure described above to verify the changes of `sslMode` and `clusterAuthMode` inside the database.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mongodb -n demo mgo-rs-tls
+kubectl delete issuer -n demo mongo-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/tls/sharding.md b/content/docs/v2024.1.31/guides/mongodb/tls/sharding.md
new file mode 100644
index 0000000000..bb552b8c66
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/tls/sharding.md
@@ -0,0 +1,285 @@
+---
+title: MongoDB Shard TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: mg-tls-shard
+    name: Sharding
+    parent: mg-tls
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Run MongoDB with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption (via `sslMode` and `clusterAuthMode`) for MongoDB. This tutorial will show you how to use KubeDB to run a MongoDB database with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in MongoDB:
+
+- `spec:`
+  - `sslMode`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+  - `clusterAuthMode`
+
+Read about the fields in detail in the [mongodb concept](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+
+`sslMode` and `tls` are applicable for all types of MongoDB (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides the [ClusterAuthMode](https://docs.mongodb.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for MongoDB clusters (i.e., `replicaset` and `sharding`).
+
+When `sslMode` is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate`, to generate certificate secrets. These certificate secrets are then used to generate the required certificates, including `ca.crt`, `mongo.pem` and `client.pem`.
+
+The subject of the `client.pem` certificate is added as the `root` user in the `$external` mongodb database. So, users can use this client certificate for the `MONGODB-X509` `authenticationMechanism`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in MongoDB. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificates using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls mongo-ca \
+  --cert=ca.crt \
+  --key=ca.key \
+  --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: mongo-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: mongo-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/issuer.yaml
+issuer.cert-manager.io/mongo-ca-issuer created
+```
+
+## TLS/SSL encryption in MongoDB Sharding
+
+Below is the YAML for MongoDB Sharding. Here, [`spec.sslMode`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specsslMode) specifies the `sslMode` for `sharding` and [`spec.clusterAuthMode`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specclusterAuthMode) provides the `clusterAuthMode` for the sharding servers.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mongo-sh-tls
+  namespace: demo
+spec:
+  version: "4.4.26"
+  sslMode: requireSSL
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: mongo-ca-issuer
+  clusterAuthMode: x509
+  shardTopology:
+    configServer:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 2
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  terminationPolicy: WipeOut
+```
+
+### Deploy MongoDB Sharding
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/mg-shard-ssl.yaml
+mongodb.kubedb.com/mongo-sh-tls created
+```
+
+Now, wait until `mongo-sh-tls` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get mg -n demo
+Every 2.0s: kubectl get mongodb -n demo
+NAME           VERSION   STATUS   AGE
+mongo-sh-tls   4.4.26    Ready    4m24s
+```
+
+### Verify TLS/SSL in MongoDB Sharding
+
+Now, connect to the `mongos` component of this database through [mongo-shell](https://docs.mongodb.com/v4.0/mongo/) and verify that `SSLMode` and `ClusterAuthMode` have been set up as intended.
+
+```bash
+$ kubectl describe secret -n demo mongo-sh-tls-client-cert
+Name:         mongo-sh-tls-client-cert
+Namespace:    demo
+Labels:       <none>
+Annotations:  cert-manager.io/alt-names:
+              cert-manager.io/certificate-name: mongo-sh-tls-client-cert
+              cert-manager.io/common-name: root
+              cert-manager.io/ip-sans:
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: mongo-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:   1147 bytes
+tls.crt:  1172 bytes
+tls.key:  1679 bytes
+```
+
+Now, let's exec into a mongodb container and find out the username needed to connect in a mongo shell,
+
+```bash
+$ kubectl exec -it mongo-sh-tls-mongos-0 -n demo bash
+root@mongo-sh-tls-mongos-0:/$ ls /var/run/mongodb/tls
+ca.crt  client.pem  mongo.pem
+root@mongo-sh-tls-mongos-0:/$ openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout
+subject=CN=root,O=kubedb
+```
+
+Now, we can use `CN=root,O=kubedb` as the root user to connect to the mongo shell,
+
+```bash
+root@mongo-sh-tls-mongos-0:/# mongo --tls --tlsCAFile /var/run/mongodb/tls/ca.crt --tlsCertificateKeyFile /var/run/mongodb/tls/client.pem admin --host localhost --authenticationMechanism MONGODB-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet
+Welcome to the MongoDB shell.
+For interactive help, type "help".
+For more comprehensive documentation, see
+	http://docs.mongodb.org/
+Questions? Try the support group
+	http://groups.google.com/group/mongodb-user
+mongos>
+```
+
+We are connected to the mongo shell.
Let's run some commands to verify the sslMode and the user,
+
+```bash
+mongos> db.adminCommand({ getParameter:1, sslMode:1 })
+{
+	"sslMode" : "requireSSL",
+	"ok" : 1,
+	"operationTime" : Timestamp(1599491398, 1),
+	"$clusterTime" : {
+		"clusterTime" : Timestamp(1599491398, 1),
+		"signature" : {
+			"hash" : BinData(0,"cn2Mhfy2blonon3jPz6Daen0nnc="),
+			"keyId" : NumberLong("6869760899591176209")
+		}
+	}
+}
+mongos> use $external
+switched to db $external
+mongos> show users
+{
+	"_id" : "$external.CN=root,O=kubedb",
+	"userId" : UUID("4865dda6-5e31-4b79-a085-7d6fea51c9be"),
+	"user" : "CN=root,O=kubedb",
+	"db" : "$external",
+	"roles" : [
+		{
+			"role" : "root",
+			"db" : "admin"
+		}
+	],
+	"mechanisms" : [
+		"external"
+	]
+}
+> exit
+bye
+```
+
+You can see here that `sslMode` is set to `requireSSL`, `clusterAuthMode` is set to `x509`, and a user is created in `$external` with the name `"CN=root,O=kubedb"`.
+
+## Changing the SSLMode & ClusterAuthMode
+
+Users can update `sslMode` & `clusterAuthMode` if needed. Some combinations may be invalid on the mongodb side, like using `sslMode: disabled` with `clusterAuthMode: x509`.
+
+The good thing is, **the KubeDB operator will throw an error for invalid SSL specs while creating/updating the MongoDB object**, i.e.,
+
+```bash
+$ kubectl patch -n demo mg/mongo-sh-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge"
+Error from server (Forbidden): admission webhook "mongodb.validators.kubedb.com" denied the request: can't have disabled set to mongodb.spec.sslMode when mongodb.spec.clusterAuthMode is set to x509
+```
+
+To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in the recommended sequence suggested in the [official documentation](https://docs.mongodb.com/manual/tutorial/update-keyfile-to-x509/). Each time after changing the specs, follow the procedure described above to verify the changes of `sslMode` and `clusterAuthMode` inside the database.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mongodb -n demo mongo-sh-tls
+kubectl delete issuer -n demo mongo-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash.
+- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mongodb/tls/standalone.md b/content/docs/v2024.1.31/guides/mongodb/tls/standalone.md
new file mode 100644
index 0000000000..89b1ddbc0f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/tls/standalone.md
@@ -0,0 +1,256 @@
+---
+title: MongoDB Standalone TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: mg-tls-standalone
+    name: Standalone
+    parent: mg-tls
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Run MongoDB with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption (via `sslMode` and `clusterAuthMode`) for MongoDB. This tutorial will show you how to use KubeDB to run a MongoDB database with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mongodb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mongodb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in MongoDB:
+
+- `spec:`
+  - `sslMode`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+  - `clusterAuthMode`
+
+Read about the fields in detail in the [mongodb concept](/docs/v2024.1.31/guides/mongodb/concepts/mongodb).
+
+`sslMode` and `tls` are applicable for all types of MongoDB (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides the [ClusterAuthMode](https://docs.mongodb.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for MongoDB clusters (i.e., `replicaset` and `sharding`).
+
+When `sslMode` is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate`, to generate certificate secrets. These certificate secrets are then used to generate the required certificates, including `ca.crt`, `mongo.pem` and `client.pem`.
+
+The subject of the `client.pem` certificate is added as the `root` user in the `$external` mongodb database. So, users can use this client certificate for the `MONGODB-X509` `authenticationMechanism`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in MongoDB. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+ +- Start off by generating your CA certificate using openssl. + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb" +``` + +- Now create a ca-secret using the certificate files you have just generated. + +```bash +kubectl create secret tls mongo-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +``` + +Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mongo-ca-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca +``` + +Apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/issuer.yaml +issuer.cert-manager.io/mongo-ca-issuer created +``` + +## TLS/SSL encryption in MongoDB Standalone + +Below is the YAML for MongoDB Standalone. Here, [`spec.sslMode`](/docs/v2024.1.31/guides/mongodb/concepts/mongodb#specsslMode) specifies `sslMode` for `standalone` (which is `requireSSL`). + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mgo-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: mongo-ca-issuer + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +### Deploy MongoDB Standalone + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/tls/mg-standalone-ssl.yaml +mongodb.kubedb.com/mgo-tls created +``` + +Now, wait until `mgo-tls` has status `Ready`, i.e., + +```bash +$ watch kubectl get mg -n demo +Every 2.0s: kubectl get mongodb -n demo +NAME VERSION STATUS AGE +mgo-tls 4.4.26 Ready 14s +``` + +### Verify TLS/SSL in MongoDB Standalone + +Now, connect to this database through [mongo-shell](https://docs.mongodb.com/v4.0/mongo/) and verify if `SSLMode` has been set up as intended (i.e., `requireSSL`). + +```bash +$ kubectl describe secret -n demo mgo-tls-client-cert +Name: mgo-tls-client-cert +Namespace: demo +Labels: +Annotations: cert-manager.io/alt-names: + cert-manager.io/certificate-name: mgo-tls-client-cert + cert-manager.io/common-name: root + cert-manager.io/ip-sans: + cert-manager.io/issuer-group: cert-manager.io + cert-manager.io/issuer-kind: Issuer + cert-manager.io/issuer-name: mongo-ca-issuer + cert-manager.io/uri-sans: + +Type: kubernetes.io/tls + +Data +==== +tls.crt: 1172 bytes +tls.key: 1679 bytes +ca.crt: 1147 bytes +``` + +Now, let's exec into a mongodb container and find out the username to use when connecting through the mongo shell, + +```bash +$ kubectl exec -it mgo-tls-0 -n demo bash +mongodb@mgo-tls-0:/$ ls /var/run/mongodb/tls +ca.crt client.pem mongo.pem +mongodb@mgo-tls-0:/$ openssl x509 -in /var/run/mongodb/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=CN=root,O=kubedb +``` + +Now, we can use `CN=root,O=kubedb` as the root user to connect to the mongo shell, + +```bash +mongodb@mgo-tls-0:/$ mongo --tls --tlsCAFile /var/run/mongodb/tls/ca.crt --tlsCertificateKeyFile /var/run/mongodb/tls/client.pem admin --host localhost --authenticationMechanism MONGODB-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet +> +``` + +We are connected to the mongo shell. 
Let's run some commands to verify the sslMode and the user, + +```bash +> db.adminCommand({ getParameter:1, sslMode:1 }) +{ "sslMode" : "requireSSL", "ok" : 1 } + +> use $external +switched to db $external + +> show users +{ + "_id" : "$external.CN=root,O=kubedb", + "userId" : UUID("d2ddf121-9398-400b-b477-0e8bcdd47746"), + "user" : "CN=root,O=kubedb", + "db" : "$external", + "roles" : [ + { + "role" : "root", + "db" : "admin" + } + ], + "mechanisms" : [ + "external" + ] +} +> exit +bye +``` + +You can see here that `sslMode` is set to `requireSSL` and a user named `"CN=root,O=kubedb"` has been created in the `$external` database. + +## Changing the SSLMode & ClusterAuthMode + +Users can update `sslMode` & `clusterAuthMode` if needed. Some combinations are invalid on the MongoDB side, like using `sslMode: disabled` with `clusterAuthMode: x509`. + +The good thing is, **the KubeDB operator will throw an error for invalid SSL specs while creating/updating the MongoDB object.** For example, + +```bash +$ kubectl patch -n demo mg/mgo-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge" +Error from server (Forbidden): admission webhook "mongodb.validators.kubedb.com" denied the request: can't have disabled set to mongodb.spec.sslMode when mongodb.spec.clusterAuthMode is set to x509 +``` + +To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in the recommended sequence as suggested in the [official documentation](https://docs.mongodb.com/manual/tutorial/update-keyfile-to-x509/). After each change to the specs, follow the procedure described above to verify the `sslMode` and `clusterAuthMode` values inside the database. + +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mongodb -n demo mgo-tls +kubectl delete issuer -n demo mongo-ca-issuer +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [MongoDB object](/docs/v2024.1.31/guides/mongodb/concepts/mongodb). +- [Backup and Restore](/docs/v2024.1.31/guides/mongodb/backup/overview/) MongoDB databases using Stash. +- Initialize [MongoDB with Script](/docs/v2024.1.31/guides/mongodb/initialization/using-script). +- Monitor your MongoDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Monitor your MongoDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/mongodb/private-registry/using-private-registry) to deploy MongoDB with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mongodb/cli/cli) to manage databases like kubectl for Kubernetes. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/mongodb/update-version/_index.md b/content/docs/v2024.1.31/guides/mongodb/update-version/_index.md new file mode 100644 index 0000000000..684f6408c5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/update-version/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating MongoDB +menu: + docs_v2024.1.31: + identifier: mg-updating + name: UpdateVersion + parent: mg-mongodb-guides + weight: 42 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/update-version/overview.md b/content/docs/v2024.1.31/guides/mongodb/update-version/overview.md new file mode 100644 index 0000000000..ed88c465ff --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/update-version/overview.md @@ -0,0 +1,65 @@ +--- +title: Updating MongoDB Overview +menu: + docs_v2024.1.31: + identifier: mg-updating-overview + name: Overview + parent: mg-updating + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Updating MongoDB Version Overview + +This guide will give you an overview of how the KubeDB Ops-manager operator updates the version of a `MongoDB` database. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + +## How the Update Version Process Works + +The following diagram shows how the KubeDB Ops-manager operator updates the version of `MongoDB`. Open the image in a new tab to see the enlarged version. + +
+Fig: Updating Process of MongoDB
+ +The updating process consists of the following steps: + +1. At first, a user creates a `MongoDB` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `MongoDB` CR. + +3. When the operator finds a `MongoDB` CR, it creates the required number of `StatefulSets` and other necessary resources such as secrets, services, etc. + +4. Then, in order to update the version of the `MongoDB` database, the user creates a `MongoDBOpsRequest` CR with the desired version. + +5. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CR. + +6. When it finds a `MongoDBOpsRequest` CR, it halts the `MongoDB` object which is referred to in the `MongoDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MongoDB` object during the updating process. + +7. By looking at the target version from the `MongoDBOpsRequest` CR, the `KubeDB` Ops-manager operator updates the images of all the `StatefulSets`. After each image update, the operator performs checks such as whether the oplog is synced and whether the database size is roughly the same. + +8. After successfully updating the images of the `StatefulSets` and their `Pods`, the `KubeDB` Ops-manager operator updates the image of the `MongoDB` object to reflect the updated state of the database. + +9. After the `MongoDB` object is successfully updated, the `KubeDB` Ops-manager operator resumes the `MongoDB` object so that the `KubeDB` Provisioner operator can resume its usual operations. + +In the next docs, we are going to show a step-by-step guide to updating a MongoDB database using the UpdateVersion operation. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/update-version/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/update-version/replicaset.md new file mode 100644 index 0000000000..187f90653a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/update-version/replicaset.md @@ -0,0 +1,276 @@ +--- +title: Updating MongoDB Replicaset +menu: + docs_v2024.1.31: + identifier: mg-updating-replicaset + name: ReplicaSet + parent: mg-updating + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update Version of MongoDB ReplicaSet + +This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `MongoDB` ReplicaSet. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Updating Overview](/docs/v2024.1.31/guides/mongodb/update-version/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Prepare MongoDB ReplicaSet Database + +Now, we are going to deploy a `MongoDB` replicaset database with version `4.4.26`. + +### Deploy MongoDB replicaset + +In this section, we are going to deploy a MongoDB replicaset database. Then, in the next section we will update the version of the database using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mg-replicaset.yaml +mongodb.kubedb.com/mg-replicaset created +``` + +Now, wait until `mg-replicaset` has status `Ready`, i.e., + +```bash +$ kubectl get mongodb -n demo +NAME VERSION STATUS AGE +mg-replicaset 4.4.26 Ready 109s +``` + +We are now ready to apply the `MongoDBOpsRequest` CR to update this database. + +### Update MongoDB Version + +Here, we are going to update the `MongoDB` replicaset to version `4.4.26`. + +#### Create MongoDBOpsRequest + +In order to update the version of the replicaset database, we have to create a `MongoDBOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-replicaset-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-replicaset + updateVersion: + targetVersion: 4.4.26 + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-replicaset` MongoDB database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`. +- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mops-update-replicaset.yaml +mongodbopsrequest.ops.kubedb.com/mops-replicaset-update created +``` + +#### Verify MongoDB version updated successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `MongoDB` object and the related `StatefulSets` and `Pods`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. 
Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-replicaset-update UpdateVersion Successful 84s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to update the database version. + +```bash +$ kubectl describe mongodbopsrequest -n demo mops-replicaset-update +Name: mops-replicaset-update +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:19:55Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:updateVersion: + .: + f:targetVersion: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:19:55Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:23:09Z + Resource Version: 607814 + UID: 38053605-47bd-4d94-9f53-ce9474ad0a98 +Spec: + Apply: IfReady + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: UpdateVersion + UpdateVersion: + Target Version: 4.4.26 +Status: + Conditions: + Last Transition Time: 2022-10-26T10:21:20Z + Message: MongoDB ops request is update-version database version + Observed Generation: 1 + Reason: UpdateVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2022-10-26T10:21:39Z + Message: Successfully updated statefulsets update strategy type + Observed Generation: 1 + Reason: UpdateStatefulSets + Status: True + Type: UpdateStatefulSets + Last Transition Time: 2022-10-26T10:23:09Z + Message: Successfully Updated Standalone Image + Observed Generation: 1 + Reason: UpdateStandaloneImage + Status: True + Type: UpdateStandaloneImage + Last Transition Time: 2022-10-26T10:23:09Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m27s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-replicaset + Normal PauseDatabase 2m27s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-replicaset + Normal Updating 2m27s KubeDB Ops-manager Operator Updating StatefulSets + Normal Updating 2m8s KubeDB Ops-manager Operator Successfully Updated StatefulSets + Normal UpdateStandaloneImage 38s KubeDB Ops-manager Operator Successfully Updated Standalone Image + Normal ResumeDatabase 38s KubeDB Ops-manager Operator Resuming MongoDB demo/mg-replicaset + Normal ResumeDatabase 38s KubeDB Ops-manager Operator Successfully resumed MongoDB demo/mg-replicaset + Normal Successful 38s KubeDB Ops-manager Operator Successfully Updated Database +``` + +Now, we are going to verify whether the `MongoDB` and the related `StatefulSets` and their `Pods` have the new version image. 
Let's check, + +```bash +$ kubectl get mg -n demo mg-replicaset -o=jsonpath='{.spec.version}{"\n"}' +4.4.26 + +$ kubectl get sts -n demo mg-replicaset -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get pods -n demo mg-replicaset-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mongo:4.4.26 +``` + +You can see from the above output that our `MongoDB` replicaset database has been updated to the new version, so the UpdateVersion process has completed successfully. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete mongodbopsrequest -n demo mops-replicaset-update +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/update-version/sharding.md b/content/docs/v2024.1.31/guides/mongodb/update-version/sharding.md new file mode 100644 index 0000000000..10bd919d06 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/update-version/sharding.md @@ -0,0 +1,328 @@ +--- +title: Updating MongoDB Sharded Database +menu: + docs_v2024.1.31: + identifier: mg-updating-sharding + name: Sharding + parent: mg-updating + weight: 40 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update Version of MongoDB Sharded Database + +This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `MongoDB` Sharded Database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Updating Overview](/docs/v2024.1.31/guides/mongodb/update-version/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Prepare MongoDB Sharded Database + +Now, we are going to deploy a `MongoDB` sharded database with version `4.4.26`. + +### Deploy MongoDB Sharded Database + +In this section, we are going to deploy a MongoDB sharded database. Then, in the next section we will update the version of the database using `MongoDBOpsRequest` CRD. 
Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 2 + shards: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mg-shard.yaml +mongodb.kubedb.com/mg-sharding created +``` + +Now, wait until `mg-sharding` has status `Ready`, i.e., + +```bash +$ kubectl get mongodb -n demo +NAME VERSION STATUS AGE +mg-sharding 4.4.26 Ready 2m9s +``` + +We are now ready to apply the `MongoDBOpsRequest` CR to update this database. + +### Update MongoDB Version + +Here, we are going to update the `MongoDB` sharded database to version `4.4.26`. + +#### Create MongoDBOpsRequest + +In order to update the sharded database, we have to create a `MongoDBOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-shard-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-sharding + updateVersion: + targetVersion: 4.4.26 + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-sharding` MongoDB database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`. +- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mops-update-shard.yaml +mongodbopsrequest.ops.kubedb.com/mops-shard-update created +``` + +#### Verify MongoDB version updated successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `MongoDB` object and the related `StatefulSets` and `Pods`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-shard-update UpdateVersion Successful 2m31s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest`, we will get an overview of the steps that were followed to update the database. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-shard-update + +Name: mops-shard-update +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:27:24Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:updateVersion: + .: + f:targetVersion: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:27:24Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:36:12Z + Resource Version: 610193 + UID: 6459a314-c759-4002-9dff-106b836c4db0 +Spec: + Apply: IfReady + Database Ref: + Name: mg-sharding + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: UpdateVersion + UpdateVersion: + Target Version: 4.4.26 +Status: + Conditions: + Last Transition Time: 2022-10-26T10:36:12Z + Message: connection() error occurred during connection handshake: dial tcp 10.244.0.125:27017: i/o timeout + Observed Generation: 1 + Reason: Failed + Status: False + Type: UpdateVersion + Last Transition Time: 2022-10-26T10:29:29Z + Message: Successfully stopped mongodb load balancer + Observed Generation: 1 + Reason: StoppingBalancer + Status: True + Type: StoppingBalancer + Last Transition Time: 2022-10-26T10:30:54Z + Message: Successfully updated statefulsets update strategy type + Observed Generation: 1 + Reason: UpdateStatefulSets + Status: True + Type: UpdateStatefulSets + Last Transition Time: 2022-10-26T10:32:00Z + Message: Successfully Updated ConfigServer Image + Observed Generation: 1 + Reason: UpdateConfigServerImage + Status: True + Type: UpdateConfigServerImage + Last Transition Time: 2022-10-26T10:35:32Z + Message: Successfully Updated Shard Image + Observed Generation: 1 + Reason: UpdateShardImage + Status: True + Type: UpdateShardImage + Last Transition Time: 2022-10-26T10:36:07Z + Message: Successfully Updated Mongos Image + Observed Generation: 1 + Reason: UpdateMongosImage + Status: True + Type: UpdateMongosImage + Last Transition Time: 2022-10-26T10:36:07Z + Message: Successfully Started mongodb load balancer + Observed Generation: 1 + Reason: StartingBalancer + Status: True + Type: StartingBalancer + Last Transition Time: 2022-10-26T10:36:07Z + Message: Successfully completed the modification process. 
+ Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Failed +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 8m27s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-sharding + Normal PauseDatabase 8m27s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-sharding + Normal StoppingBalancer 8m27s KubeDB Ops-manager Operator Stopping Balancer + Normal StoppingBalancer 8m27s KubeDB Ops-manager Operator Successfully Stopped Balancer + Normal Updating 8m27s KubeDB Ops-manager Operator Updating StatefulSets + Normal Updating 7m2s KubeDB Ops-manager Operator Successfully Updated StatefulSets + Normal Updating 7m2s KubeDB Ops-manager Operator Updating StatefulSets + Normal UpdateConfigServerImage 5m56s KubeDB Ops-manager Operator Successfully Updated ConfigServer Image + Normal Updating 5m45s KubeDB Ops-manager Operator Successfully Updated StatefulSets + Normal UpdateShardImage 2m24s KubeDB Ops-manager Operator Successfully Updated Shard Image + Normal UpdateMongosImage 109s KubeDB Ops-manager Operator Successfully Updated Mongos Image + Normal Updating 109s KubeDB Ops-manager Operator Starting Balancer + Normal StartingBalancer 109s KubeDB Ops-manager Operator Successfully Started Balancer + Normal ResumeDatabase 109s KubeDB Ops-manager Operator Resuming MongoDB demo/mg-sharding + Normal ResumeDatabase 109s KubeDB Ops-manager Operator Successfully resumed MongoDB demo/mg-sharding + Normal Successful 109s KubeDB Ops-manager Operator Successfully Updated Database +``` + +Now, we are going to verify whether the `MongoDB` and the related `StatefulSets` of `Mongos`, `Shard` and `ConfigServer` and their `Pods` have the new version image. Let's check, + +```bash +$ kubectl get mg -n demo mg-sharding -o=jsonpath='{.spec.version}{"\n"}' +4.4.26 + +$ kubectl get sts -n demo mg-sharding-configsvr -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get sts -n demo mg-sharding-shard0 -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get sts -n demo mg-sharding-mongos -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get pods -n demo mg-sharding-configsvr-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get pods -n demo mg-sharding-shard0-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get pods -n demo mg-sharding-mongos-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mongo:4.4.26 +``` + +You can see from the above output that our `MongoDB` sharded database has been updated to the new version, so the update process has completed successfully. 
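+ +If you also want to confirm the running server version from inside the cluster, one option is to query a mongos directly. The following is a minimal sketch and not part of the original walkthrough: it assumes KubeDB's usual conventions, i.e., a mongos pod named `mg-sharding-mongos-0` and root credentials stored under the `username`/`password` keys of the `mg-sharding-auth` secret. + +```bash +# Read the root credentials from the secret KubeDB creates for the database +# (secret name and key names assumed from the usual <db-name>-auth convention). +USER=$(kubectl get secret -n demo mg-sharding-auth -o jsonpath='{.data.username}' | base64 -d) +PASS=$(kubectl get secret -n demo mg-sharding-auth -o jsonpath='{.data.password}' | base64 -d) + +# Ask a mongos for its build version; it should report the target version. +kubectl exec -n demo mg-sharding-mongos-0 -- \ + mongo admin -u "$USER" -p "$PASS" --quiet --eval "db.version()" +``` 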
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-sharding +kubectl delete mongodbopsrequest -n demo mops-shard-update +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/update-version/standalone.md b/content/docs/v2024.1.31/guides/mongodb/update-version/standalone.md new file mode 100644 index 0000000000..f83b8b0f91 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/update-version/standalone.md @@ -0,0 +1,274 @@ +--- +title: Updating MongoDB Standalone +menu: + docs_v2024.1.31: + identifier: mg-updating-standalone + name: Standalone + parent: mg-updating + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update Version of MongoDB Standalone + +This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `MongoDB` standalone. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Updating Overview](/docs/v2024.1.31/guides/mongodb/update-version/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Prepare MongoDB Standalone Database + +Now, we are going to deploy a `MongoDB` standalone database with version `4.4.26`. + +### Deploy MongoDB Standalone + +In this section, we are going to deploy a MongoDB standalone database. Then, in the next section we will update the version of the database using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mg-standalone.yaml +mongodb.kubedb.com/mg-standalone created +``` + +Now, wait until `mg-standalone` has status `Ready`, i.e., + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-standalone 4.4.26 Ready 8m58s +``` + +We are now ready to apply the `MongoDBOpsRequest` CR to update this database. 
+ +### Update MongoDB Version + +Here, we are going to update the `MongoDB` standalone to version `4.4.26`. + +#### Create MongoDBOpsRequest + +In order to update the standalone database, we have to create a `MongoDBOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-standalone + updateVersion: + targetVersion: 4.4.26 + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-standalone` MongoDB database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`. +- Have a look [here](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields. + + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/update-version/mops-update-standalone.yaml +mongodbopsrequest.ops.kubedb.com/mops-update created +``` + +#### Verify MongoDB version updated successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `MongoDB` object and the related `StatefulSets` and `Pods`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-update UpdateVersion Successful 3m45s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest`, we will get an overview of the steps that were followed to update the database. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-update +Name: mops-update +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:06:50Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:updateVersion: + .: + f:targetVersion: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:06:50Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:08:25Z + Resource Version: 605817 + UID: 79faadf6-7af9-4b74-9907-febe7d543386 +Spec: + Apply: IfReady + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: UpdateVersion + UpdateVersion: + Target Version: 4.4.26 +Status: + Conditions: + Last Transition Time: 2022-10-26T10:07:10Z + Message: MongoDB ops request is update-version database version + Observed Generation: 1 + Reason: UpdateVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2022-10-26T10:07:30Z + Message: Successfully updated statefulsets update strategy type + Observed Generation: 1 + Reason: UpdateStatefulSets + Status: True + Type: UpdateStatefulSets + Last Transition Time: 2022-10-26T10:08:25Z + Message: Successfully Updated Standalone Image + Observed Generation: 1 + Reason: UpdateStandaloneImage + Status: True + Type: UpdateStandaloneImage + Last Transition Time: 2022-10-26T10:08:25Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m5s KubeDB Ops-manager Operator Pausing MongoDB demo/mg-standalone + Normal PauseDatabase 2m5s KubeDB Ops-manager Operator Successfully paused MongoDB demo/mg-standalone + Normal Updating 2m5s KubeDB Ops-manager Operator Updating StatefulSets + Normal Updating 105s KubeDB Ops-manager Operator Successfully Updated StatefulSets + Normal UpdateStandaloneImage 50s KubeDB Ops-manager Operator Successfully Updated Standalone Image + Normal ResumeDatabase 50s KubeDB Ops-manager Operator Resuming MongoDB demo/mg-standalone + Normal ResumeDatabase 50s KubeDB Ops-manager Operator Successfully resumed MongoDB demo/mg-standalone + Normal Successful 50s KubeDB Ops-manager Operator Successfully Updated Database + +``` + +Now, we are going to verify whether the `MongoDB` and the related `StatefulSets` and their `Pods` have the new version image. Let's check, + +```bash +$ kubectl get mg -n demo mg-standalone -o=jsonpath='{.spec.version}{"\n"}' +4.4.26 + +$ kubectl get sts -n demo mg-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mongo:4.4.26 + +$ kubectl get pods -n demo mg-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mongo:4.4.26 +``` + +You can see from the above output that our `MongoDB` standalone database has been updated to the new version, so the update process has completed successfully. 
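+ +Beyond comparing image tags, it can also be worth checking MongoDB's `featureCompatibilityVersion`, which is not raised automatically by a binary update. Below is a minimal sketch, not part of the original walkthrough, assuming KubeDB's usual conventions (pod `mg-standalone-0`, root credentials under the `username`/`password` keys of the `mg-standalone-auth` secret): + +```bash +# Read the root credentials (secret and key names assumed from the usual +# <db-name>-auth convention). +USER=$(kubectl get secret -n demo mg-standalone-auth -o jsonpath='{.data.username}' | base64 -d) +PASS=$(kubectl get secret -n demo mg-standalone-auth -o jsonpath='{.data.password}' | base64 -d) + +# featureCompatibilityVersion lags behind the binary version until it is raised +# explicitly; this command only reads it and changes nothing. +kubectl exec -n demo mg-standalone-0 -- mongo admin -u "$USER" -p "$PASS" --quiet \ + --eval 'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })' +``` 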
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-standalone +kubectl delete mongodbopsrequest -n demo mops-update +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/volume-expansion/_index.md b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/_index.md new file mode 100644 index 0000000000..536b7fd23b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/_index.md @@ -0,0 +1,22 @@ +--- +title: Volume Expansion +menu: + docs_v2024.1.31: + identifier: mg-volume-expansion + name: Volume Expansion + parent: mg-mongodb-guides + weight: 44 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mongodb/volume-expansion/overview.md b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/overview.md new file mode 100644 index 0000000000..d8b9aaec28 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/overview.md @@ -0,0 +1,67 @@ +--- +title: MongoDB Volume Expansion Overview +menu: + docs_v2024.1.31: + identifier: mg-volume-expansion-overview + name: Overview + parent: mg-volume-expansion + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Volume Expansion + +This guide will give an overview of how the KubeDB Ops-manager operator expands the volumes of various components of `MongoDB`, such as ReplicaSet, Shard, ConfigServer, Mongos, etc. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + +## How Volume Expansion Process Works + +The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `MongoDB` database components. Open the image in a new tab to see the enlarged version. + +
+Fig: Volume Expansion process of MongoDB
+ +The Volume Expansion process consists of the following steps: + +1. At first, a user creates a `MongoDB` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `MongoDB` CR. + +3. When the operator finds a `MongoDB` CR, it creates the required number of `StatefulSets` and other necessary resources such as secrets, services, etc. + +4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration. This Persistent Volume will be expanded by the `KubeDB` Ops-manager operator. + +5. Then, in order to expand the volume of the various components (i.e., ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `MongoDB` database, the user creates a `MongoDBOpsRequest` CR with the desired information. + +6. `KubeDB` Ops-manager operator watches the `MongoDBOpsRequest` CR. + +7. When it finds a `MongoDBOpsRequest` CR, it halts the `MongoDB` object which is referred to in the `MongoDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MongoDB` object during the volume expansion process. + +8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `MongoDBOpsRequest` CR. + +9. After successfully expanding the volumes of the related StatefulSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `MongoDB` object to reflect the updated state. + +10. After the successful Volume Expansion of the `MongoDB` components, the `KubeDB` Ops-manager operator resumes the `MongoDB` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step-by-step guide to Volume Expansion of various MongoDB database components using `MongoDBOpsRequest` CRD. diff --git a/content/docs/v2024.1.31/guides/mongodb/volume-expansion/replicaset.md b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/replicaset.md new file mode 100644 index 0000000000..ccc21cc3ee --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/replicaset.md @@ -0,0 +1,258 @@ +--- +title: MongoDB Replicaset Volume Expansion +menu: + docs_v2024.1.31: + identifier: mg-volume-expansion-replicaset + name: Replicaset + parent: mg-volume-expansion + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Replicaset Volume Expansion + +This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a MongoDB Replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+ +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Replicaset](/docs/v2024.1.31/guides/mongodb/clustering/replicaset) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Volume Expansion Overview](/docs/v2024.1.31/guides/mongodb/volume-expansion/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Expand Volume of Replicaset + +Here, we are going to deploy a `MongoDB` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to expand its volume. + +### Prepare MongoDB Replicaset Database + +At first, verify that your cluster has a storage class that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) kubernetes.io/gce-pd Delete Immediate true 2m49s +``` + +We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion, and we can use it. + +Now, we are going to deploy a `MongoDB` replicaset database with version `4.4.26`. + +### Deploy MongoDB + +In this section, we are going to deploy a MongoDB Replicaset database with 1GB volume. Then, in the next section we will expand its volume to 2GB using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MongoDB +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `MongoDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mg-replicaset.yaml +mongodb.kubedb.com/mg-replicaset created +``` + +Now, wait until `mg-replicaset` has status `Ready`, i.e., + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-replicaset 4.4.26 Ready 10m +``` + +Let's check the volume size from the statefulset, and from the persistent volumes, + +```bash +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-2067c63d-f982-4b66-a008-5e9c3ff6218a 1Gi RWO Delete Bound demo/datadir-mg-replicaset-0 standard 10m +pvc-9db1aeb0-f1af-4555-93a3-0ca754327751 1Gi RWO Delete Bound demo/datadir-mg-replicaset-2 standard 9m45s +pvc-d38f42a8-50d4-4fa9-82ba-69fc7a464ff4 1Gi RWO Delete Bound demo/datadir-mg-replicaset-1 standard 10m +``` + +You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB. + +We are now ready to apply the `MongoDBOpsRequest` CR to expand the volume of this database. + +### Volume Expansion + +Here, we are going to expand the volume of the replicaset database. 
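+ +Optionally, before creating the ops request, you can note the current claim sizes, since volume expansion is applied through the PersistentVolumeClaims. A quick check (the claim names follow the `datadir-<pod-name>` pattern visible in the `kubectl get pv` output above): + +```bash +# Current PVC requests for the replicaset; compare these again after the +# MongoDBOpsRequest completes. +kubectl get pvc -n demo +``` 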
+ +#### Create MongoDBOpsRequest + +In order to expand the volume of the database, we have to create a `MongoDBOpsRequest` CR with our desired volume size. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MongoDBOpsRequest +metadata: + name: mops-volume-exp-replicaset + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-replicaset + volumeExpansion: + replicaSet: 2Gi +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-replicaset` database. +- `spec.type` specifies that we are performing `VolumeExpansion` on our database. +- `spec.volumeExpansion.replicaSet` specifies the desired volume size. + +Let's create the `MongoDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mops-volume-exp-replicaset.yaml +mongodbopsrequest.ops.kubedb.com/mops-volume-exp-replicaset created +``` + +#### Verify MongoDB replicaset volume expanded successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `MongoDB` object and the related `StatefulSets` and `Persistent Volumes`. + +Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch `MongoDBOpsRequest` CR, + +```bash +$ kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-volume-exp-replicaset VolumeExpansion Successful 83s +``` + +We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-volume-exp-replicaset +Name: mops-volume-exp-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MongoDBOpsRequest +Metadata: + Creation Timestamp: 2020-08-25T18:21:18Z + Finalizers: + kubedb.com + Generation: 1 + Resource Version: 84084 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-volume-exp-replicaset + UID: 2cec0cd3-4abe-4114-813c-1326f28563cb +Spec: + Database Ref: + Name: mg-replicaset + Type: VolumeExpansion + Volume Expansion: + ReplicaSet: 2Gi +Status: + Conditions: + Last Transition Time: 2020-08-25T18:21:18Z + Message: MongoDB ops request is being processed + Observed Generation: 1 + Reason: Scaling + Status: True + Type: Scaling + Last Transition Time: 2020-08-25T18:22:38Z + Message: Successfully updated Storage + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2020-08-25T18:22:38Z + Message: Successfully Resumed mongodb: mg-replicaset + Observed Generation: 1 + Reason: ResumeDatabase + Status: True + Type: ResumeDatabase + Last Transition Time: 2020-08-25T18:22:38Z + Message: Successfully completed the modification process + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal VolumeExpansion 3m11s KubeDB Ops-manager operator Successfully Updated Storage + Normal ResumeDatabase 3m11s KubeDB Ops-manager operator Resuming MongoDB + Normal ResumeDatabase 3m11s KubeDB Ops-manager operator Successfully Resumed mongodb + Normal Successful 3m11s KubeDB Ops-manager operator Successfully Scaled Database +``` + +Now, we are going to verify from the `StatefulSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check, + +```bash +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"2Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-2067c63d-f982-4b66-a008-5e9c3ff6218a 2Gi RWO Delete Bound demo/datadir-mg-replicaset-0 standard 19m +pvc-9db1aeb0-f1af-4555-93a3-0ca754327751 2Gi RWO Delete Bound demo/datadir-mg-replicaset-2 standard 18m +pvc-d38f42a8-50d4-4fa9-82ba-69fc7a464ff4 2Gi RWO Delete Bound demo/datadir-mg-replicaset-1 standard 19m +``` + +The above output verifies that we have successfully expanded the volume of the MongoDB database. 
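+ +As a final sanity check, you can also confirm the expanded filesystem from inside a database pod. A minimal sketch, assuming the data volume is mounted at MongoDB's default `/data/db` path: + +```bash +# Show the mounted capacity of the data directory inside the first replica; +# after the expansion it should report roughly 2Gi. +kubectl exec -n demo mg-replicaset-0 -- df -h /data/db +``` 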
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete mongodbopsrequest -n demo mops-volume-exp-replicaset +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mongodb/volume-expansion/sharding.md b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/sharding.md new file mode 100644 index 0000000000..288807cc86 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/sharding.md @@ -0,0 +1,291 @@ +--- +title: MongoDB Sharded Database Volume Expansion +menu: + docs_v2024.1.31: + identifier: mg-volume-expansion-shard + name: Sharding + parent: mg-volume-expansion + weight: 40 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MongoDB Sharded Database Volume Expansion + +This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a MongoDB Sharded Database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb) + - [Sharding](/docs/v2024.1.31/guides/mongodb/clustering/sharding) + - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest) + - [Volume Expansion Overview](/docs/v2024.1.31/guides/mongodb/volume-expansion/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Expand Volume of Sharded Database + +Here, we are going to deploy a `MongoDB` Sharded Database using a version supported by the `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to expand the volume of shard nodes and config servers. + +### Prepare MongoDB Sharded Database + +At first, verify that your cluster has a storage class that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) kubernetes.io/gce-pd Delete Immediate true 2m49s +``` + +We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion, and we can use it. + +Now, we are going to deploy a `MongoDB` sharded database with version `4.4.26`. + +### Deploy MongoDB + +In this section, we are going to deploy a MongoDB Sharded database with 1GB volume for each of the shard nodes and config servers. Then, in the next sections we will expand the volume of shard nodes and config servers to 2GB using `MongoDBOpsRequest` CRD. 
+Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-sharding
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 2
+      shards: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mg-shard.yaml
+mongodb.kubedb.com/mg-sharding created
+```
+
+Now, wait until `mg-sharding` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mg-sharding   4.4.26    Ready    2m45s
+```
+
+Let's check the volume size from the statefulsets, and from the persistent volumes of the shards and config servers,
+
+```bash
+$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
+pvc-194f6e9c-b9a7-4d00-a125-a6c01273468c   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-0      standard                68s
+pvc-390b6343-f97e-4761-a516-e3c9607c55d6   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-1      standard                2m26s
+pvc-51ab98e8-d468-4a74-b176-3853dada41c2   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-1   standard                2m33s
+pvc-5209095e-561f-4601-a0bf-0c705234da5b   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-0      standard                3m6s
+pvc-5be2ab13-e12c-4053-8680-7c5588dff8eb   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-1      standard                2m32s
+pvc-7e11502d-13e0-4a84-9ebe-29bc2b15f026   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-1      standard                44s
+pvc-7e20906c-462d-47b7-b4cf-ba0ef69ba26e   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-0      standard                3m7s
+pvc-87634059-0f95-4595-ae8a-121944961103   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-0   standard                3m7s
+```
+
+You can see the statefulsets have 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion of Shard and ConfigServer Nodes
+
+Here, we are going to expand the volume of the shard and configServer nodes of the database.
+
+#### Create MongoDBOpsRequest
+
+In order to expand the volume of the shard nodes of the database, we have to create a `MongoDBOpsRequest` CR with our desired volume size. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-volume-exp-shard
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: mg-sharding
+  volumeExpansion:
+    shard: 2Gi
+    configServer: 2Gi
+```
+
+Here,
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.shard` specifies the desired volume size of the shard nodes.
+- `spec.volumeExpansion.configServer` specifies the desired volume size of the configServer nodes.
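+
+As the note below points out, you don't have to expand every component at once. For illustration, a minimal variant (hypothetical name `mops-volume-exp-shard-only`) that expands only the shard volumes would simply omit `configServer`:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-volume-exp-shard-only
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: mg-sharding
+  volumeExpansion:
+    shard: 2Gi
+```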
+
+> **Note:** If you don't want to expand the volume of all the components together, you can specify only the components (shard and configServer) that you want to expand.
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mops-volume-exp-shard.yaml
+mongodbopsrequest.ops.kubedb.com/mops-volume-exp-shard created
+```
+
+#### Verify MongoDB shard volumes expanded successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the volume size of the `MongoDB` object and the related `StatefulSets` and `Persistent Volumes`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch the `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+
+NAME                    TYPE              STATUS       AGE
+mops-volume-exp-shard   VolumeExpansion   Successful   3m49s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-volume-exp-shard
+Name:         mops-volume-exp-shard
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2020-09-30T04:24:37Z
+  Generation:          1
+  Resource Version:    140791
+  Self Link:           /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-volume-exp-shard
+  UID:                 fc23a0a2-3a48-4b76-95c5-121f3d56df78
+Spec:
+  Database Ref:
+    Name:  mg-sharding
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Config Server:  2Gi
+    Shard:          2Gi
+Status:
+  Conditions:
+    Last Transition Time:  2020-09-30T04:25:48Z
+    Message:               MongoDB ops request is expanding volume of database
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2020-09-30T04:26:58Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                ConfigServerVolumeExpansion
+    Status:                True
+    Type:                  ConfigServerVolumeExpansion
+    Last Transition Time:  2020-09-30T04:29:28Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                ShardVolumeExpansion
+    Status:                True
+    Type:                  ShardVolumeExpansion
+    Last Transition Time:  2020-09-30T04:29:33Z
+    Message:               Successfully Resumed mongodb: mg-sharding
+    Observed Generation:   1
+    Reason:                ResumeDatabase
+    Status:                True
+    Type:                  ResumeDatabase
+    Last Transition Time:  2020-09-30T04:29:33Z
+    Message:               Successfully Expanded Volume
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                       Age    From                         Message
+  ----    ------                       ----   ----                         -------
+  Normal  ConfigServerVolumeExpansion  3m25s  KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal  ShardVolumeExpansion         55s    KubeDB Ops-manager operator  Successfully Expanded Volume
+  Normal  ResumeDatabase               50s    KubeDB Ops-manager operator  Resuming MongoDB
+  Normal  ResumeDatabase               50s    KubeDB Ops-manager operator  Successfully Resumed mongodb
+  Normal  Successful                   50s    KubeDB Ops-manager operator  Successfully Expanded Volume
+```
+
+Now, we are going to verify from the `StatefulSet`, and the `Persistent Volumes`, whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
+pvc-194f6e9c-b9a7-4d00-a125-a6c01273468c   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-0      standard                3m38s
+pvc-390b6343-f97e-4761-a516-e3c9607c55d6   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-1      standard                4m56s
+pvc-51ab98e8-d468-4a74-b176-3853dada41c2   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-1   standard                5m3s
+pvc-5209095e-561f-4601-a0bf-0c705234da5b   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-0      standard                5m36s
+pvc-5be2ab13-e12c-4053-8680-7c5588dff8eb   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-1      standard                5m2s
+pvc-7e11502d-13e0-4a84-9ebe-29bc2b15f026   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-1      standard                3m14s
+pvc-7e20906c-462d-47b7-b4cf-ba0ef69ba26e   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-0      standard                5m37s
+pvc-87634059-0f95-4595-ae8a-121944961103   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-0   standard                5m37s
+```
+
+The above output verifies that we have successfully expanded the volume of the shard nodes and configServer nodes of the MongoDB database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete mongodbopsrequest -n demo mops-volume-exp-shard
+```
diff --git a/content/docs/v2024.1.31/guides/mongodb/volume-expansion/standalone.md b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/standalone.md
new file mode 100644
index 0000000000..bb4745e6de
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mongodb/volume-expansion/standalone.md
@@ -0,0 +1,248 @@
+---
+title: MongoDB Standalone Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: mg-volume-expansion-standalone
+    name: Standalone
+    parent: mg-volume-expansion
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MongoDB Standalone Volume Expansion
+
+This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a MongoDB standalone database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MongoDB](/docs/v2024.1.31/guides/mongodb/concepts/mongodb)
+  - [MongoDBOpsRequest](/docs/v2024.1.31/guides/mongodb/concepts/opsrequest)
+  - [Volume Expansion Overview](/docs/v2024.1.31/guides/mongodb/volume-expansion/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mongodb](/docs/v2024.1.31/examples/mongodb) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Expand Volume of Standalone Database
+
+Here, we are going to deploy a `MongoDB` standalone using a supported version by `KubeDB` operator. Then we are going to apply `MongoDBOpsRequest` to expand its volume.
+
+### Prepare MongoDB Standalone Database
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
+```
+
+We can see from the output that the `standard` storage class has its `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
+
+Now, we are going to deploy a `MongoDB` standalone database with version `4.4.26`.
+
+#### Deploy MongoDB standalone
+
+In this section, we are going to deploy a MongoDB standalone database with 1GB volume. Then, in the next section we will expand its volume to 2GB using `MongoDBOpsRequest` CRD. Below is the YAML of the `MongoDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MongoDB
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `MongoDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mg-standalone.yaml
+mongodb.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset, and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
+pvc-d0b07657-a012-4384-862a-b4e437774287   1Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   standard                49s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of the persistent volume is also 1GB.
+
+We are now ready to apply the `MongoDBOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the standalone database.
+
+#### Create MongoDBOpsRequest
+
+In order to expand the volume of the database, we have to create a `MongoDBOpsRequest` CR with our desired volume size. Below is the YAML of the `MongoDBOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MongoDBOpsRequest
+metadata:
+  name: mops-volume-exp-standalone
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: mg-standalone
+  volumeExpansion:
+    standalone: 2Gi
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-standalone` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.standalone` specifies the desired volume size.
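+
+Under the hood, the Ops-manager operator resizes the `PersistentVolumeClaim` that backs the pod. A rough manual equivalent, shown here only to illustrate the mechanism (the ops request automates this and safely pauses and resumes the database), would be:
+
+```bash
+# Illustration only: patch the PVC request directly.
+# This works only because the standard StorageClass allows volume expansion.
+kubectl patch pvc -n demo datadir-mg-standalone-0 --type merge \
+  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
+```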
+
+Let's create the `MongoDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/volume-expansion/mops-volume-exp-standalone.yaml
+mongodbopsrequest.ops.kubedb.com/mops-volume-exp-standalone created
+```
+
+#### Verify MongoDB Standalone volume expanded successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the volume size of the `MongoDB` object and the related `StatefulSets` and `Persistent Volume`.
+
+Let's wait for `MongoDBOpsRequest` to be `Successful`. Run the following command to watch the `MongoDBOpsRequest` CR,
+
+```bash
+$ watch kubectl get mongodbopsrequest -n demo
+Every 2.0s: kubectl get mongodbopsrequest -n demo
+
+NAME                         TYPE              STATUS       AGE
+mops-volume-exp-standalone   VolumeExpansion   Successful   75s
+```
+
+We can see from the above output that the `MongoDBOpsRequest` has succeeded. If we describe the `MongoDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mongodbopsrequest -n demo mops-volume-exp-standalone
+Name:         mops-volume-exp-standalone
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MongoDBOpsRequest
+Metadata:
+  Creation Timestamp:  2020-08-25T17:48:33Z
+  Finalizers:
+    kubedb.com
+  Generation:        1
+  Resource Version:  72899
+  Self Link:         /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mongodbopsrequests/mops-volume-exp-standalone
+  UID:               007fe35a-25f6-45e7-9e85-9add488b2622
+Spec:
+  Database Ref:
+    Name:  mg-standalone
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Standalone:  2Gi
+Status:
+  Conditions:
+    Last Transition Time:  2020-08-25T17:48:33Z
+    Message:               MongoDB ops request is being processed
+    Observed Generation:   1
+    Reason:                Scaling
+    Status:                True
+    Type:                  Scaling
+    Last Transition Time:  2020-08-25T17:50:03Z
+    Message:               Successfully updated Storage
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2020-08-25T17:50:03Z
+    Message:               Successfully Resumed mongodb: mg-standalone
+    Observed Generation:   1
+    Reason:                ResumeDatabase
+    Status:                True
+    Type:                  ResumeDatabase
+    Last Transition Time:  2020-08-25T17:50:03Z
+    Message:               Successfully completed the modification process
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason           Age  From                         Message
+  ----    ------           ---- ----                         -------
+  Normal  VolumeExpansion  29s  KubeDB Ops-manager operator  Successfully Updated Storage
+  Normal  ResumeDatabase   29s  KubeDB Ops-manager operator  Resuming MongoDB
+  Normal  ResumeDatabase   29s  KubeDB Ops-manager operator  Successfully Resumed mongodb
+  Normal  Successful       29s  KubeDB Ops-manager operator  Successfully Scaled Database
+```
+
+Now, we are going to verify from the `StatefulSet`, and the `Persistent Volume`, whether the volume of the standalone database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
+pvc-d0b07657-a012-4384-862a-b4e437774287   2Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   standard                4m29s
+```
+
+The above output verifies that we have successfully expanded the volume of the MongoDB standalone database.
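+
+If you want to double-check from inside the pod, the filesystem backing the data directory should now report the expanded size (the MongoDB image mounts the data volume at `/data/db`):
+
+```bash
+# The Size column for this mount should now be ~2G
+kubectl exec -n demo mg-standalone-0 -- df -h /data/db
+```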
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-standalone
+kubectl delete mongodbopsrequest -n demo mops-volume-exp-standalone
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/README.md b/content/docs/v2024.1.31/guides/mysql/README.md
new file mode 100644
index 0000000000..77a8948862
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/README.md
@@ -0,0 +1,67 @@
+---
+title: MySQL
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-readme
+    name: MySQL
+    parent: guides-mysql
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+url: /docs/v2024.1.31/guides/mysql/
+aliases:
+- /docs/v2024.1.31/guides/mysql/README/
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+## Supported MySQL Features
+
+| Features                                                                            | Availability |
+| ----------------------------------------------------------------------------------- | :----------: |
+| Group Replication                                                                    |      ✓       |
+| InnoDB Cluster                                                                       |      ✓       |
+| Semi-synchronous Cluster                                                             |      ✓       |
+| Read Replicas                                                                        |      ✓       |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) )   |      ✓       |
+| Automated Version Update                                                             |      ✓       |
+| Automated Vertical Scaling                                                           |      ✓       |
+| Automated Horizontal Scaling                                                         |      ✓       |
+| Automated Volume Expansion                                                           |      ✓       |
+| Backup/Recovery: Instant, Scheduled ( [Stash](https://stash.run/) )                  |      ✓       |
+| Initialize using Snapshot                                                            |      ✓       |
+| Initialize using Script (\*.sql, \*sql.gz and/or \*.sh)                              |      ✓       |
+| Custom Configuration                                                                 |      ✓       |
+| Using Custom Docker Image                                                            |      ✓       |
+| Builtin Prometheus Discovery                                                         |      ✓       |
+| Using Prometheus Operator                                                            |      ✓       |
+
+## Life Cycle of a MySQL Object
+

+*(figure: life cycle of a MySQL object)*

+ +## User Guide + +- [Quickstart MySQL](/docs/v2024.1.31/guides/mysql/quickstart/) with KubeDB Operator. +- [Backup & Restore](/docs/v2024.1.31/guides/mysql/backup/overview/) MySQL databases using Stash. +- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/). +- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/). +- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mysql/cli/) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mysql/_index.md b/content/docs/v2024.1.31/guides/mysql/_index.md new file mode 100644 index 0000000000..6e6f55d66c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/_index.md @@ -0,0 +1,22 @@ +--- +title: MySQL +menu: + docs_v2024.1.31: + identifier: guides-mysql + name: MySQL + parent: guides + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/_index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/_index.md new file mode 100644 index 0000000000..4cdd4de59a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/_index.md @@ -0,0 +1,22 @@ +--- +title: Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mysql-autoscaling + name: Autoscaling + parent: guides-mysql + weight: 47 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/_index.md new file mode 100644 index 0000000000..e441bd9a75 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/_index.md @@ -0,0 +1,22 @@ +--- +title: Compute Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mysql-autoscaling-compute + name: Compute Autoscaling + parent: guides-mysql-autoscaling + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/my-as-compute.yaml b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/my-as-compute.yaml new file mode 100644 index 0000000000..96799640fc --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/my-as-compute.yaml @@ -0,0 +1,24 @@ +apiVersion: 
autoscaling.kubedb.com/v1alpha1 +kind: MySQLAutoscaler +metadata: + name: my-as-compute + namespace: demo +spec: + databaseRef: + name: sample-mysql + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + mysql: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 250m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + containerControlledValues: "RequestsAndLimits" + controlledResources: ["cpu", "memory"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/sample-mysql.yaml new file mode 100644 index 0000000000..02aa3a8292 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/examples/sample-mysql.yaml @@ -0,0 +1,28 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/index.md new file mode 100644 index 0000000000..4de8cc6132 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/cluster/index.md @@ -0,0 +1,459 @@ +--- +title: MySQL Cluster Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-mysql-autoscaling-compute-cluster + name: Cluster + parent: guides-mysql-autoscaling-compute + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of a MySQL Cluster Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a MySQL replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community, Ops-Manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase) + - [MySQLAutoscaler](/docs/v2024.1.31/guides/mysql/concepts/autoscaler) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest) + - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` +## Autoscaling of Cluster Database + +Here, we are going to deploy a `MySQL` Cluster using a supported version by `KubeDB` operator. 
Then we are going to apply `MySQLAutoscaler` to set up autoscaling.
+
+#### Deploy MySQL Cluster
+
+In this section, we are going to deploy a MySQL Cluster with version `8.0.35`. Then, in the next section we will set up autoscaling for this database using `MySQLAutoscaler` CRD. Below is the YAML of the `MySQL` CR that we are going to create,
+> If you want to autoscale a MySQL `Standalone`, just remove `spec.replicas` from the yaml below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/autoscaler/compute/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+Now, wait until `sample-mysql` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+sample-mysql   8.0.35    Ready    14m
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the MySQL resources,
+```bash
+$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the MySQL.
+
+We are now ready to apply the `MySQLAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a MySQLAutoscaler Object.
+
+#### Create MySQLAutoscaler Object
+
+In order to set up compute resource autoscaling for this database cluster, we have to create a `MySQLAutoscaler` CRO with our desired configuration. Below is the YAML of the `MySQLAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MySQLAutoscaler
+metadata:
+  name: my-as-compute
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mysql
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    mysql:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 250m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      containerControlledValues: "RequestsAndLimits"
+      controlledResources: ["cpu", "memory"]
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `sample-mysql` database.
+- `spec.compute.mysql.trigger` specifies that compute autoscaling is enabled for this database.
+- `spec.compute.mysql.podLifeTimeThreshold` specifies the minimum lifetime of at least one of the pods before a vertical scaling can be initiated.
+- `spec.compute.mysql.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the autoscaler operator will skip the update.
+- `spec.compute.mysql.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.mysql.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.mysql.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.mysql.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions.apply` has two supported values: `IfReady` & `Always`.
+Use `IfReady` if you want to process the opsReq only when the database is Ready. And use `Always` if you want to process the opsReq irrespective of the database state.
+- `spec.opsRequestOptions.timeout` specifies the maximum time for each step of the opsRequest (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+
+Let's create the `MySQLAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/autoscaler/compute/cluster/examples/my-as-compute.yaml
+mysqlautoscaler.autoscaling.kubedb.com/my-as-compute created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `mysqlautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mysqlautoscaler -n demo
+NAME            AGE
+my-as-compute   5m56s
+
+$ kubectl describe mysqlautoscaler my-as-compute -n demo
+Name:         my-as-compute
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MySQLAutoscaler
+Metadata:
+  Creation Timestamp:  2022-09-16T11:26:58Z
+  Generation:          1
+  Managed Fields:
+    ...
+Spec:
+  Compute:
+    MySQL:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:                     1
+        Memory:                  1Gi
+      Min Allowed:
+        Cpu:                     250m
+        Memory:                  400Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  sample-mysql
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  3m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             10000
+        Index:              46
+        Weight:             555
+      Reference Timestamp:  2022-09-16T00:00:00Z
+      Total Weight:         2.648440345821337
+    First Sample Start:     2022-09-16T11:26:48Z
+    Last Sample Start:      2022-09-16T11:32:52Z
+    Last Update Time:       2022-09-16T11:33:02Z
+    Memory Histogram:
+      Bucket Weights:
+        Index:              1
+        Weight:             10000
+      Reference Timestamp:  2022-09-17T00:00:00Z
+      Total Weight:         1.391848625060675
+    Ref:
+      Container Name:     md-coordinator
+      Vpa Object Name:    sample-mysql
+    Total Samples Count:  19
+    Version:              v3
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             10000
+        Index:              3
+        Weight:             556
+      Reference Timestamp:  2022-09-16T00:00:00Z
+      Total Weight:         2.648440345821337
+    First Sample Start:     2022-09-16T11:26:48Z
+    Last Sample Start:      2022-09-16T11:32:52Z
+    Last Update Time:       2022-09-16T11:33:02Z
+    Memory Histogram:
+      Reference Timestamp:  2022-09-17T00:00:00Z
+    Ref:
+      Container Name:     mysql
+      Vpa Object Name:    sample-mysql
+    Total Samples Count:  19
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2022-09-16T11:27:07Z
+    Message:               Successfully created mysqlDBOpsRequest demo/myops-sample-mysql-6xc1kc
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2022-09-16T11:27:02Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  mysql
+        Lower Bound:
+          Cpu:     250m
+          Memory:  400Mi
+        Target:
+          Cpu:     250m
+          Memory:  400Mi
+        Uncapped Target:
+          Cpu:     25m
+          Memory:  262144k
+        Upper Bound:
+          Cpu:     1
+          Memory:  1Gi
+    Vpa Name:  sample-mysql
+Events:
+
+```
+So, the `mysqlautoscaler` resource is created successfully.
+
+We can verify from the above output that `status.vpas` contains the `RecommendationProvided` condition set to true. At the same time, `status.vpas.recommendation.containerRecommendations` contains the actual generated recommendation.
+
+Our autoscaler operator continuously watches the generated recommendations and creates a `mysqlopsrequest` based on them if the database pod resources need to be scaled up or down.
+
+Let's watch the `mysqlopsrequest` in the demo namespace to see if any `mysqlopsrequest` object is created. After some time you'll see that a `mysqlopsrequest` will be created based on the recommendation.
+
+```bash
+$ kubectl get mysqlopsrequest -n demo
+NAME                        TYPE              STATUS        AGE
+myops-sample-mysql-6xc1kc   VerticalScaling   Progressing   7s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get mysqlopsrequest -n demo
+NAME                        TYPE              STATUS       AGE
+myops-sample-mysql-6xc1kc   VerticalScaling   Successful   3m32s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe mysqlopsrequest -n demo myops-sample-mysql-6xc1kc
+Name:         myops-sample-mysql-6xc1kc
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MySQLOpsRequest
+Metadata:
+  Creation Timestamp:  2022-09-16T11:27:07Z
+  Generation:          1
+  Managed Fields:
+    ...
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  MySQLAutoscaler
+    Name:                  my-as-compute
+    UID:                   44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58
+  Resource Version:        846324
+  UID:                     c2b30107-c6d3-44bb-adf3-135edc5d615b
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:   sample-mysql
+  Timeout:  2m0s
+  Type:     VerticalScaling
+  Vertical Scaling:
+    MySQL:
+      Limits:
+        Cpu:     250m
+        Memory:  400Mi
+      Requests:
+        Cpu:     250m
+        Memory:  400Mi
+Status:
+  Conditions:
+    Last Transition Time:  2022-09-16T11:27:07Z
+    Message:               Controller has started to Progress the MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-09-16T11:30:42Z
+    Message:               Successfully restarted MySQL pods for MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+    Observed Generation:   1
+    Reason:                SuccessfullyRestatedStatefulSet
+    Status:                True
+    Type:                  RestartStatefulSet
+    Last Transition Time:  2022-09-16T11:30:47Z
+    Message:               Vertical scale successful for MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+    Observed Generation:   1
+    Reason:                SuccessfullyPerformedVerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2022-09-16T11:30:47Z
+    Message:               Controller has successfully scaled the MySQL demo/myops-sample-mysql-6xc1kc
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    8m48s  KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+  Normal  Starting    8m48s  KubeDB Enterprise Operator  Pausing MySQL databse: demo/sample-mysql
+  Normal  Successful  8m48s  KubeDB Enterprise Operator  Successfully paused MySQL database: demo/sample-mysql for MySQLOpsRequest: myops-sample-mysql-6xc1kc
+  Normal  Starting    8m43s  KubeDB Enterprise Operator  Restarting Pod: demo/sample-mysql-0
+  Normal  Starting    7m33s  KubeDB Enterprise Operator  Restarting Pod: demo/sample-mysql-1
+  Normal  Starting    6m23s  KubeDB Enterprise Operator  Restarting Pod: demo/sample-mysql-2
+  Normal  Successful  5m13s  KubeDB Enterprise Operator  Successfully restarted MySQL pods for MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Vertical scale successful for MySQLOpsRequest: demo/myops-sample-mysql-6xc1kc
+  Normal  Starting    5m8s   KubeDB Enterprise Operator  Resuming MySQL database: demo/sample-mysql
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/sample-mysql
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Controller has Successfully scaled the MySQL database: demo/sample-mysql
+```
+
+Now, we are going to verify from the Pod, and from the MySQL yaml, whether the resources of the database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the MySQL replicaset database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mysql -n demo sample-mysql
+kubectl delete mysqlautoscaler -n demo my-as-compute
+kubectl delete mysqlopsrequest -n demo myops-sample-mysql-6xc1kc
+kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/images/compute-autoscaling.jpg b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/images/compute-autoscaling.jpg
new file mode 100644
index 0000000000..c5ffbaa4d6
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/images/compute-autoscaling.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/index.md
new file mode 100644
index 0000000000..71b19de9af
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/compute/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: MySQL Compute Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-autoscaling-compute-overview
+    name: Overview
+    parent: guides-mysql-autoscaling-compute
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MySQL Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. cpu and memory, using the `mysqlautoscaler` crd.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase)
+  - [MySQLAutoscaler](/docs/v2024.1.31/guides/mysql/concepts/autoscaler)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `MySQL` database components. Open the image in a new tab to see the enlarged version.
+
+  Auto Scaling process of MySQL +
Fig: Auto Scaling process of MySQL
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, the user creates a `MySQL` Custom Resource Object (CRO).
+
+2. `KubeDB` Community operator watches the `MySQL` CRO.
+
+3. When the operator finds a `MySQL` CRO, it creates the required number of `StatefulSets` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the CPU & Memory resources of the `MySQL` database, the user creates a `MySQLAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `MySQLAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator utilizes a modified version of the official Kubernetes [VPA-Recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg) for different components of the database, as specified in the `mysqlautoscaler` CRO.
+It generates recommendations based on resource usage and stores them in the `status` section of the autoscaler CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `MySQLOpsRequest` CRO to scale the database to match the recommendation provided by the VPA object.
+
+8. `KubeDB Ops-Manager operator` watches the `MySQLOpsRequest` CRO.
+
+9. Lastly, the `KubeDB Ops-Manager operator` will scale the database component vertically as specified on the `MySQLOpsRequest` CRO.
+
+In the next docs, we are going to show a step by step guide on Autoscaling of MySQL database using `MySQLAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..2b1c2e7544
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/_index.md
@@ -0,0 +1,22 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-autoscaling-storage
+    name: Storage Autoscaling
+    parent: guides-mysql-autoscaling
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/my-as-storage.yaml b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/my-as-storage.yaml
new file mode 100644
index 0000000000..b09fff98da
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/my-as-storage.yaml
@@ -0,0 +1,14 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MySQLAutoscaler
+metadata:
+  name: my-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mysql
+  storage:
+    mysql:
+      trigger: "On"
+      usageThreshold: 20
+      scalingThreshold: 20
+      expansionMode: "Online"
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/sample-mysql.yaml
new file mode 100644
index 0000000000..0ae38d21a9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/examples/sample-mysql.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/index.md
new file mode 100644
index 0000000000..66ce993c78
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/cluster/index.md
@@ -0,0 +1,331 @@
+---
+title: MySQL Cluster Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-autoscaling-storage-cluster
+    name: Cluster
+    parent: guides-mysql-autoscaling-storage
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of a MySQL Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a MySQL Replicaset database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase)
+  - [MySQLAutoscaler](/docs/v2024.1.31/guides/mysql/concepts/autoscaler)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Storage Autoscaling of Cluster Database
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  79m
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   78m
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has its `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm)
+
+Now, we are going to deploy a `MySQL` replicaset using a supported version by `KubeDB` operator. Then we are going to apply `MySQLAutoscaler` to set up autoscaling.
+
+#### Deploy MySQL Cluster
+
+In this section, we are going to deploy a MySQL replicaset database with version `8.0.35`. Then, in the next section we will set up autoscaling for this database using `MySQLAutoscaler` CRD. Below is the YAML of the `MySQL` CR that we are going to create,
+
+> If you want to autoscale a MySQL `Standalone`, just remove `spec.replicas` from the yaml below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/autoscaler/storage/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+Now, wait until `sample-mysql` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+sample-mysql   8.0.35    Ready    3m46s
+```
+
+Let's check the volume size from the statefulset, and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo sample-mysql -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS          REASON   AGE
+pvc-43266d76-f280-4cca-bd78-d13660a84db9   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-2   topolvm-provisioner            57s
+pvc-4a509b05-774b-42d9-b36d-599c9056af37   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-0   topolvm-provisioner            58s
+pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-1   topolvm-provisioner            57s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MySQLAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a MySQLAutoscaler Object.
+
+#### Create MySQLAutoscaler Object
+
+In order to set up storage autoscaling for this replicaset database, we have to create a `MySQLAutoscaler` CRO with our desired configuration. Below is the YAML of the `MySQLAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MySQLAutoscaler
+metadata:
+  name: my-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mysql
+  storage:
+    mysql:
+      trigger: "On"
+      usageThreshold: 20
+      scalingThreshold: 20
+      expansionMode: "Online"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage scaling operation on the `sample-mysql` database.
+- `spec.storage.mysql.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.mysql.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `20%`, then storage autoscaling will be triggered.
+- `spec.storage.mysql.scalingThreshold` specifies the scaling threshold; storage will be scaled up by `20%` of the current amount.
+- `spec.storage.mysql.expansionMode` specifies the expansion mode of the volume expansion `MySQLOpsRequest` created by the `MySQLAutoscaler`. Since topolvm-provisioner supports online volume expansion, `expansionMode` is set to "Online" here.
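+
+To make the thresholds concrete: with the 1Gi volume above, the autoscaler triggers once usage crosses 20% of capacity (roughly 205MiB), and it then requests at least 20% more storage. The exact request you will see later in this guide (`1594884096` bytes, about 1.49Gi) can be larger than a bare 20% bump, since the recommendation also appears to account for current usage, and the provisioner may round the final capacity up (the PVs end up at 2Gi).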
+
+Let's create the `MySQLAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/autoscaler/storage/cluster/examples/my-as-storage.yaml
+mysqlautoscaler.autoscaling.kubedb.com/my-as-st created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `mysqlautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mysqlautoscaler -n demo
+NAME       AGE
+my-as-st   33s
+
+$ kubectl describe mysqlautoscaler my-as-st -n demo
+Name:         my-as-st
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         MySQLAutoscaler
+Metadata:
+  Creation Timestamp:  2022-01-14T06:08:02Z
+  Generation:          1
+  Managed Fields:
+    ...
+  Resource Version:  24009
+  UID:               4f45a3b3-fc72-4d04-b52c-a770944311f6
+Spec:
+  Database Ref:
+    Name:  sample-mysql
+  Storage:
+    Mysql:
+      Scaling Threshold:  20
+      Trigger:            On
+      Usage Threshold:    20
+Events:  <none>
+```
+
+So, the `mysqlautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling works or not.
+
+Let's exec into the database pod and fill the database volume (`/var/lib/mysql`) using the following commands:
+
+```bash
+$ kubectl exec -it -n demo sample-mysql-0 -- bash
+root@sample-mysql-0:/# df -h /var/lib/mysql
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164 1014M  357M  658M  36% /var/lib/mysql
+root@sample-mysql-0:/# dd if=/dev/zero of=/var/lib/mysql/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.340877 s, 1.5 GB/s
+root@sample-mysql-0:/# df -h /var/lib/mysql
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164 1014M  857M  158M  85% /var/lib/mysql
+```
+
+So, from the above output we can see that the storage usage is 85%, which exceeds the `usageThreshold` of 20%.
+
+Let's watch the `mysqlopsrequest` in the demo namespace to see if any `mysqlopsrequest` object is created. After some time you'll see that a `mysqlopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ kubectl get mysqlopsrequest -n demo
+NAME                       TYPE              STATUS        AGE
+mops-sample-mysql-xojkua   VolumeExpansion   Progressing   15s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get mysqlopsrequest -n demo
+NAME                       TYPE              STATUS       AGE
+mops-sample-mysql-xojkua   VolumeExpansion   Successful   97s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mysqlopsrequest -n demo mops-sample-mysql-xojkua
+Name:         mops-sample-mysql-xojkua
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=sample-mysql
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=mysqls.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MySQLOpsRequest
+Metadata:
+  Creation Timestamp:  2022-01-14T06:13:10Z
+  Generation:          1
+  Managed Fields: ...
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  MySQLAutoscaler
+    Name:                  my-as-st
+    UID:                   4f45a3b3-fc72-4d04-b52c-a770944311f6
+  Resource Version:        25557
+  UID:                     90763a49-a03f-407c-a233-fb20c4ab57d7
+Spec:
+  Database Ref:
+    Name:  sample-mysql
+  Type:    VolumeExpansion
+  Volume Expansion:
+    MySQL:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2022-01-14T06:13:10Z
+    Message:               Controller has started to Progress the MySQLOpsRequest: demo/mops-sample-mysql-xojkua
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-01-14T06:14:25Z
+    Message:               Volume Expansion performed successfully in MySQL pod for MySQLOpsRequest: demo/mops-sample-mysql-xojkua
+    Observed Generation:   1
+    Reason:                SuccessfullyVolumeExpanded
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2022-01-14T06:14:25Z
+    Message:               Controller has successfully expand the volume of MySQL demo/mops-sample-mysql-xojkua
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    2m58s  KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/mops-sample-mysql-xojkua
+  Normal  Starting    2m58s  KubeDB Enterprise Operator  Pausing MySQL databse: demo/sample-mysql
+  Normal  Successful  2m58s  KubeDB Enterprise Operator  Successfully paused MySQL database: demo/sample-mysql for MySQLOpsRequest: mops-sample-mysql-xojkua
+  Normal  Successful  103s   KubeDB Enterprise Operator  Volume Expansion performed successfully in MySQL pod for MySQLOpsRequest: demo/mops-sample-mysql-xojkua
+  Normal  Starting    103s   KubeDB Enterprise Operator  Updating MySQL storage
+  Normal  Successful  103s   KubeDB Enterprise Operator  Successfully Updated MySQL storage
+  Normal  Starting    103s   KubeDB Enterprise Operator  Resuming MySQL database: demo/sample-mysql
+  Normal  Successful  103s   KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/sample-mysql
+  Normal  Successful  103s   KubeDB Enterprise Operator  Controller has Successfully expand the volume of MySQL: demo/sample-mysql
+```
+
+Now, we are going to verify from the `StatefulSet`, and the `Persistent Volume`, whether the volume of the replicaset database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo sample-mysql -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1594884096"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS          REASON   AGE
+pvc-43266d76-f280-4cca-bd78-d13660a84db9   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-2   topolvm-provisioner            23m
+pvc-4a509b05-774b-42d9-b36d-599c9056af37   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-0   topolvm-provisioner            24m
+pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-1   topolvm-provisioner            23m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the MySQL replicaset database.
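+
+As a final spot check, you can re-run the `df` command from the earlier exec session; the filesystem mounted at `/var/lib/mysql` should now report the expanded capacity:
+
+```bash
+# The Size column should now reflect the expanded volume (~2G)
+kubectl exec -it -n demo sample-mysql-0 -- df -h /var/lib/mysql
+```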
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mysql -n demo sample-mysql
+kubectl delete mysqlautoscaler -n demo my-as-st
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/images/storage-autoscaling.jpg b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/images/storage-autoscaling.jpg
new file mode 100644
index 0000000000..dd874b7175
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/images/storage-autoscaling.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/index.md b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/index.md
new file mode 100644
index 0000000000..f6b004e3f3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/autoscaler/storage/overview/index.md
@@ -0,0 +1,66 @@
+---
+title: MySQL Storage Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: mguides-mysql-autoscaling-storage-overview
+    name: Overview
+    parent: guides-mysql-autoscaling-storage
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MySQL Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `MySQLAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase)
+  - [MySQLAutoscaler](/docs/v2024.1.31/guides/mysql/concepts/autoscaler)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the storage of `MySQL` database components. Open the image in a new tab to see the enlarged version.
+
+
+  Storage Autoscaling process of MySQL +
Fig: Storage Autoscaling process of MySQL
+
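+
+Before walking through the process, here is a minimal `MySQLAutoscaler` sketch for the storage section, using the `trigger`, `usageThreshold`, and `scalingThreshold` fields that the step-by-step guides demonstrate (the database name is an assumed example):
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MySQLAutoscaler
+metadata:
+  name: my-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-mysql       # the MySQL object to autoscale
+  storage:
+    mysql:
+      trigger: "On"          # enable storage autoscaling
+      usageThreshold: 20     # expand once usage crosses 20%
+      scalingThreshold: 20   # how much to expand the volume (in percent)
+```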
+
+The storage autoscaling process consists of the following steps:
+
+1. At first, a user creates a `MySQL` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MySQL` CR.
+
+3. When the operator finds a `MySQL` CR, it creates the required number of `StatefulSets` and the other necessary resources such as secrets and services.
+
+4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the StatefulSet configuration. This Persistent Volume will be expanded by the `KubeDB` Enterprise operator.
+
+5. Then, in order to set up storage autoscaling of the `MySQL` database, the user creates a `MySQLAutoscaler` CRO with the desired configuration.
+
+6. `KubeDB` Autoscaler operator watches the `MySQLAutoscaler` CRO.
+
+7. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if their usage exceeds the specified threshold.
+
+8. If the usage exceeds the specified threshold, the `KubeDB` Autoscaler operator creates a `MySQLOpsRequest` to expand the storage of the database.
+
+9. `KubeDB` Enterprise operator watches the `MySQLOpsRequest` CRO.
+
+10. Then the `KubeDB` Enterprise operator will expand the storage of the database component as specified in the `MySQLOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of various MySQL database components using the `MySQLAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/_index.md b/content/docs/v2024.1.31/guides/mysql/backup/_index.md
new file mode 100755
index 0000000000..f1706f973f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/_index.md
@@ -0,0 +1,22 @@
+---
+title: Backup & Restore MySQL
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-backup
+    name: Backup & Restore
+    parent: guides-mysql
+    weight: 40
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/backupblueprint.yaml
new file mode 100644
index 0000000000..9c76c8b26d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/backupblueprint.yaml
@@ -0,0 +1,17 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupBlueprint
+metadata:
+  name: mysql-backup-template
+spec:
+  # ============== Blueprint for Repository ==========================
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mysql-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
+    storageSecretName: gcs-secret
+  # ============== Blueprint for BackupConfiguration =================
+  schedule: "*/5 * * * *"
+  retentionPolicy:
+    name: 'keep-last-5'
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-2.yaml b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-2.yaml
new file mode 100644
index 0000000000..01c9ace7c0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-2.yaml
@@ -0,0 +1,20 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql-2
+  namespace: demo-2
+  annotations:
+    stash.appscode.com/backup-blueprint: mysql-backup-template
+    stash.appscode.com/schedule: "*/3 * * * *"
+spec:
+ 
version: "8.0.35" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-3.yaml b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-3.yaml new file mode 100644 index 0000000000..ce1b4d04b0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql-3.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: mysql-backup-template + params.stash.appscode.com/args: --databases mysql +spec: + version: "8.0.35" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql.yaml new file mode 100644 index 0000000000..65f0985ed7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: mysql-backup-template +spec: + version: "8.0.35" + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-2.png b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-2.png new file mode 100644 index 0000000000..fb54089365 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-2.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-3.png b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-3.png new file mode 100644 index 0000000000..e77657cfed Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql-3.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql.png b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql.png new file mode 100644 index 0000000000..687923605d Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/images/sample-mysql.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/index.md new file mode 100644 index 0000000000..21d6dcb425 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/auto-backup/index.md @@ -0,0 +1,701 @@ +--- +title: MySQL Auto-Backup | Stash +description: Backup MySQL using Stash Auto-Backup +menu: + docs_v2024.1.31: + identifier: guides-mysql-backup-auto-backup + name: Auto-Backup + parent: guides-mysql-backup + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + 
webhook-server: v0.17.0
+---
+
+# Backup MySQL using Stash Auto-Backup
+
+Stash can be configured to automatically back up any MySQL database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that database owners can easily back up their databases with just a few annotations.
+
+In this tutorial, we are going to show how you can configure a backup blueprint for MySQL databases in your cluster and back them up with a few annotations.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- If you are not familiar with how Stash backs up and restores MySQL databases, please check the following guide [here](/docs/v2024.1.31/guides/mysql/backup/overview/).
+- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/).
+- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/).
+
+You should be familiar with the following `Stash` concepts:
+
+- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [Repository](https://stash.run/docs/latest/concepts/crds/repository/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+
+In this tutorial, we are going to back up three different MySQL databases in three different namespaces named `demo`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done so already.
+
+```bash
+❯ kubectl create ns demo
+namespace/demo created
+
+❯ kubectl create ns demo-2
+namespace/demo-2 created
+
+❯ kubectl create ns demo-3
+namespace/demo-3 created
+```
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the MySQL addons using the following command.
+
+```bash
+❯ kubectl get tasks.stash.appscode.com | grep mysql
+mysql-backup-5.7.25    2d2h
+mysql-backup-8.0.14    2d2h
+mysql-backup-8.0.21    2d2h
+mysql-backup-8.0.3     2d2h
+mysql-restore-5.7.25   2d2h
+mysql-restore-8.0.14   2d2h
+mysql-restore-8.0.21   2d2h
+mysql-restore-8.0.3    2d2h
+```
+
+## Prepare Backup Blueprint
+
+To back up a MySQL database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`.
+
+The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to back up any MySQL database in any namespace just by creating the storage `Secret` in that namespace and adding a few annotations to your MySQL CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to back up the database.
+
+Below is the `BackupBlueprint` object that we are going to use in this tutorial,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupBlueprint
+metadata:
+  name: mysql-backup-template
+spec:
+  # ============== Blueprint for Repository ==========================
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mysql-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
+    storageSecretName: gcs-secret
+  # ============== Blueprint for BackupConfiguration =================
+  schedule: "*/5 * * * *"
+  retentionPolicy:
+    name: 'keep-last-5'
+    keepLast: 5
+    prune: true
+```
+
+Here, we are using a GCS bucket as our backend. We are providing `gcs-secret` in the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint.
+
+Notice the `prefix` field of the `backend` section. We have used some variables in the form `${VARIABLE_NAME}`. Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance.
+
+Let's create the `BackupBlueprint` we have shown above,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/auto-backup/examples/backupblueprint.yaml
+backupblueprint.stash.appscode.com/mysql-backup-template created
+```
+
+Now, we are ready to back up our MySQL databases using a few annotations. You can check the available auto-backup annotations for a database [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database).
+
+## Auto-backup with default configurations
+
+In this section, we are going to back up a MySQL database in the `demo` namespace. We are going to use the default configurations specified in the `BackupBlueprint`.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ echo -n 'changeit' > RESTIC_PASSWORD
+❯ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+❯ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a MySQL CRO in the `demo` namespace. Below is the YAML of the MySQL object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+  annotations:
+    stash.appscode.com/backup-blueprint: mysql-backup-template
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier through the `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`.
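+
+Note that you are not limited to setting the annotation at creation time. For a database that already exists, a sketch using standard `kubectl annotate` (the resource names match this tutorial) achieves the same effect:
+
+```bash
+# Point an existing MySQL object at the blueprint; Stash picks up the annotation
+❯ kubectl annotate mysql sample-mysql -n demo \
+    stash.appscode.com/backup-blueprint=mysql-backup-template
+```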
+ +Let's create the above MySQL CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/auto-backup/examples/sample-mysql.yaml +mysql.kubedb.com/sample-mysql created +``` + +### Verify Auto-backup configured + +In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for our MySQL database we have just deployed or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our MySQL or not. + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-mysql 10s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo app-sample-mysql -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + creationTimestamp: "2022-06-30T05:45:43Z" + finalizers: + - stash + generation: 1 + name: app-sample-mysql + namespace: demo + resourceVersion: "363862" + uid: 23781855-6cd9-4ef8-84d4-c6360a88bbef +spec: + backend: + gcs: + bucket: stash-testing + prefix: mysql-backup/demo/mysql/sample-mysql + storageSecretName: gcs-secret + +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our MySQL in `demo` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo +NAMESPACE NAME TASK SCHEDULE PAUSED PHASE AGE +demo app-sample-mysql */5 * * * * Ready 3m56s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo app-sample-mysql -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + creationTimestamp: "2022-06-30T05:45:43Z" + finalizers: + - stash.appscode.com + generation: 1 + name: app-sample-mysql + namespace: demo + ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mysql + uid: 02bacaf0-9f3c-4b48-84d4-305a7d854eb2 + resourceVersion: "363877" + uid: d101e8fc-4507-42cc-93f7-782f29d8898d +spec: + driver: Restic + repository: + name: app-sample-mysql + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql + task: {} + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Repository demo/app-sample-mysql exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backend Secret demo/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mysql + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `target` section. 
Stash has automatically added the MySQL as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-mysql-1643879707 BackupConfiguration app-sample-mysql Running 40s +``` + +Once the backup has been completed successfully, you should see the backed up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
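+
+Besides checking the bucket, you can verify the backup from inside the cluster by listing the snapshots registered under the `Repository` (the `snapshots` resource is served by the Stash operator, so the exact output will vary):
+
+```bash
+# Every successful backup run shows up as a snapshot
+❯ kubectl get snapshots -n demo
+```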
+
+## Auto-backup with a custom schedule
+
+In this section, we are going to back up a MySQL database in the `demo-2` namespace. This time, we are going to override the default schedule used in the `BackupBlueprint`.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo-2` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ kubectl create secret generic -n demo-2 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a MySQL CRO in the `demo-2` namespace. Below is the YAML of the MySQL object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql-2
+  namespace: demo-2
+  annotations:
+    stash.appscode.com/backup-blueprint: mysql-backup-template
+    stash.appscode.com/schedule: "*/3 * * * *"
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. This time, we have passed a schedule via the `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation.
+
+Let's create the above MySQL CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/auto-backup/examples/sample-mysql-2.yaml
+mysql.kubedb.com/sample-mysql-2 created
+```
+
+### Verify Auto-backup configured
+
+Now, let's verify whether the auto-backup has been configured properly or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our MySQL or not.
+
+```bash
+❯ kubectl get repository -n demo-2
+NAME                 INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-mysql-2                                                                4s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo-2 app-sample-mysql-2 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  creationTimestamp: "2022-06-30T05:45:43Z"
+  finalizers:
+  - stash
+  generation: 1
+  name: app-sample-mysql-2
+  namespace: demo-2
+  resourceVersion: "365836"
+  uid: f37e737c-c5f1-4620-9c22-1d7b14127eab
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mysql-backup/demo-2/mysql/sample-mysql-2
+      storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this new database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our MySQL in the `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd using the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo-2
+NAMESPACE   NAME                 TASK   SCHEDULE      PAUSED   PHASE   AGE
+demo-2      app-sample-mysql-2          */3 * * * *            Ready   113s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+ +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-sample-mysql-2 -o yaml + +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + creationTimestamp: "2022-06-30T05:45:43Z" + finalizers: + - stash.appscode.com + generation: 1 + name: app-sample-mysql-2 + namespace: demo-2 + ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mysql-2 + uid: 478d802c-585b-408b-9fbe-b2f90d55b26e + resourceVersion: "366551" + uid: de8448a1-0e8c-41b5-a1c4-07239ae0fef2 +spec: + driver: Restic + repository: + name: app-sample-mysql-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql-2 + task: {} + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Repository demo-2/app-sample-mysql-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mysql-2 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new MySQL as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAMESPACE NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +demo-2 app-sample-mysql-2-1643880964 BackupConfiguration app-sample-mysql-2 Succeeded 35s 108s + +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
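+
+Since Stash triggers backups through a CronJob, you can also confirm that the custom schedule was applied by listing the CronJobs in the `demo-2` namespace (the CronJob name is generated by Stash, so it may differ in your cluster):
+
+```bash
+# The SCHEDULE column should show the */3 * * * * value passed via the annotation
+❯ kubectl get cronjob -n demo-2
+```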
+
+## Auto-backup with custom parameters
+
+In this section, we are going to back up a MySQL database in the `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in the `demo-3` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ kubectl create secret generic -n demo-3 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a MySQL CRO in the `demo-3` namespace. Below is the YAML of the MySQL object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql-3
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: mysql-backup-template
+    params.stash.appscode.com/args: --databases mysql
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
+```
+
+Notice the `annotations` section. This time, we have passed an argument via the `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation.
+
+Let's create the above MySQL CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/auto-backup/examples/sample-mysql-3.yaml
+mysql.kubedb.com/sample-mysql-3 created
+```
+
+### Verify Auto-backup configured
+
+Now, let's verify whether the auto-backup resources have been created or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our MySQL or not.
+
+```bash
+❯ kubectl get repository -n demo-3
+NAME                 INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-mysql-3                      5s                                        8s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo-3 app-sample-mysql-3 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  creationTimestamp: "2022-06-30T05:45:43Z"
+  finalizers:
+  - stash
+  generation: 1
+  name: app-sample-mysql-3
+  namespace: demo-3
+  resourceVersion: "371009"
+  uid: 244f30d0-cc1e-4d8a-8b76-fcca702783d6
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: mysql-backup/demo-3/mysql/sample-mysql-3
+      storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this new database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our MySQL in the `demo-3` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd using the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo-3
+NAMESPACE   NAME                 TASK   SCHEDULE      PAUSED   PHASE   AGE
+demo-3      app-sample-mysql-3          */5 * * * *            Ready   107s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+ +```yaml +❯ kubectl get backupconfiguration -n demo-3 app-sample-mysql-3 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + creationTimestamp: "2022-06-30T05:45:43Z" + finalizers: + - stash.appscode.com + generation: 1 + name: app-sample-mysql-3 + namespace: demo-3 + ownerReferences: + - apiVersion: appcatalog.appscode.com/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: AppBinding + name: sample-mysql-3 + uid: 3a0682ef-62a5-4acf-adee-fc48f80a0ef7 + resourceVersion: "371026" + uid: 444901df-64de-44e8-b592-d4b26dfe00de +spec: + driver: Restic + repository: + name: app-sample-mysql-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql-3 + task: + params: + - name: args + value: --databases mysql + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Repository demo-3/app-sample-mysql-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-mysql-3 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-06-30T05:45:43Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new MySQL as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAMESPACE NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +demo-3 app-sample-mysql-3-1643883304 BackupConfiguration app-sample-mysql-3 Succeeded 41s 78s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
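+
+To double-check that the annotation's argument actually reached the `BackupConfiguration`, you can print just the task parameters with a JSONPath query against the fields shown above:
+
+```bash
+# Should print the args parameter, e.g. [{"name":"args","value":"--databases mysql"}]
+❯ kubectl get backupconfiguration -n demo-3 app-sample-mysql-3 \
+    -o jsonpath='{.spec.task.params}'
+```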
+
+## Cleanup
+
+To clean up the resources created by this tutorial, run the following commands,
+
+```bash
+❯ kubectl delete -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/auto-backup/examples/
+backupblueprint.stash.appscode.com "mysql-backup-template" deleted
+mysql.kubedb.com "sample-mysql-2" deleted
+mysql.kubedb.com "sample-mysql-3" deleted
+mysql.kubedb.com "sample-mysql" deleted
+
+❯ kubectl delete repository -n demo --all
+repository.stash.appscode.com "app-sample-mysql" deleted
+❯ kubectl delete repository -n demo-2 --all
+repository.stash.appscode.com "app-sample-mysql-2" deleted
+❯ kubectl delete repository -n demo-3 --all
+repository.stash.appscode.com "app-sample-mysql-3" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/multi-retention-policy.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/multi-retention-policy.yaml
new file mode 100644
index 0000000000..d9f3fb2ca8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/multi-retention-policy.yaml
@@ -0,0 +1,22 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: sample-mysql-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/passing-args.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/passing-args.yaml
new file mode 100644
index 0000000000..f4282d498e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/passing-args.yaml
@@ -0,0 +1,22 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  task:
+    params:
+    - name: args
+      value: --databases testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/resource-limit.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/resource-limit.yaml
new file mode 100644
index 0000000000..73769e9b1c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/resource-limit.yaml
@@ -0,0 +1,27 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/specific-user.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/specific-user.yaml
new file mode 100644
index 0000000000..f10f91fa8f
--- /dev/null
+++ 
b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/backup/specific-user.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-mysql-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/repository.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/repository.yaml new file mode 100644 index 0000000000..8a6aaab13b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/customizing + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/passing-args.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/passing-args.yaml new file mode 100644 index 0000000000..2daac075f3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/passing-args.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mysql-restore + namespace: demo +spec: + task: + params: + - name: args + value: --one-database testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/resource-limit.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/resource-limit.yaml new file mode 100644 index 0000000000..4cb9029982 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mysql-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] + + diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-snapshot.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-snapshot.yaml new file mode 100644 index 0000000000..749cac7c1f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-snapshot.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-mysql-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-mysql + rules: + - snapshots: [4bc21d6f] diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-user.yaml 
b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-user.yaml
new file mode 100644
index 0000000000..1544467c94
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/restore/specific-user.yaml
@@ -0,0 +1,21 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mysql-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+  - snapshots: [latest]
+
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/sample-mysql.yaml
new file mode 100644
index 0000000000..5ce13bddaa
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/examples/sample-mysql.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
+
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/customization/index.md b/content/docs/v2024.1.31/guides/mysql/backup/customization/index.md
new file mode 100644
index 0000000000..b0ee46e60e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/customization/index.md
@@ -0,0 +1,284 @@
+---
+title: MySQL Backup Customization | Stash
+description: Customizing MySQL Backup and Restore process with Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-backup-customization
+    name: Customizing Backup & Restore Process
+    parent: guides-mysql-backup
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Customizing Backup and Restore Process
+
+Stash provides rich customization support for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations.
+
+## Customizing Backup Process
+
+In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup process as a specific user, using multiple retention policies, etc.
+
+### Passing arguments to the backup process
+
+The Stash MySQL addon uses [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) for backup. You can pass arguments to `mysqldump` through the `args` param under the `task.params` section.
+
+The example below shows how you can pass `--databases testdb` to back up only the specific MySQL database named `testdb`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    params:
+    - name: args
+      value: --databases testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+> **WARNING**: Make sure that you have the specific database created before taking backup. In this case, the database `testdb` should exist before the backup job starts.
+
+### Running backup job as a specific user
+
+If your cluster requires running the backup job as a specific user, you can provide `securityContext` under the `runtimeSettings.pod` section. The example below shows how you can run the backup job as the root user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Specifying Memory/CPU limit/request for the backup job
+
+If you want to specify the Memory/CPU limit/request for your backup job, you can specify the `resources` field under the `runtimeSettings.container` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Using multiple retention policies
+
+You can also specify multiple retention policies for your backed up data. For example, you may want to keep a few daily snapshots, a few weekly snapshots, a few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: sample-mysql-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
+```
+
+To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy).
+
+## Customizing Restore Process
+
+Stash also uses `mysql` during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run the restore job as a specific user, etc.
+
+### Passing arguments to the restore process
+
+Similar to the backup process, you can pass arguments to the restore process through the `args` param under the `task.params` section. The example below will restore data from the database `testdb` only.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mysql-restore
+  namespace: demo
+spec:
+  task:
+    params:
+    - name: args
+      value: --one-database testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  rules:
+  - snapshots: [latest]
+```
+
+### Restore specific snapshot
+
+You can also restore a specific snapshot. At first, list the available snapshots as below,
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                ID         REPOSITORY   HOSTNAME   CREATED AT
+gcs-repo-4bc21d6f   4bc21d6f   gcs-repo     host-0     2021-02-12T14:54:27Z
+gcs-repo-f0ac7cbd   f0ac7cbd   gcs-repo     host-0     2021-02-12T14:56:26Z
+gcs-repo-9210ebb6   9210ebb6   gcs-repo     host-0     2021-02-12T14:58:27Z
+gcs-repo-0aff8890   0aff8890   gcs-repo     host-0     2021-02-12T15:00:28Z
+```
+
+>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot).
+
+The example below shows how you can pass a specific snapshot id through the `snapshots` field of the `rules` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mysql-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  rules:
+  - snapshots: [4bc21d6f]
+```
+
+>Please, do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore.
+
+### Running restore job as a specific user
+
+You can provide `securityContext` under the `runtimeSettings.pod` section to run the restore job as a specific user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mysql-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+  - snapshots: [latest]
+```
+
+### Specifying Memory/CPU limit/request for the restore job
+
+Similar to the backup process, you can also provide the `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-mysql-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  rules:
+  - snapshots: [latest]
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/overview/images/backup_overview.svg b/content/docs/v2024.1.31/guides/mysql/backup/overview/images/backup_overview.svg
new file mode 100644
index 0000000000..dc71cd59e5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/overview/images/backup_overview.svg
@@ -0,0 +1,997 @@
+[SVG source of the "MySQL Backup Overview" diagram omitted]
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/overview/images/restore_overview.svg b/content/docs/v2024.1.31/guides/mysql/backup/overview/images/restore_overview.svg
new file mode 100644
index 0000000000..de6898c060
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/overview/images/restore_overview.svg
@@ -0,0 +1,867 @@
+[SVG source of the "MySQL Restore Overview" diagram omitted]
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/overview/index.md b/content/docs/v2024.1.31/guides/mysql/backup/overview/index.md
new file mode 100644
index 0000000000..c6bee0d558
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/overview/index.md
@@ -0,0 +1,99 @@
+---
+title: Backup & Restore MySQL Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-backup-overview
+    name: Overview
+    parent: guides-mysql-backup
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}}
+
+# MySQL Backup & Restore Overview
+
+KubeDB uses [Stash](https://stash.run) to back up and restore databases. Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely back up stateful applications to any cloud or on-prem storage backend (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC, etc.).
+
+
+  KubeDB + Stash +
Fig: Backup KubeDB Databases Using Stash
+
+
+## How Backup Works
+
+The following diagram shows how Stash takes a backup of a MySQL database. Open the image in a new tab to see the enlarged version.
+
+
+  MySQL Backup Overview +
Fig: MySQL Backup Overview
+
+
+The backup process consists of the following steps:
+
+1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored.
+
+2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend.
+
+3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/mysql/concepts/appbinding/) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to back up the database.
+
+4. The Stash operator watches for `BackupConfiguration` crds.
+
+5. Once the Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in the `BackupConfiguration` object to trigger backups periodically.
+
+6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd.
+
+7. The Stash operator also watches for `BackupSession` crds.
+
+8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition for the backup.
+
+9. Then, it creates the Job to back up the targeted database.
+
+10. The backup Job reads the necessary information to connect with the database from the `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and Storage Secret respectively.
+
+11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash pipes the output of the dump command to the uploading process. Hence, the backup Job does not require a large volume to hold the entire dump output.
+
+12. Finally, when the backup is complete, the Job sends Prometheus metrics to the Pushgateway running inside the Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure.
+
+## How Restore Process Works
+
+The following diagram shows how Stash restores backed up data into a MySQL database. Open the image in a new tab to see the enlarged version.
+
+
+  Database Restore Overview +
Fig: MySQL Restore Process Overview
+
+
+The restore process consists of the following steps:
+
+1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target.
+
+2. The Stash operator watches for `RestoreSession` objects.
+
+3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore.
+
+4. Then, it creates the Job to restore the target.
+
+5. The Job reads the necessary information to connect with the database from the respective `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and Storage Secret respectively.
+
+6. Then, the job downloads the backed up data from the backend and injects it into the desired database. Stash pipes the downloaded data to the respective database tool to inject into the database. Hence, the restore job does not require a large volume to hold the entire backup data.
+
+7. Finally, when the restore process is complete, the Job sends Prometheus metrics to the Pushgateway and updates the `RestoreSession` status to reflect restore completion.
+
+## Next Steps
+
+- Back up a standalone MySQL server using Stash by following the guides from [here](/docs/v2024.1.31/guides/mysql/backup/standalone/).
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/backupconfiguration.yaml
new file mode 100644
index 0000000000..2f4874a516
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/backupconfiguration.yaml
@@ -0,0 +1,18 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/repository.yaml b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/repository.yaml
new file mode 100644
index 0000000000..f08f305706
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/repository.yaml
@@ -0,0 +1,11 @@
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: /demo/mysql/sample-mysql
+    storageSecretName: gcs-secret
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restored-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restored-mysql.yaml
new file mode 100644
index 0000000000..dec50d8982
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restored-mysql.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: restored-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restoresession.yaml 
b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restoresession.yaml
new file mode 100644
index 0000000000..1612975c7f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/restoresession.yaml
@@ -0,0 +1,15 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: restore-sample-mysql
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mysql
+  rules:
+  - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/sample-mysql.yaml
new file mode 100644
index 0000000000..275b8b2d70
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/examples/sample-mysql.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/images/sample-mysql-backup.png b/content/docs/v2024.1.31/guides/mysql/backup/standalone/images/sample-mysql-backup.png
new file mode 100644
index 0000000000..242b9dc718
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/backup/standalone/images/sample-mysql-backup.png differ
diff --git a/content/docs/v2024.1.31/guides/mysql/backup/standalone/index.md b/content/docs/v2024.1.31/guides/mysql/backup/standalone/index.md
new file mode 100644
index 0000000000..34dfa7f3ae
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/backup/standalone/index.md
@@ -0,0 +1,638 @@
+---
+title: Backup & Restore MySQL | Stash
+description: Backup standalone MySQL database using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-backup-standalone
+    name: Standalone MySQL
+    parent: guides-mysql-backup
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup and Restore standalone MySQL database using Stash
+
+Stash 0.9.0+ supports backup and restoration of MySQL databases. This guide will show you how you can back up and restore your MySQL database with Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores MySQL databases, please check the following guide [here](/docs/v2024.1.31/guides/mysql/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/mysql/concepts/appbinding/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it yet.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Backup MySQL
+
+This section will demonstrate how to back up a MySQL database. Here, we are going to deploy a MySQL database using KubeDB. Then, we are going to back up this database into a GCS bucket. Finally, we are going to restore the backed up data into another MySQL database.
+
+### Deploy Sample MySQL Database
+
+Let's deploy a sample MySQL database and insert some data into it.
+
+**Create MySQL CRD:**
+
+Below is the YAML of a sample MySQL CRD that we are going to create for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  terminationPolicy: WipeOut
+```
+
+Create the above `MySQL` CRD,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/standalone/examples/sample-mysql.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+KubeDB will deploy a MySQL database according to the above specification. It will also create the necessary Secrets and Services to access the database.
+
+Let's check if the database is ready to use,
+
+```bash
+$ kubectl get my -n demo sample-mysql
+NAME           VERSION   STATUS   AGE
+sample-mysql   8.0.35    Ready    4m22s
+```
+
+The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands,
+
+```bash
+$ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mysql
+NAME                TYPE     DATA   AGE
+sample-mysql-auth   Opaque   2      4m58s
+
+$ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-mysql
+NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+sample-mysql        ClusterIP   10.101.2.138   <none>        3306/TCP   5m33s
+sample-mysql-pods   ClusterIP   None           <none>        3306/TCP   5m33s
+```
+
+Here, we have to use the service `sample-mysql` and the secret `sample-mysql-auth` to connect with the database. KubeDB creates an [AppBinding](/docs/v2024.1.31/guides/mysql/concepts/appbinding/) CRD that holds the necessary information to connect with the database.
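+
+As a quick illustration (a convenience sketch; the full AppBinding is inspected in the next section), you can pull the client connection URL straight out of the AppBinding with a JSONPath query:
+
+```bash
+# Print the connection URL that Stash will use to reach the database.
+# The field path follows the AppBinding spec shown below.
+$ kubectl get appbinding -n demo sample-mysql -o jsonpath='{.spec.clientConfig.url}'
+tcp(sample-mysql.demo.svc:3306)/
+```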
+
+**Verify AppBinding:**
+
+Verify that the AppBinding has been created successfully using the following command,
+
+```bash
+$ kubectl get appbindings -n demo
+NAME           AGE
+sample-mysql   9m24s
+```
+
+Let's check the YAML of the above AppBinding,
+
+```bash
+$ kubectl get appbindings -n demo sample-mysql -o yaml
+```
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"sample-mysql","namespace":"demo"},"spec":{"replicas":1,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Mi"}}},"storageType":"Durable","terminationPolicy":"WipeOut","version":"8.0.35"}}
+  creationTimestamp: "2022-06-30T05:45:43Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mysql
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mysqls.kubedb.com
+  name: sample-mysql
+  namespace: demo
+  ownerReferences:
+  - apiVersion: kubedb.com/v1alpha2
+    blockOwnerDeletion: true
+    controller: true
+    kind: MySQL
+    name: sample-mysql
+    uid: 00dcc579-cdd8-4586-9118-1e108298c5d0
+  resourceVersion: "1693366"
+  uid: adb2c57f-51a6-4845-b964-2e71076202fc
+spec:
+  clientConfig:
+    service:
+      name: sample-mysql
+      path: /
+      port: 3306
+      scheme: mysql
+    url: tcp(sample-mysql.demo.svc:3306)/
+  parameters:
+    apiVersion: appcatalog.appscode.com/v1alpha1
+    kind: StashAddon
+    stash:
+      addon:
+        backupTask:
+          name: mysql-backup-8.0.21
+          params:
+          - name: args
+            value: --all-databases --set-gtid-purged=OFF
+        restoreTask:
+          name: mysql-restore-8.0.21
+  secret:
+    name: sample-mysql-auth
+  type: kubedb.com/mysql
+  version: 8.0.35
+```
+
+Stash uses the AppBinding CRD to connect with the target database. It requires the following fields to be set in the AppBinding's `.spec` section.
+
+- `.spec.clientConfig.service.name` specifies the name of the Service that connects to the database.
+- `.spec.secret` specifies the name of the Secret that holds the necessary credentials to access the database.
+- `spec.parameters.stash` specifies the Stash Addon info that will be used to backup and restore this database.
+- `spec.type` specifies the type of the app that this AppBinding is pointing to. KubeDB generated AppBindings follow the `<app group>/<app resource type>` format (for example, `kubedb.com/mysql`).
+
+**Insert Sample Data:**
+
+Now, we are going to exec into the database pod and create some sample data. At first, find out the database Pod using the following command,
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mysql"
+NAME             READY   STATUS    RESTARTS   AGE
+sample-mysql-0   1/1     Running   0          33m
+```
+
+Then, copy the username and password of the `root` user to access the `mysql` shell.
+
+```bash
+$ kubectl get secret -n demo sample-mysql-auth -o jsonpath='{.data.username}'| base64 -d
+root⏎
+
+$ kubectl get secret -n demo sample-mysql-auth -o jsonpath='{.data.password}'| base64 -d
+5HEqoozyjgaMO97N⏎
+```
+
+Now, let's exec into the Pod to enter the `mysql` shell and create a database and a table,
+
+```bash
+$ kubectl exec -it -n demo sample-mysql-0 -- mysql --user=root --password=5HEqoozyjgaMO97N
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 10
+Server version: 8.0.21 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> CREATE DATABASE playground;
+Query OK, 1 row affected (0.01 sec)
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
+| playground         |
+| sys                |
++--------------------+
+5 rows in set (0.00 sec)
+
+mysql> CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> SHOW TABLES IN playground;
++----------------------+
+| Tables_in_playground |
++----------------------+
+| equipment            |
++----------------------+
+1 row in set (0.01 sec)
+
+mysql> INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");
+Query OK, 1 row affected (0.01 sec)
+
+mysql> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+1 row in set (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+Now, we are ready to back up the database.
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. At first, we need to create a Secret with GCS credentials. Then, we need to create a `Repository` CRD. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` using this secret. Below is the YAML of the Repository CRD we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: appscode-qa
+      prefix: /demo/mysql/sample-mysql
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/standalone/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to back up our database to our desired backend.
+
+### Backup
+
+We have to create a `BackupConfiguration` targeting the respective AppBinding CRD of our desired database. Then Stash will create a CronJob to periodically back up the database.
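+
+Stash resolves the backup logic through the addon `Task` objects referenced from the AppBinding. If you want to double-check that the MySQL addon is installed before proceeding (an optional sketch, not a required step), you can list the cluster-scoped `Task` objects:
+
+```bash
+# The MySQL addon ships backup and restore Tasks,
+# e.g. mysql-backup-8.0.21 referenced by the AppBinding above.
+$ kubectl get tasks.stash.appscode.com | grep mysql
+```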
+
+**Create BackupConfiguration:**
+
+Below is the YAML for the `BackupConfiguration` CRD to back up the `sample-mysql` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-mysql-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-mysql
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `.spec.schedule` specifies that we want to back up the database at 5-minute intervals.
+- `.spec.target.ref` refers to the AppBinding CRD that was created for the `sample-mysql` database.
+
+Let's create the `BackupConfiguration` CRD we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/standalone/examples/backupconfiguration.yaml
+backupconfiguration.stash.appscode.com/sample-mysql-backup created
+```
+
+**Verify Backup Setup Successful:**
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```bash
+$ kubectl get backupconfiguration -n demo
+NAME                  TASK                  SCHEDULE      PAUSED   PHASE   AGE
+sample-mysql-backup   mysql-backup-8.0.21   */5 * * * *            Ready   11s
+```
+
+**Verify CronJob:**
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` CRD.
+
+Verify that the CronJob has been created using the following command,
+
+```bash
+$ kubectl get cronjob -n demo
+NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+sample-mysql-backup   */5 * * * *   False     0        <none>          27s
+```
+
+**Wait for BackupSession:**
+
+The `sample-mysql-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` CRD.
+
+Wait for a schedule to appear. Run the following command to watch the `BackupSession` CRD,
+
+```bash
+$ watch -n 1 kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=sample-mysql-backup
+
+NAME                             INVOKER-TYPE          INVOKER-NAME          PHASE       AGE
+sample-mysql-backup-1569561245   BackupConfiguration   sample-mysql-backup   Succeeded   38s
+```
+
+Here, the phase **`Succeeded`** means that the backup session has succeeded.
+
+>Note: The backup CronJob creates `BackupSession` CRDs with the label `stash.appscode.com/backup-configuration=<BackupConfiguration name>`. We can use this label to watch only the `BackupSession` of our desired `BackupConfiguration`.
+
+**Verify Backup:**
+
+Now, we are going to verify whether the backed up data is in the backend. Once a backup is completed, Stash will update the respective `Repository` CRD to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command,
+
+```bash
+$ kubectl get repository -n demo gcs-repo
+NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo   true        6.815 MiB   1                3m39s                    30m
+```
+
+Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in the `demo/mysql/sample-mysql` directory, as specified by the `.spec.backend.gcs.prefix` field of the Repository CRD.
+
+![Backup data in GCS Bucket](images/sample-mysql-backup.png)
+
+Fig: Backup data in GCS Bucket
+
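+Since the data in the backend is encrypted (see the note below), a more convenient way to inspect completed backups is the Stash snapshot API. As a hedged sketch, assuming the Stash aggregated snapshot API is enabled in your cluster:
+
+```bash
+# List the snapshots stored in the gcs-repo Repository.
+$ kubectl get snapshot -n demo -l repository=gcs-repo
+```
+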
+
+> Note: Stash keeps all the backed up data encrypted. So, the data in the backend will not make any sense until it is decrypted.
+
+## Restore MySQL
+
+In this section, we are going to restore the database from the backup we have taken in the previous section. We are going to deploy a new database and initialize it from the backup.
+
+#### Stop Taking Backup of the Old Database:
+
+At first, let's stop taking any further backup of the old database so that no backup is taken during the restore process. We are going to pause the `BackupConfiguration` CRD that we created to back up the `sample-mysql` database. Then, Stash will stop taking any further backup for this database.
+
+Let's pause the `sample-mysql-backup` BackupConfiguration,
+```bash
+$ kubectl patch backupconfiguration -n demo sample-mysql-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-mysql-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+```bash
+$ kubectl stash pause backup -n demo --backupconfig=sample-mysql-backup
+BackupConfiguration demo/sample-mysql-backup has been paused successfully.
+```
+
+Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused,
+
+```bash
+$ kubectl get backupconfiguration -n demo sample-mysql-backup
+NAME                  TASK                  SCHEDULE      PAUSED   PHASE   AGE
+sample-mysql-backup   mysql-backup-8.0.21   */5 * * * *   true     Ready   26m
+```
+
+Notice the `PAUSED` column. The value `true` in this column means that the BackupConfiguration has been paused.
+
+#### Deploy Restored Database:
+
+Now, we have to deploy the restored database similarly to how we deployed the original `sample-mysql` database. However, this time there will be the following difference:
+
+- We are going to specify the `.spec.init.waitForInitialRestore` field, which tells KubeDB to wait for the first restore to complete before marking this database as ready to use.
+
+Below is the YAML for the `MySQL` CRD we are going to deploy to initialize from the backup,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: restored-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 1
+  storageType: Durable
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 50Mi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: WipeOut
+```
+
+Let's create the above database,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/standalone/examples/restored-mysql.yaml
+mysql.kubedb.com/restored-mysql created
+```
+
+If you check the database status, you will see it is stuck in the **`Provisioning`** state.
+
+```bash
+$ kubectl get my -n demo restored-mysql
+NAME             VERSION   STATUS         AGE
+restored-mysql   8.0.35    Provisioning   61s
+```
+
+#### Create RestoreSession:
+
+Now, we need to create a RestoreSession CRD pointing to the AppBinding of this restored database.
+
+Using the following command, check that another AppBinding object has been created for the `restored-mysql` object,
+
+```bash
+$ kubectl get appbindings -n demo restored-mysql
+NAME             AGE
+restored-mysql   6m6s
+```
+
+Below is the YAML of the `RestoreSession` object that we are going to create to restore the backed up data into the newly created database provisioned by the MySQL CRD named `restored-mysql`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: restore-sample-mysql
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-mysql
+  rules:
+    - snapshots: [latest]
+```
+
+Here,
+
+- `.spec.repository.name` specifies the Repository CRD that holds the backend information where our backed up data has been stored.
+- `.spec.target.ref` refers to the newly created AppBinding object for the `restored-mysql` MySQL object.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the database.
+
+Let's create the RestoreSession CRD object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/backup/standalone/examples/restoresession.yaml
+restoresession.stash.appscode.com/restore-sample-mysql created
+```
+
+Once you have created the RestoreSession object, Stash will create a restore Job. We can watch the phase of the RestoreSession object to check whether the restore process has succeeded or not.
+
+Run the following command to watch the phase of the RestoreSession object,
+
+```bash
+$ watch -n 1 kubectl get restoresession -n demo restore-sample-mysql
+
+Every 1.0s: kubectl get restoresession -n demo restore-sample-mysql      workstation: Fri Sep 27 11:18:51 2019
+NAMESPACE   NAME                   REPOSITORY-NAME   PHASE       AGE
+demo        restore-sample-mysql   gcs-repo          Succeeded   59s
+```
+
+Here, we can see from the output of the above command that the restore process succeeded.
+
+#### Verify Restored Data:
+
+In this section, we are going to verify whether the desired data has been restored successfully. We are going to connect to the database server and check whether the database and the table we created earlier in the original database have been restored.
+
+At first, check if the database has gone into the **`Ready`** state by the following command,
+
+```bash
+$ kubectl get my -n demo restored-mysql
+NAME             VERSION   STATUS   AGE
+restored-mysql   8.0.35    Ready    34m
+```
+
+Now, find out the database Pod by the following command,
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=restored-mysql"
+NAME               READY   STATUS    RESTARTS   AGE
+restored-mysql-0   1/1     Running   0          39m
+```
+
+Then, copy the username and password of the `root` user to access the `mysql` shell.
+
+```bash
+$ kubectl get secret -n demo sample-mysql-auth -o jsonpath='{.data.username}'| base64 -d
+root
+
+$ kubectl get secret -n demo sample-mysql-auth -o jsonpath='{.data.password}'| base64 -d
+5HEqoozyjgaMO97N
+```
+
+Now, let's exec into the Pod to enter the `mysql` shell and verify the restored data,
+
+```bash
+$ kubectl exec -it -n demo restored-mysql-0 -- mysql --user=root --password=5HEqoozyjgaMO97N
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 9
+Server version: 8.0.21 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
+| playground         |
+| sys                |
++--------------------+
+5 rows in set (0.00 sec)
+
+mysql> SHOW TABLES IN playground;
++----------------------+
+| Tables_in_playground |
++----------------------+
+| equipment            |
++----------------------+
+1 row in set (0.00 sec)
+
+mysql> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+1 row in set (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+So, from the above output, we can see that the `playground` database and the `equipment` table we created earlier in the original database have been restored successfully.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete backupconfiguration -n demo sample-mysql-backup
+kubectl delete restoresession -n demo restore-sample-mysql
+kubectl delete repository -n demo gcs-repo
+kubectl delete my -n demo restored-mysql
+kubectl delete my -n demo sample-mysql
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/cli/index.md b/content/docs/v2024.1.31/guides/mysql/cli/index.md
new file mode 100644
index 0000000000..2e58eaf2da
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/cli/index.md
@@ -0,0 +1,403 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-cli
+    name: CLI
+    parent: guides-mysql
+    weight: 100
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Manage KubeDB objects using CLIs
+
+## KubeDB CLI
+
+KubeDB comes with its own CLI, called the `kubedb` CLI. It can be used to manage any KubeDB object, and it also performs various validations to improve the user experience. To install the KubeDB CLI on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README).
+
+### How to Create objects
+
+`kubectl create` creates a database CRD object in the `default` namespace by default. The following command will create a MySQL object as specified in `mysql-demo.yaml`.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/cli/yamls/mysql-demo.yaml
+mysql.kubedb.com/mysql-demo created
+```
+
+You can provide a namespace with the `--namespace` flag. The provided namespace should match the namespace specified in the input file.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/cli/yamls/mysql-demo.yaml --namespace=kube-system
+mysql.kubedb.com/mysql-demo created
+```
+
+The `kubectl create` command also considers `stdin` as input.
+
+```bash
+cat mysql-demo.yaml | kubectl create -f -
+```
+
+### How to List Objects
+
+The `kubectl get` command allows users to list or find any KubeDB object. To list all MySQL objects in the `default` namespace, run the following command:
+
+```bash
+$ kubectl get mysql
+NAME         VERSION   STATUS    AGE
+mysql-demo   8.0.35    Running   5m1s
+mysql-dev    5.7.44    Running   10m1s
+```
+
+To get the YAML of an object, use the `--output=yaml` flag.
+
+```yaml
+$ kubectl get mysql mysql-demo -n demo --output=yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-demo
+  namespace: demo
+spec:
+  authSecret:
+    name: mysql-demo-auth
+  podTemplate:
+    spec:
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: mysql-demo
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: mysqls.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: kubernetes.io/hostname
+            weight: 100
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/instance: mysql-demo
+                  app.kubernetes.io/managed-by: kubedb.com
+                  app.kubernetes.io/name: mysqls.kubedb.com
+              namespaces:
+              - demo
+              topologyKey: failure-domain.beta.kubernetes.io/zone
+            weight: 50
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      serviceAccountName: mysql-demo
+  replicas: 1
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Delete
+  version: 8.0.35
+```
+
+To get the JSON of an object, use the `--output=json` flag.
+
+```bash
+kubectl get mysql mysql-demo --output=json
+```
+
+To list all KubeDB objects, use the following command:
+
+```bash
+$ kubectl get all -n demo -o wide
+NAME               READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE   READINESS GATES
+pod/mysql-demo-0   1/1     Running   0          2m17s   10.244.0.13   kind-control-plane   <none>           <none>
+
+NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     SELECTOR
+service/mysql-demo        ClusterIP   10.107.205.135   <none>        3306/TCP   2m17s   app.kubernetes.io/instance=mysql-demo,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com
+service/mysql-demo-pods   ClusterIP   None             <none>        3306/TCP   2m17s   app.kubernetes.io/instance=mysql-demo,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com
+
+NAME                          READY   AGE     CONTAINERS   IMAGES
+statefulset.apps/mysql-demo   1/1     2m17s   mysql        kubedb/mysql:8.0.35
+
+NAME                                            TYPE               VERSION   AGE
+appbinding.appcatalog.appscode.com/mysql-demo   kubedb.com/mysql   8.0.35    2m17s
+
+NAME                          VERSION   STATUS   AGE
+mysql.kubedb.com/mysql-demo   8.0.35    Ready    2m17s
+```
+
+The `--output=wide` flag is used to print additional information.
+
+The list command supports short names for each object type. You can use it like `kubectl get <short-name>`. Below is the short name for KubeDB MySQL objects:
+
+- MySQL: `my`
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -o name
+mysql/mysql-demo
+mysql/mysql-dev
+mysql/mysql-prod
+mysql/mysql-qa
+```
+
+### How to Describe Objects
+
+The `kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe the MySQL database `mysql-demo` with relevant information.
+
+```bash
+$ kubectl dba describe my -n demo mysql-demo
+Name:               mysql-demo
+Namespace:          demo
+CreationTimestamp:  Mon, 15 Mar 2021 17:53:48 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-demo","namespace":"demo"},"spec":{"storage":{"accessModes...
+Replicas:           1  total
+Status:             Ready
+StorageType:        Durable
+Volume:
+  StorageClass:  standard
+  Capacity:      1Gi
+  Access Modes:  RWO
+Paused:              false
+Halted:              false
+Termination Policy:  Delete
+
+StatefulSet:
+  Name:               mysql-demo
+  CreationTimestamp:  Mon, 15 Mar 2021 17:53:48 +0600
+  Labels:               app.kubernetes.io/component=database
+                        app.kubernetes.io/instance=mysql-demo
+                        app.kubernetes.io/managed-by=kubedb.com
+                        app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:        <none>
+  Replicas:           824638230984 desired | 1 total
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         mysql-demo
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-demo
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.107.205.135
+  Port:         primary  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.13:3306
+
+Service:
+  Name:         mysql-demo-pods
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-demo
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           None
+  Port:         db  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.13:3306
+
+Auth Secret:
+  Name:         mysql-demo-auth
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-demo
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         kubernetes.io/basic-auth
+  Data:
+    password:  16 bytes
+    username:  4 bytes
+
+AppBinding:
+  Metadata:
+    Annotations:
+      kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-demo","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"8.0.35"}}
+
+    Creation Timestamp:  2021-03-15T11:53:48Z
+    Labels:
+      app.kubernetes.io/component:   database
+      app.kubernetes.io/instance:    mysql-demo
+      app.kubernetes.io/managed-by:  kubedb.com
+      app.kubernetes.io/name:        mysqls.kubedb.com
+    Name:                            mysql-demo
+    Namespace:                       demo
+  Spec:
+    Client Config:
+      Service:
+        Name:    mysql-demo
+        Path:    /
+        Port:    3306
+        Scheme:  mysql
+      URL:       tcp(mysql-demo:3306)/
+    Parameters:
+      API Version:  appcatalog.appscode.com/v1alpha1
+      Kind:         StashAddon
+      Stash:
+        Addon:
+          Backup Task:
+            Name:  mysql-backup-8.0.35
+          Restore Task:
+            Name:  mysql-restore-8.0.35
+    Secret:
+      Name:   mysql-demo-auth
+    Type:     kubedb.com/mysql
+    Version:  8.0.35
+
+Events:
+  Type    Reason      Age   From             Message
+  ----    ------      ----  ----             -------
+  Normal  Successful  5m    KubeDB Operator  Successfully created governing service
+  Normal  Successful  5m    KubeDB Operator  Successfully created service for primary/standalone
+  Normal  Successful  5m    KubeDB Operator  Successfully created database auth secret
+  Normal  Successful  5m    KubeDB Operator  Successfully created StatefulSet
+```
+
+The `kubectl dba describe` command provides the following basic information about a MySQL database.
+
+- StatefulSet
+- Storage (Persistent Volume)
+- Service
+- Secret (If available)
+- Monitoring system (If available)
+
+To hide events on a KubeDB object, use the flag `--show-events=false`.
+
+To describe all MySQL objects in the `default` namespace, use the following command:
+
+```bash
+kubectl dba describe my
+```
+
+To describe all MySQL objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+kubectl dba describe my --all-namespaces
+```
+
+To describe all KubeDB objects from every namespace, use the following command:
+
+```bash
+kubectl dba describe all --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all MySQL objects with the specified labels from every namespace.
+
+```bash
+kubectl dba describe my --all-namespaces --selector='group=dev'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+### How to Edit Objects
+
+The `kubectl edit` command allows users to directly edit any KubeDB object. It will open the editor defined by the _KUBEDB_EDITOR_ or _EDITOR_ environment variables, or fall back to `nano`.
+
+Let's edit an existing running MySQL object to set the database to [Halted](/docs/v2024.1.31/guides/mysql/concepts/database/#spechalted). The following command will open MySQL `mysql-quickstart` in the editor.
+
+```bash
+$ kubectl edit my -n demo mysql-quickstart
+
+spec:
+  ....
+  authSecret:
+    name: mysql-quickstart-auth
+# add database halted = true to delete StatefulSet, services, and other database resources
+  halted: true
+  ....
+
+mysql.kubedb.com/mysql-quickstart edited
+```
+
+#### Edit Restrictions
+
+Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:
+
+- apiVersion
+- kind
+- metadata.name
+- metadata.namespace
+
+If a StatefulSet exists for a MySQL database, the following fields can't be modified either:
+
+- spec.authSecret
+- spec.init
+- spec.storageType
+- spec.storage
+- spec.podTemplate.spec.nodeSelector
+
+For DormantDatabase, `spec.origin` can't be edited using `kubectl edit`.
+
+### How to Delete Objects
+
+The `kubectl delete` command will delete an object in the `default` namespace by default unless a namespace is provided. The following command will delete the MySQL `mysql-dev` in the default namespace.
+
+```bash
+$ kubectl delete mysql mysql-dev
+mysql.kubedb.com "mysql-dev" deleted
+```
+
+You can also use YAML files to delete objects. The following command will delete a MySQL object using the type and name specified in `mysql-demo.yaml`.
+
+```bash
+$ kubectl delete -f mysql-demo.yaml
+mysql.kubedb.com "mysql-demo" deleted
+```
+
+The `kubectl delete` command also takes input from `stdin`.
+
+```bash
+cat mysql-demo.yaml | kubectl delete -f -
+```
+
+To delete databases with matching labels, use the `--selector` flag. The following command will delete MySQL objects with the label `app.kubernetes.io/instance=mysql-demo`.
+
+```bash
+kubectl delete mysql -l app.kubernetes.io/instance=mysql-demo
+```
+
+## Using Kubectl
+
+You can use kubectl with KubeDB objects like any other CRDs. Below are some common examples of using kubectl with KubeDB objects.
+
+```bash
+# Create objects
+$ kubectl create -f <file-name>
+
+# List objects
+$ kubectl get mysql
+$ kubectl get mysql.kubedb.com
+
+# Delete objects
+$ kubectl delete mysql <name>
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a MySQL database [here](/docs/v2024.1.31/guides/mysql/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/cli/yamls/mysql-demo.yaml b/content/docs/v2024.1.31/guides/mysql/cli/yamls/mysql-demo.yaml
new file mode 100644
index 0000000000..addd02e541
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/cli/yamls/mysql-demo.yaml
@@ -0,0 +1,14 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-demo
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
diff --git a/content/docs/v2024.1.31/guides/mysql/clients/index.md b/content/docs/v2024.1.31/guides/mysql/clients/index.md
new file mode 100644
index 0000000000..c6195064d5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/clients/index.md
@@ -0,0 +1,464 @@
+---
+title: Connecting to a MySQL Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-clients
+    name: Connecting to a MySQL Cluster
+    parent: guides-mysql
+    weight: 105
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Connecting to a MySQL Cluster
+
+KubeDB creates separate services for primary and secondary replicas. In this tutorial, we are going to show you how to connect your application with the primary or secondary replicas using those services.
+
+## Before You Begin
+
+- Read the [mysql group replication concept](/docs/v2024.1.31/guides/mysql/clustering/overview/) to learn about MySQL Group Replication.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- You need to have a mysql client. If you don't have a mysql client installed on your local machine, you can install one from [here](https://dev.mysql.com/doc/mysql-installation-excerpt/8.0/en/)
+
+> Note: YAML files used in this tutorial are stored in the [guides/mysql/clients/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/clients/yamls) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MySQL Cluster
+
+Here, we are going to deploy MySQL group replication using a version supported by the KubeDB operator. Then, we are going to connect with the cluster using two separate services.
+
+Below is the YAML of the `MySQL` group replication with 3 members (one primary member and two secondary members) that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+    group:
+      name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the MySQL CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clients/yamls/group-replication.yaml
+mysql.kubedb.com/my-group created
+```
+
+The KubeDB operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the KubeDB operator will create a new StatefulSet and two separate Services for client connections to the cluster. The services have the following format:
+
+- `<mysql-object-name>` supports both read and write operations.
+- `<mysql-object-name>-standby` supports only read operations.
+
+Now, wait for the `MySQL` object to go to the `Running` state, and for the `StatefulSet` and the Services to go to the `Ready` state.
+
+```bash
+$ watch -n 3 kubectl get my -n demo my-group
+Every 3.0s: kubectl get my -n demo my-group              suaas-appscode: Wed Sep  9 10:54:34 2020
+
+NAME       VERSION   STATUS    AGE
+my-group   8.0.35    Running   16m
+
+$ watch -n 3 kubectl get sts -n demo my-group
+Every 3.0s: kubectl get sts -n demo my-group             suaas-appscode: Wed Sep  9 10:53:52 2020
+
+NAME       READY   AGE
+my-group   3/3     15m
+
+$ kubectl get service -n demo
+NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+my-group           ClusterIP   10.109.133.141   <none>        3306/TCP   31s
+my-group-pods      ClusterIP   None             <none>        3306/TCP   31s
+my-group-standby   ClusterIP   10.110.47.184    <none>        3306/TCP   31s
+```
+
+If you describe the object, you can find more details here,
+
+```bash
+$ kubectl dba describe my -n demo my-group
+Name:               my-group
+Namespace:          demo
+CreationTimestamp:  Mon, 15 Mar 2021 18:18:35 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"my-group","namespace":"demo"},"spec":{"replicas":3,"storage":{"...
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the MySQL CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clients/yamls/group-replication.yaml +mysql.kubedb.com/my-group created +``` + +KubeDB operator watches for `MySQL` objects using Kubernetes API. When a `MySQL` object is created, KubeDB operator will create a new StatefulSet and two separate Services for client connection with the cluster. The services have the following format: + +- `` has both read and write operation. +- `-standby` has only read operation. + +Now, wait for the `MySQL` is going to `Running` state and also wait for `SatefulSet` and `services` going to the `Ready` state. + +```bash +$ watch -n 3 kubectl get my -n demo my-group +Every 3.0s: kubectl get my -n demo my-group suaas-appscode: Wed Sep 9 10:54:34 2020 + +NAME VERSION STATUS AGE +my-group 8.0.35 Running 16m + +$ watch -n 3 kubectl get sts -n demo my-group +ery 3.0s: kubectl get sts -n demo my-group suaas-appscode: Wed Sep 9 10:53:52 2020 + +NAME READY AGE +my-group 3/3 15m + +$ kubectl get service -n demo +my-group ClusterIP 10.109.133.141 3306/TCP 31s +my-group-pods ClusterIP None 3306/TCP 31s +my-group-standby ClusterIP 10.110.47.184 3306/TCP 31s +``` + +If you describe the object, you can find more details here, + +```bash +$ kubectl dba describe my -n demo my-group +Name: my-group +Namespace: demo +CreationTimestamp: Mon, 15 Mar 2021 18:18:35 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"my-group","namespace":"demo"},"spec":{"replicas":3,"storage":{"... 
+Replicas: 3 total +Status: Provisioning +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: my-group + CreationTimestamp: Mon, 15 Mar 2021 18:18:35 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Replicas: 824635568200 desired | 3 total + Pods Status: 2 Running / 1 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: my-group + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.109.133.141 + Port: primary 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.15:3306 + +Service: + Name: my-group-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.15:3306,10.244.0.17:3306,10.244.0.19:3306 + +Service: + Name: my-group-standby + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.110.47.184 + Port: standby 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.17:3306 + +Auth Secret: + Name: my-group-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + username: 4 bytes + password: 16 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"my-group","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","topology":{"group":{"name":"dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"},"mode":"GroupReplication"},"version":"8.0.35"}} + + Creation Timestamp: 2021-03-15T12:18:35Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: my-group + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + Name: my-group + Namespace: demo + Spec: + Client Config: + Service: + Name: my-group + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(my-group:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.35 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.35 + Secret: + Name: my-group-auth + Type: kubedb.com/mysql + Version: 8.0.35 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 2m KubeDB Operator Successfully created governing service + Normal Successful 2m KubeDB Operator Successfully created service for primary/standalone + Normal Successful 2m KubeDB Operator Successfully created service for secondary replicas + Normal Successful 2m KubeDB Operator Successfully created database auth secret + Normal Successful 
2m    KubeDB Operator   Successfully created StatefulSet
+  Normal    Successful   2m    KubeDB Operator   Successfully created appbinding
+  Normal    Successful   2m    KubeDB Operator   Successfully patched StatefulSet
+```
+
+Our database cluster is ready to connect.
+
+## How KubeDB distinguishes between primary and secondary replicas
+
+KubeDB adds a sidecar container to each pod beside the database container. This sidecar adds a label to the pod to distinguish between primary and secondary replicas. The labels are added to the pods as follows:
+
+- `mysql.kubedb.com/role:primary` is added to the primary.
+- `mysql.kubedb.com/role:secondary` is added to the secondaries.
+
+Let's verify that the `mysql.kubedb.com/role` label has been added to the StatefulSet's replicas,
+
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -A -o=custom-columns='Name:.metadata.name,Labels:metadata.labels,PodIP:.status.podIP'
+Name         Labels                                                                                                                                                                                                          PodIP
+my-group-0   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:primary statefulset.kubernetes.io/pod-name:my-group-0]      10.244.1.8
+my-group-1   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:secondary statefulset.kubernetes.io/pod-name:my-group-1]    10.244.2.11
+my-group-2   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:secondary statefulset.kubernetes.io/pod-name:my-group-2]    10.244.2.13
+```
+
+You can see from the above output that the `my-group-0` pod is selected as the primary member in our existing database cluster. It has the `mysql.kubedb.com/role:primary` label, and its podIP is `10.244.1.8`. The rest of the replicas are selected as secondary members, which have the `mysql.kubedb.com/role:secondary` label.
+
+KubeDB creates two separate services (already shown above) to connect with the database cluster: one for connecting to the primary replica and the other for the secondaries. The service dedicated to the primary permits both read and write operations, while the service dedicated to the secondaries permits only read operations.
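+
+Because the sidecar keeps the role label up to date, a one-liner (shown here as a convenience sketch) is enough to find the current primary at any time:
+
+```bash
+# Print the name of the pod currently labeled as primary.
+$ kubectl get pods -n demo -l mysql.kubedb.com/role=primary -o name
+pod/my-group-0
+```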
+
+You can see that the service which selects the primary replica has the following selector,
+
+```bash
+$ kubectl get svc -n demo my-group -o json | jq '.spec.selector'
+{
+  "app.kubernetes.io/instance": "my-group",
+  "app.kubernetes.io/managed-by": "kubedb.com",
+  "app.kubernetes.io/name": "mysqls.kubedb.com",
+  "kubedb.com/role": "primary"
+}
+```
+
+If you get the endpoints of the above service, you will see the podIP of the primary replica,
+
+```bash
+$ kubectl get endpoints -n demo my-group
+NAME       ENDPOINTS         AGE
+my-group   10.244.1.8:3306   5h49m
+```
+
+You can also see that the service which selects the secondary replicas has the following selector,
+
+```bash
+$ kubectl get svc -n demo my-group-standby -o json | jq '.spec.selector'
+{
+  "app.kubernetes.io/name": "mysqls.kubedb.com",
+  "app.kubernetes.io/instance": "my-group",
+  "mysql.kubedb.com/role": "secondary"
+}
+```
+
+If you get the endpoints of the above service, you will see the podIPs of the secondary replicas,
+
+```bash
+$ kubectl get endpoints -n demo my-group-standby
+NAME               ENDPOINTS                           AGE
+my-group-standby   10.244.2.11:3306,10.244.2.13:3306   5h53m
+```
+
+## Connecting Information
+
+The KubeDB operator has created a new Secret called `my-group-auth` **(format: {mysql-object-name}-auth)** for storing the password for the `mysql` superuser. This secret contains a `username` key which contains the **username** for the MySQL superuser and a `password` key which contains the **password** for the MySQL superuser.
+
+Now, you can connect to this database from your terminal using the `mysql` user and password.
+
+```bash
+$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d
+RmxLjEomvE6tVj4-
+```
+
+You can connect to any of these group members. In that case, you just need to specify the hostname of that member Pod (either the PodIP or the fully-qualified-domain-name for that Pod using any of the services) with the `--host` flag.
+
+## Connecting with Primary Replica
+
+The primary replica permits both read and write operations, so clients can perform both operations using the `my-group` service, which selects the primary replica (already shown above). First, we will insert data into the database cluster. Then, we will verify the insert by reading back through the `my-group` service.
+
+At first, we are going to port-forward the service to connect to the database cluster from outside of the cluster.
+
+Let's port-forward the `my-group` service using the following command,
+
+```bash
+$ kubectl port-forward service/my-group -n demo 8081:3306
+Forwarding from 127.0.0.1:8081 -> 3306
+Forwarding from [::1]:8081 -> 3306
+```
+
+>For testing purposes, we need to have a mysql client to connect with the cluster. If you don't have a client on your local machine, you can install one from [here](https://dev.mysql.com/doc/mysql-installation-excerpt/8.0/en/)
+
+**Write Operation :**
+
+```bash
+# create a database on the cluster
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8081 --host=127.0.0.1 -e "CREATE DATABASE playground;"
+mysql: [Warning] Using a password on the command line interface can be insecure.
+
+
+# create a table
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8081 --host=127.0.0.1 -e "CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));"
+mysql: [Warning] Using a password on the command line interface can be insecure.
+
+
+# insert a row
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8081 --host=127.0.0.1 -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');"
+mysql: [Warning] Using a password on the command line interface can be insecure.
+```
+
+**Read Operation :**
+
+```bash
+# read data from the cluster
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8081 --host=127.0.0.1 -e "SELECT * FROM playground.equipment;"
+mysql: [Warning] Using a password on the command line interface can be insecure.
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+```
+
+You can see from the above output that both write and read operations were performed successfully using the primary pod selector service named `my-group`.
+
+## Connecting with Secondary Replicas
+
+Secondary replicas permit only read operations, so clients can perform only read operations using the `my-group-standby` service, which selects only secondary replicas (already shown above). First, we will try to insert data into the database cluster. Then, we will read existing data from the cluster using the `my-group-standby` service.
+
+At first, we are going to port-forward the service to connect to the database cluster from outside of the cluster.
+
+Let's port-forward the `my-group-standby` service using the following command,
+
+```bash
+$ kubectl port-forward service/my-group-standby -n demo 8080:3306
+Forwarding from 127.0.0.1:8080 -> 3306
+Forwarding from [::1]:8080 -> 3306
+```
+
+**Write Operation:**
+
+```bash
+# In our database cluster, we have already created a database and a table named playground and equipment respectively. So we will try to insert data into that table.
+# insert a row
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8080 --host=127.0.0.1 -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 3, 'black');"
+mysql: [Warning] Using a password on the command line interface can be insecure.
+ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
+```
+
+**Read Operation:**
+
+```bash
+# read data from the cluster
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8080 --host=127.0.0.1 -e "SELECT * FROM playground.equipment;"
+mysql: [Warning] Using a password on the command line interface can be insecure.
++----+-------+-------+-------+
+| id | type  | quant | color |
++----+-------+-------+-------+
+|  1 | slide |     2 | blue  |
++----+-------+-------+-------+
+```
+
+You can see from the above output that only read operations are performed successfully using the secondary pod selector service named `my-group-standby`. No data was inserted using this service. The `--super-read-only` error indicates that the secondary pod has only read permission.
+
+## Automatic Failover
+
+To test automatic failover, we will force the primary Pod to restart. Since the primary member (`Pod`) becomes unavailable, the rest of the members will elect a new primary for the group. When the old primary comes back, it will join the group as a secondary member.
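+
+You can also observe the election from inside MySQL itself, since Group Replication exposes its membership through `performance_schema`. For example, run the following query through the port-forwarded primary service (a sketch using the credentials from this tutorial):
+
+```bash
+# Check each member's state and role as reported by Group Replication.
+$ mysql -uroot -pRmxLjEomvE6tVj4- --port=8081 --host=127.0.0.1 \
+    -e "SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;"
+```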
+
+First, delete the primary pod `my-group-0` using the following command,
+
+```bash
+$ kubectl delete pod my-group-0 -n demo
+pod "my-group-0" deleted
+```
+
+Now, wait a few minutes for a new primary replica to be elected automatically and for the service endpoints to be updated with the new primary and secondary replicas,
+
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -A -o=custom-columns='Name:.metadata.name,Labels:metadata.labels,PodIP:.status.podIP'
+Name         Labels                                                                                                                                                                                                          PodIP
+my-group-0   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:secondary statefulset.kubernetes.io/pod-name:my-group-0]    10.244.2.18
+my-group-1   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:secondary statefulset.kubernetes.io/pod-name:my-group-1]    10.244.2.11
+my-group-2   map[controller-revision-hash:my-group-55b9f49f98 app.kubernetes.io/name:mysqls.kubedb.com app.kubernetes.io/instance:my-group mysql.kubedb.com/role:primary statefulset.kubernetes.io/pod-name:my-group-2]      10.244.2.13
+```
+
+You can see from the above output that the `my-group-2` pod was elected as the primary automatically and the others became secondaries.
+
+If you get the endpoints of the `my-group` service, you will see the podIP of the new primary replica,
+
+```bash
+$ kubectl get endpoints -n demo my-group
+NAME       ENDPOINTS          AGE
+my-group   10.244.2.13:3306   111m
+```
+
+If you get the endpoints of the `my-group-standby` service, you will see the podIPs of the secondary replicas,
+
+```bash
+$ kubectl get endpoints -n demo my-group-standby
+NAME               ENDPOINTS                           AGE
+my-group-standby   10.244.2.11:3306,10.244.2.18:3306   112m
+```
+
+## Cleaning up
+
+Clean up what you created in this tutorial.
+
+```bash
+$ kubectl patch -n demo my/my-group -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo my/my-group
+
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detailed concepts of the [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Detailed concepts of the [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/clients/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/clients/yamls/group-replication.yaml
new file mode 100644
index 0000000000..25d93bcfb2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/clients/yamls/group-replication.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+    group:
+      name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/_index.md b/content/docs/v2024.1.31/guides/mysql/clustering/_index.md
new file mode 100644
index 0000000000..47e84681bc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: MySQL Clustering
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-clustering
+    name: MySQL Clustering
+    parent: guides-mysql
+    weight: 25
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/index.md b/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/index.md
new file mode 100644
index 0000000000..39bad4f019
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/index.md
@@ -0,0 +1,559 @@
+---
+title: MySQL Group Replication Guide
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-clustering-group-replication
+    name: MySQL Group Replication Guide
+    parent: guides-mysql-clustering
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB - MySQL Group Replication
+
+This tutorial will show you how to use KubeDB to provision a MySQL replication group in single-primary mode.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read the [mysql group replication concept](/docs/v2024.1.31/guides/mysql/clustering/overview/) to learn about MySQL Group Replication.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/guides/mysql/clustering/group-replication/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/clustering/group-replication/yamls) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MySQL Cluster
+
+To deploy a single-primary MySQL replication group, specify the `spec.topology` field in the `MySQL` CRD.
+
+The following is an example `MySQL` object which creates a MySQL group with three members (one primary member and two secondary members).
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/group-replication/yamls/group-replication.yaml
+mysql.kubedb.com/my-group created
+```
+
+Here,
+
+- `spec.topology` describes the clustering configuration for MySQL.
+- `spec.topology.mode` specifies the mode for the MySQL cluster. Here, we have used `GroupReplication` to tell the operator that we want to deploy a MySQL replication group.
+- `spec.topology.group` contains the group replication info.
+- `spec.topology.group.name` specifies the name for the group. It must be a valid version 4 UUID.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+The KubeDB operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MySQL object name. The KubeDB operator will also create a governing service for the StatefulSet with the name `<mysql-object-name>-pods`.
+
+```bash
+$ kubectl dba describe my -n demo my-group
+Name:               my-group
+Namespace:          demo
+CreationTimestamp:  Tue, 28 Jun 2022 17:54:10 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"my-group","namespace":"demo"},"spec":{"replicas":3,"storage":{"...
+Replicas: 3 total +Status: Provisioning +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: my-group + CreationTimestamp: Tue, 28 Jun 2022 17:54:10 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Replicas: 824640792392 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: my-group + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.223.45 + Port: primary 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.44:3306 + +Service: + Name: my-group-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.44:3306,10.244.0.46:3306,10.244.0.48:3306 + +Service: + Name: my-group-standby + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.70.224 + Port: standby 3306/TCP + TargetPort: db/TCP + Endpoints: + +Auth Secret: + Name: my-group-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=my-group + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"my-group","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","topology":{"group":{"name":"dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"},"mode":"GroupReplication"},"version":"8.0.35"}} + + Creation Timestamp: 2022-06-28T11:54:10Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: my-group + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + Name: my-group + Namespace: demo + Spec: + Client Config: + Service: + Name: my-group + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(my-group.demo.svc:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.21 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.21 + Secret: + Name: my-group-auth + Type: kubedb.com/mysql + Version: 8.0.35 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 1m Kubedb operator Successfully created governing service + Normal Successful 1m Kubedb operator Successfully created service for primary/standalone + Normal Successful 1m Kubedb operator Successfully created service for secondary replicas + Normal Successful 1m Kubedb operator Successfully created database auth secret + Normal Successful 1m Kubedb 
operator Successfully created StatefulSet + Normal Successful 1m Kubedb operator Successfully created MySQL + Normal Successful 1m Kubedb operator Successfully created appbinding + + +$ kubectl get statefulset -n demo +NAME READY AGE +my-group 3/3 3m47s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +data-my-group-0 Bound pvc-4f8538f6-a6ce-4233-b533-8566852f5b98 1Gi RWO standard 4m16s +data-my-group-1 Bound pvc-8823d3ad-d614-4172-89ac-c2284a17f502 1Gi RWO standard 4m11s +data-my-group-2 Bound pvc-94f1c312-50e3-41e1-94a8-a820be0abc08 1Gi RWO standard 4m7s + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-4f8538f6-a6ce-4233-b533-8566852f5b98 1Gi RWO Delete Bound demo/data-my-group-0 standard 4m39s +pvc-8823d3ad-d614-4172-89ac-c2284a17f502 1Gi RWO Delete Bound demo/data-my-group-1 standard 4m35s +pvc-94f1c312-50e3-41e1-94a8-a820be0abc08 1Gi RWO Delete Bound demo/data-my-group-2 standard 4m31s + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-group ClusterIP 10.96.223.45 3306/TCP 5m13s +my-group-pods ClusterIP None 3306/TCP 5m13s +my-group-standby ClusterIP 10.96.70.224 3306/TCP 5m13s + +``` + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. Run the following command to see the modified `MySQL` object: + +```yaml +$ kubectl get my -n demo my-group -o yaml | kubectl neat +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + authSecret: + name: my-group-auth + podTemplate: + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: my-group + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: my-group + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: my-group + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: WipeOut + topology: + group: + name: dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b + mode: GroupReplication + version: 8.0.35 +status: + observedGeneration: 2 + phase: Running +``` + +## Connect with MySQL database + +KubeDB operator has created a new Secret called `my-group-auth` **(format: {mysql-object-name}-auth)** for storing the password for the `mysql` superuser. This secret contains the **username** for the MySQL superuser under the `username` key and the **password** under the `password` key. + +If you want to use an existing secret, please specify it when creating the MySQL object using `spec.authSecret.name`. While creating this secret manually, make sure it contains the two keys `username` and `password`, and use `root` as the value of `username`. For more details see [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specdatabasesecret).
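+ +For illustration, one way to create such a custom auth secret before deploying the `MySQL` object could look like the following (the secret name `my-custom-auth` and the password here are hypothetical): + +```bash +# hypothetical example: create a basic-auth secret with the required username/password keys +$ kubectl create secret generic my-custom-auth -n demo \ + --type=kubernetes.io/basic-auth \ + --from-literal=username=root \ + --from-literal=password='MyS3curePa55word' +secret/my-custom-auth created +``` + +You would then reference it from `spec.authSecret.name` in the `MySQL` object.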
+ +Now, you can connect to this database from your terminal using the `mysql` user and password. + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +d)q2MVmJK$Oex=mW +``` + +The operator creates a group according to the newly created `MySQL` object. This group has 3 members (one primary and two secondaries). + +You can connect to any of these group members. In that case, you just need to specify the host name of that member Pod (either the Pod IP or the fully-qualified domain name of the Pod derived from the governing service named `{mysql-object-name}-pods`) with the `--host` flag. + +```bash +# first, list the mysql pods +$ kubectl get pods -n demo -l app.kubernetes.io/instance=my-group +NAME READY STATUS RESTARTS AGE +my-group-0 2/2 Running 0 8m23s +my-group-1 2/2 Running 0 8m18s +my-group-2 2/2 Running 0 8m14s + + +# get the governing service +$ kubectl get service my-group-pods -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-group-pods ClusterIP None 3306/TCP 8m49s + +# list the pods with PodIP +$ kubectl get pods -n demo -l app.kubernetes.io/instance=my-group -o jsonpath='{range.items[*]}{.metadata.name} ........... {.status.podIP} ............ {.metadata.name}.my-group-pods.{.metadata.namespace}{"\n"}{end}' +my-group-0 ........... 10.244.0.44 ............ my-group-0.my-group-pods.demo +my-group-1 ........... 10.244.0.46 ............ my-group-1.my-group-pods.demo +my-group-2 ........... 10.244.0.48 ............ my-group-2.my-group-pods.demo + +``` + +Now you can connect to these databases using the above info. Ignore the warning message; it appears because we are passing the password on the command line. + +```bash +# connect to the 1st server +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ + +# connect to the 2nd server +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-1.my-group-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ + +# connect to the 3rd server +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-2.my-group-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ +``` + +## Check the Group Status + +Now, you are ready to check the status of the newly created group. Connect and run the following commands from any of the hosts; you will get the same results. + +```bash +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "show status like '%primary%'" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----------------------------------+--------------------------------------+ +| Variable_name | Value | ++----------------------------------+--------------------------------------+ +| group_replication_primary_member | 1ace16b5-f6d9-11ec-9a26-9ae7d6def698 | ++----------------------------------+--------------------------------------+ + +``` + +The value **1ace16b5-f6d9-11ec-9a26-9ae7d6def698** in the above table is the ID of the primary member of the group.
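+ +Since the status variable above only reports the member ID, it can be handy to resolve the primary's host name directly from the `performance_schema.replication_group_members` table queried below (a small convenience query, not part of the original walkthrough): + +```bash +# look up the host name of the current PRIMARY member +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "SELECT MEMBER_HOST FROM performance_schema.replication_group_members WHERE MEMBER_ROLE = 'PRIMARY';" +```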
+ +```bash +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | 13aad5a5-f6d9-11ec-87bb-96e838330519 | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 1739589f-f6d9-11ec-956c-c2c213efafa8 | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 1ace16b5-f6d9-11ec-9a26-9ae7d6def698 | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | XCom | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + +``` + +## Data Availability + +In a MySQL group, only the primary member can write; the secondaries cannot. But you can read data from any member. In this tutorial, we will insert data through the primary, and then we will check whether we can read that data back from the other members. + +> Read the comments on the following commands. They contain the instructions and explanations of the commands. + +```bash +# create a database on primary +$ kubectl exec -it -n demo my-group-0 -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "CREATE DATABASE playground;" +mysql: [Warning] Using a password on the command line interface can be insecure. + +# create a table +$ kubectl exec -it -n demo my-group-0 -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));" +mysql: [Warning] Using a password on the command line interface can be insecure. + + +# insert a row +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');" +mysql: [Warning] Using a password on the command line interface can be insecure. + +# read from primary +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` +In the previous step, we inserted data into the primary pod. In the next step, we will read from the secondary pods to verify that the data has been successfully replicated to them.
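+ +Before reading from the secondaries, you can optionally confirm that the members are in sync by comparing their executed GTID sets; this illustrative check (not part of the original walkthrough) should return an identical set on every member: + +```bash +# compare the executed GTID set of a secondary against the primary's +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-1.my-group-pods.demo -e "SELECT @@gtid_executed;" +```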
+```bash +# read from secondary-1 +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-1.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ + +# read from secondary-2 +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-2.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` + +## Write on Secondary Should Fail + +Only the primary member has write permission; no secondary can write data. + +```bash +# try to write on secondary-1 +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-1.my-group-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 + +# try to write on secondary-2 +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-2.my-group-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 +``` + +## Automatic Failover + +To test automatic failover, we will force the primary Pod to restart. Since the primary member (`Pod`) becomes unavailable, the rest of the members will elect a new primary for the group. When the old primary comes back, it will join the group as a secondary member. + +> Read the comments on the following commands. They contain the instructions and explanations of the commands. + +```bash +# delete the primary Pod my-group-0 +$ kubectl delete pod my-group-0 -n demo +pod "my-group-0" deleted + +# check the new primary ID +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "show status like '%primary%'" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----------------------------------+--------------------------------------+ +| Variable_name | Value | ++----------------------------------+--------------------------------------+ +| group_replication_primary_member | 1739589f-f6d9-11ec-956c-c2c213efafa8 | ++----------------------------------+--------------------------------------+ + + +# now check the group status +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure.
++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | 13aad5a5-f6d9-11ec-87bb-96e838330519 | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 1739589f-f6d9-11ec-956c-c2c213efafa8 | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | XCom | +| group_replication_applier | 1ace16b5-f6d9-11ec-9a26-9ae7d6def698 | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + + +# read data from new primary my-group-1.my-group-pods.demo +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-1.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` +Now let's read the data from the secondary pods to see if it is consistent. +```bash +# read data from secondary-1 my-group-0.my-group-pods.demo +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-0.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ + +# read data from secondary-2 my-group-2.my-group-pods.demo +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='d)q2MVmJK$Oex=mW' --host=my-group-2.my-group-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` + +## Cleaning up + +Clean up what you created in this tutorial. + +```bash +kubectl delete -n demo my/my-group +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Detail concepts of [MySQLDBVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/yamls/group-replication.yaml new file mode 100644 index 0000000000..6e237033a0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/group-replication/yamls/group-replication.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/index.md b/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/index.md new file mode 100644 index 0000000000..fdb038abea --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/index.md @@ -0,0 +1,601 @@ +--- +title: MySQL Innodb Cluster Guide +menu: + docs_v2024.1.31: + identifier: guides-mysql-clustering-innodb-cluster + name: MySQL Innodb Cluster Guide + parent: guides-mysql-clustering + weight: 22 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - MySQL Innodb Cluster + +This tutorial will show you how to use KubeDB to provision a MySQL InnoDB Cluster in single-primary mode. + +## Before You Begin + +Before proceeding: +- InnoDB Cluster itself uses MySQL Group Replication under the hood. +- Read [mysql group replication concept](/docs/v2024.1.31/guides/mysql/clustering/overview/) to learn about MySQL Group Replication. +- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + + + +## Deploy MySQL Innodb Cluster + +To deploy a MySQL InnoDB Cluster, specify the `spec.topology` field in the `MySQL` CRD. + +The following is an example `MySQL` object which creates a MySQL InnoDB Cluster with three members (one primary member and two secondary members).
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: innodb + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/innodb-cluster/yamls/innodb.yaml +mysql.kubedb.com/innodb created +``` + +Here, + +- `spec.topology` describes the clustering configuration for MySQL. +- `spec.topology.mode` specifies the mode for the MySQL cluster. Here we have used `InnoDBCluster` to tell the operator that we want to deploy a MySQL InnoDB Cluster. +- `spec.topology.innoDBCluster` contains the InnoDB Cluster info. An InnoDB Cluster comes with a MySQL Router that acts as a load balancer. +- `spec.topology.innoDBCluster.router.replicas` specifies the number of replicas for the InnoDB Cluster router. +- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests. + +KubeDB operator watches for `MySQL` objects using Kubernetes API. When a `MySQL` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching MySQL object name. KubeDB operator will also create a governing service for the StatefulSet with the name `{mysql-object-name}-pods`. + +```bash +$ kubectl dba describe my -n demo innodb +Name: innodb +Namespace: demo +CreationTimestamp: Tue, 15 Nov 2022 15:14:42 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"innodb","namespace":"demo"},"spec":{"replicas":3,"storage":{"ac...
+Replicas: 3 total +Status: Provisioning +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: innodb + CreationTimestamp: Tue, 15 Nov 2022 15:14:42 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=innodb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + mysql.kubedb.com/component=database + Annotations: + Replicas: 824641134776 desired | 1 total + Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: innodb + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=innodb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + mysql.kubedb.com/component=database + Annotations: + Type: ClusterIP + IP: 10.96.244.213 + Port: primary 3306/TCP + TargetPort: rw/TCP + Endpoints: + +Service: + Name: innodb-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=innodb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + mysql.kubedb.com/component=database + Annotations: + Type: ClusterIP + IP: None + Port: db 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.26:3306 + +Service: + Name: innodb-standby + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=innodb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + mysql.kubedb.com/component=database + Annotations: + Type: ClusterIP + IP: 10.96.146.147 + Port: standby 3306/TCP + TargetPort: ro/TCP + Endpoints: + +Auth Secret: + Name: innodb-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=innodb + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + mysql.kubedb.com/component=database + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"innodb","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","topology":{"innoDBCluster":{"router":{"replicas":1}},"mode":"InnoDBCluster"},"version":"8.0.31-innodb"}} + + Creation Timestamp: 2022-11-15T09:14:42Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: innodb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + mysql.kubedb.com/component: database + Name: innodb + Namespace: demo + Spec: + Client Config: + Service: + Name: innodb + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(innodb.demo.svc:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.21 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.21 + Secret: + Name: innodb-auth + Type: kubedb.com/mysql + Version: 8.0.35 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Phase Changed 27s MySQL operator phase changed from to Provisioning reason: + Normal Successful 27s MySQL operator Successfully created governing service + Normal Successful 27s MySQL operator Successfully created service for 
primary/standalone + Normal Successful 27s MySQL operator Successfully created service for secondary replicas + Normal Successful 27s MySQL operator Successfully created database auth secret + Normal Successful 27s MySQL operator Successfully created StatefulSet + Normal Successful 27s MySQL operator successfully patched created StatefulSet innodb-router + Normal Successful 27s MySQL operator Successfully created MySQL + Normal Successful 27s MySQL operator Successfully created appbinding + +$ kubectl get statefulset -n demo +NAME READY AGE +innodb 3/3 2m17s +innodb-router 1/1 2m17s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +data-innodb-0 Bound pvc-6f7f8ebd-0b56-45fb-b91a-fe133bfae594 1Gi RWO standard 2m47s +data-innodb-1 Bound pvc-16f9d6df-ce46-49da-9720-415d7f7d8b69 1Gi RWO standard 113s +data-innodb-2 Bound pvc-8cfcb761-eb63-4a12-bc7e-5d86f727330e 1Gi RWO standard 88s + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-6f7f8ebd-0b56-45fb-b91a-fe133bfae594 1Gi RWO Delete Bound demo/data-innodb-0 standard 3m50s +pvc-16f9d6df-ce46-49da-9720-415d7f7d8b69 1Gi RWO Delete Bound demo/data-innodb-1 standard 2m38s +pvc-8cfcb761-eb63-4a12-bc7e-5d86f727330e 1Gi RWO Delete Bound demo/data-innodb-2 standard 2m32s + + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +innodb ClusterIP 10.96.244.213 3306/TCP 5m23s +innodb-pods ClusterIP None 3306/TCP 5m23s +innodb-standby ClusterIP 10.96.146.147 3306/TCP 5m23s +``` + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. Run the following command to see the modified `MySQL` object: + +```yaml +$ kubectl get my -n demo innodb -o yaml | kubectl neat +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: innodb + namespace: demo +spec: + authSecret: + name: innodb-auth + podTemplate: + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: innodb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: innodb + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: innodb + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: WipeOut + topology: + innoDBCluster: + mode: Single-Primary + router: + replicas: 1 + mode: InnoDBCluster + version: 8.0.31-innodb +status: + observedGeneration: 2 + phase: Running +``` + +## Connect with MySQL database + +KubeDB operator has created a new Secret called `innodb-auth` **(format: {mysql-object-name}-auth)** for storing the password for the `mysql` superuser. This secret contains the **username** for the MySQL superuser under the `username` key and the **password** under the `password` key. + +If you want to use an existing secret, please specify it when creating the MySQL object using `spec.authSecret.name`.
While creating this secret manually, make sure it contains the two keys `username` and `password`, and use `root` as the value of `username`. For more details see [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specdatabasesecret). + +Now, you can connect to this database from your terminal using the `mysql` user and password. + +```bash +$ kubectl get secrets -n demo innodb-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo innodb-auth -o jsonpath='{.data.\password}' | base64 -d +ny5jSirIzVtWDcZ7 +``` + +The operator creates a cluster according to the newly created `MySQL` object. This cluster has 3 members (one primary and two secondaries). + +You can connect to any of these cluster members. In that case, you just need to specify the host name of that member Pod (either the Pod IP or the fully-qualified domain name of the Pod derived from the governing service named `{mysql-object-name}-pods`) with the `--host` flag. + +```bash +# first, list the mysql pods +$ kubectl get pods -n demo -l app.kubernetes.io/instance=innodb +NAME READY STATUS RESTARTS AGE +innodb-0 2/2 Running 0 15m +innodb-1 2/2 Running 0 14m +innodb-2 2/2 Running 0 14m +innodb-router-0 1/1 Running 0 15m + + + +# get the governing service +$ kubectl get service innodb-pods -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +innodb-pods ClusterIP None 3306/TCP 16m + +# list the pods with PodIP +$ kubectl get pods -n demo -l app.kubernetes.io/instance=innodb -o jsonpath='{range.items[*]}{.metadata.name} ........... {.status.podIP} ............ {.metadata.name}.innodb-pods.{.metadata.namespace}{"\n"}{end}' +innodb-0 ........... 10.244.0.26 ............ innodb-0.innodb-pods.demo +innodb-1 ........... 10.244.0.28 ............ innodb-1.innodb-pods.demo +innodb-2 ........... 10.244.0.30 ............ innodb-2.innodb-pods.demo + +``` + +Now you can connect to this database using the above info. Ignore the warning message; it appears because we are passing the password on the command line. + +```bash +# connect to the 1st server +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ + +# connect to the 2nd server +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-1.innodb-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ + +# connect to the 3rd server +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-2.innodb-pods.demo -e "select 1;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---+ +| 1 | ++---+ +| 1 | ++---+ +``` + +## Check the Innodb Cluster status + +A key advantage of InnoDB Cluster is that it ships with an admin shell (`mysqlsh`) from which you can call the MySQL AdminAPI to configure and inspect the cluster. +Let's exec into one of the pods to check the cluster status.
+ + +```bash +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysqlsh -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo + + MySQL innodb-0.innodb-pods.demo:33060+ ssl JS > dba.getCluster().status() +{ + "clusterName": "innodb", + "defaultReplicaSet": { + "name": "default", + "primary": "innodb-0.innodb-pods.demo.svc:3306", + "ssl": "REQUIRED", + "status": "OK", + "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", + "topology": { + "innodb-0.innodb-pods.demo.svc:3306": { + "address": "innodb-0.innodb-pods.demo.svc:3306", + "memberRole": "PRIMARY", + "mode": "R/W", + "readReplicas": {}, + "replicationLag": "applier_queue_applied", + "role": "HA", + "status": "ONLINE", + "version": "8.0.35" + }, + "innodb-1.innodb-pods.demo.svc:3306": { + "address": "innodb-1.innodb-pods.demo.svc:3306", + "memberRole": "SECONDARY", + "mode": "R/O", + "readReplicas": {}, + "replicationLag": "applier_queue_applied", + "role": "HA", + "status": "ONLINE", + "version": "8.0.35" + }, + "innodb-2.innodb-pods.demo.svc:3306": { + "address": "innodb-2.innodb-pods.demo.svc:3306", + "memberRole": "SECONDARY", + "mode": "R/O", + "readReplicas": {}, + "replicationLag": "applier_queue_applied", + "role": "HA", + "status": "ONLINE", + "version": "8.0.35" + } + }, + "topologyMode": "Single-Primary" + }, + "groupInformationSourceMember": "innodb-0.innodb-pods.demo.svc:3306" +} + + +``` + +## Data Availability + +In a MySQL cluster, only the primary member can write; the secondaries cannot. But you can read data from any member. In this tutorial, we will insert data through the primary, and then we will check whether we can read that data back from the other members. + +> Read the comments on the following commands. They contain the instructions and explanations of the commands. + +```bash +# create a database on primary +$ kubectl exec -it -n demo innodb-0 -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "CREATE DATABASE playground;" +mysql: [Warning] Using a password on the command line interface can be insecure. + +# create a table +$ kubectl exec -it -n demo innodb-0 -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));" +mysql: [Warning] Using a password on the command line interface can be insecure. + + +# insert a row +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');" +mysql: [Warning] Using a password on the command line interface can be insecure. + +# read from primary +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` +In the previous step, we inserted data into the primary pod. In the next step, we will read from the secondary pods to verify that the data has been successfully replicated to them.
+```bash +# read from secondary-1 +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-1.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ + +# read from secondary-2 +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-2.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` + +## Write on Secondary Should Fail + +Only the primary member has write permission; no secondary can write data. + +```bash +# try to write on secondary-1 +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-1.innodb-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 + +# try to write on secondary-2 +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-2.innodb-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 +``` + +## Automatic Failover + +To test automatic failover, we will force the primary Pod to restart. Since the primary member (`Pod`) becomes unavailable, the rest of the members will elect a new primary for the cluster. When the old primary comes back, it will join the cluster as a secondary member. + +> Read the comments on the following commands. They contain the instructions and explanations of the commands. + +```bash +# delete the primary Pod innodb-0 +$ kubectl delete pod innodb-0 -n demo +pod "innodb-0" deleted + +# check the new primary ID +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "show status like '%primary%'" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----------------------------------+--------------------------------------+ +| Variable_name | Value | ++----------------------------------+--------------------------------------+ +| group_replication_primary_member | 2b77185f-64c6-11ed-9621-e21f33a1cdb1 | ++----------------------------------+--------------------------------------+ + + +# now check the cluster status for underlying group replication +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure.
++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | 294f333c-64c6-11ed-9893-468480005d43 | innodb-0.innodb-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | MySQL | +| group_replication_applier | 2b77185f-64c6-11ed-9621-e21f33a1cdb1 | innodb-1.innodb-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL | +| group_replication_applier | 2f0da15c-64c6-11ed-951a-fa8d12ce91a2 | innodb-2.innodb-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | MySQL | ++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + + +# read data from new primary innodb-1.innodb-pods.demo +$ kubectl exec -it -n demo innodb-1 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-1.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` +Now let's read the data from the secondary pods to see if it is consistent. +```bash +# read data from secondary-1 innodb-0.innodb-pods.demo +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-0.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ + +# read data from secondary-2 innodb-2.innodb-pods.demo +$ kubectl exec -it -n demo innodb-0 -c mysql -- mysql -u root --password='ny5jSirIzVtWDcZ7' --host=innodb-2.innodb-pods.demo -e "SELECT * FROM playground.equipment;" +mysql: [Warning] Using a password on the command line interface can be insecure. ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +``` + +## Cleaning up + +Clean up what you created in this tutorial. + +```bash +kubectl delete -n demo my/innodb +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Detail concepts of [MySQLDBVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/yamls/innodb.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/yamls/innodb.yaml new file mode 100644 index 0000000000..16fae7df7f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/innodb-cluster/yamls/innodb.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: innodb + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/async-replication-diagram.png b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/async-replication-diagram.png new file mode 100644 index 0000000000..ae7ec5cdda Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/async-replication-diagram.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-3-server-group.png b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-3-server-group.png new file mode 100644 index 0000000000..695f66a879 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-3-server-group.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-replication-diagram.png b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-replication-diagram.png new file mode 100644 index 0000000000..2894b65525 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-replication-diagram.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/replicationarchitecturexample.png b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/replicationarchitecturexample.png new file mode 100644 index 0000000000..cf666aa978 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/replicationarchitecturexample.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/semisync-replication-diagram.png b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/semisync-replication-diagram.png new file mode 100644 index 0000000000..f3bf99141d Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/semisync-replication-diagram.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/yamls/group-replication.yaml new file mode 100644 index 0000000000..25d93bcfb2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/overview/images/yamls/group-replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/overview/index.md 
b/content/docs/v2024.1.31/guides/mysql/clustering/overview/index.md new file mode 100644 index 0000000000..6c1f636c7b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/overview/index.md @@ -0,0 +1,177 @@ +--- +title: MySQL Group Replication Overview +menu: + docs_v2024.1.31: + identifier: guides-mysql-clustering-overview + name: MySQL Group Replication Overview + parent: guides-mysql-clustering + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MySQL Group Replication + +Here we'll discuss some concepts about MySQL group replication. + +## So What is Replication + +Replication means copying data from the primary (the master) MySQL server to one or more secondary (the slave) MySQL servers, instead of storing it on only one server. One can use secondary servers for reads or administrative tasks. The following figure shows an example use case: + +![MySQL Replication](/docs/v2024.1.31/guides/mysql/clustering/overview/images/replicationarchitecturexample.png) + +## Primary-Secondary Replication + +It is a traditional asynchronous replication of MySQL servers in which there is a primary server and one or more secondary servers. + +After receiving a transaction, the primary - + +1. Executes the received transaction +2. Writes to the binary log with the modified data or the actual statement (based on row-based replication or statement-based replication) +3. Commits the change +4. Sends a response to the client application +5. Sends the records from the binary log to the relay logs on the secondaries + +Then, each of the secondaries - + +1. Re-executes (statement-based replication) or applies (row-based replication) the transaction +2. Writes to its binary log +3. Commits + +Here, the commit on the primary and the commits on the secondaries are all independent and asynchronous. + +See the following figure: + +![Primary-Secondary Replication](/docs/v2024.1.31/guides/mysql/clustering/overview/images/async-replication-diagram.png) + +Ref: [group-replication-primary-secondary-replication](https://dev.mysql.com/doc/refman/5.7/en/group-replication-primary-secondary-replication.html) + +## MySQL Semi synchronous Replication + +There is a semi-synchronous variant of the above asynchronous replication. It adds one additional synchronous step to the protocol. + +After receiving a transaction, the primary - + +1. Executes the received transaction +2. Writes to the binary log with the modified data or the actual statement (based on row-based replication or statement-based replication) +3. Sends the record from the binary log to the relay logs on the secondaries +4. Waits for an acknowledgment from the secondaries +5. Commits the transaction after getting the acknowledgment +6. Sends a response to the client application + +After returning its acknowledgment, each of the secondaries - + +1. Re-executes (statement-based replication) or applies (row-based replication) the transaction +2. Writes to its binary log +3. Commits + +Here, the commit on the primary depends on the acknowledgment from the secondaries, but the commits on the secondaries are independent of each other and of the commit on the primary.
+ +The following figure illustrates this. + +![MySQL Semisynchronous Replication](/docs/v2024.1.31/guides/mysql/clustering/overview/images/semisync-replication-diagram.png) + +Ref: [group-replication-primary-secondary-replication](https://dev.mysql.com/doc/refman/5.7/en/group-replication-primary-secondary-replication.html) + +## Group Replication + +In Group Replication, the servers keep strong coordination through message passing to build a fault-tolerant system. + +In a group, every server may execute transactions independently. Any read-write (RW) transaction is committed only if the group members approve it, but read-only (RO) transactions have no such restriction and so commit immediately. The server where a transaction executes sends the changed rows, along with their unique identifiers, to the other servers. Once all servers receive them, a global total order is established for that transaction, and all servers apply the changes. + +In case of a conflict (if concurrent transactions on more than one server update the same row), the _certification_ process detects it and the group follows the first-commit-wins rule. + +So, the whole process is as follows: + +The originating server - + +1. Executes a transaction +2. Sends a message to the group consisting of itself and other servers +3. Writes the transaction to its binary log +4. Commits it +5. Sends a response to the client application + +And the other servers - + +1. Write the transaction to their relay logs +2. Apply it +3. Write it to the binary log +4. Commit it + +> Steps 3 to 5 on the originating server, and all the steps on the other servers, are followed only if all servers have reached consensus and certified the transaction. + +![MySQL Group Replication Protocol](/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-replication-diagram.png) + +Ref: [group-replication](https://dev.mysql.com/doc/refman/5.7/en/group-replication-summary.html) + +According to Ramesh Sivaraman (QA Engineer) and Kenny Gryp (MySQL Practice Manager), Oracle MySQL developed Group Replication as a MySQL server plugin that provides distributed state machine replication with strong coordination among servers. Servers coordinate themselves automatically as long as they are part of the same replication group. Any server in the group can process updates. Conflicts are detected and handled automatically. There is a built-in membership service that keeps the view of the group consistent and available for all servers at any given point in time. Servers can leave and join the group, and the view will be updated accordingly. + +Groups can operate in single-primary mode, where only one server accepts updates at a time, or they can be deployed in multi-primary mode, where all servers can accept updates. Currently, we only provide single-primary mode support for MySQL Group Replication. + +A simple group architecture, where three servers s1, s2, and s3 are deployed as an interconnected group and clients communicate with each of the servers, is shown below: + +![3 Server Group](/docs/v2024.1.31/guides/mysql/clustering/overview/images/gr-3-server-group.png) + +Image ref: https://dev.mysql.com/doc/refman/5.7/en/images/gr-3-server-group.png + +### Services + +Group Replication builds on some services. + +#### Failure Detection + +When server A does not receive any message from server B for a given period, a timeout occurs and a suspicion is raised, indicating that server B may be dead. The failure detection mechanism is responsible for this whole process.
+ +More on this [here](https://dev.mysql.com/doc/refman/5.7/en/group-replication-failure-detection.html). + +#### Group Membership + +It is a built-in membership service that monitors the group. It defines the list of online servers (_view_), and thus the group has a consistent view of the actively participating members at any time. When servers leave or join the group, the view is reconfigured accordingly. + +See [here](https://dev.mysql.com/doc/refman/5.7/en/group-replication-group-membership.html) for more. + +#### Fault-tolerance + +MySQL Group Replication requires a majority of active servers to reach quorum and make a decision. Thus there is an impact on the number of failures that a group can tolerate. So, if the majority for `n` servers is `floor(n/2) + 1`, then we have a relation between the group size (n) and the number of tolerable failures (f): + +`n = 2 x f + 1` + +In practice, this means that to tolerate one failure the group must have three servers in it. As such, if one server fails, there are still two servers to form a majority (two out of three) and allow the system to continue to make decisions automatically and progress. However, if a second server fails _involuntarily_, then the group (with one server left) blocks, because there is no majority to reach a decision. + +The following is a small table illustrating the formula above. + +| Group Size | Majority | Instant Failures Tolerated | +| :--------: | :------: | :------------------------: | +| 1 | 1 | 0 | +| 2 | 2 | 0 | +| 3 | 2 | 1 | +| 4 | 3 | 1 | +| 5 | 3 | 2 | +| 6 | 4 | 2 | +| 7 | 4 | 3 | + +Ref: [group-replication-fault-tolerance](https://dev.mysql.com/doc/refman/5.7/en/group-replication-fault-tolerance.html) + +### Limitations + +There are some limitations in MySQL Group Replication that are listed [here](https://dev.mysql.com/doc/refman/5.7/en/group-replication-limitations.html). On top of that, though a MySQL group can operate in both single-primary and multi-primary modes, we have implemented only single-primary mode. The multi-primary mode will be added in the future. See the issue [MySQL Cluster](https://github.com/kubedb/project/issues/18). + +## Next Steps + +- [Deploy MySQL Group Replication](/docs/v2024.1.31/guides/mysql/clustering/group-replication/) using KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING) diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/index.md b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/index.md new file mode 100644 index 0000000000..95d268ee1c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/index.md @@ -0,0 +1,441 @@ +--- +title: MySQL Remote Replica Guide +menu: + docs_v2024.1.31: + identifier: guides-mysql-clustering-remote-replica + name: MySQL Remote Replica Guide + parent: guides-mysql-clustering + weight: 21 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - MySQL Remote Replica + +This tutorial will show you how to use KubeDB to provision a MySQL Remote Replica from a KubeDB-managed MySQL instance.
A remote replica can be used within the same Kubernetes cluster or across clusters.

## Before You Begin

Before proceeding:

- Read [mysql replication concept](/docs/v2024.1.31/guides/mysql/clustering/overview/) to learn about MySQL Replication.

- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:

  ```bash
  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: The yaml files used in this tutorial are stored in [docs/guides/mysql/clustering/remote-replica/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Remote Replica

A remote replica allows you to replicate data from a KubeDB managed MySQL server to a read-only MySQL server. The whole process uses MySQL asynchronous replication to keep the replica up to date with the source server.
A remote replica is useful for scaling read-intensive workloads, offloading BI and analytical workloads, and geo-replication.

## Deploy MySQL Server

The following is an example `MySQL` object which creates a MySQL Group Replication instance. We will deploy a TLS-secured instance since we are planning to replicate across clusters.

Let's start by creating a secret to access the database.

### Create Issuer/ClusterIssuer

Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,

- Start off by generating our ca-certificate using openssl,

```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mysql/O=kubedb"
```

- Create a secret using the certificate files we have just generated,

```bash
kubectl create secret tls my-ca \
  --cert=ca.crt \
  --key=ca.key \
  --namespace=demo
secret/my-ca created
```
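Before moving on, you can optionally verify what was just created. These checks are illustrative additions, not part of the original steps; they assume you are still in the directory holding `ca.crt`:

```bash
# confirm the TLS secret exists in the demo namespace
kubectl get secret my-ca -n demo

# inspect the CA certificate's subject and validity window
openssl x509 -in ca.crt -noout -subject -dates
```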
Now, we are going to create an `Issuer` using the `my-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create,

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: mysql-issuer
  namespace: demo
spec:
  ca:
    secretName: my-ca
```

Let’s create the `Issuer` cr we have shown above,

```bash
kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/issuer.yaml
issuer.cert-manager.io/mysql-issuer created
```

### Create Auth Secret

```yaml
apiVersion: v1
data:
  password: cGFzcw==
  username: cm9vdA==
kind: Secret
metadata:
  name: mysql-singapore-auth
  namespace: demo
type: kubernetes.io/basic-auth
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/mysql-singapore-auth.yaml
secret/mysql-singapore-auth created
```

## Deploy MySQL with TLS/SSL configuration

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: mysql-singapore
  namespace: demo
spec:
  authSecret:
    name: mysql-singapore-auth
  version: "8.0.35"
  replicas: 3
  topology:
    mode: GroupReplication
  storageType: Durable
  storage:
    storageClassName: "linode-block-storage"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  requireSSL: true
  tls:
    issuerRef:
      apiGroup: cert-manager.io
      kind: Issuer
      name: mysql-issuer
    certificates:
      - alias: server
        subject:
          organizations:
            - kubedb:server
        dnsNames:
          - localhost
        ipAddresses:
          - "127.0.0.1"
  terminationPolicy: WipeOut
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/mysql-singapore.yaml
mysql.kubedb.com/mysql-singapore created
```

KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created.

```bash
$ kubectl get mysql -n demo
NAME              VERSION   STATUS   AGE
mysql-singapore   8.0.35    Ready    22h
```

## Connect with MySQL database

Now, you can connect to this database from your terminal using the `mysql` user and password.

```bash
$ kubectl get secrets -n demo mysql-singapore-auth -o jsonpath='{.data.\username}' | base64 -d
root

$ kubectl get secrets -n demo mysql-singapore-auth -o jsonpath='{.data.\password}' | base64 -d
pass
```

The operator creates a MySQL Group Replication cluster for the newly created `MySQL` object.

Now you can connect to the database using the above info. Ignore the warning message; it appears because the password is passed on the command line.

## Data Insertion

Let's insert some data into the newly created MySQL server. We can use the primary service or the governing service to connect with the database.

> Read the comment written for the following commands. They contain the instructions and explanations of the commands.

```bash
# create a database on primary
$ kubectl exec -it -n demo mysql-singapore-0 -- mysql -u root --password='pass' --host=mysql-singapore-0.mysql-singapore-pods.demo -e "CREATE DATABASE playground;"
mysql: [Warning] Using a password on the command line interface can be insecure.

# create a table
$ kubectl exec -it -n demo mysql-singapore-0 -- mysql -u root --password='pass' --host=mysql-singapore-0.mysql-singapore-pods.demo -e "CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));"
mysql: [Warning] Using a password on the command line interface can be insecure.
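
# (optional, illustrative) verify that the table was created; output omitted
$ kubectl exec -it -n demo mysql-singapore-0 -- mysql -u root --password='pass' --host=mysql-singapore-0.mysql-singapore-pods.demo -e "SHOW TABLES IN playground;"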

# insert a row
$ kubectl exec -it -n demo mysql-singapore-0 -c mysql -- mysql -u root --password='pass' --host=mysql-singapore-0.mysql-singapore-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');"
mysql: [Warning] Using a password on the command line interface can be insecure.

# read from primary
$ kubectl exec -it -n demo mysql-singapore-0 -c mysql -- mysql -u root --password='pass' --host=mysql-singapore-0.mysql-singapore-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
```

## Exposing to the Outside World

For now, we will expose our MySQL instance to the outside world using an ingress.

```bash
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
  --namespace demo --create-namespace \
  --set tcp.3306="demo/mysql-singapore:3306"
```

Let's apply the ingress YAML that refers to the `mysql-singapore` service,

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysql-singapore
  namespace: demo
spec:
  ingressClassName: nginx
  rules:
    - host: mysql-singapore.something.org
      http:
        paths:
          - backend:
              service:
                name: mysql-singapore
                port:
                  number: 3306
            path: /
            pathType: Prefix
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/mysql-ingress.yaml
ingress.networking.k8s.io/mysql-singapore created
$ kubectl get ingress -n demo
NAME              CLASS   HOSTS                           ADDRESS          PORTS   AGE
mysql-singapore   nginx   mysql-singapore.something.org   172.104.37.147   80      22h
```

Now we will be able to communicate with our source database from another cluster.

## Prepare for Remote Replica

We will use the [kubedb plugin](/docs/v2024.1.31/setup/README) to generate the configuration for the remote replica. It will create the AppBinding and the necessary secrets to connect with the source server.

```bash
$ kubectl dba remote-config mysql -n demo mysql-singapore -uremote -ppass -d 172.104.37.147 -y
/home/mehedi/go/src/kubedb.dev/yamls/mysql/mysql-singapore-remote-config.yaml
```

## Create Remote Replica

We have prepared another cluster in the London region for replicating across clusters. Follow the installation instructions [above](/docs/v2024.1.31/README).
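Before creating the replica, it is worth confirming that the London cluster can actually reach the source through the ingress. The following check is a sketch, not part of the original steps: it assumes a MySQL 8.0 client is available in the remote cluster, and it uses the ingress address and the `remote` replication user generated above (TLS is requested because the source requires SSL):

```bash
# from a host or pod in the remote (london) cluster, check connectivity to the exposed source
mysql -h 172.104.37.147 -P 3306 -u remote -ppass --ssl-mode=REQUIRED -e "SELECT @@hostname;"
```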

### Create sourceRef

We will apply the config generated by the KubeDB plugin to create the source ref and the secrets for it,

```bash
$ kubectl apply -f /home/mehedi/go/src/kubedb.dev/yamls/bank_abc/mysql/mysql-singapore-remote-config.yaml

secret/mysql-singapore-remote-replica-auth created
secret/mysql-singapore-client-cert-remote created
appbinding.appcatalog.appscode.com/mysql-singapore created

$ kubectl get appbinding -n demo
NAME              TYPE               VERSION   AGE
mysql-singapore   kubedb.com/mysql   8.0.35    4m17s
```

### Create remote replica auth

We will need to use the same auth secret for the remote replica as well, since operations like clone also replicate the auth secrets from the source server.

```yaml
apiVersion: v1
data:
  password: cGFzcw==
  username: cm9vdA==
kind: Secret
metadata:
  name: mysql-london-auth
  namespace: demo
type: kubernetes.io/basic-auth
```

```bash
kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/mysql-london-auth.yaml
```

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: mysql-london
  namespace: demo
spec:
  authSecret:
    name: mysql-london-auth
  healthChecker:
    failureThreshold: 1
    periodSeconds: 10
    timeoutSeconds: 10
    disableWriteCheck: true
  version: "8.0.35"
  replicas: 1
  topology:
    mode: RemoteReplica
    remoteReplica:
      sourceRef:
        name: mysql-singapore
        namespace: demo
  storageType: Durable
  storage:
    storageClassName: "linode-block-storage"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  terminationPolicy: WipeOut
```

Here,

- `spec.topology` contains the information about the MySQL server topology.
- `spec.topology.mode` specifies that this server will run as a `RemoteReplica`.
- `spec.topology.remoteReplica.sourceRef` refers to the source to replicate from, i.e. the MySQL instance we created previously.
- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MySQL CR. `WipeOut` means that the database will be deleted without restrictions. It can also be "Halt", "Delete" and "DoNotTerminate". Learn more about these [HERE](https://kubedb.com/docs/latest/guides/mysql/concepts/database/#specterminationpolicy).

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/remote-replica/yamls/mysql-london.yaml
mysql.kubedb.com/mysql-london created
```

Now KubeDB will provision a remote replica from the source MySQL instance. Let's check out the StatefulSet, PVC, PV and services associated with it.

KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified `MySQL` object:

```bash
$ kubectl get mysql -n demo
NAME           VERSION   STATUS   AGE
mysql-london   8.0.35    Ready    7m17s
```

## Validate Remote Replica

Since both the source and the replica database are in the Ready state, we can validate that the remote replica is working properly by checking the replication status,

```bash
$ kubectl exec -it -n demo mysql-london-0 -c mysql -- mysql -u root --password='pass' --host=mysql-london-0.mysql-london-pods.demo -e "show slave status\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1.
 row ***************************
             Slave_IO_State: Waiting for source to send event
                Master_Host: 172.104.37.147
                Master_User: remote
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: binlog.000001
        Read_Master_Log_Pos: 4698131
             Relay_Log_File: mysql-london-0-relay-bin.000007
              Relay_Log_Pos: 1415154
      Relay_Master_Log_File: binlog.000001
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
          ....
```

## Read Data

In the previous step, we inserted data into the primary pod. In the next step, we will read from the replica to determine whether the data has been successfully copied there.

```bash
# read from the replica
$ kubectl exec -it -n demo mysql-london-0 -c mysql -- mysql -u root --password='pass' --host=mysql-london-0.mysql-london-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
```

## Write on Secondary Should Fail

Only the primary member has write permission; no secondary can write data.

## Automatic Failover

To test automatic failover, we will force the primary Pod to restart. Since the primary member (`Pod`) becomes unavailable, the rest of the members will elect a new primary for the group. When the old primary comes back, it will join the group as a secondary member.

> Read the comment written for the following commands. They contain the instructions and explanations of the commands.

```bash
# delete the primary Pod mysql-london-0
$ kubectl delete pod mysql-london-0 -n demo
pod "mysql-london-0" deleted

# check the replication status again
$ kubectl exec -it -n demo mysql-london-0 -c mysql -- mysql -u root --password='pass' --host=mysql-london-0.mysql-london-pods.demo -e "show slave status\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
             Slave_IO_State: Waiting for source to send event
                Master_Host: mysql.demo.svc
                Master_User: root
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: binlog.000002
        Read_Master_Log_Pos: 214789
             Relay_Log_File: mysql-london-0-relay-bin.000002
              Relay_Log_Pos: 186366
      Relay_Master_Log_File: binlog.000002
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
          ...

# read data after recovery
$ kubectl exec -it -n demo mysql-london-0 -c mysql -- mysql -u root --password='pass' --host=mysql-london-0.mysql-london-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  7 | slide |     2 | blue  |
+----+-------+-------+-------+
```

## Cleaning up

Clean what you created in this tutorial.

```bash
kubectl delete -n demo my/mysql-singapore
kubectl delete -n demo my/mysql-london
kubectl delete ns demo
```

## Next Steps

- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
- Detail concepts of [MySQLDBVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/issuer.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/issuer.yaml new file mode 100644 index 0000000000..9ec9f3bbd8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mysql-issuer + namespace: demo +spec: + ca: + secretName: my-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-ingress.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-ingress.yaml new file mode 100644 index 0000000000..922c36e4a3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-ingress.yaml @@ -0,0 +1,18 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: mysql-singapore + namespace: demo +spec: + ingressClassName: nginx + rules: + - host: mysql-singapore.something.org + http: + paths: + - backend: + service: + name: mysql-singapore + port: + number: 3306 + path: / + pathType: Prefix \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london-auth.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london-auth.yaml new file mode 100644 index 0000000000..0ce0495e15 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london-auth.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: cGFzcw== + username: cm9vdA== +kind: Secret +metadata: + name: mysql-london-auth + namespace: demo +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london.yaml new file mode 100644 index 0000000000..38da446463 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-london.yaml @@ -0,0 +1,30 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-london + namespace: demo +spec: + authSecret: + name: mysql-london-auth + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + disableWriteCheck: true + version: "8.0.35" + replicas: 1 + topology: + mode: RemoteReplica + remoteReplica: + sourceRef: + name: mysql-singapore + namespace: demo + storageType: Durable + storage: + storageClassName: "linode-block-storage" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore-auth.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore-auth.yaml new file mode 100644 index 0000000000..153c2251c0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore-auth.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: cGFzcw== + username: cm9vdA== +kind: Secret +metadata: + name: mysql-singapore-auth + namespace: demo +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore.yaml new file mode 100644 index 0000000000..6d51b3a5ed 
--- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/remote-replica/yamls/mysql-singapore.yaml @@ -0,0 +1,36 @@

apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: mysql-singapore
  namespace: demo
spec:
  authSecret:
    name: mysql-singapore-auth
  version: "8.0.35"
  replicas: 3
  topology:
    mode: GroupReplication
  storageType: Durable
  storage:
    storageClassName: "linode-block-storage"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  requireSSL: true
  tls:
    issuerRef:
      apiGroup: cert-manager.io
      kind: Issuer
      name: mysql-issuer
    certificates:
      - alias: server
        subject:
          organizations:
            - kubedb:server
        dnsNames:
          - localhost
        ipAddresses:
          - "127.0.0.1"
  terminationPolicy: WipeOut
\ No newline at end of file

diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/index.md b/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/index.md new file mode 100644 index 0000000000..2878549e13 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/index.md @@ -0,0 +1,602 @@

---
title: MySQL Semi-synchronous cluster guide
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-clustering-semi-sync
    name: MySQL Semi-sync cluster Guide
    parent: guides-mysql-clustering
    weight: 23
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# KubeDB - MySQL Semi-sync cluster

This tutorial will show you how to use KubeDB to provision a MySQL semi-synchronous cluster.

## Before You Begin

Before proceeding:

- Read [mysql semi synchronous concept](/docs/v2024.1.31/guides/mysql/clustering/overview/) to learn about MySQL semi-sync clustering.

- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:

  ```bash
  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: The yaml files used in this tutorial are stored in [docs/guides/mysql/clustering/semi-sync/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/clustering/semi-sync/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Deploy MySQL Semi-sync Cluster

To deploy a single-primary MySQL semi-synchronous cluster, specify the `spec.topology` field in the `MySQL` CRD.

The following is an example `MySQL` object which creates a semi-synchronous cluster with three members (one primary member and two secondary members).

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: semi-sync-mysql
  namespace: demo
spec:
  version: "8.0.35"
  replicas: 3
  topology:
    mode: SemiSync
    semiSync:
      sourceWaitForReplicaCount: 1
      sourceTimeout: 23h
      errantTransactionRecoveryPolicy: PseudoTransaction
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: WipeOut
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/clustering/semi-sync/yamls/semi-sync.yaml
mysql.kubedb.com/semi-sync-mysql created
```

Here,

- `spec.topology` tells about the clustering configuration for MySQL.
- `spec.topology.mode` specifies the mode for the MySQL cluster. Here we have used `SemiSync` to tell the operator that we want to deploy a MySQL semi-synchronous cluster.
- `spec.topology.semiSync` contains the semi-synchronous cluster info.
- `spec.topology.semiSync.sourceWaitForReplicaCount` specifies the number of replicas the semi-sync primary waits for before committing a transaction.
- `spec.topology.semiSync.sourceTimeout` specifies how long the primary waits for a replica acknowledgement before falling back to asynchronous replication.
- `spec.topology.semiSync.errantTransactionRecoveryPolicy` specifies how to recover from errant transactions, which can occur during a primary failover. KubeDB supports two types of recovery: `PseudoTransaction` and `Clone`.
- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.

KubeDB operator watches for `MySQL` objects using Kubernetes API. When a `MySQL` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching MySQL object name. KubeDB operator will also create a governing service for the StatefulSet with the name `{mysql-object-name}-pods`.

```bash
$ kubectl dba describe my -n demo semi-sync-mysql
Name:               semi-sync-mysql
Namespace:          demo
CreationTimestamp:  Wed, 16 Nov 2022 11:45:53 +0600
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"semi-sync-mysql","namespace":"demo"},"spec":{"replicas":3,"stor...
+Replicas: 3 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: semi-sync-mysql + CreationTimestamp: Wed, 16 Nov 2022 11:45:53 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=semi-sync-mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Replicas: 824640444456 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: semi-sync-mysql + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=semi-sync-mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.121.252 + Port: primary 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.18:3306 + +Service: + Name: semi-sync-mysql-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=semi-sync-mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.18:3306,10.244.0.20:3306,10.244.0.22:3306 + Port: coordinator 2380/TCP + TargetPort: coordinator/TCP + Endpoints: 10.244.0.18:2380,10.244.0.20:2380,10.244.0.22:2380 + Port: coordinatclient 2379/TCP + TargetPort: coordinatclient/TCP + Endpoints: 10.244.0.18:2379,10.244.0.20:2379,10.244.0.22:2379 + +Service: + Name: semi-sync-mysql-standby + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=semi-sync-mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.133.61 + Port: standby 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.20:3306,10.244.0.22:3306 + +Auth Secret: + Name: semi-sync-mysql-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=semi-sync-mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"semi-sync-mysql","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","topology":{"mode":"SemiSync","semiSync":{"errantTransactionRecoveryPolicy":"PseudoTransaction","sourceTimeout":"23h","sourceWaitForReplicaCount":1}},"version":"8.0.35"}} + + Creation Timestamp: 2022-11-16T05:45:53Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: semi-sync-mysql + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + Name: semi-sync-mysql + Namespace: demo + Spec: + Client Config: + Service: + Name: semi-sync-mysql + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(semi-sync-mysql.demo.svc:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.21 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.21 + Secret: + Name: semi-sync-mysql-auth + Type: kubedb.com/mysql + Version: 
8.0.35

Events:
  Type    Reason         Age   From            Message
  ----    ------         ----  ----            -------
  Normal  Phase Changed  7m    MySQL operator  phase changed from  to Provisioning reason:
  Normal  Successful     7m    MySQL operator  Successfully created governing service
  Normal  Successful     7m    MySQL operator  Successfully created service for primary/standalone
  Normal  Successful     7m    MySQL operator  Successfully created service for secondary replicas
  Normal  Successful     7m    MySQL operator  Successfully created database auth secret
  Normal  Successful     7m    MySQL operator  Successfully created StatefulSet
  Normal  Successful     7m    MySQL operator  Successfully created MySQL
  Normal  Successful     7m    MySQL operator  Successfully created appbinding
  Normal  Successful     7m    MySQL operator  Successfully patched governing service
  Normal  Successful     6m    MySQL operator  Successfully patched governing service
  Normal  Successful     6m    MySQL operator  Successfully patched governing service
  Normal  Successful     6m    MySQL operator  Successfully patched governing service
  Normal  Successful     6m    MySQL operator  Successfully patched governing service
  Normal  Successful     5m    MySQL operator  Successfully patched governing service
  Normal  Successful     5m    MySQL operator  Successfully patched governing service
  Normal  Successful     5m    MySQL operator  Successfully patched governing service
  Normal  Successful     5m    MySQL operator  Successfully patched governing service
  Normal  Phase Changed  5m    MySQL operator  phase changed from Provisioning to Ready reason:
  Normal  Successful     5m    MySQL operator  Successfully patched governing service


$ kubectl get statefulset -n demo
NAME              READY   AGE
semi-sync-mysql   3/3     3m47s

$ kubectl get pvc -n demo
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-semi-sync-mysql-0   Bound    pvc-4f8538f6-a6ce-4233-b533-8566852f5b98   1Gi        RWO            standard       4m16s
data-semi-sync-mysql-1   Bound    pvc-8823d3ad-d614-4172-89ac-c2284a17f502   1Gi        RWO            standard       4m11s
data-semi-sync-mysql-2   Bound    pvc-94f1c312-50e3-41e1-94a8-a820be0abc08   1Gi        RWO            standard       4m7s

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
pvc-4f8538f6-a6ce-4233-b533-8566852f5b98   1Gi        RWO            Delete           Bound    demo/data-semi-sync-mysql-0   standard                4m39s
pvc-8823d3ad-d614-4172-89ac-c2284a17f502   1Gi        RWO            Delete           Bound    demo/data-semi-sync-mysql-1   standard                4m35s
pvc-94f1c312-50e3-41e1-94a8-a820be0abc08   1Gi        RWO            Delete           Bound    demo/data-semi-sync-mysql-2   standard                4m31s

$ kubectl get service -n demo
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
semi-sync-mysql           ClusterIP   10.96.121.252   <none>        3306/TCP                     10m
semi-sync-mysql-pods      ClusterIP   None            <none>        3306/TCP,2380/TCP,2379/TCP   10m
semi-sync-mysql-standby   ClusterIP   10.96.133.61    <none>        3306/TCP                     10m

```

KubeDB operator sets the `status.phase` to `Ready` once the database is successfully provisioned. Run the following command to see the modified `MySQL` object:

```bash
$ kubectl get mysql -n demo
NAME              VERSION   STATUS   AGE
semi-sync-mysql   8.0.35    Ready    16m
```

```yaml
$ kubectl get my -n demo semi-sync-mysql -o yaml | kubectl neat
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: semi-sync-mysql
  namespace: demo
spec:
  allowedReadReplicas:
    namespaces:
      from: Same
  allowedSchemas:
    namespaces:
      from: Same
  authSecret:
    name: semi-sync-mysql-auth
  autoOps: {}
  coordinator:
    resources: {}
  healthChecker:
    failureThreshold: 1
    periodSeconds: 10
    timeoutSeconds: 10
  podTemplate:
    ...

  replicas: 3
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: standard
  storageType: Durable
  terminationPolicy: WipeOut
  topology:
    mode: SemiSync
    semiSync:
      errantTransactionRecoveryPolicy: PseudoTransaction
      sourceTimeout: 23h0m0s
      sourceWaitForReplicaCount: 1
  useAddressType: DNS
  version: 8.0.35
status:
  conditions:
    ...
  observedGeneration: 2
  phase: Ready
```

## Connect with MySQL database

KubeDB operator has created a new Secret called `semi-sync-mysql-auth` **(format: {mysql-object-name}-auth)** for storing the password for the `mysql` superuser. This secret contains a `username` key which contains the **username** for the MySQL superuser and a `password` key which contains the **password** for the MySQL superuser.

If you want to use an existing secret, please specify that when creating the MySQL object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`, and use `root` as the value of `username`. For more details see [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specdatabasesecret).

Now, you can connect to this database from your terminal using the `mysql` user and password.

```bash
$ kubectl get secrets -n demo semi-sync-mysql-auth -o jsonpath='{.data.\username}' | base64 -d
root

$ kubectl get secrets -n demo semi-sync-mysql-auth -o jsonpath='{.data.\password}' | base64 -d
y~EC~984Et1Yfs~i
```

The operator creates a cluster according to the newly created `MySQL` object. This cluster has 3 members (one primary and two secondary).

You can connect to any of these cluster members. In that case, you just need to specify the hostname of that member Pod (either the Pod IP or the fully-qualified domain name for that Pod via the governing service named `{mysql-object-name}-pods`) with the `--host` flag.

```bash
# first list the mysql pods
$ kubectl get pods -n demo -l app.kubernetes.io/instance=semi-sync-mysql
NAME                READY   STATUS    RESTARTS   AGE
semi-sync-mysql-0   2/2     Running   0          21m
semi-sync-mysql-1   2/2     Running   0          20m
semi-sync-mysql-2   2/2     Running   0          20m

# get the governing service
$ kubectl get service semi-sync-mysql-pods -n demo
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
semi-sync-mysql-pods   ClusterIP   None         <none>        3306/TCP,2380/TCP,2379/TCP   21m

# list the pods with PodIP
$ kubectl get pods -n demo -l app.kubernetes.io/instance=semi-sync-mysql -o jsonpath='{range.items[*]}{.metadata.name} ........... {.status.podIP} ............ {.metadata.name}.semi-sync-mysql-pods.{.metadata.namespace}{"\n"}{end}'
semi-sync-mysql-0 ........... 10.244.0.18 ............ semi-sync-mysql-0.semi-sync-mysql-pods.demo
semi-sync-mysql-1 ........... 10.244.0.20 ............ semi-sync-mysql-1.semi-sync-mysql-pods.demo
semi-sync-mysql-2 ........... 10.244.0.22 ............ semi-sync-mysql-2.semi-sync-mysql-pods.demo
```

Now you can connect to these databases using the above info. Ignore the warning message; it appears because the password is passed on the command line.

```bash
# connect to the 1st server
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "select 1;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| 1 |
+---+
| 1 |
+---+

# connect to the 2nd server
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "select 1;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| 1 |
+---+
| 1 |
+---+

# connect to the 3rd server
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "select 1;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| 1 |
+---+
| 1 |
+---+
```

## Check the Semi-sync cluster Status

Now, you are ready to check the newly created semi-sync cluster. Connect and run the following commands from any of the hosts and you will get the same results.

```bash
$ kubectl get pods -n demo --show-labels
NAME                READY   STATUS    RESTARTS   AGE    LABELS
semi-sync-mysql-0   2/2     Running   0          171m   app.kubernetes.io/component=database,app.kubernetes.io/instance=semi-sync-mysql,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,controller-revision-hash=semi-sync-mysql-77775485f8,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=semi-sync-mysql-0
semi-sync-mysql-1   2/2     Running   0          170m   app.kubernetes.io/component=database,app.kubernetes.io/instance=semi-sync-mysql,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,controller-revision-hash=semi-sync-mysql-77775485f8,kubedb.com/role=standby,statefulset.kubernetes.io/pod-name=semi-sync-mysql-1
semi-sync-mysql-2   2/2     Running   0          169m   app.kubernetes.io/component=database,app.kubernetes.io/instance=semi-sync-mysql,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,controller-revision-hash=semi-sync-mysql-77775485f8,kubedb.com/role=standby,statefulset.kubernetes.io/pod-name=semi-sync-mysql-2
```

From the labels we can see that `semi-sync-mysql-0` is running as the primary and the rest are running as standby. Let's validate this with the MySQL semi-sync status,

```bash
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_status | ON    |
| Rpl_semi_sync_slave_status  | OFF   |
+-----------------------------+-------+


$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_status | OFF   |
| Rpl_semi_sync_slave_status  | ON    |
+-----------------------------+-------+


$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_status | OFF   |
| Rpl_semi_sync_slave_status  | ON    |
+-----------------------------+-------+

```

## Data Availability

In a MySQL semi-sync cluster, only the primary member can accept writes; the secondaries cannot. But you can read data from any member. In this tutorial, we will insert data from the primary, and we will see whether we can get the data from any other member.

> Read the comment written for the following commands. They contain the instructions and explanations of the commands.

```bash
# create a database on primary
$ kubectl exec -it -n demo semi-sync-mysql-0 -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "CREATE DATABASE playground;"
mysql: [Warning] Using a password on the command line interface can be insecure.

# create a table
$ kubectl exec -it -n demo semi-sync-mysql-0 -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));"
mysql: [Warning] Using a password on the command line interface can be insecure.


# insert a row
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue');"
mysql: [Warning] Using a password on the command line interface can be insecure.

# read from primary
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
```

In the previous step, we inserted data into the primary pod. In the next step, we will read from the secondary pods to determine whether the data has been successfully replicated to them.

```bash
# read from secondary-1
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+

# read from secondary-2
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
```

## Write on Secondary Should Fail

Only the primary member has write permission; no secondary can write data.
+ +```bash +# try to write on secondary-1 +$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 + +# try to write on secondary-2 +$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "INSERT INTO playground.equipment (type, quant, color) VALUES ('mango', 5, 'yellow');" +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement +command terminated with exit code 1 +``` + +## Automatic Failover + +To test automatic failover, we will force the primary Pod to restart. Since the primary member (`Pod`) becomes unavailable, the rest of the members will elect a new primary for the cluster. When the old primary comes back, it will join the cluster as a secondary member. + +> Read the comment written for the following commands. They contain the instructions and explanations of the commands. + +```bash +# delete the primary Pod semi-sync-mysql-0 +$ kubectl delete pod semi-sync-mysql-0 -n demo +pod "semi-sync-mysql-0" deleted + +# check the new primary ID +$ kubectl get pod -n demo --show-labels | grep primary +semi-sync-mysql-1 2/2 Running 0 3h9m app.kubernetes.io/component=database,app.kubernetes.io/instance=semi-sync-mysql,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,controller-revision-hash=semi-sync-mysql-77775485f8,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=semi-sync-mysql-1 + +# now check the cluster status +$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';" +mysql: [Warning] Using a password on the command line interface can be insecure. ++-----------------------------+-------+ +| Variable_name | Value | ++-----------------------------+-------+ +| Rpl_semi_sync_master_status | OFF | +| Rpl_semi_sync_slave_status | ON | ++-----------------------------+-------+ +$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';" +mysql: [Warning] Using a password on the command line interface can be insecure. ++-----------------------------+-------+ +| Variable_name | Value | ++-----------------------------+-------+ +| Rpl_semi_sync_master_status | ON | +| Rpl_semi_sync_slave_status | OFF | ++-----------------------------+-------+ + +$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "show status like 'Rpl%_status';" +mysql: [Warning] Using a password on the command line interface can be insecure. 
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_status | OFF   |
| Rpl_semi_sync_slave_status  | ON    |
+-----------------------------+-------+


# read data from new primary semi-sync-mysql-1.semi-sync-mysql-pods.demo
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-1.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
```

Now let's read the data from the secondary pods to see if the data is consistent.

```bash
# read data from secondary-1 semi-sync-mysql-0.semi-sync-mysql-pods.demo
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-0.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+

# read data from secondary-2 semi-sync-mysql-2.semi-sync-mysql-pods.demo
$ kubectl exec -it -n demo semi-sync-mysql-0 -c mysql -- mysql -u root --password='y~EC~984Et1Yfs~i' --host=semi-sync-mysql-2.semi-sync-mysql-pods.demo -e "SELECT * FROM playground.equipment;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  7 | slide |     2 | blue  |
+----+-------+-------+-------+
```

## Cleaning up

Clean what you created in this tutorial.

```bash
kubectl delete -n demo my/semi-sync-mysql
kubectl delete ns demo
```

## Next Steps

- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
- Detail concepts of [MySQLDBVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/yamls/semi-sync.yaml new file mode 100644 index 0000000000..0cba8f643e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/clustering/semi-sync/yamls/semi-sync.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: semi-sync-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/_index.md b/content/docs/v2024.1.31/guides/mysql/concepts/_index.md new file mode 100755 index 0000000000..6ddb37dbe3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: MySQL Concepts +menu: + docs_v2024.1.31: + identifier: guides-mysql-concepts + name: Concepts + parent: guides-mysql + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/appbinding/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/appbinding/index.md new file mode 100644 index 0000000000..166aeb037f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/concepts/appbinding/index.md @@ -0,0 +1,179 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: guides-mysql-concepts-appbinding + name: AppBinding + parent: guides-mysql-concepts + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 

An `AppBinding` object created by `KubeDB` for a MySQL database is shown below,

```yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-group","namespace":"demo"},"spec":{"configSecret":{"name":"my-configuration"},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"8.0.35"}}
  creationTimestamp: "2022-06-28T10:08:03Z"
  generation: 1
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: mysql-group
    app.kubernetes.io/managed-by: kubedb.com
    app.kubernetes.io/name: mysqls.kubedb.com
  name: mysql-group
  namespace: demo
  ownerReferences:
    - apiVersion: kubedb.com/v1alpha2
      blockOwnerDeletion: true
      controller: true
      kind: MySQL
      name: mysql-group
      uid: d79f61f1-bba0-46b3-a822-0117e9dcfec7
  resourceVersion: "1577460"
  uid: 3ac26b03-3de6-4394-a04b-3bc4a19955f4
spec:
  clientConfig:
    service:
      name: mysql-group
      path: /
      port: 3306
      scheme: mysql
    url: tcp(mysql-group.demo.svc:3306)/
  parameters:
    apiVersion: appcatalog.appscode.com/v1alpha1
    kind: StashAddon
    stash:
      addon:
        backupTask:
          name: mysql-backup-8.0.21
          params:
            - name: args
              value: --all-databases --set-gtid-purged=OFF
        restoreTask:
          name: mysql-restore-8.0.21
  secret:
    name: mysql-group-auth
  type: kubedb.com/mysql
  version: 8.0.35
```

Here, we are going to describe the sections of an `AppBinding` crd.

### AppBinding `Spec`

An `AppBinding` object has the following fields in the `spec` section:

#### spec.type

`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.

This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to a `mysql` resource under the `kubedb.com` group.

Here, the variables are parsed as follows:

| Variable              | Usage                                                                                                                           |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                           |
| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `mysql`).                             |
| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/mysql`).   |

#### spec.secret

`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.

This secret must contain the following keys:

PostgreSQL :

| Key                 | Usage                                                |
| ------------------- | ---------------------------------------------------- |
| `POSTGRES_USER`     | Username of the target database.                     |
| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |

MySQL :

| Key        | Usage                                          |
| ---------- | ---------------------------------------------- |
| `username` | Username of the target database.               |
| `password` | Password for the user specified by `username`.
|

MongoDB :

| Key        | Usage                                          |
| ---------- | ---------------------------------------------- |
| `username` | Username of the target database.               |
| `password` | Password for the user specified by `username`. |

Elasticsearch:

| Key              | Usage                   |
| ---------------- | ----------------------- |
| `ADMIN_USERNAME` | Admin username          |
| `ADMIN_PASSWORD` | Password for admin user |

#### spec.clientConfig

`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.

You can configure the following fields in the `spec.clientConfig` section:

- **spec.clientConfig.url**

  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.

  > Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.

- **spec.clientConfig.service**

  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.

  - **name :** `name` indicates the name of the service that connects with the target database.
  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
  - **port :** `port` specifies the port where the target database is running.

- **spec.clientConfig.insecureSkipTLSVerify**

  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.

- **spec.clientConfig.caBundle**

  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.

## Next Steps

- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).

diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md new file mode 100644 index 0000000000..96cbfbbdf2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md @@ -0,0 +1,107 @@

---
title: MySQLAutoscaler CRD
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-concepts-autoscaler
    name: MySQLAutoscaler
    parent: guides-mysql-concepts
    weight: 26
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# MySQLAutoscaler

## What is MySQLAutoscaler

`MySQLAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD).
## Next Steps

- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).

diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md
new file mode 100644
index 0000000000..96cbfbbdf2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/concepts/autoscaler/index.md
@@ -0,0 +1,107 @@
---
title: MySQLAutoscaler CRD
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-concepts-autoscaler
    name: MySQLAutoscaler
    parent: guides-mysql-concepts
    weight: 26
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# MySQLAutoscaler

## What is MySQLAutoscaler

`MySQLAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for autoscaling [MySQL](https://www.mysql.com/) compute resources and storage of database components in a Kubernetes native way.

## MySQLAutoscaler CRD Specifications

Like any official Kubernetes resource, a `MySQLAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.

Here, a sample `MySQLAutoscaler` CR for autoscaling different components of a database is given below:

**Sample `MySQLAutoscaler` for MySQL:**

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: MySQLAutoscaler
metadata:
  name: my-as
  namespace: demo
spec:
  databaseRef:
    name: sample-mysql
  compute:
    mysql:
      trigger: "On"
      podLifeTimeThreshold: 5m
      minAllowed:
        cpu: 250m
        memory: 350Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
  storage:
    mysql:
      trigger: "On"
      usageThreshold: 60
      scalingThreshold: 50
      expansionMode: "Online"
```

Here, we are going to describe the various sections of a `MySQLAutoscaler` crd.

A `MySQLAutoscaler` object has the following fields in the `spec` section.

### spec.databaseRef

`spec.databaseRef` is a required field that points to the [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase) object for which the autoscaling will be performed. This field consists of the following sub-field:

- **spec.databaseRef.name :** specifies the name of the [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase) object.

### spec.compute

`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-field:

- `spec.compute.mysql` indicates the desired compute autoscaling configuration for a MySQL standalone or cluster.

It has the following sub-fields:

- `trigger` indicates whether compute autoscaling is enabled for this component of the database. If "On", compute autoscaling is enabled; if "Off", it is disabled.
- `minAllowed` specifies the minimum amount of resources that will be recommended; the default is no minimum.
- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum.
- `controlledResources` specifies which types of compute resources are allowed for autoscaling. Allowed values are "cpu" and "memory".
- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
- `resourceDiffPercentage` specifies the minimum difference between the recommended value and the current value, in percent. Autoscaling is triggered only if the difference percentage is greater than this value.
- `podLifeTimeThreshold` specifies the minimum lifetime of at least one of the pods before autoscaling is triggered.
- `InMemoryScalingThreshold` specifies the percentage of memory that will be passed as `inMemorySizeGB` for the in-memory database engine, which is only available for the Percona variant of MySQL.

### spec.storage

`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-field:

- `spec.storage.mysql` indicates the desired storage autoscaling configuration for a MySQL standalone or cluster.

It has the following sub-fields:

- `trigger` indicates whether storage autoscaling is enabled for this component of the database. If "On", storage autoscaling is enabled; if "Off", it is disabled.
If "Off" then storage autoscaling is disabled. +- `usageThreshold` indicates usage percentage threshold, if the current storage usage exceeds then storage autoscaling will be triggered. +- `scalingThreshold` indicates the percentage of the current storage that will be scaled. +- `expansionMode` specifies the mode of volume expansion when storage autoscaler performs volume expansion OpsRequest. Default value is `Online`. + diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/catalog/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/catalog/index.md new file mode 100644 index 0000000000..834386a34a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/concepts/catalog/index.md @@ -0,0 +1,145 @@ +--- +title: MySQLVersion CRD +menu: + docs_v2024.1.31: + identifier: guides-mysql-concepts-catalog + name: MySQLVersion + parent: guides-mysql-concepts + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MySQLVersion + +## What is MySQLVersion + +`MySQLVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [MySQL](https://www.mysql.com) database deployed with KubeDB in a Kubernetes native way. + +When you install KubeDB, a `MySQLVersion` custom resource will be created automatically for every supported MySQL versions. You have to specify the name of `MySQLVersion` crd in `spec.version` field of [MySQL](/docs/v2024.1.31/guides/mysql/concepts/catalog/) crd. Then, KubeDB will use the docker images specified in the `MySQLVersion` crd to create your expected database. + +Using a separate crd for specifying respective docker images, and pod security policy names allow us to modify the images, and policies independent of KubeDB operator. This will also allow the users to use a custom image for the database. + +## MySQLVersion Specification + +As with all other Kubernetes objects, a MySQLVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. 
## MySQLVersion Specification

As with all other Kubernetes objects, a MySQLVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: MySQLVersion
metadata:
  annotations:
    meta.helm.sh/release-name: kubedb-catalog
    meta.helm.sh/release-namespace: kubedb
  creationTimestamp: "2022-06-16T13:52:58Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: kubedb-catalog
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubedb-catalog
    app.kubernetes.io/version: v2022.03.28
    helm.sh/chart: kubedb-catalog-v2022.03.28
  name: 8.0.35
  resourceVersion: "1575483"
  uid: 4e605d5f-a6f0-42cb-a125-b4b4fd02e41e
spec:
  coordinator:
    image: kubedb/mysql-coordinator:v0.4.0-2-g49a2d26-dirty_linux_amd64
  db:
    image: mysql:8.0.35
  distribution: Official
  exporter:
    image: kubedb/mysqld-exporter:v0.13.1
  initContainer:
    image: kubedb/mysql-init:8.0.35_linux_amd64
  podSecurityPolicies:
    databasePolicyName: mysql-db
  replicationModeDetector:
    image: kubedb/replication-mode-detector:v0.13.0
  stash:
    addon:
      backupTask:
        name: mysql-backup-8.0.21
      restoreTask:
        name: mysql-restore-8.0.21
  updateConstraints:
    denylist:
      groupReplication:
      - < 8.0.35
      standalone:
      - < 8.0.35
  version: 8.0.35
```

### metadata.name

`metadata.name` is a required field that specifies the name of the `MySQLVersion` crd. You have to specify this name in the `spec.version` field of the [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) crd.

We follow this convention for naming MySQLVersion crds:

- Name format: `{Original MySQL image version}-{modification tag}`

We modify the original MySQL docker image to support additional features. An image with a higher modification tag has more features than an image with a lower modification tag. Hence, it is recommended to use the MySQLVersion crd with the highest modification tag to take advantage of the latest features.

### spec.version

`spec.version` is a required field that specifies the original version of the MySQL database that has been used to build the docker image specified in the `spec.db.image` field.

### spec.deprecated

`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator. For example, we have modified the `kubedb/mysql:8.0` docker image to support custom configuration and re-tagged it as `kubedb/mysql:8.0-v2`. Now, KubeDB `0.9.0-rc.0` supports providing custom configuration, which requires the `kubedb/mysql:8.0-v2` docker image. So, we have marked `kubedb/mysql:8.0` as deprecated for KubeDB `0.9.0-rc.0`.

The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the database and other respective resources for this version.

### spec.db.image

`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected MySQL database.

### spec.exporter.image

`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.

### spec.initContainer.image

`spec.initContainer.image` is a required field that specifies the image which will be used to remove the `lost+found` directory and mount an `EmptyDir` data volume.

### spec.replicationModeDetector.image

`spec.replicationModeDetector.image` is a field required only for MySQL Group Replication. It specifies the image which will be used to detect the primary member/replica/node in Group Replication.
### spec.tools.image

`spec.tools.image` is an optional field that specifies the image which will be used to take backups and initialize the database from a snapshot.

### spec.updateConstraints

`spec.updateConstraints` specifies database version update constraints as expressions that describe whether it is possible to update from the current version to another valid version (see the sketch after this list). This field consists of the following sub-fields:

- `denylist` specifies versions to which updating from the current version is not possible. This field has two sub-fields:
  - `groupReplication` : an expression like `< 8.0.21` indicates that it is not possible to update from the current version to any version lower than `8.0.21` for group replication.
  - `standalone`: an expression like `< 8.0.21` indicates that it is not possible to update from the current version to any version lower than `8.0.21` for standalone.
- `allowlist` specifies versions to which updating from the current version is possible. This field has two sub-fields:
  - `groupReplication` : an expression like `8.0.3` indicates that it is possible to update from the current version to `8.0.3` for group replication.
  - `standalone`: an expression like `8.0.3` indicates that it is possible to update from the current version to `8.0.3` for standalone.
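As a sketch, a stanza combining both lists might look like the following; the version expressions here are hypothetical and mirror the denylist shown in the sample above:

```yaml
updateConstraints:
  allowlist:
    groupReplication:
    - ">= 8.0.21"   # hypothetical: permit updates to 8.0.21 and above
  denylist:
    standalone:
    - "< 8.0.21"    # hypothetical: block updates to anything below 8.0.21
```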
### spec.podSecurityPolicies.databasePolicyName

`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running.

## Next Steps

- Learn about the MySQL crd [here](/docs/v2024.1.31/guides/mysql/concepts/database/).
- Deploy your first MySQL database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/mysql/quickstart/).

diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/database/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/database/index.md
new file mode 100644
index 0000000000..f3fa59ab2f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/concepts/database/index.md
@@ -0,0 +1,413 @@
---
title: MySQL CRD
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-concepts-database
    name: MySQL
    parent: guides-mysql-concepts
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# MySQL

## What is MySQL

`MySQL` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [MySQL](https://www.mysql.com/) in a Kubernetes native way. You only need to describe the desired database configuration in a MySQL object, and the KubeDB operator will create Kubernetes objects in the desired state for you.

## MySQL Spec

As with all other Kubernetes objects, a MySQL needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example MySQL object.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: m1
  namespace: demo
spec:
  version: "8.0.35"
  topology:
    mode: GroupReplication
  authSecret:
    name: m1-auth
  storageType: "Durable"
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  init:
    script:
      configMap:
        name: my-init-script
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          app: kubedb
        interval: 10s
  requireSSL: true
  tls:
    issuerRef:
      apiGroup: cert-manager.io
      kind: Issuer
      name: mysql-issuer
    certificates:
    - alias: server
      subject:
        organizations:
        - kubedb:server
      dnsNames:
      - localhost
      ipAddresses:
      - "127.0.0.1"
  configSecret:
    name: my-custom-config
  podTemplate:
    metadata:
      annotations:
        passMe: ToDatabasePod
    controller:
      annotations:
        passMe: ToStatefulSet
    spec:
      serviceAccountName: my-service-account
      schedulerName: my-scheduler
      nodeSelector:
        disktype: ssd
      imagePullSecrets:
      - name: myregistrykey
      args:
      - --character-set-server=utf8mb4
      env:
      - name: MYSQL_DATABASE
        value: myDB
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
  serviceTemplates:
  - alias: primary
    metadata:
      annotations:
        passMe: ToService
    spec:
      type: NodePort
      ports:
      - name: http
        port: 9200
  terminationPolicy: Halt
```

### spec.version

`spec.version` is a required field specifying the name of the [MySQLVersion](/docs/v2024.1.31/guides/mysql/concepts/catalog/) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `MySQLVersion` resources,

- `8.0.35`, `8.0.17`, `8.0.3-v4`
- `8.0.31-innodb`
- `5.7.44`, `5.7.35-v1`, `5.7.25-v2`

### spec.topology

`spec.topology` is an optional field that provides a way to configure an HA, fault-tolerant MySQL cluster. This field enables you to specify the clustering mode. Currently, we support only MySQL Group Replication. KubeDB uses a `PodDisruptionBudget` to ensure that the majority of the group replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained and no data loss occurs.

You can specify the following fields in the `spec.topology` field,

- `mode` specifies the clustering mode for MySQL. For now, the supported value is `"GroupReplication"` for MySQL Group Replication. This field is required if you want to deploy a MySQL cluster.

- `group` is an optional field to configure a group replication. It contains the following fields:
  - `name` is an optional field to specify the name for the group. It must be a version 4 UUID if specified.

### spec.authSecret

`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `mysql` root user. If not set, the KubeDB operator creates a new Secret `{mysql-object-name}-auth` storing the password for the `mysql` root user of each MySQL object. If you want to use an existing secret, please specify it when creating the MySQL object using `spec.authSecret.name`.

This secret contains a `user` key and a `password` key which contain the username and password respectively for the `mysql` root user. Here, the value of the `user` key is fixed to be `root`.
Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).

Example:

```bash
$ kubectl create secret generic m1-auth -n demo \
--from-literal=user=root \
--from-literal=password=6q8u_2jMOW-OOZXk
secret "m1-auth" created
```

```yaml
apiVersion: v1
data:
  password: NnE4dV8yak1PVy1PT1pYaw==
  user: cm9vdA==
kind: Secret
metadata:
  name: m1-auth
  namespace: demo
type: Opaque
```

### spec.storageType

`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MySQL database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field.

### spec.storage

Since 0.9.0-rc.0, if you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.

- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.

To learn how to configure `spec.storage`, please visit the link below:

- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

### spec.init

`spec.init` is an optional section that can be used to initialize a newly created MySQL database. MySQL databases can be initialized in the following way:

- Initialize from Script

#### Initialize via Script

To initialize a MySQL database using a script (shell script, sql script, etc.), set the `spec.init.script` section when creating a MySQL object. The operator will execute files alphabetically with extensions `.sh`, `.sql` and `.sql.gz` that are found in the referenced source. Scripts inside child folders will be skipped. The script source must have the following information:

- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): where your script is loaded from.

Below is an example showing how a script from a configMap can be used to initialize a MySQL database.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: m1
  namespace: demo
spec:
  version: 8.0.35
  init:
    script:
      configMap:
        name: mysql-init-script
```

In the above example, the KubeDB operator will launch a Job to execute all scripts of `mysql-init-script` in alphabetical order once the StatefulSet pods are running. For a more detailed tutorial on how to initialize from a script, please visit [here](/docs/v2024.1.31/guides/mysql/initialization/).
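The `mysql-init-script` configMap referenced above could be created from a local script like this; a minimal sketch, where the file name `init.sql` is an assumption:

```bash
# The file name init.sql is illustrative; any .sh, .sql or .sql.gz file works
$ kubectl create configmap mysql-init-script -n demo \
    --from-file=./init.sql
configmap/mysql-init-script created
```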
### spec.monitor

MySQL managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,

- [Monitor MySQL with builtin Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/)
- [Monitor MySQL with Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/)

### spec.requireSSL

`spec.requireSSL` specifies whether client connections require SSL. If `spec.requireSSL` is `true`, then the server permits only TCP/IP connections that use SSL, or connections that use a socket file (on Unix) or shared memory (on Windows). The server rejects any non-secure connection attempt. For more details, please visit [here](https://dev.mysql.com/doc/refman/5.7/en/using-encrypted-connections.html).

### spec.tls

`spec.tls` specifies the TLS/SSL configurations for MySQL.

The following fields are configurable in the `spec.tls` section:

- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate the necessary certificates.

  - `apiGroup` is the group name of the resource being referenced. The value for `Issuer` or `ClusterIssuer` is "cert-manager.io" (cert-manager v0.12.0 and later).
  - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.

- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:

  - `alias` represents the identifier of the certificate. It has the following possible values:
    - `server` is used for server certificate identification.
    - `client` is used for client certificate identification.
    - `metrics-exporter` is used for metrics exporter certificate identification.
  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
    > This field is optional. If the user does not specify this field, a default secret name will be created in the following format: `<database-name>-<cert-alias>-cert`.
  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields,
    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
    - `organizationalUnits` (optional) are the list of different organization unit names to be used on the Certificate.
    - `countries` (optional) are the list of country names to be used on the Certificate.
    - `localities` (optional) are the list of locality names to be used on the Certificate.
    - `provinces` (optional) are the list of province names to be used on the Certificate.
    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
    - `serialNumber` (optional) is a serial number to be used on the Certificate.
      You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
  - `duration` (optional) is the period during which the certificate is valid.
  - `renewBefore` (optional) specifies how long before expiry the certificate should be renewed.
  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
  - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
  - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate.

### spec.configSecret

`spec.configSecret` is an optional field that allows users to provide custom configuration for MySQL. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47), so you can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk` etc. To learn more about how to use a custom configuration file, see [here](/docs/v2024.1.31/guides/mysql/configuration/config-file/).

### spec.podTemplate

KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MySQL database.

KubeDB accepts the following fields to set in `spec.podTemplate:`

- metadata:
  - annotations (pod's annotation)
- controller:
  - annotations (statefulset's annotation)
- spec:
  - args
  - env
  - resources
  - initContainers
  - imagePullSecrets
  - nodeSelector
  - affinity
  - serviceAccountName
  - schedulerName
  - tolerations
  - priorityClassName
  - priority
  - securityContext
  - livenessProbe
  - readinessProbe
  - lifecycle

Usage of some fields of `spec.podTemplate` is described below,

#### spec.podTemplate.spec.args

`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database server. To learn about available args of `mysqld`, visit [here](https://dev.mysql.com/doc/refman/8.0/en/server-options.html).

#### spec.podTemplate.spec.env

`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the MySQL docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/_/mysql/).

Note that KubeDB does not allow the `MYSQL_ROOT_PASSWORD`, `MYSQL_ALLOW_EMPTY_PASSWORD`, `MYSQL_RANDOM_ROOT_PASSWORD`, and `MYSQL_ONETIME_PASSWORD` environment variables to be set in `spec.env`. If you want to set the root password, please use `spec.authSecret` instead, as described earlier.

If you try to set any of the forbidden environment variables, i.e. `MYSQL_ROOT_PASSWORD`, in the MySQL crd, the KubeDB operator will reject the request with the following error,

```ini
Error from server (Forbidden): error when creating "./mysql.yaml": admission webhook "mysql.validators.kubedb.com" denied the request: environment variable MYSQL_ROOT_PASSWORD is forbidden to use in MySQL spec
```

Also, note that KubeDB does not allow updating the environment variables, as updating them does not have any effect once the database is created. If you try to update environment variables, the KubeDB operator will reject the request with the following error,

```ini
Error from server (BadRequest): error when applying patch:
...
for: "./mysql.yaml": admission webhook "mysql.validators.kubedb.com" denied the request: precondition failed for:
...At least one of the following was changed:
  apiVersion
  kind
  name
  namespace
  spec.authSecret
  spec.init
  spec.storageType
  spec.storage
  spec.podTemplate.spec.nodeSelector
  spec.podTemplate.spec.env
```

#### spec.podTemplate.spec.imagePullSecrets

`KubeDB` provides the flexibility of deploying a MySQL database from a private Docker registry.
`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling the docker image if you are using a private docker registry. To learn how to deploy MySQL from a private registry, please visit [here](/docs/v2024.1.31/guides/mysql/private-registry/).

#### spec.podTemplate.spec.nodeSelector

`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).

#### spec.podTemplate.spec.serviceAccountName

`serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.

If this field is left empty, the KubeDB operator will create a service account name matching the MySQL crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.

If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.

If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/mysql/custom-rbac/) to grant the necessary permissions in this scenario.

#### spec.podTemplate.spec.resources

`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).

### spec.serviceTemplate

You can also provide a template for the services created by the KubeDB operator for the MySQL database through `spec.serviceTemplate`. This will allow you to set the type and other properties of the services.

KubeDB allows the following fields to set in `spec.serviceTemplate`:

- metadata:
  - annotations
- spec:
  - type
  - ports
  - clusterIP
  - externalIPs
  - loadBalancerIP
  - loadBalancerSourceRanges
  - externalTrafficPolicy
  - healthCheckNodePort
  - sessionAffinityConfig

See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.

### spec.halted

`spec.halted` is an optional field. Suppose you want to delete the `MySQL` resources (`StatefulSet`, `Service` etc.) except the `MySQL` object, `PVCs` and `Secret`; then you need to set `spec.halted` to `true`. When you set `spec.halted` to `true`, the KubeDB operator doesn't perform any further operation on the `MySQL` object, and the `terminationPolicy` in the `MySQL` object is set to `Halt` by default.
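For example, to halt a running database you could patch the object directly; a sketch, assuming a MySQL object named `m1` in the `demo` namespace:

```bash
# "my" is the short name for the MySQL resource, as used elsewhere in these docs
$ kubectl patch -n demo my/m1 -p '{"spec":{"halted": true}}' --type="merge"
```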
### spec.terminationPolicy

`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `MySQL` crd or to decide which resources KubeDB should keep or delete when you delete the `MySQL` crd. KubeDB provides the following four termination policies:

- DoNotTerminate
- Halt
- Delete (`Default`)
- WipeOut

When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.

The following table shows what KubeDB does when you delete the MySQL crd for the different termination policies,

| Behavior                  | DoNotTerminate | Halt     | Delete   | WipeOut  |
| ------------------------- | :------------: | :------: | :------: | :------: |
| 1. Block Delete operation | ✓              | ✗        | ✗        | ✗        |
| 2. Delete StatefulSet     | ✗              | ✓        | ✓        | ✓        |
| 3. Delete Services        | ✗              | ✓        | ✓        | ✓        |
| 4. Delete PVCs            | ✗              | ✗        | ✓        | ✓        |
| 5. Delete Secrets         | ✗              | ✗        | ✗        | ✓        |
| 6. Delete Snapshots       | ✗              | ✗        | ✗        | ✓        |

If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.

## Next Steps

- Learn how to use KubeDB to run a MySQL database [here](/docs/v2024.1.31/guides/mysql/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).

diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/index.md
new file mode 100644
index 0000000000..704b41b14a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/index.md
@@ -0,0 +1,148 @@
---
title: MySQLDatabase
menu:
  docs_v2024.1.31:
    identifier: mysqldatabase-concepts
    name: MySQLDatabase
    parent: guides-mysql-concepts
    weight: 30
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# MySQLDatabase

## What is MySQLDatabase?

`MySQLDatabase` is a Kubernetes Custom Resource Definition (CRD). It provides a declarative way of implementing multitenancy inside a KubeDB provisioned MySQL server. You need to describe the target database, the desired database configuration, and the Vault server reference for managing the user in a `MySQLDatabase` object, and the KubeDB Schema Manager operator will create Kubernetes objects in the desired state for you.

## MySQLDatabase Specification

As with all other Kubernetes objects, a `MySQLDatabase` needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `spec` section.

```yaml
apiVersion: schema.kubedb.com/v1alpha1
kind: MySQLDatabase
metadata:
  name: demo-schema
  namespace: demo
spec:
  database:
    serverRef:
      name: mysql-server
      namespace: dev
    config:
      name: myDB
      characterSet: big5
      encryption: disable
      readOnly: 0
  vaultRef:
    name: vault
    namespace: dev
  accessPolicy:
    subjects:
    - kind: ServiceAccount
      name: "tester"
      namespace: "demo"
    defaultTTL: "10m"
  init:
    initialized: false
    snapshot:
      repository:
        name: repository
        namespace: demo
    script:
      scriptPath: "etc/config"
      configMap:
        name: scripter
  deletionPolicy: "Delete"
```

### spec.database

`spec.database` is a required field specifying the database server reference and the desired database configuration.
You need to specify the following fields in `spec.database`,

- `serverRef` refers to the MySQL instance where the particular schema will be applied.
- `config` defines the initial configuration of the desired database.

KubeDB accepts the following fields to set in `spec.database`:

- serverRef:
  - name
  - namespace

- config:
  - name
  - characterSet
  - encryption
  - readOnly

### spec.vaultRef

`spec.vaultRef` is a required field that specifies which KubeVault server to use for user management. You need to specify the following fields in `spec.vaultRef`,

- `name` specifies the name of the Vault server.
- `namespace` refers to the namespace where the Vault server is running.

### spec.accessPolicy

`spec.accessPolicy` is a required field that specifies the access permissions, i.e. which service account or cluster user has access, and for how long they can access through it. You need to specify the following fields in `spec.accessPolicy`,

- `subjects` refers to the user or service account which is allowed to access the credentials.
- `defaultTTL` specifies for how long the credential will be valid.

KubeDB accepts the following fields to set in `spec.accessPolicy`:

- subjects:
  - kind
  - name
  - namespace

- defaultTTL

### spec.init

`spec.init` is an optional field containing the information of a script or a snapshot using which the database should be initialized during creation. You need to specify the following fields in `spec.init`,

- `script` refers to the information regarding the `.sql` file which should be used for initialization.
- `snapshot` carries information about the repository and snapshot id to initialize the database by restoring the snapshot.

KubeDB accepts the following fields to set in `spec.init`:

- script:
  - `scriptPath` accepts a directory location at which the operator should mount the `.sql` file.
  - `volumeSource` can be either a secret or a configmap. The referred volume source should carry the `.sql` file in it.

- snapshot:
  - `repository` refers to the repository cr which carries the necessary information about the snapshot location.
  - `snapshotId` refers to the specific snapshot which should be restored.

### spec.deletionPolicy

`spec.deletionPolicy` is a required field that gives flexibility whether to `nullify` (reject) the delete operation or to decide which resources KubeDB should keep or delete when you delete the CRD.

## Next Steps

- Learn about the MySQL CRD [here](/docs/v2024.1.31/guides/mysql/concepts/database/).
- Deploy your first MySQL database with KubeDB by following the guide [here](https://kubedb.com/docs/latest/guides/mysql/quickstart/).

diff --git a/content/docs/v2024.1.31/guides/mysql/concepts/opsrequest/index.md b/content/docs/v2024.1.31/guides/mysql/concepts/opsrequest/index.md
new file mode 100644
index 0000000000..e57c891bbc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/concepts/opsrequest/index.md
@@ -0,0 +1,248 @@
---
title: MySQLOpsRequests CRD
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-concepts-opsrequest
    name: MySQLOpsRequest
    parent: guides-mysql-concepts
    weight: 25
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
# MySQLOpsRequest

## What is MySQLOpsRequest

`MySQLOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [MySQL](https://www.mysql.com/) administrative operations like database version updates, horizontal scaling, vertical scaling, etc. in a Kubernetes native way.

## MySQLOpsRequest CRD Specifications

Like any official Kubernetes resource, a `MySQLOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.

Here, some sample `MySQLOpsRequest` CRs for different administrative operations are given below,

Sample `MySQLOpsRequest` for updating the database:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: MySQLOpsRequest
metadata:
  name: my-ops-update
  namespace: demo
spec:
  databaseRef:
    name: my-group
  type: UpdateVersion
  updateVersion:
    targetVersion: 8.0.35
status:
  conditions:
  - lastTransitionTime: "2022-06-16T13:52:58Z"
    message: The controller has scaled/updated the MySQL successfully
    observedGeneration: 3
    reason: OpsRequestSuccessful
    status: "True"
    type: Successful
  observedGeneration: 3
  phase: Successful
```

Sample `MySQLOpsRequest` for horizontal scaling:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: MySQLOpsRequest
metadata:
  name: myops
  namespace: demo
spec:
  databaseRef:
    name: my-group
  type: HorizontalScaling
  horizontalScaling:
    member: 3
status:
  conditions:
  - lastTransitionTime: "2022-06-16T13:52:58Z"
    message: The controller has scaled/updated the MySQL successfully
    observedGeneration: 3
    reason: OpsRequestSuccessful
    status: "True"
    type: Successful
  observedGeneration: 3
  phase: Successful
```

Sample `MySQLOpsRequest` for vertical scaling:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: MySQLOpsRequest
metadata:
  name: myops
  namespace: demo
spec:
  databaseRef:
    name: my-group
  type: VerticalScaling
  verticalScaling:
    mysql:
      resources:
        requests:
          memory: "1200Mi"
          cpu: "0.7"
        limits:
          memory: "1200Mi"
          cpu: "0.7"
status:
  conditions:
  - lastTransitionTime: "2022-06-11T09:59:05Z"
    message: The controller has scaled/updated the MySQL successfully
    observedGeneration: 3
    reason: OpsRequestSuccessful
    status: "True"
    type: Successful
  observedGeneration: 3
  phase: Successful
```

Here, we are going to describe the various sections of a `MySQLOpsRequest` cr.

### MySQLOpsRequest `Spec`

A `MySQLOpsRequest` object has the following fields in the `spec` section.

#### spec.databaseRef

`spec.databaseRef` is a required field that points to the [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) object where the administrative operations will be applied. This field consists of the following sub-field:

- **spec.databaseRef.name :** specifies the name of the [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) object.

#### spec.type

`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `MySQLOpsRequest`.

- `Upgrade` / `UpdateVersion`
- `HorizontalScaling`
- `VerticalScaling`
- `VolumeExpansion`
- `Restart`
- `Reconfigure`
- `ReconfigureTLS`

> You can perform only one type of operation with a single `MySQLOpsRequest` CR. For example, if you want to update your database and scale up its replicas, you have to create two separate `MySQLOpsRequest`s. At first, you have to create a `MySQLOpsRequest` for updating. Once it is completed, you can create another `MySQLOpsRequest` for scaling. You should not create two `MySQLOpsRequest`s simultaneously.
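For types that take no extra configuration, the request is just the type plus a database reference. A sketch of a `Restart` request, mirroring the samples above (the name `myops-restart` is illustrative):

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: MySQLOpsRequest
metadata:
  name: myops-restart   # illustrative name
  namespace: demo
spec:
  databaseRef:
    name: my-group
  type: Restart
```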
#### spec.updateVersion

If you want to update your MySQL version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:

- `spec.updateVersion.targetVersion` refers to a [MySQLVersion](/docs/v2024.1.31/guides/mysql/concepts/catalog/) CR that contains the MySQL version information to which you want to update.

> You can only update between MySQL versions. KubeDB does not support downgrades for MySQL.

#### spec.horizontalScaling

If you want to scale up or scale down your MySQL cluster, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-field:

- `spec.horizontalScaling.member` indicates the desired number of members for your MySQL cluster after scaling. For example, if your cluster currently has 4 members and you want to add 2 additional members, you have to specify 6 in the `spec.horizontalScaling.member` field. Similarly, if you want to remove one member from the cluster, you have to specify 3 in the `spec.horizontalScaling.member` field.

#### spec.verticalScaling

`spec.verticalScaling` is a required field specifying the information of the `MySQL` resources, like `cpu`, `memory` etc., that will be scaled. This field consists of the following sub-fields:

- `spec.verticalScaling.mysql` indicates the `MySQL` server resources. It has the below structure:

```yaml
requests:
  memory: "200Mi"
  cpu: "0.1"
limits:
  memory: "300Mi"
  cpu: "0.2"
```

Here, when you specify the resource request for the `MySQL` container, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the `MySQL` container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

- `spec.verticalScaling.exporter` indicates the `exporter` container resources. It has the same structure as `spec.verticalScaling.mysql`, and you can scale its resources the same way as for the `mysql` container.

> You can increase/decrease resources for both the `mysql` container and the `exporter` container with a single `MySQLOpsRequest` CR.

### MySQLOpsRequest `Status`

`.status` describes the current state and progress of the `MySQLOpsRequest` operation. It has the following fields:

#### status.phase

`status.phase` indicates the overall phase of the operation for this `MySQLOpsRequest`. It can have the following three values:

| Phase      | Meaning                                                                           |
| ---------- | --------------------------------------------------------------------------------- |
| Successful | KubeDB has successfully performed the operation requested in the MySQLOpsRequest  |
| Failed     | KubeDB has failed the operation requested in the MySQLOpsRequest                  |
| Denied     | KubeDB has denied the operation requested in the MySQLOpsRequest                  |

#### status.observedGeneration

`status.observedGeneration` shows the most recent generation observed by the `MySQLOpsRequest` controller.

#### status.conditions

`status.conditions` is an array that specifies the conditions of the different steps of `MySQLOpsRequest` processing. Each condition entry has the following fields:

- `type` specifies the type of the condition.
MySQLOpsRequest has the following types of conditions:

| Type                | Meaning                                                                                        |
| ------------------- | ---------------------------------------------------------------------------------------------- |
| `Progressing`       | Specifies that the operation is now progressing                                                |
| `Successful`        | Specifies such a state that the operation on the database has been successful.                 |
| `HaltDatabase`      | Specifies such a state that the database is halted by the operator                             |
| `ResumeDatabase`    | Specifies such a state that the database is resumed by the operator                            |
| `Failure`           | Specifies such a state that the operation on the database has failed.                          |
| `Scaling`           | Specifies such a state that the scaling operation on the database has started                  |
| `VerticalScaling`   | Specifies such a state that vertical scaling has been performed successfully on the database   |
| `HorizontalScaling` | Specifies such a state that horizontal scaling has been performed successfully on the database |
| `Updating`          | Specifies such a state that the database updating operation has started                        |
| `UpdateVersion`     | Specifies such a state that the version update on the database has been performed successfully |

- The `status` field is a string, with possible values `"True"`, `"False"`, and `"Unknown"`.
  - `status` will be `"True"` if the current transition succeeded.
  - `status` will be `"False"` if the current transition failed.
  - `status` will be `"Unknown"` if the current transition was denied.
- The `message` field is a human-readable message indicating details about the condition.
- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition. It has the following possible values:

| Reason                                    | Meaning                                                                           |
| ----------------------------------------- | ---------------------------------------------------------------------------------- |
| `OpsRequestProgressingStarted`            | Operator has started the OpsRequest processing                                     |
| `OpsRequestFailedToProgressing`           | Operator has failed to start the OpsRequest processing                             |
| `SuccessfullyHaltedDatabase`              | Database is successfully halted by the operator                                    |
| `FailedToHaltDatabase`                    | Database has failed to be halted by the operator                                   |
| `SuccessfullyResumedDatabase`             | Database is successfully resumed to perform its usual operation                    |
| `FailedToResumedDatabase`                 | Database has failed to resume                                                      |
| `DatabaseVersionUpdatingStarted`          | Operator has started updating the database version                                 |
| `SuccessfullyUpdatedDatabaseVersion`      | Operator has successfully updated the database version                             |
| `FailedToUpdateDatabaseVersion`           | Operator has failed to update the database version                                 |
| `HorizontalScalingStarted`                | Operator has started the horizontal scaling                                        |
| `SuccessfullyPerformedHorizontalScaling`  | Operator has successfully performed horizontal scaling                             |
| `FailedToPerformHorizontalScaling`        | Operator has failed to perform horizontal scaling                                  |
| `VerticalScalingStarted`                  | Operator has started the vertical scaling                                          |
| `SuccessfullyPerformedVerticalScaling`    | Operator has successfully performed vertical scaling                               |
| `FailedToPerformVerticalScaling`          | Operator has failed to perform vertical scaling                                    |
| `OpsRequestProcessedSuccessfully`         | Operator has successfully completed the operation requested by the OpsRequest cr   |

- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
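You can watch an ops request move through these phases and conditions with `kubectl`; a quick sketch, where the request name `myops` matches the samples above:

```bash
# Watch the overall phase as the operator works through the request
$ kubectl get mysqlopsrequest -n demo myops -w

# Inspect the full status, including the conditions array
$ kubectl describe mysqlopsrequest -n demo myops
```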
diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/_index.md b/content/docs/v2024.1.31/guides/mysql/configuration/_index.md
new file mode 100755
index 0000000000..4cdcfe9d76
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/configuration/_index.md
@@ -0,0 +1,22 @@
---
title: Run MySQL with Custom Configuration
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-configuration
    name: Custom Configuration
    parent: guides-mysql
    weight: 30
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/config-file/index.md b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/index.md
new file mode 100644
index 0000000000..319707fa9c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/index.md
@@ -0,0 +1,242 @@
---
title: Run MySQL with Custom Configuration
menu:
  docs_v2024.1.31:
    identifier: guides-mysql-configuration-using-config-file
    name: Config File
    parent: guides-mysql-configuration
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Using Custom Configuration File

KubeDB supports providing custom configuration for MySQL. This tutorial will show you how to use KubeDB to run a MySQL database with custom configuration.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.

  ```bash
  $ kubectl create ns demo
  namespace/demo created

  $ kubectl get ns demo
  NAME   STATUS   AGE
  demo   Active   5s
  ```

> Note: YAML files used in this tutorial are stored in the [docs/guides/mysql/configuration/config-file/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/configuration/config-file/yamls) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Overview

MySQL allows configuring the database via a configuration file. The default configuration for MySQL can be found in the `/etc/mysql/my.cnf` file. When MySQL starts, it looks for custom configuration files in the `/etc/mysql/conf.d` directory. If such configuration files exist, the MySQL instance uses the combined startup settings from both `/etc/mysql/my.cnf` and the `*.cnf` files in the `/etc/mysql/conf.d` directory, with the custom configuration overwriting the existing defaults. To know more about configuring MySQL, see [here](https://dev.mysql.com/doc/refman/8.0/en/server-configuration.html).

At first, you have to create a config file with a `.cnf` extension containing your desired configuration.
Then you have to put this file into a [volume](https://kubernetes.io/docs/concepts/storage/volumes/). You have to specify this volume in the `spec.configSecret` section while creating the MySQL crd. KubeDB will mount this volume into the `/etc/mysql/conf.d` directory of the database pod.

In this tutorial, we will configure [max_connections](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_connections) and [read_buffer_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_read_buffer_size) via a custom config file. We will use a secret as the volume source.

## Custom Configuration

At first, let's create a `my-config.cnf` file setting the `max_connections` and `read_buffer_size` parameters.

```bash
cat <<EOF > my-config.cnf
[mysqld]
max_connections = 200
read_buffer_size = 1048576
EOF

$ cat my-config.cnf
[mysqld]
max_connections = 200
read_buffer_size = 1048576
```

Here, `read_buffer_size` is set to 1MB in bytes.

Now, create a secret with this configuration file.

```bash
$ kubectl create secret generic -n demo my-configuration --from-file=./my-config.cnf
secret/my-configuration created
```

Verify that the secret has the configuration file.

```yaml
$ kubectl get secret -n demo my-configuration -o yaml
apiVersion: v1
data:
  my-config.cnf: W215c3FsZF0KbWF4X2Nvbm5lY3Rpb25zID0gMjAwCnJlYWRfYnVmZmVyX3NpemUgPSAxMDQ4NTc2Cg==
kind: Secret
metadata:
  creationTimestamp: "2022-06-28T13:20:42Z"
  name: my-configuration
  namespace: demo
  resourceVersion: "1601408"
  uid: 82e1a722-d80f-448e-89b5-c64de81ed262
type: Opaque
```

Now, create the MySQL crd specifying the `spec.configSecret` field.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/configuration/config-file/yamls/mysql-custom.yaml
mysql.kubedb.com/custom-mysql created
```

Below is the YAML for the MySQL crd we just created.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: custom-mysql
  namespace: demo
spec:
  version: "8.0.35"
  configSecret:
    name: my-configuration
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret etc. If everything goes well, we will see that a pod with the name `custom-mysql-0` has been created.

Check that the statefulset's pod is running:

```bash
$ kubectl get pod -n demo
NAME             READY   STATUS    RESTARTS   AGE
custom-mysql-0   1/1     Running   0          44s
```

Check the pod's log to see if the database is ready:

```bash
$ kubectl logs -f -n demo custom-mysql-0
2022-06-28 13:22:10+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.35-1debian10 started.
2022-06-28 13:22:10+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
....

2022-06-28 13:22:20+00:00 [Note] [Entrypoint]: Database files initialized
2022-06-28 13:22:20+00:00 [Note] [Entrypoint]: Starting temporary server
2022-06-28T13:22:20.233556Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.35) starting as process 92
2022-06-28T13:22:20.252075Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-06-28T13:22:20.543772Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
...
2022-06-28 13:22:22+00:00 [Note] [Entrypoint]: Stopping temporary server
2022-06-28T13:22:22.354537Z 10 [System] [MY-013172] [Server] Received SHUTDOWN from user root.
Shutting down mysqld (Version: 8.0.35).
2022-06-28T13:22:24.495121Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.35)  MySQL Community Server - GPL.
2022-06-28 13:22:25+00:00 [Note] [Entrypoint]: Temporary server stopped

2022-06-28 13:22:25+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.

....
2022-06-28T13:22:26.064259Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2022-06-28T13:22:26.076352Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-06-28T13:22:26.076407Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.35'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.

....
```

Once we see `[Note] /usr/sbin/mysqld: ready for connections.` in the log, the database is ready.

Now, we will check if the database has started with the custom configuration we have provided.

First, deploy [phpMyAdmin](https://hub.docker.com/r/phpmyadmin/phpmyadmin/) to connect with the MySQL database we have just created.

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/configuration/config-file/yamls/phpmyadmin.yaml
deployment.extensions/myadmin created
service/myadmin created
```

Then, open your browser and go to the following URL: _http://{node-ip}:{myadmin-svc-nodeport}_. For a kind cluster, you can get this URL by running the following commands:

```bash
$ kubectl get svc -n demo myadmin -o json | jq '.spec.ports[].nodePort'
30942

$ kubectl get node -o json | jq '.items[].status.addresses[].address'
"172.18.0.3"
"kind-control-plane"
"172.18.0.4"
"kind-worker"
"172.18.0.2"
"kind-worker2"

# expected url will be:
url: http://172.18.0.4:30942
```

Now, let's connect to the database from the phpMyAdmin dashboard using the database pod IP and the MySQL user password.

```bash
$ kubectl get pods custom-mysql-0 -n demo -o yaml | grep IP
  hostIP: 10.0.2.15
  podIP: 172.17.0.6

$ kubectl get secrets -n demo custom-mysql-auth -o jsonpath='{.data.\user}' | base64 -d
root

$ kubectl get secrets -n demo custom-mysql-auth -o jsonpath='{.data.\password}' | base64 -d
MLO5_fPVKcqPiEu9
```

Once you have connected to the database with phpMyAdmin, go to the **Variables** tab and search for `max_connections` and `read_buffer_size`. Here are some screenshots showing those configured variables.

![max_connections](/docs/v2024.1.31/images/mysql/max_connection.png)

![read_buffer_size](/docs/v2024.1.31/images/mysql/read_buffer_size.png)
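Alternatively, you can verify the settings without phpMyAdmin by querying the server directly from the pod; a sketch using the credentials fetched above, with the output abridged:

```bash
$ kubectl exec -it -n demo custom-mysql-0 -- \
    mysql -uroot -p'MLO5_fPVKcqPiEu9' -e "SHOW VARIABLES LIKE 'max_connections';"
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 200   |
+-----------------+-------+
```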
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/). +- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/mysql/cli/) to manage databases like kubectl for Kubernetes. +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/my-config.cnf b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/my-config.cnf new file mode 100644 index 0000000000..ccd87f160c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/my-config.cnf @@ -0,0 +1,3 @@ +[mysqld] +max_connections = 200 +read_buffer_size = 1048576 diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/mysql-custom.yaml b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/mysql-custom.yaml new file mode 100644 index 0000000000..fddf33deb2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/mysql-custom.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: custom-mysql + namespace: demo +spec: + version: "8.0.35" + configSecret: + name: my-configuration + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/phpmyadmin.yaml b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/phpmyadmin.yaml new file mode 100644 index 0000000000..6b20b9d6ff --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/configuration/config-file/yamls/phpmyadmin.yaml @@ -0,0 +1,46 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: myadmin + template: + metadata: + labels: + app: myadmin + spec: + containers: + - image: phpmyadmin/phpmyadmin:latest + imagePullPolicy: Always + name: phpmyadmin + ports: + - containerPort: 80 + name: http + protocol: TCP + env: + - name: PMA_ARBITRARY + value: '1' + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: myadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-all-databases.png b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-all-databases.png new file mode 100644 index 0000000000..ec2c98698a Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-all-databases.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-charset.png b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-charset.png new file mode 100644 index 0000000000..20771b20b3 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-charset.png differ diff --git 
a/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/index.md b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/index.md
new file mode 100644
index 0000000000..64c041020a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/index.md
@@ -0,0 +1,226 @@
+---
+title: Run MySQL with Custom PodTemplate
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-configuration-using-podtemplate
+    name: Customize PodTemplate
+    parent: guides-mysql-configuration
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Run MySQL with Custom PodTemplate
+
+KubeDB supports providing custom configuration for MySQL via [PodTemplate](/docs/v2024.1.31/guides/mysql/concepts/database/#specpodtemplate). This tutorial will show you how to use KubeDB to run a MySQL database with custom configuration using PodTemplate.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/mysql/configuration/podtemplating/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/configuration/podtemplating/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the MySQL database.
+
+KubeDB accepts the following fields to be set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+
+Read about these fields in detail in the [PodTemplate concept](/docs/v2024.1.31/guides/mysql/concepts/database/#specpodtemplate).
+
+## CRD Configuration
+
+Below is the YAML for the MySQL created in this example. Here, [`spec.podTemplate.spec.env`](/docs/v2024.1.31/guides/mysql/concepts/database/#specpodtemplatespecenv) specifies environment variables and [`spec.podTemplate.spec.args`](/docs/v2024.1.31/guides/mysql/concepts/database/#specpodtemplatespecargs) provides extra arguments for the [MySQL Docker Image](https://hub.docker.com/_/mysql/).
+
+In this tutorial, an initial database `myDB` will be created by providing the env `MYSQL_DATABASE`, while the server character set will be set to `utf8mb4` by adding an extra argument through `args`. Note that the default `character-set-server` in MySQL 5.7.44 is `latin1`.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-misc-config
+  namespace: demo
+spec:
+  version: "5.7.44"
+  storageType: "Durable"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    metadata:
+      labels:
+        pass-to: pod
+      annotations:
+        annotate-to: pod
+    controller:
+      labels:
+        pass-to: statefulset
+      annotations:
+        annotate-to: statefulset
+    spec:
+      env:
+        - name: MYSQL_DATABASE
+          value: myDB
+      args:
+        - --character-set-server=utf8mb4
+      resources:
+        requests:
+          memory: "1Gi"
+          cpu: "250m"
+  terminationPolicy: Halt
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/configuration/podtemplating/yamls/mysql-misc-config.yaml
+mysql.kubedb.com/mysql-misc-config created
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret etc. If everything goes well, we will see that a pod with the name `mysql-misc-config-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=mysql-misc-config
+NAME                  READY   STATUS    RESTARTS   AGE
+mysql-misc-config-0   1/1     Running   0          9m28s
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+$ kubectl logs -f -n demo mysql-misc-config-0
+Initializing database
+.....
+Database initialized
+Initializing certificates
+...
+Certificates initialized
+MySQL init process in progress...
+....
+MySQL init process done. Ready for start up.
+....
+2022-06-28T13:22:26.076407Z 0 [Note] mysqld: ready for connections.
+Version: '5.7.44'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
+....
+```
+
+Once we see `[Note] mysqld: ready for connections.` in the log, the database is ready.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+First, deploy [phpMyAdmin](https://hub.docker.com/r/phpmyadmin/phpmyadmin/) to connect with the MySQL database we have just created.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/configuration/podtemplating/yamls/phpmyadmin.yaml
+deployment.extensions/myadmin created
+service/myadmin created
+```
+
+Then, open your browser and go to the following URL: _http://{node-ip}:{myadmin-svc-nodeport}_. For a kind cluster, you can get this URL by running the following command:
+
+```bash
+$ kubectl get svc -n demo myadmin -o json | jq '.spec.ports[].nodePort'
+30942
+
+$ kubectl get node -o json | jq '.items[].status.addresses[].address'
+"172.18.0.3"
+"kind-control-plane"
+"172.18.0.4"
+"kind-worker"
+"172.18.0.2"
+"kind-worker2"
+
+# expected url will be:
+url: http://172.18.0.4:30942
+```
+
+Now, let's connect to the database from the phpMyAdmin dashboard using the database pod IP and the MySQL root password.
+
+```bash
+$ kubectl get pods mysql-misc-config-0 -n demo -o yaml | grep IP
+  ...
+  hostIP: 10.0.2.15
+  podIP: 172.17.0.6
+
+$ kubectl get secrets -n demo mysql-misc-config-auth -o jsonpath='{.data.\user}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mysql-misc-config-auth -o jsonpath='{.data.\password}' | base64 -d
+MLO5_fPVKcqPiEu9
+```
+
+Once you have connected to the database with phpMyAdmin, go to the **SQL** tab and run `SHOW DATABASES;` to see all databases and `SHOW VARIABLES LIKE 'char%';` to see the character-set configuration.
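+
+If you would rather verify from the command line instead of the dashboard, the same checks can be run through `kubectl exec`; a minimal sketch, assuming the auto-generated auth secret `mysql-misc-config-auth` shown above:
+
+```bash
+# fetch the root password from the auth secret
+$ PASSWORD=$(kubectl get secrets -n demo mysql-misc-config-auth -o jsonpath='{.data.\password}' | base64 -d)
+
+# list the databases and inspect the character-set variables
+$ kubectl exec -it -n demo mysql-misc-config-0 -- \
+    mysql -u root --password="$PASSWORD" \
+    -e "SHOW DATABASES; SHOW VARIABLES LIKE 'char%';"
+```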
You will see that a database named `myDB` has been created, and the character-set variables are set to `utf8mb4`.
+
+![mysql_all_databases](/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-all-databases.png)
+
+![mysql_charset](/docs/v2024.1.31/guides/mysql/configuration/podtemplating/images/mysql-charset.png)
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo my/mysql-misc-config -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo my/mysql-misc-config
+
+kubectl delete deployment -n demo myadmin
+kubectl delete service -n demo myadmin
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- [Quickstart MySQL](/docs/v2024.1.31/guides/mysql/quickstart/) with KubeDB Operator.
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mysql/cli/) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/mysql-misc-config.yaml b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/mysql-misc-config.yaml
new file mode 100644
index 0000000000..dcaf4deb5e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/mysql-misc-config.yaml
@@ -0,0 +1,37 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-misc-config
+  namespace: demo
+spec:
+  version: "5.7.44"
+  storageType: "Durable"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    metadata:
+      labels:
+        pass-to: pod
+      annotations:
+        annotate-to: pod
+    controller:
+      labels:
+        pass-to: statefulset
+      annotations:
+        annotate-to: statefulset
+    spec:
+      env:
+        - name: MYSQL_DATABASE
+          value: myDB
+      args:
+        - --character-set-server=utf8mb4
+      resources:
+        requests:
+          memory: "1Gi"
+          cpu: "250m"
+  terminationPolicy: Halt
diff --git a/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/phpmyadmin.yaml b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/phpmyadmin.yaml
new file mode 100644
index 0000000000..6b20b9d6ff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/configuration/podtemplating/yamls/phpmyadmin.yaml
@@ -0,0 +1,46 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: myadmin
+  name: myadmin
+  namespace: demo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: myadmin
+  template:
+    metadata:
+      labels:
+        app: myadmin
+    spec:
+      containers:
+        - image: phpmyadmin/phpmyadmin:latest
+          imagePullPolicy: Always
+          name: phpmyadmin
+          ports:
+            - containerPort: 80
+              name: http
+              protocol: TCP
+          env:
+            - name: PMA_ARBITRARY
+              value: '1'
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: myadmin
+  name: myadmin
+  namespace: demo
+spec:
+  ports:
+  - name: http
+    port: 80
+    protocol: TCP
+    targetPort: http
+  selector:
+    app: myadmin
+  type: LoadBalancer
diff --git a/content/docs/v2024.1.31/guides/mysql/custom-rbac/index.md b/content/docs/v2024.1.31/guides/mysql/custom-rbac/index.md
new file mode 100644
index 0000000000..c5b39b4bde
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/custom-rbac/index.md
@@ -0,0 +1,293 @@
+---
+title: Run MySQL with Custom RBAC resources
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-custom-rbac
+    name: Custom RBAC
+    parent: guides-mysql
+    weight: 31
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom RBAC resources
+
+KubeDB (version 0.13.0 and higher) supports finer user control over role-based access permissions provided to a MySQL instance. This tutorial will show you how to use KubeDB to run a MySQL instance with custom RBAC resources.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/mysql/custom-rbac/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/custom-rbac/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for MySQL. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in the MySQL crd. If this field is left empty, the KubeDB operator will create a service account with a name matching the MySQL crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually.
+
+This guide will show you how to create a custom `ServiceAccount`, `Role`, and `RoleBinding` for a MySQL instance named `quick-mysql` to provide the bare minimum access permissions.
+
+## Custom RBAC for MySQL
+
+At first, let's create a `ServiceAccount` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo my-custom-serviceaccount
+serviceaccount/my-custom-serviceaccount created
+```
+
+It should create a service account.
+
+```yaml
+$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2022-06-28T13:43:26Z"
+  name: my-custom-serviceaccount
+  namespace: demo
+  resourceVersion: "1604181"
+  uid: bcc79af3-549e-4037-aece-beffab65a6ef
+secrets:
+- name: my-custom-serviceaccount-token-bvlb5
+
+```
+
+Now, we need to create a role that has the necessary access permissions for the MySQL instance named `quick-mysql`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/custom-rbac/yamls/my-custom-role.yaml
+role.rbac.authorization.k8s.io/my-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-custom-role
+  namespace: demo
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - mysql-db
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
+```
+
+This permission is required for MySQL pods running on PSP-enabled clusters.
+
+Now create a `RoleBinding` to bind this `Role` with the already created service account.
+
+```bash
+$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
+rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created
+
+```
+
+It should bind `my-custom-role` and `my-custom-serviceaccount` successfully.
+
+```yaml
+$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  creationTimestamp: "2022-06-28T13:45:58Z"
+  name: my-custom-rolebinding
+  namespace: demo
+  resourceVersion: "1604463"
+  uid: c1242a62-a206-45bf-a757-46e0e20484ca
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: my-custom-role
+subjects:
+- kind: ServiceAccount
+  name: my-custom-serviceaccount
+  namespace: demo
+
+```
+
+Now, create a MySQL crd, setting the `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/custom-rbac/yamls/my-custom-db.yaml
+mysql.kubedb.com/quick-mysql created
+```
+
+Below is the YAML for the MySQL crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: quick-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret etc. If everything goes well, we should see that a pod with the name `quick-mysql-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo quick-mysql-0
+NAME            READY   STATUS    RESTARTS   AGE
+quick-mysql-0   1/1     Running   0          2m44s
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+$ kubectl logs -f -n demo quick-mysql-0
+...
+2022-06-28 13:46:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.35-1debian10 started.
+2022-06-28 13:46:46+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
+2022-06-28 13:46:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.35-1debian10 started.
+
+...
+2022-06-28T13:47:02.915445Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
+2022-06-28T13:47:02.915504Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.35'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
+
+```
+
+Once we see `/usr/sbin/mysqld: ready for connections.` in the log, the database is ready.
+
+## Reusing Service Account
+
+An existing service account can be reused for another MySQL instance. No new access permission is required to run the new MySQL instance.
+
+Now, create the MySQL crd `minute-mysql` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/custom-rbac/yamls/my-custom-db-two.yaml
+mysql.kubedb.com/minute-mysql created
+```
+
+Below is the YAML for the MySQL crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: minute-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret etc. If everything goes well, we should see that a pod with the name `minute-mysql-0` has been created.
+
+Check that the statefulset's pod is running
+
+```bash
+$ kubectl get pod -n demo minute-mysql-0
+NAME             READY   STATUS    RESTARTS   AGE
+minute-mysql-0   1/1     Running   0          14m
+```
+
+Check the pod's log to see if the database is ready
+
+```bash
+...
+2022-06-28 13:48:53+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.35-1debian10 started.
+2022-06-28 13:48:53+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
+2022-06-28 13:48:53+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.35-1debian10 started.
+2022-06-28 13:48:53+00:00 [Note] [Entrypoint]: Initializing database files
+2022-06-28T13:48:53.986191Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.35) initializing of server in progress as process 43
+...
+2022-06-28T13:49:11.543893Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
+2022-06-28T13:49:11.543917Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.35'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
+
+
+
+```
+
+Seeing `/usr/sbin/mysqld: ready for connections.` in the log signifies that the database is running successfully.
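+
+As a quick sanity check, you can also confirm that both pods are actually running under the shared service account. The snippet below is a small sketch using standard pod spec fields:
+
+```bash
+# each pod should report the user-supplied service account
+$ kubectl get pod -n demo quick-mysql-0 -o jsonpath='{.spec.serviceAccountName}'
+my-custom-serviceaccount
+
+$ kubectl get pod -n demo minute-mysql-0 -o jsonpath='{.spec.serviceAccountName}'
+my-custom-serviceaccount
+```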
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo my/quick-mysql -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo my/quick-mysql
+
+kubectl patch -n demo my/minute-mysql -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo my/minute-mysql
+
+kubectl delete -n demo role my-custom-role
+kubectl delete -n demo rolebinding my-custom-rolebinding
+
+kubectl delete sa -n demo my-custom-serviceaccount
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- [Quickstart MySQL](/docs/v2024.1.31/guides/mysql/quickstart/) with KubeDB Operator.
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mysql/cli/) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db-two.yaml b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db-two.yaml
new file mode 100644
index 0000000000..029a81ffae
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db-two.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: minute-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
diff --git a/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db.yaml b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db.yaml
new file mode 100644
index 0000000000..11bddb8c97
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-db.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: quick-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
diff --git a/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-role.yaml b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-role.yaml
new file mode 100644
index 0000000000..db6d601259
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/custom-rbac/yamls/my-custom-role.yaml
@@ -0,0 +1,14 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-custom-role
+  namespace: demo
+rules:
+  - apiGroups:
+      - policy
+    resourceNames:
+      - mysql-db
+    resources:
+      - podsecuritypolicies
+    verbs:
+      - use
diff --git
a/content/docs/v2024.1.31/guides/mysql/initialization/index.md b/content/docs/v2024.1.31/guides/mysql/initialization/index.md
new file mode 100644
index 0000000000..e0973d6fd0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/initialization/index.md
@@ -0,0 +1,525 @@
+---
+title: Initialize MySQL using Script
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-initialization
+    name: Initialization Using Script
+    parent: guides-mysql
+    weight: 41
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Initialize MySQL using Script
+
+This tutorial will show you how to use KubeDB to initialize a MySQL database with \*.sql, \*.sh and/or \*.sql.gz scripts.
+In this tutorial, we will use a `.sql` script stored in the GitHub repository [kubedb/mysql-init-scripts](https://github.com/kubedb/mysql-init-scripts).
+
+> Note: The yaml files that are used in this tutorial are stored in [docs/guides/mysql/initialization/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. It will also use phpMyAdmin to connect to and test the MySQL database once it is running. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+
+  $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls/phpmyadmin.yaml
+  deployment.extensions/myadmin created
+  service/myadmin created
+
+  $ kubectl get pods -n demo
+  NAME                       READY   STATUS    RESTARTS   AGE
+  myadmin-66cc8d4c77-wkwht   1/1     Running   0          5m20s
+
+  $ kubectl get service -n demo
+  NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
+  myadmin   LoadBalancer   10.104.142.213   <pending>     80:31529/TCP   3m14s
+  ```
+
+  Then, open your browser and go to the following URL: _http://{node-ip}:{myadmin-svc-nodeport}_. For a kind cluster, you can get this URL by running the following command:
+
+  ```bash
+  $ kubectl get svc -n demo myadmin -o json | jq '.spec.ports[].nodePort'
+  31529
+
+  $ kubectl get node -o json | jq '.items[].status.addresses[].address'
+  "172.18.0.3"
+  "kind-control-plane"
+  "172.18.0.4"
+  "kind-worker"
+  "172.18.0.2"
+  "kind-worker2"
+
+  # expected url will be:
+  url: http://172.18.0.4:31529
+  ```
+
+## Prepare Initialization Scripts
+
+MySQL supports initialization with `.sh`, `.sql` and `.sql.gz` files. In this tutorial, we will use the `init.sql` script from the [mysql-init-scripts](https://github.com/kubedb/mysql-init-scripts) git repository to create a table `kubedb_table` in the `mysql` database.
+
+We will use a ConfigMap as the script source; any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes) can serve as the script source as well.
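+
+For orientation, the script itself is plain SQL. A minimal illustrative equivalent is sketched below; the column names are hypothetical, and the canonical script in the [mysql-init-scripts](https://github.com/kubedb/mysql-init-scripts) repository is what actually gets mounted:
+
+```sql
+-- create a table in the `mysql` database and seed it with a few rows,
+-- mirroring what this tutorial verifies at the end
+USE mysql;
+
+CREATE TABLE IF NOT EXISTS kubedb_table (
+  id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
+  name VARCHAR(255) NOT NULL
+);
+
+INSERT INTO kubedb_table (name) VALUES ('name1'), ('name2'), ('name3');
+```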
+
+At first, we will create a ConfigMap from the `init.sql` file. Then, we will provide this ConfigMap as the script source via `spec.init.script` of the MySQL crd spec.
+
+Let's create a ConfigMap with the initialization script,
+
+```bash
+$ kubectl create configmap -n demo my-init-script \
+--from-literal=init.sql="$(curl -fsSL https://github.com/kubedb/mysql-init-scripts/raw/master/init.sql)"
+configmap/my-init-script created
+```
+
+## Create a MySQL database with Init-Script
+
+Below is the `MySQL` object created in this tutorial. A variant is shown for each supported topology; pick the one that matches your setup.
+**Group Replication:**
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-init-script + namespace: demo +spec: + version: "8.0.35" + topology: + mode: GroupReplication + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: my-init-script + +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls/initialize-gr.yaml +mysql.kubedb.com/mysql-init-script created +``` + +
+**InnoDB Cluster:**
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-init-script + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: my-init-script + +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls/initialize-innodb.yaml +mysql.kubedb.com/mysql-init-script created +``` +
+**Semi-sync Replication:**
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-init-script
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1
+      sourceTimeout: 23h
+      errantTransactionRecoveryPolicy: PseudoTransaction
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: my-init-script
+
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls/initialize-semi-sync.yaml
+mysql.kubedb.com/mysql-init-script created
+```
+
+**Standalone:**
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-init-script
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: my-init-script
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/initialization/yamls/initialize-standalone.yaml
+mysql.kubedb.com/mysql-init-script created
+```
+ +
+
+
+Here,
+
+- `spec.init.script` specifies a script source used to initialize the database before the database server starts. In this tutorial, a sample `.sql` script from the git repository `https://github.com/kubedb/mysql-init-scripts.git` is used to create a test database. You can use other [volume sources](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) instead of `ConfigMap`. The \*.sql, \*.sql.gz and/or \*.sh scripts stored inside the root folder will be executed alphabetically. The scripts inside child folders will be skipped.
+
+KubeDB operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MySQL object name. KubeDB operator will also create a governing service for StatefulSets with the name `kubedb`, if one is not already present. No MySQL-specific RBAC roles are required for [RBAC enabled clusters](/docs/v2024.1.31/setup/README#using-yaml).
+
+```bash
+$ kubectl dba describe my -n demo mysql-init-script
+Name:               mysql-init-script
+Namespace:          demo
+CreationTimestamp:  Thu, 30 Jun 2022 12:21:15 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-init-script","namespace":"demo"},"spec":{"init":{"script"...
+Replicas:           1  total
+Status:             Provisioning
+StorageType:        Durable
+Volume:
+  StorageClass:  standard
+  Capacity:      1Gi
+  Access Modes:  RWO
+Paused:              false
+Halted:              false
+Termination Policy:  Delete
+
+StatefulSet:
+  Name:               mysql-init-script
+  CreationTimestamp:  Thu, 30 Jun 2022 12:21:15 +0600
+  Labels:               app.kubernetes.io/component=database
+                        app.kubernetes.io/instance=mysql-init-script
+                        app.kubernetes.io/managed-by=kubedb.com
+                        app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:        <none>
+  Replicas:           824644789336 desired | 1 total
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         mysql-init-script
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-init-script
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.96.198.184
+  Port:         primary  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.23:3306
+
+Service:
+  Name:         mysql-init-script-pods
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-init-script
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           None
+  Port:         db  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.23:3306
+
+Auth Secret:
+  Name:         mysql-init-script-auth
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-init-script
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         kubernetes.io/basic-auth
+  Data:
+    password:  16 bytes
+    username:  4 bytes
+
+Init:
+  Script Source:
+    Volume:
+      Type:      ConfigMap (a volume populated by a ConfigMap)
+      Name:      my-init-script
+      Optional:  false
+
+AppBinding:
+  Metadata:
+    Annotations:
+      kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-init-script","namespace":"demo"},"spec":{"init":{"script":{"configMap":{"name":"my-init-script"}}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"8.0.35"}} + + Creation Timestamp: 2022-06-30T06:21:15Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mysql-init-script + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + Name: mysql-init-script + Namespace: demo + Spec: + Client Config: + Service: + Name: mysql-init-script + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(mysql-init-script.demo.svc:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.21 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.21 + Secret: + Name: mysql-init-script-auth + Type: kubedb.com/mysql + Version: 8.0.35 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 10s KubeDB operator Successfully created governing service + Normal Successful 10s KubeDB operator Successfully created service for primary/standalone + Normal Successful 10s KubeDB operator Successfully created database auth secret + Normal Successful 10s KubeDB operator Successfully created StatefulSet + Normal Successful 10s KubeDB operator Successfully created MySQL + Normal Successful 10s KubeDB operator Successfully created appbinding + + +$ kubectl get statefulset -n demo +NAME READY AGE +mysql-init-script 1/1 2m24s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +data-mysql-init-script-0 Bound pvc-32a59975-2972-4122-9635-22fe19483145 1Gi RWO standard 3m + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-32a59975-2972-4122-9635-22fe19483145 1Gi RWO Delete Bound demo/data-mysql-init-script-0 standard 3m25s + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +myadmin LoadBalancer 10.104.142.213 80:31529/TCP 23m +mysql-init-script ClusterIP 10.103.202.117 3306/TCP 3m49s +mysql-init-script-pods ClusterIP None 3306/TCP 3m49s +``` + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. 
Run the following command to see the modified MySQL object:
+
+```yaml
+$ kubectl get my -n demo mysql-init-script -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-init-script","namespace":"demo"},"spec":{"init":{"script":{"configMap":{"name":"my-init-script"}}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"8.0.35"}}
+  creationTimestamp: "2022-06-30T06:21:15Z"
+  finalizers:
+  - kubedb.com
+  generation: 3
+  name: mysql-init-script
+  namespace: demo
+  resourceVersion: "1697522"
+  uid: 932c1fe3-6692-4ddc-b4cd-fe34e0d5ebc8
+spec:
+  allowedReadReplicas:
+    namespaces:
+      from: Same
+  allowedSchemas:
+    namespaces:
+      from: Same
+  authSecret:
+    name: mysql-init-script-auth
+  coordinator:
+    resources: {}
+  init:
+    initialized: true
+    script:
+      configMap:
+        name: my-init-script
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      affinity:
+        ...
+      resources:
+        limits:
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      serviceAccountName: mysql-init-script
+  replicas: 1
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Delete
+  useAddressType: DNS
+  version: 8.0.35
+status:
+  conditions:
+    ...
+    observedGeneration: 2
+    reason: DatabaseSuccessfullyProvisioned
+    status: "True"
+    type: Provisioned
+  observedGeneration: 2
+  phase: Ready
+
+```
+
+KubeDB operator has created a new Secret called `mysql-init-script-auth` *(format: {mysql-object-name}-auth)* for storing the password of the MySQL superuser. This secret contains a `username` key holding the *username* of the MySQL superuser and a `password` key holding its *password*.
+If you want to use an existing secret, please specify it when creating the MySQL object using `spec.authSecret.name`. While creating this secret manually, make sure it contains the two keys `username` and `password`, and use `root` as the value of `username`.
+
+Now, you can connect to this database from the phpMyAdmin dashboard using the database pod IP and the `mysql` user password.
+
+```bash
+$ kubectl get pods mysql-init-script-0 -n demo -o yaml | grep IP
+  hostIP: 10.0.2.15
+  podIP: 10.244.2.9
+
+$ kubectl get secrets -n demo mysql-init-script-auth -o jsonpath='{.data.\user}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mysql-init-script-auth -o jsonpath='{.data.\password}' | base64 -d
+1Pc7bwSygrv1MX1Q
+```
+
+---
+Note: In MySQL `8.0.14-v1`, connecting to phpMyAdmin may give an error, as it uses the `caching_sha2_password` and `sha256_password` authentication plugins over `mysql_native_password`. If the error happens, apply the following workaround. However, it is not recommended to change authentication plugins. See [here](https://stackoverflow.com/questions/49948350/phpmyadmin-on-mysql-8-0) for alternative solutions.
+
+```bash
+kubectl exec -it -n demo mysql-init-script-0 -- mysql -u root --password=1Pc7bwSygrv1MX1Q -e "ALTER USER root IDENTIFIED WITH mysql_native_password BY '1Pc7bwSygrv1MX1Q';"
+```
+
+---
+
+Now, open your browser and go to the following URL: _http://{node-ip}:{myadmin-svc-nodeport}_. To log into phpMyAdmin, use host __`10.244.2.9`__, username __`root`__ and password __`1Pc7bwSygrv1MX1Q`__.
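+
+If you want to double-check from the terminal as well, the seeded rows can be read back directly with the credentials fetched above; a minimal sketch:
+
+```bash
+# the init script created `kubedb_table` inside the `mysql` database;
+# this should print the three inserted rows
+$ kubectl exec -it -n demo mysql-init-script-0 -- \
+    mysql -u root --password='1Pc7bwSygrv1MX1Q' \
+    -e "SELECT * FROM mysql.kubedb_table;"
+```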
+
+As you can see here, the initial script has successfully created a table named `kubedb_table` in the `mysql` database and inserted three rows of data into that table.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mysql/mysql-init-script -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mysql/mysql-init-script
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-gr.yaml b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-gr.yaml
new file mode 100644
index 0000000000..c64c9ad01a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-gr.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-init-script
+  namespace: demo
+spec:
+  version: "8.0.35"
+  topology:
+    mode: GroupReplication
+  replicas: 3
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: my-init-script
diff --git a/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-innodb.yaml b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-innodb.yaml
new file mode 100644
index 0000000000..973ff400a9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-innodb.yaml
@@ -0,0 +1,24 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-init-script
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  replicas: 3
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: my-init-script
diff --git a/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-semi-sync.yaml
new file mode 100644
index 0000000000..2c0bb9af7b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-semi-sync.yaml
@@ -0,0 +1,25 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-init-script
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1
+      sourceTimeout: 23h
+      errantTransactionRecoveryPolicy: PseudoTransaction
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    script:
+      configMap:
+        name: my-init-script
diff --git
a/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-standalone.yaml b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-standalone.yaml new file mode 100644 index 0000000000..4544bb9804 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/initialize-standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-init-script + namespace: demo +spec: + version: "8.0.35" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: my-init-script diff --git a/content/docs/v2024.1.31/guides/mysql/initialization/yamls/phpmyadmin.yaml b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/phpmyadmin.yaml new file mode 100644 index 0000000000..8674cb45c8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/initialization/yamls/phpmyadmin.yaml @@ -0,0 +1,46 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + replicas: 3 + selector: + matchLabels: + app: myadmin + template: + metadata: + labels: + app: myadmin + spec: + containers: + - image: phpmyadmin/phpmyadmin + imagePullPolicy: Always + name: phpmyadmin + ports: + - containerPort: 80 + name: http + protocol: TCP + env: + - name: PMA_ARBITRARY + value: '1' + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: myadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/_index.md b/content/docs/v2024.1.31/guides/mysql/monitoring/_index.md new file mode 100755 index 0000000000..837a3eda4e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Monitoring MySQL +menu: + docs_v2024.1.31: + identifier: guides-mysql-monitoring + name: Monitoring + parent: guides-mysql + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/index.md b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/index.md new file mode 100644 index 0000000000..27b93e0e5f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/index.md @@ -0,0 +1,371 @@ +--- +title: Monitor MySQL using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: guides-mysql-monitoring-builtin-prometheus + name: Builtin Prometheus + parent: guides-mysql-monitoring + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring MySQL with builtin Prometheus + +This tutorial will show you how to monitor MySQL database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. 
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mysql/monitoring/overview/).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/mysql/monitoring/builtin-prometheus/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/monitoring/builtin-prometheus/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MySQL with Monitoring Enabled
+
+At first, let's deploy a MySQL database with monitoring enabled. Below is the MySQL object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: builtin-prom-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the MySQL crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/monitoring/builtin-prometheus/yamls/builtin-prom-mysql.yaml
+mysql.kubedb.com/builtin-prom-mysql created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ watch -n 3 kubectl get mysql -n demo builtin-prom-mysql
+Every 3.0s: kubectl get mysql -n demo builtin-prom-mysql         suaas-appscode: Tue Aug 25 16:07:29 2020
+
+NAME                 VERSION   STATUS    AGE
+builtin-prom-mysql   8.0.35    Running   3m33s
+```
+
+KubeDB will create a separate stats service with the name `{MySQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-mysql"
+NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-mysql         ClusterIP   10.104.141.73   <none>        3306/TCP    4m45s
+builtin-prom-mysql-gvr     ClusterIP   None            <none>        3306/TCP    4m45s
+builtin-prom-mysql-stats   ClusterIP   10.103.209.43   <none>        56790/TCP   2m9s
+```
+
+Here, the `builtin-prom-mysql-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-mysql-stats
+Name:              builtin-prom-mysql-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=mysqls.kubedb.com
+                   app.kubernetes.io/instance=builtin-prom-mysql
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=builtin-prom-mysql
+Type:              ClusterIP
+IP:                10.103.209.43
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         10.244.1.5:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to the [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_config is used to filter only desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB-supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path: <path>" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB-supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add service namespace as label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/monitoring/builtin-prometheus/yamls/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to grant the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-8568c86d86-95zhn   1/1     Running   0          77s
+```
+
+Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-8568c86d86-95zhn` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `builtin-prom-mysql-stats` service as one of the targets.
+
+*Figure: Prometheus Target*
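+
+If you prefer to verify the target from the command line instead of the browser, you can also query the Prometheus HTTP API through the same port-forward. This is a minimal sketch, assuming `curl` and `jq` are available on your workstation:
+
+```bash
+# list the active targets of the "kubedb-databases" job along with their health
+$ curl -s http://localhost:9090/api/v1/targets | \
+    jq -r '.data.activeTargets[] | select(.labels.job == "kubedb-databases") | "\(.labels.service) => \(.health)"'
+# the builtin-prom-mysql-stats endpoint should eventually report "up"
+```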
+
+In the Prometheus dashboard, check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `MySQL` database `builtin-prom-mysql` through the stats service `builtin-prom-mysql-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+kubectl delete -n demo my/builtin-prom-mysql
+
+kubectl delete -n monitoring deployment.apps/prometheus
+
+kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+kubectl delete -n monitoring serviceaccount/prometheus
+kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+kubectl delete ns demo
+kubectl delete ns monitoring
+```
+
+## Next Steps
+
+- Monitor your MySQL database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/builtin-prom-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/builtin-prom-mysql.yaml
new file mode 100644
index 0000000000..c107c98c9b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/builtin-prom-mysql.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: builtin-prom-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/prom-config.yaml b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/prom-config.yaml
new file mode 100644
index 0000000000..45aee6317a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/yamls/prom-config.yaml
@@ -0,0 +1,68 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_configs is used to filter only the desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add service namespace as label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/overview/images/database-monitoring-overview.svg b/content/docs/v2024.1.31/guides/mysql/monitoring/overview/images/database-monitoring-overview.svg
new file mode 100644
index 0000000000..395eefb334
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/monitoring/overview/images/database-monitoring-overview.svg
@@ -0,0 +1 @@
+ \ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/overview/index.md b/content/docs/v2024.1.31/guides/mysql/monitoring/overview/index.md
new file mode 100644
index 0000000000..0e38a0e1b2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/monitoring/overview/index.md
@@ -0,0 +1,97 @@
+---
+title: MySQL Monitoring Overview
+description: MySQL Monitoring Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-monitoring-overview
+    name: Overview
+    parent: guides-mysql-monitoring
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring MySQL with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+*Figure: Database Monitoring Flow*
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                  |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                    |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                           |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                     |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                             |
+
+## Sample Configuration
+
+A sample YAML for a MySQL crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: prom-operator-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+If a database is configured to use basic authentication, the exporter sidecar also needs the credentials to collect metrics. Such settings can be passed through the `spec.monitor.prometheus.exporter.args` and `spec.monitor.prometheus.exporter.env` fields.
+
+Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the database's namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
+
+## Next Steps
+
+- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/index.md b/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/index.md
new file mode 100644
index 0000000000..4a5063fa6f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/index.md
@@ -0,0 +1,312 @@
+---
+title: Monitor MySQL using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-monitoring-prometheus-operator
+    name: Prometheus Operator
+    parent: guides-mysql-monitoring
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring MySQL Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor a MySQL database deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/mysql/monitoring/overview/).
+
+- To keep database resources isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
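+
+As a quick sanity check, you can confirm that the Prometheus operator CRDs are registered in the cluster before moving on. This is an optional, minimal check; the exact CRD list depends on the operator version you installed:
+
+```bash
+# the Prometheus operator registers these CRDs, so their presence
+# indicates the operator has been installed
+$ kubectl get crd prometheuses.monitoring.coreos.com servicemonitors.monitoring.coreos.com
+```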
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/mysql/monitoring/prometheus-operator/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/monitoring/prometheus-operator/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in the `spec.monitor.prometheus.labels` field of the MySQL crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE   NAME         VERSION   REPLICAS   AGE
+default     prometheus             1          2m19s
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in the **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `default` namespace.
+
+```yaml
+$ kubectl get prometheus -n default prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"default"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorNamespaceSelector":{"matchLabels":{"prometheus":"prometheus"}},"serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: "2020-08-25T04:02:07Z"
+  generation: 1
+  labels:
+    prometheus: prometheus
+  ...
+  name: prometheus
+  namespace: default
+  resourceVersion: "2087"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus
+  uid: 972a50cb-b751-418b-b2bc-e0ecc9232730
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorNamespaceSelector:
+    matchLabels:
+      prometheus: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+- The `spec.serviceMonitorSelector` field specifies which ServiceMonitors should be included. The above label `release: prometheus` is used to select `ServiceMonitors` by its selector. So, we are going to use this label in the `spec.monitor.prometheus.labels` field of the MySQL crd.
+- The `spec.serviceMonitorNamespaceSelector` field specifies that the `ServiceMonitors` can be selected outside the Prometheus namespace by Prometheus using a namespace selector. The above label `prometheus: prometheus` is used to select the namespace where the `ServiceMonitor` is created.
+
+### Add Label to database namespace
+
+KubeDB creates a `ServiceMonitor` in the database namespace `demo`. We need to add a label to the `demo` namespace. Prometheus will select this namespace by using its `spec.serviceMonitorNamespaceSelector` field.
+
+Let's add the label `prometheus: prometheus` to the `demo` namespace,
+
+```bash
+$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}'
+namespace/demo patched
+```
+
+## Deploy MySQL with Monitoring Enabled
+
+At first, let's deploy a MySQL database with monitoring enabled. Below is the MySQL object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: coreos-prom-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+
+- `monitor.prometheus.labels` specifies that KubeDB should create `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.interval` indicates that the Prometheus server should scrape metrics from this database with a 10 second interval.
+
+Let's create the MySQL object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/monitoring/prometheus-operator/yamls/prom-operator-mysql.yaml
+mysql.kubedb.com/prom-operator-mysql created
+```
+
+Now, wait for the database to go into `Running` state.
+
+```bash
+$ watch -n 3 kubectl get mysql -n demo coreos-prom-mysql
+Every 3.0s: kubectl get mysql -n demo coreos-prom-mysql        suaas-appscode: Tue Aug 25 11:53:34 2020
+
+NAME                VERSION   STATUS    AGE
+coreos-prom-mysql   8.0.35    Running   2m53s
+```
+
+KubeDB will create a separate stats service with the name `{MySQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-mysql"
+NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-mysql         ClusterIP   10.103.228.135   <none>        3306/TCP    3m36s
+coreos-prom-mysql-gvr     ClusterIP   None             <none>        3306/TCP    3m36s
+coreos-prom-mysql-stats   ClusterIP   10.106.236.14    <none>        56790/TCP   50s
+```
+
+Here, the `coreos-prom-mysql-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo coreos-prom-mysql-stats
+Name:              coreos-prom-mysql-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=mysqls.kubedb.com
+                   app.kubernetes.io/instance=coreos-prom-mysql
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=coreos-prom-mysql
+Type:              ClusterIP
+IP:                10.106.236.14
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         10.244.2.6:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `demo` namespace that selects the endpoints of the `coreos-prom-mysql-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME                            AGE
+kubedb-demo-coreos-prom-mysql   3m16s
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the MySQL crd.
+
+```yaml
+$ kubectl get servicemonitor -n demo kubedb-demo-coreos-prom-mysql -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2020-08-25T05:53:27Z"
+  generation: 1
+  labels:
+    release: prometheus
+  ...
+  name: kubedb-demo-coreos-prom-mysql
+  namespace: demo
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: Service
+    name: coreos-prom-mysql-stats
+    uid: cf4ce3ec-a78e-4828-9fee-941c77eb965e
+  resourceVersion: "28659"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/demo/servicemonitors/kubedb-demo-coreos-prom-mysql
+  uid: 9cec794a-dfee-49dc-a809-6c9d6faac1df
+spec:
+  endpoints:
+  - bearerTokenSecret:
+      key: ""
+    honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: mysqls.kubedb.com
+      app.kubernetes.io/instance: coreos-prom-mysql
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the MySQL crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-mysql-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n default -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          121m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n default prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `prom-http` endpoint of the `coreos-prom-mysql-stats` service as one of the targets.
+
+*Figure: Prometheus Target*
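+
+You can also confirm from the command line that Prometheus is actually collecting MySQL metrics. This is a minimal sketch, assuming the port-forward above is still running, `curl` and `jq` are available, and your exporter version exposes the usual `mysql_up` gauge:
+
+```bash
+# query the "mysql_up" metric; a value of "1" means the exporter can reach the database
+$ curl -s 'http://localhost:9090/api/v1/query?query=mysql_up' | jq '.data.result[].value'
+```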
+ +Check the `endpoint` and `service` labels marked by red rectangle. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +# cleanup database +kubectl delete -n demo my/coreos-prom-mysql + +# cleanup Prometheus resources if exist +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/coreos-operator/artifacts/prometheus.yaml +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/coreos-operator/artifacts/prometheus-rbac.yaml + +# cleanup Prometheus operator resources if exist +kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml + +# delete namespace +kubectl delete ns demo +``` + +## Next Steps + +- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/). +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/). +- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/). +- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/yamls/prom-operator-mysql.yaml b/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/yamls/prom-operator-mysql.yaml new file mode 100644 index 0000000000..8cbc3ab4dc --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/yamls/prom-operator-mysql.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: prom-operator-mysql + namespace: demo +spec: + version: "8.0.35" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/_index.md b/content/docs/v2024.1.31/guides/mysql/pitr/_index.md new file mode 100644 index 0000000000..a99061443d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/_index.md @@ -0,0 +1,22 @@ +--- +title: Continuous Archiving and Point-in-time Recovery +menu: + docs_v2024.1.31: + identifier: pitr-mysql + name: Point-in-time Recovery + parent: guides-mysql + weight: 42 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/archiver.md b/content/docs/v2024.1.31/guides/mysql/pitr/archiver.md new file mode 100644 index 0000000000..94801e3fee --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/archiver.md @@ -0,0 +1,468 @@ +--- +title: Continuous 
+menu:
+  docs_v2024.1.31:
+    identifier: pitr-mysql-archiver
+    name: Overview
+    parent: pitr-mysql
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB MySQL - Continuous Archiving and Point-in-time Recovery
+
+Here, we will show you how to use KubeDB to provision a MySQL database with continuous archiving of the transaction logs, and how to restore it to a point in time.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the `KubeDB` operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+Install the `KubeStash` operator in your cluster following the steps [here](https://github.com/kubestash/installer/tree/master/charts/kubestash).
+
+Install `SideKick` in your cluster following the steps [here](https://github.com/kubeops/installer/tree/master/charts/sidekick).
+
+Install `External-snapshotter` in your cluster following the steps [here](https://github.com/kubernetes-csi/external-snapshotter/tree/release-5.0).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+> Note: The yaml files used in this tutorial are stored in [docs/guides/mysql/pitr/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/pitr/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Continuous Archiving
+Continuous archiving involves making regular copies (or "archives") of the MySQL transaction log files. To ensure continuous archiving to a remote location, we need to prepare a `BackupStorage`, a `RetentionPolicy`, and a `MySQLArchiver` for the KubeDB managed MySQL databases.
+
+### BackupStorage
+BackupStorage is a CR provided by KubeStash that can manage storage from various providers like GCS, S3, and more.
+
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: BackupStorage
+metadata:
+  name: linode-storage
+  namespace: demo
+spec:
+  storage:
+    provider: s3
+    s3:
+      bucket: mehedi-mysql-wal-g
+      endpoint: https://ap-south-1.linodeobjects.com
+      region: ap-south-1
+      prefix: backup
+      secret: storage
+  usagePolicy:
+    allowedNamespaces:
+      from: All
+  default: true
+  deletionPolicy: WipeOut
+```
+
+```bash
+ $ kubectl apply -f backupstorage.yaml
+ backupstorage.storage.kubestash.com/linode-storage created
+```
+
+### Secrets for BackupStorage
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: storage
+  namespace: demo
+stringData:
+  AWS_ACCESS_KEY_ID: "*************26CX"
+  AWS_SECRET_ACCESS_KEY: "************jj3lp"
+  AWS_ENDPOINT: https://ap-south-1.linodeobjects.com
+```
+
+```bash
+ $ kubectl apply -f storage-secret.yaml
+ secret/storage created
+```
+
+### RetentionPolicy
+RetentionPolicy is a CR provided by KubeStash that allows you to set how long you'd like to retain the backup data.
+
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: RetentionPolicy
+metadata:
+  name: mysql-retention-policy
+  namespace: demo
+spec:
+  maxRetentionPeriod: "30d"
+  successfulSnapshots:
+    last: 100
+  failedSnapshots:
+    last: 2
+```
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/pitr/yamls/retention-policy.yaml
+retentionpolicy.storage.kubestash.com/mysql-retention-policy created
+```
+
+### MySQLArchiver
+MySQLArchiver is a CR provided by KubeDB for managing the archiving of MySQL binlog files and performing volume-level backups.
+
+```yaml
+apiVersion: archiver.kubedb.com/v1alpha1
+kind: MySQLArchiver
+metadata:
+  name: mysqlarchiver-sample
+  namespace: demo
+spec:
+  pause: false
+  databases:
+    namespaces:
+      from: Selector
+      selector:
+        matchLabels:
+          kubernetes.io/metadata.name: demo
+    selector:
+      matchLabels:
+        archiver: "true"
+  retentionPolicy:
+    name: mysql-retention-policy
+    namespace: demo
+  encryptionSecret:
+    name: "encrypt-secret"
+    namespace: "demo"
+  fullBackup:
+    driver: "VolumeSnapshotter"
+    task:
+      params:
+        volumeSnapshotClassName: "longhorn-snapshot-vsc"
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "/30 * * * *"
+    sessionHistoryLimit: 2
+  manifestBackup:
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "/30 * * * *"
+    sessionHistoryLimit: 2
+  backupStorage:
+    ref:
+      name: "linode-storage"
+      namespace: "demo"
+```
+
+### EncryptionSecret
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: encrypt-secret
+  namespace: demo
+stringData:
+  RESTIC_PASSWORD: "changeit"
+```
+
+```bash
+ $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/pitr/yamls/mysqlarchiver.yaml
+ mysqlarchiver.archiver.kubedb.com/mysqlarchiver-sample created
+ $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/pitr/yamls/encryptionSecret.yaml
+```
+
+## Ensure volumeSnapshotClass
+
+```bash
+$ kubectl get volumesnapshotclasses
+NAME                    DRIVER               DELETIONPOLICY   AGE
+longhorn-snapshot-vsc   driver.longhorn.io   Delete           7d22h
+```
+If there is none, try using `longhorn` or any other [volumeSnapshotClass](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/).
+```yaml
+kind: VolumeSnapshotClass
+apiVersion: snapshot.storage.k8s.io/v1
+metadata:
+  name: longhorn-snapshot-vsc
+driver: driver.longhorn.io
+deletionPolicy: Delete
+parameters:
+  type: snap
+```
+
+```bash
+$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
+
+$ kubectl apply -f volumesnapshotclass.yaml
+  volumesnapshotclass.snapshot.storage.k8s.io/longhorn-snapshot-vsc unchanged
+```
+
+# Deploy MySQL
+So far, we are ready with the setup for continuously archiving MySQL. Now, we deploy a MySQL database referring to the `MySQLArchiver` object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql
+  namespace: demo
+  labels:
+    archiver: "true"
+spec:
+  authSecret:
+    name: my-auth
+  version: "8.2.0"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "longhorn"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 10Gi
+  archiver:
+    ref:
+      name: mysqlarchiver-sample
+      namespace: demo
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl get pod -n demo
+NAME                                               READY   STATUS      RESTARTS   AGE
+mysql-0                                            2/2     Running     0          28h
+mysql-1                                            2/2     Running     0          28h
+mysql-2                                            2/2     Running     0          28h
+mysql-backup-config-full-backup-1703680982-vqf7c   0/1     Completed   0          28h
+mysql-backup-config-manifest-1703680982-62x97      0/1     Completed   0          28h
+mysql-sidekick                                     1/1     Running     0          28h
+```
+
+`mysql-sidekick` is responsible for uploading the binlog files.
+
+`mysql-backup-config-full-backup-1703680982-vqf7c` is the pod of the volume-level backup for MySQL.
+
+`mysql-backup-config-manifest-1703680982-62x97` is the pod of the manifest backup related to the MySQL object.
+
+### Validate BackupConfiguration and VolumeSnapshots
+
+```bash
+$ kubectl get backupconfigurations -n demo
+NAME                  PHASE   PAUSED   AGE
+mysql-backup-config   Ready            2m43s
+
+$ kubectl get backupsession -n demo
+NAME                                         INVOKER-TYPE          INVOKER-NAME          PHASE       DURATION   AGE
+mysql-backup-config-full-backup-1702388088   BackupConfiguration   mysql-backup-config   Succeeded              74s
+mysql-backup-config-manifest-1702388088      BackupConfiguration   mysql-backup-config   Succeeded              74s
+
+$ kubectl get volumesnapshots -n demo
+NAME               READYTOUSE   SOURCEPVC      SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS           SNAPSHOTCONTENT                                    CREATIONTIME   AGE
+mysql-1702388096   true         data-mysql-1                           1Gi           longhorn-snapshot-vsc   snapcontent-735e97ad-1dfa-4b70-b416-33f7270d792c   2m5s           2m5s
+```
+
+## Insert Data and Switch the Binlog
+After each and every binlog switch, the binlog files will be uploaded to the backup storage.
+
+```bash
+$ kubectl exec -it -n demo mysql-0 -- bash
+
+bash-4.4$ mysql -uroot -p$MYSQL_ROOT_PASSWORD
+
+mysql> create database hello;
+
+mysql> use hello;
+
+mysql> CREATE TABLE `demo_table`(
+    ->   `id` BIGINT(20) NOT NULL,
+    ->   `name` VARCHAR(255) DEFAULT NULL,
+    ->   PRIMARY KEY (`id`)
+    -> );
+
+mysql> INSERT INTO `demo_table` (`id`, `name`)
+    -> VALUES
+    -> (1, 'John'),
+    -> (2, 'Jane'),
+    -> (3, 'Bob'),
+    -> (4, 'Alice'),
+    -> (5, 'Charlie'),
+    -> (6, 'Diana'),
+    -> (7, 'Eve'),
+    -> (8, 'Frank'),
+    -> (9, 'Grace'),
+    -> (10, 'Henry');
+
+mysql> select now();
++---------------------+
+| now()               |
++---------------------+
+| 2023-12-28 17:10:54 |
++---------------------+
+
+mysql> select count(*) from demo_table;
++----------+
+| count(*) |
++----------+
+|       10 |
++----------+
+```
+
+> At this point, we have 10 rows in our newly created table `demo_table` in the database `hello`.
+
+## Point-in-time Recovery
+Point-In-Time Recovery allows you to restore a MySQL database to a specific point in time using the archived transaction logs. This is particularly useful in scenarios where you need to recover to a state just before a specific error or data corruption occurred.
+Let's say our DBA accidentally dropped the table `demo_table` and we want to restore it.
+
+```bash
+$ kubectl exec -it -n demo mysql-0 -- bash
+
+mysql> drop table demo_table;
+
+mysql> flush logs;
+
+```
+We can't restore from a full backup, since no full backup was performed at that point. So we have to choose the specific point in time we want to restore to. We can get that time from the binlog files that were archived in the backup storage: go through the binlog files and find the position to restore to. You can parse binlog files using `mysqlbinlog`.
+
+For this demo, we will use the time we got earlier from `select now()`.
+
+```bash
+mysql> select now();
++---------------------+
+| now()               |
++---------------------+
+| 2023-12-28 17:10:54 |
++---------------------+
+```
+### Restore MySQL
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: restore-mysql
+  namespace: demo
+spec:
+  init:
+    archiver:
+      encryptionSecret:
+        name: encrypt-secret
+        namespace: demo
+      fullDBRepository:
+        name: mysql-repository
+        namespace: demo
+      recoveryTimestamp: "2023-12-28T17:10:54Z"
+  version: "8.2.0"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "longhorn"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 10Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f restore.yaml
+mysql.kubedb.com/restore-mysql created
+```
+
+**Check the restored MySQL:**
+
+```bash
+$ kubectl get pod -n demo
+restore-mysql-0                      1/1     Running     0          44s
+restore-mysql-1                      1/1     Running     0          42s
+restore-mysql-2                      1/1     Running     0          41s
+restore-mysql-restorer-z4brz         0/2     Completed   0          113s
+restore-mysql-restoresession-lk6jq   0/1     Completed   0          2m6s
+```
+
+```bash
+$ kubectl get mysql -n demo
+NAME            VERSION   STATUS   AGE
+mysql           8.2.0     Ready    28h
+restore-mysql   8.2.0     Ready    5m37s
+```
+
+**Validate the data on the restored MySQL:**
+
+```bash
+$ kubectl exec -it -n demo restore-mysql-0 -- bash
+bash-4.4$ mysql -uroot -p$MYSQL_ROOT_PASSWORD
+
+mysql> use hello
+
+mysql> select count(*) from demo_table;
++----------+
+| count(*) |
++----------+
+|       10 |
++----------+
+1 row in set (0.00 sec)
+```
+
+**So we were able to successfully recover from the disaster.**
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete -n demo mysql/mysql
+$ kubectl delete -n demo mysql/restore-mysql
+$ kubectl delete -n demo backupstorage
+$ kubectl delete -n demo mysqlarchiver
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/v2024.1.31/guides/mysql/backup/overview/) MySQL database using Stash.
+- Learn about initializing [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/script_source).
+- Learn about [custom MySQLVersions](/docs/v2024.1.31/guides/mysql/custom-versions/setup).
+- Want to setup MySQL cluster? Check how to [configure Highly Available MySQL Cluster](/docs/v2024.1.31/guides/mysql/clustering/ha_cluster)
+- Monitor your MySQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/using-builtin-prometheus).
+- Monitor your MySQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/using-prometheus-operator).
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/mysql). +- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/using-private-registry) to deploy MySQL with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/yamls/backupstorage.yaml b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/backupstorage.yaml new file mode 100644 index 0000000000..6747e4338a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/backupstorage.yaml @@ -0,0 +1,19 @@ +apiVersion: storage.kubestash.com/v1alpha1 +kind: BackupStorage +metadata: + name: linode-storage + namespace: demo +spec: + storage: + provider: s3 + s3: + bucket: mehedi-pg-wal-g + endpoint: https://ap-south-1.linodeobjects.com + region: ap-south-1 + prefix: backup + secret: storage + usagePolicy: + allowedNamespaces: + from: All + default: true + deletionPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/yamls/encryptionSecret.yaml b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/encryptionSecret.yaml new file mode 100644 index 0000000000..4eb0c25bdb --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/encryptionSecret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: encrypt-secret + namespace: demo +stringData: + RESTIC_PASSWORD: "changeit" diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/yamls/mysqlarchiver.yaml b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/mysqlarchiver.yaml new file mode 100644 index 0000000000..f3818455e6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/mysqlarchiver.yaml @@ -0,0 +1,42 @@ +apiVersion: archiver.kubedb.com/v1alpha1 +kind: MySQLArchiver +metadata: + name: mysqlarchiver-sample + namespace: demo +spec: + pause: false + databases: + namespaces: + from: Selector + selector: + matchLabels: + kubernetes.io/metadata.name: demo + selector: + matchLabels: + archiver: "true" + retentionPolicy: + name: mysql-retention-policy + namespace: demo + encryptionSecret: + name: "encrypt-secret" + namespace: "demo" + fullBackup: + driver: "VolumeSnapshotter" + task: + params: + volumeSnapshotClassName: "longhorn-snapshot-vsc" + scheduler: + successfulJobsHistoryLimit: 1 + failedJobsHistoryLimit: 1 + schedule: "/30 * * * *" + sessionHistoryLimit: 2 + manifestBackup: + scheduler: + successfulJobsHistoryLimit: 1 + failedJobsHistoryLimit: 1 + schedule: "/30 * * * *" + sessionHistoryLimit: 2 + backupStorage: + ref: + name: "linode-storage" + namespace: "demo" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/yamls/retentionPolicy.yaml b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/retentionPolicy.yaml new file mode 100644 index 0000000000..7e08dccb08 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/retentionPolicy.yaml @@ -0,0 +1,11 @@ +apiVersion: storage.kubestash.com/v1alpha1 +kind: RetentionPolicy +metadata: + name: mysql-retention-policy + namespace: demo +spec: + maxRetentionPeriod: "30d" + successfulSnapshots: + last: 100 + failedSnapshots: + last: 2 diff --git a/content/docs/v2024.1.31/guides/mysql/pitr/yamls/voluemsnapshotclass.yaml b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/voluemsnapshotclass.yaml new file mode 100644 index 0000000000..1a67906612 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/pitr/yamls/voluemsnapshotclass.yaml @@ -0,0 
+1,8 @@ +kind: VolumeSnapshotClass +apiVersion: snapshot.storage.k8s.io/v1 +metadata: + name: longhorn-snapshot-vsc +driver: driver.longhorn.io +deletionPolicy: Delete +parameters: + type: snap \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/private-registry/index.md b/content/docs/v2024.1.31/guides/mysql/private-registry/index.md new file mode 100644 index 0000000000..a6c1a699ac --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/private-registry/index.md @@ -0,0 +1,187 @@ +--- +title: Run MySQL using Private Registry +menu: + docs_v2024.1.31: + identifier: guides-mysql-private-registry + name: Private Registry + parent: guides-mysql + weight: 35 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Deploy MySQL from private Docker registry + +KubeDB operator supports using private Docker registry. This tutorial will show you how to use KubeDB to run MySQL database using private Docker images. + +## Before You Begin + +- Read [concept of MySQL Version Catalog](/docs/v2024.1.31/guides/mysql/concepts/catalog/) to learn detail concepts of `MySQLVersion` object. + +- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- You will also need a docker private [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial we will use private repository of [docker hub](https://hub.docker.com/). + +- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/r/kubedb/) into your private registry. For mysql, push `DB_IMAGE`, `EXPORTER_IMAGE`, `REPLICATION_MODE_DETECTOR_IMAGE`(only required for Group Replication), `INITCONTAINER_IMAGE` of following MySQLVersions, where `deprecated` is not true, to your private registry. 
+
+```bash
+$ kubectl get mysqlversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image,REPLICATION_MODE_DETECTOR_IMAGE:.spec.replicationModeDetector.image,INITCONTAINER_IMAGE:.spec.initContainer.image,DEPRECATED:.spec.deprecated
+NAME            VERSION   DB_IMAGE                    EXPORTER_IMAGE                   REPLICATION_MODE_DETECTOR_IMAGE            INITCONTAINER_IMAGE                    DEPRECATED
+5.7.35-v1       5.7.35    mysql:5.7.35                kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:5.7-v2
+5.7.44          5.7.44    mysql:5.7.44                kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:5.7-v2
+8.0.17          8.0.17    mysql:8.0.17                kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:8.0.3-v1
+8.0.35          8.0.35    mysql:8.0.35                kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:8.0.26-v1
+8.0.31-innodb   8.0.35    mysql/mysql-server:8.0.35   kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:8.0.26-v1
+8.0.35          8.0.35    mysql:8.0.35                kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:8.0.35_linux_amd64
+8.0.3-v4        8.0.3     mysql:8.0.3                 kubedb/mysqld-exporter:v0.13.1   kubedb/replication-mode-detector:v0.13.0   kubedb/mysql-init:8.0.3-v1
+```
+
+  Docker hub repositories:
+
+  - [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+  - [kubedb/mysql](https://hub.docker.com/r/kubedb/mysql)
+  - [kubedb/mysql-tools](https://hub.docker.com/r/kubedb/mysql-tools)
+  - [kubedb/mysqld-exporter](https://hub.docker.com/r/kubedb/mysqld-exporter)
+
+- Update KubeDB catalog for private Docker registry. Ex:
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MySQLVersion
+metadata:
+  name: 8.0.35
+spec:
+  coordinator:
+    image: PRIVATE_REGISTRY/mysql-coordinator:v0.4.0-2-g49a2d26-dirty_linux_amd64
+  db:
+    image: PRIVATE_REGISTRY/mysql:8.0.35
+  distribution: Official
+  exporter:
+    image: PRIVATE_REGISTRY/mysqld-exporter:v0.13.1
+  initContainer:
+    image: PRIVATE_REGISTRY/mysql-init:8.0.35_linux_amd64
+  podSecurityPolicies:
+    databasePolicyName: mysql-db
+  replicationModeDetector:
+    image: PRIVATE_REGISTRY/replication-mode-detector:v0.13.0
+  stash:
+    addon:
+      backupTask:
+        name: mysql-backup-8.0.21
+      restoreTask:
+        name: mysql-restore-8.0.21
+  updateConstraints:
+    denylist:
+      groupReplication:
+      - < 8.0.35
+      standalone:
+      - < 8.0.35
+  version: 8.0.35
+```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Create ImagePullSecret
+
+ImagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo myregistrykey \
+    --docker-server=DOCKER_REGISTRY_SERVER \
+    --docker-username=DOCKER_USER \
+    --docker-email=DOCKER_EMAIL \
+    --docker-password=DOCKER_PASSWORD
+secret/myregistrykey created
+```
+
+If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
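+
+If you want to double-check that the secret carries the credentials you intended, you can decode it. A minimal sketch (note that the output contains your registry password in plain text, so treat it accordingly):
+
+```bash
+# decode the .dockerconfigjson payload of the image pull secret
+$ kubectl get secret -n demo myregistrykey \
+    -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
+```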
+
+NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
+
+## Install KubeDB operator
+
+When installing the KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to the appropriate values. Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in the cluster so that it points to the DOCKER_REGISTRY you wish to pull images from.
+
+## Deploy MySQL database from Private Registry
+
+While deploying `MySQL` from a private repository, you have to add the `myregistrykey` secret in the `MySQL` `spec.podTemplate.spec.imagePullSecrets` field.
+Below is the MySQL CRD object we will create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-pvt-reg
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      imagePullSecrets:
+      - name: myregistrykey
+```
+
+Now run the command to deploy this `MySQL` object:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/private-registry/yamls/standalone.yaml
+mysql.kubedb.com/mysql-pvt-reg created
+```
+
+To check if the images were pulled successfully from the repository, see if the `MySQL` pod is in `Running` state:
+
+```bash
+$ kubectl get pods -n demo
+NAME              READY   STATUS    RESTARTS   AGE
+mysql-pvt-reg-0   1/1     Running   0          56s
+```
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mysql/mysql-pvt-reg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mysql/mysql-pvt-reg
+
+kubectl patch -n demo drmn/mysql-pvt-reg -p '{"spec":{"wipeOut":true}}' --type="merge"
+kubectl delete -n demo drmn/mysql-pvt-reg
+
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/quickstart.yaml b/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/quickstart.yaml new file mode 100644 index 0000000000..770912e63a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/quickstart.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-quickstart + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: DoNotTerminate diff --git a/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/standalone.yaml new file mode 100644 index 0000000000..0dcddb779b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/private-registry/yamls/standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-pvt-reg + namespace: demo +spec: + version: "8.0.35" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey diff --git a/content/docs/v2024.1.31/guides/mysql/quickstart/images/mysql-lifecycle.png b/content/docs/v2024.1.31/guides/mysql/quickstart/images/mysql-lifecycle.png new file mode 100644 index 0000000000..80481b4ce6 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/quickstart/images/mysql-lifecycle.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/quickstart/index.md b/content/docs/v2024.1.31/guides/mysql/quickstart/index.md new file mode 100644 index 0000000000..91fe804003 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/quickstart/index.md @@ -0,0 +1,637 @@ +--- +title: MySQL Quickstart +menu: + docs_v2024.1.31: + identifier: guides-mysql-quickstart + name: Quickstart + parent: guides-mysql + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# MySQL QuickStart + +This tutorial will show you how to use KubeDB to run a MySQL database. + +

+<p align="center">
+  <img alt="lifecycle" src="/docs/v2024.1.31/guides/mysql/quickstart/images/mysql-lifecycle.png">
+</p>

+
+> Note: The yaml files used in this tutorial are stored in the [docs/guides/mysql/quickstart/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/quickstart/yamls) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in your cluster:
+
+  ```bash
+  $ kubectl get storageclasses
+  NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+  standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  6h22m
+  ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Find Available MySQLVersion
+
+When you install KubeDB, it creates a `MySQLVersion` crd for every supported MySQL version. Check them by using the following command,
+
+```bash
+$ kubectl get mysqlversions
+NAME            VERSION   DISTRIBUTION   DB_IMAGE                    DEPRECATED   AGE
+5.7.35-v1       5.7.35    Official       mysql:5.7.35                             9s
+5.7.44          5.7.44    Official       mysql:5.7.44                             9s
+8.0.17          8.0.17    Official       mysql:8.0.17                             9s
+8.0.31-innodb   8.0.31    MySQL          mysql/mysql-server:8.0.31                9s
+8.0.35          8.0.35    Official       mysql:8.0.35                             9s
+8.0.3-v4        8.0.3     Official       mysql:8.0.3                              9s
+
+```
+
+## Create a MySQL database
+
+KubeDB implements a `MySQL` CRD to define the specification of a MySQL database. Below is the `MySQL` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-quickstart
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/quickstart/yamls/quickstart.yaml
+mysql.kubedb.com/mysql-quickstart created
+```
+
+Here,
+
+- `spec.version` is the name of the MySQLVersion CRD where the docker images are specified. In this tutorial, a MySQL `8.0.35` database is going to be created.
+- `spec.storageType` specifies the type of storage that will be used for the MySQL database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MySQL database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes (see the sketch after this list).
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` controls whether the delete operation on the `MySQL` crd is rejected, and which resources KubeDB should keep or delete, when you delete the `MySQL` crd. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specterminationpolicy).
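+
+For example, a throwaway test instance can drop the `storage` section entirely. Below is a minimal sketch of such an `Ephemeral` variant (the object name `mysql-test` is hypothetical; its data lives in an `emptyDir` volume and is lost when the pod restarts):
+
+```bash
+# Apply an Ephemeral MySQL variant inline; no PVC is provisioned for this object
+kubectl apply -f - <<'EOF'
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-test        # hypothetical name, for testing only
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Ephemeral   # emptyDir-backed storage; spec.storage is omitted
+  terminationPolicy: WipeOut
+EOF
+```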
+
+> Note: The `spec.storage` section is used to create a PVC for the database pod. It will create a PVC with the storage size specified in the `storage.resources.requests` field. Don't specify limits here. PVCs do not get resized automatically.
+
+KubeDB operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching MySQL object name. The KubeDB operator will also create a governing service for the StatefulSet, named `{mysql-object-name}-pods`, if one is not already present.
+
+```bash
+$ kubectl dba describe my -n demo mysql-quickstart
+Name:               mysql-quickstart
+Namespace:          demo
+CreationTimestamp:  Fri, 03 Jun 2022 12:50:40 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-quickstart","namespace":"demo"},"spec":{"storage":{"acces...
+Replicas:           1  total
+Status:             Ready
+StorageType:        Durable
+Volume:
+  StorageClass:  standard
+  Capacity:      1Gi
+  Access Modes:  RWO
+Paused:              false
+Halted:              false
+Termination Policy:  DoNotTerminate
+
+StatefulSet:
+  Name:               mysql-quickstart
+  CreationTimestamp:  Fri, 03 Jun 2022 12:50:40 +0600
+  Labels:               app.kubernetes.io/component=database
+                        app.kubernetes.io/instance=mysql-quickstart
+                        app.kubernetes.io/managed-by=kubedb.com
+                        app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:        <none>
+  Replicas:           824646358808 desired | 1 total
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         mysql-quickstart
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-quickstart
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.96.150.194
+  Port:         primary  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.30:3306
+
+Service:
+  Name:         mysql-quickstart-pods
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-quickstart
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           None
+  Port:         db  3306/TCP
+  TargetPort:   db/TCP
+  Endpoints:    10.244.0.30:3306
+
+Auth Secret:
+  Name:         mysql-quickstart-auth
+  Labels:         app.kubernetes.io/component=database
+                  app.kubernetes.io/instance=mysql-quickstart
+                  app.kubernetes.io/managed-by=kubedb.com
+                  app.kubernetes.io/name=mysqls.kubedb.com
+  Annotations:  <none>
+  Type:         kubernetes.io/basic-auth
+  Data:
+    password:  16 bytes
+    username:  4 bytes
+
+AppBinding:
+  Metadata:
+    Annotations:
+      kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-quickstart","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","version":"8.0.35"}}
+
+    Creation Timestamp:  2022-06-03T06:50:40Z
+    Labels:
+      app.kubernetes.io/component:   database
+      app.kubernetes.io/instance:    mysql-quickstart
+      app.kubernetes.io/managed-by:  kubedb.com
+      app.kubernetes.io/name:        mysqls.kubedb.com
+    Name:                            mysql-quickstart
+    Namespace:                       demo
+  Spec:
+    Client Config:
+      Service:
+        Name:    mysql-quickstart
+        Path:    /
+        Port:    3306
+        Scheme:  mysql
+      URL:       tcp(mysql-quickstart.demo.svc:3306)/
+    Parameters:
+      API Version:  appcatalog.appscode.com/v1alpha1
+      Kind:         StashAddon
+      Stash:
+        Addon:
+          Backup Task:
+            Name:  mysql-backup-8.0.21
+            Params:
+              Name:   args
+              Value:  --all-databases --set-gtid-purged=OFF
+          Restore Task:
+            Name:  mysql-restore-8.0.21
+    Secret:
+      Name:   mysql-quickstart-auth
+    Type:     kubedb.com/mysql
+    Version:  8.0.35
+
+Events:
+  Type    Reason      Age   From             Message
+  ----    ------      ----  ----             -------
+  Normal  Successful  32s   KubeDB Operator  Successfully created governing service
+  Normal  Successful  32s   KubeDB Operator  Successfully created service for primary/standalone
+  Normal  Successful  32s   KubeDB Operator  Successfully created database auth secret
+  Normal  Successful  32s   KubeDB Operator  Successfully created StatefulSet
+  Normal  Successful  32s   KubeDB Operator  Successfully created MySQL
+  Normal  Successful  32s   KubeDB Operator  Successfully created appbinding
+
+
+
+$ kubectl get statefulset -n demo
+NAME               READY   AGE
+mysql-quickstart   1/1     3m19s
+
+$ kubectl get pvc -n demo
+NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+data-mysql-quickstart-0   Bound    pvc-ab44ce95-2300-47d7-8f25-3cd7bc5b0091   1Gi        RWO            standard       3m50s
+
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
+pvc-ab44ce95-2300-47d7-8f25-3cd7bc5b0091   1Gi        RWO            Delete           Bound    demo/data-mysql-quickstart-0   standard                4m19s
+
+$ kubectl get service -n demo
+NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+mysql-quickstart        ClusterIP   10.96.150.194   <none>        3306/TCP   5m13s
+mysql-quickstart-pods   ClusterIP   None            <none>        3306/TCP   5m13s
+```
+
+The KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified MySQL object:
+
+```yaml
+$ kubectl get my -n demo mysql-quickstart -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql-quickstart","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","version":"8.0.35"}}
+  creationTimestamp: "2022-06-03T06:50:40Z"
+  finalizers:
+    - kubedb.com
+spec:
+  allowedReadReplicas:
+    namespaces:
+      from: Same
+  allowedSchemas:
+    namespaces:
+      from: Same
+  authSecret:
+    name: mysql-quickstart-auth
+  coordinator:
+    resources: {}
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app.kubernetes.io/instance: mysql-quickstart
+                    app.kubernetes.io/managed-by: kubedb.com
+                    app.kubernetes.io/name: mysqls.kubedb.com
+                namespaces:
+                  - demo
+                topologyKey: kubernetes.io/hostname
+              weight: 100
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app.kubernetes.io/instance: mysql-quickstart
+                    app.kubernetes.io/managed-by: kubedb.com
+                    app.kubernetes.io/name: mysqls.kubedb.com
+                namespaces:
+                  - demo
+                topologyKey: failure-domain.beta.kubernetes.io/zone
+              weight: 50
+      resources:
+        limits:
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+      serviceAccountName: mysql-quickstart
+  replicas: 1
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Delete
+  useAddressType: DNS
+  version: 8.0.35
+status:
+  conditions:
+    - lastTransitionTime: "2022-06-03T06:50:40Z"
+      message: 'The KubeDB operator has started the provisioning of MySQL: demo/mysql-quickstart'
+      reason: DatabaseProvisioningStartedSuccessfully
+      status: "True"
+      type: ProvisioningStarted
+    - lastTransitionTime: "2022-06-03T06:50:46Z"
+      message: All desired replicas are ready.
+      reason: AllReplicasReady
+      status: "True"
+      type: ReplicaReady
+    - lastTransitionTime: "2022-06-03T06:51:05Z"
+      message: database demo/mysql-quickstart is accepting connection
+      reason: AcceptingConnection
+      status: "True"
+      type: AcceptingConnection
+    - lastTransitionTime: "2022-06-03T06:51:05Z"
+      message: database demo/mysql-quickstart is ready
+      reason: AllReplicasReady
+      status: "True"
+      type: Ready
+    - lastTransitionTime: "2022-06-03T06:51:05Z"
+      message: 'The MySQL: demo/mysql-quickstart is successfully provisioned.'
+      observedGeneration: 2
+      reason: DatabaseSuccessfullyProvisioned
+      status: "True"
+      type: Provisioned
+  observedGeneration: 2
+  phase: Ready
+
+```
+
+## Connect with MySQL database
+
+KubeDB operator has created a new Secret called `mysql-quickstart-auth` *(format: {mysql-object-name}-auth)* for storing the password for the `mysql` superuser. This secret contains a `username` key which contains the *username* for the MySQL superuser and a `password` key which contains the *password* for the MySQL superuser.
+
+If you want to use an existing secret, please specify it when creating the MySQL object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`, and make sure to use `root` as the value of `username`. For more details, see [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specdatabasesecret).
+
+Now, we need the `username` and `password` to connect to this database from the `kubectl exec` command. In this example, the `mysql-quickstart-auth` secret holds the username and password.
+
+```bash
+$ kubectl get pods mysql-quickstart-0 -n demo -o yaml | grep podIP
+  podIP: 10.244.0.30
+
+$ kubectl get secrets -n demo mysql-quickstart-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mysql-quickstart-auth -o jsonpath='{.data.\password}' | base64 -d
+H(Y.s)pg&cX1Ds3J
+```
+
+We will exec into the pod `mysql-quickstart-0` and connect to the database using the username and password:
+
+```bash
+$ kubectl exec -it -n demo mysql-quickstart-0 -- bash
+
+root@mysql-quickstart-0:/# mysql -uroot -p"H(Y.s)pg&cX1Ds3J"
+
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 351
+Server version: 8.0.35 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| hello              |
+| information_schema |
+| mysql              |
+| performance_schema |
+| sys                |
++--------------------+
+5 rows in set (0.00 sec)
+
+```
+
+You can also connect with database management tools like [phpmyadmin](https://hub.docker.com/_/phpmyadmin) or [dbgate](https://hub.docker.com/r/dbgate/dbgate).
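+
+If you would rather connect from your workstation without deploying an extra tool, a port-forward also works. Below is a minimal sketch that reuses the service and secret names from this tutorial; it assumes a `mysql` client is installed locally:
+
+```bash
+# Forward the primary service to localhost in the background
+kubectl port-forward -n demo svc/mysql-quickstart 3306:3306 &
+
+# Read the root password from the auth secret created by KubeDB
+MYSQL_PASSWORD=$(kubectl get secret -n demo mysql-quickstart-auth -o jsonpath='{.data.password}' | base64 -d)
+
+# Connect with the local mysql client
+mysql -h 127.0.0.1 -P 3306 -uroot -p"$MYSQL_PASSWORD"
+```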
+
+__Connecting with `phpmyadmin`__
+
+Let's create a deployment of `phpmyadmin`:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/quickstart/yamls/phpmyadmin.yaml
+
+deployment/myadmin created
+service/myadmin created
+
+$ kubectl get pods -n demo --watch
+NAME                       READY   STATUS    RESTARTS   AGE
+myadmin-85d86cf5b5-f4mq4   1/1     Running   0          8s
+mysql-quickstart-0         1/1     Running   0          12m
+
+
+$ kubectl get svc -n demo
+NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
+myadmin                 LoadBalancer   10.96.108.199   <pending>     80:32634/TCP   51s
+mysql-quickstart        ClusterIP      10.96.150.194   <none>        3306/TCP       13m
+mysql-quickstart-pods   ClusterIP      None            <none>        3306/TCP       13m
+
+
+```
+Let's open your browser and go to the following URL: _http://{node-ip}:{myadmin-svc-nodeport}_. For a kind cluster, you can get this URL by running the following commands:
+
+```bash
+$ kubectl get svc -n demo myadmin -o json | jq '.spec.ports[].nodePort'
+30158
+
+$ kubectl get node -o json | jq '.items[].status.addresses[].address'
+"172.18.0.3"
+"kind-control-plane"
+"172.18.0.4"
+"kind-worker"
+"172.18.0.2"
+"kind-worker2"
+
+# expected url will be:
+url: http://172.18.0.4:30158
+```
+According to this example, the URL will be [http://172.18.0.4:30158](http://172.18.0.4:30158). You can also use the external IP of the service, or port-forward the service to connect.
+
+
+>Note: In MySQL `8.0.14` and later, connecting from phpMyAdmin may give an error because the server uses the `caching_sha2_password` and `sha256_password` authentication plugins instead of `mysql_native_password`. If the error happens, you can use the following workaround, but changing authentication plugins is not recommended; see [here](https://stackoverflow.com/questions/49948350/phpmyadmin-on-mysql-8-0) for alternative solutions. To switch the root user to `mysql_native_password`, try `kubectl exec -it -n demo mysql-quickstart-0 -- mysql -u root --password='H(Y.s)pg&cX1Ds3J' -e "ALTER USER root IDENTIFIED WITH mysql_native_password BY 'H(Y.s)pg&cX1Ds3J';"`
+---
+To log into phpMyAdmin, use host __`mysql-quickstart.demo`__ or __`10.244.0.30`__, username __`root`__ and password __`H(Y.s)pg&cX1Ds3J`__.
+
+__Connecting with `dbgate`__
+
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/quickstart/yamls/dbgate.yaml
+
+deployment/dbgate created
+service/dbgate created
+
+$ kubectl get pods -n demo --watch
+NAME                       READY   STATUS    RESTARTS   AGE
+dbgate-77d7fd4889-bfhb9    1/1     Running   0          17m
+myadmin-85d86cf5b5-f4mq4   1/1     Running   0          8s
+mysql-quickstart-0         1/1     Running   0          12m
+
+
+$ kubectl get svc -n demo
+NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+dbgate                  LoadBalancer   10.96.226.216   <pending>     3000:32475/TCP   51s
+mysql-quickstart        ClusterIP      10.96.150.194   <none>        3306/TCP         13m
+mysql-quickstart-pods   ClusterIP      None            <none>        3306/TCP         13m
+
+```
+
+Let's open your browser and go to the following URL: _http://{node-ip}:{dbgate-svc-nodeport}_. For a kind cluster, you can get this URL by running the following commands:
+
+```bash
+$ kubectl get svc -n demo dbgate -o json | jq '.spec.ports[].nodePort'
+32475
+
+$ kubectl get node -o json | jq '.items[].status.addresses[].address'
+"172.18.0.3"
+"kind-control-plane"
+"172.18.0.4"
+"kind-worker"
+"172.18.0.2"
+"kind-worker2"
+
+# expected url will be:
+url: http://172.18.0.4:32475
+```
+According to this example, the URL will be [http://172.18.0.4:32475](http://172.18.0.4:32475). You can also use the external IP of the service, or port-forward the service to connect.
+
+You can connect to multiple different databases using DbGate.
To log into MySQL, select the MySQL driver and use server __`mysql-quickstart.demo`__ or __`10.244.0.30`__, username __`root`__ and password __`H(Y.s)pg&cX1Ds3J`__.
+
+## Database TerminationPolicy
+
+This field is used to regulate the deletion process of the related resources when the `MySQL` object is deleted. Users can set the value of this field according to their needs. The available options and their use case scenarios are described below:
+
+**DoNotTerminate:**
+
+When `terminationPolicy` is set to `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete my mysql-quickstart -n demo
+Error from server (BadRequest): admission webhook "mysql.validators.kubedb.com" denied the request: mysql "mysql-quickstart" can't be halted. To delete, change spec.terminationPolicy
+```
+
+Now, run `kubectl edit my mysql-quickstart -n demo` to set `spec.terminationPolicy` to `Halt` (which deletes the mysql object and keeps the PVCs, snapshots and Secrets intact) or remove this field (which defaults to `Delete`). Then you will be able to delete/halt the database.
+
+Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specterminationpolicy).
+
+**Halt:**
+
+Suppose you want to reuse your database volume and credential to deploy your database in the future using the same configuration. But, right now, you just want to delete the database except the database volumes and credentials. In this scenario, you must set the `MySQL` object `terminationPolicy` to `Halt`.
+
+When the [TerminationPolicy](/docs/v2024.1.31/guides/mysql/concepts/database/#specterminationpolicy) is set to `Halt` and the MySQL object is deleted, the KubeDB operator will delete the StatefulSet and its pods but leaves the `PVCs`, `secrets` and database backup data (`snapshots`) intact. You can set the `terminationPolicy` to `Halt` in an existing database using the `edit` command for testing.
+
+At first, run `kubectl edit my mysql-quickstart -n demo` to set `spec.terminationPolicy` to `Halt`. Then delete the mysql object,
+
+```bash
+$ kubectl delete my mysql-quickstart -n demo
+mysql.kubedb.com "mysql-quickstart" deleted
+```
+
+Now, run the following command to get all mysql resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                           TYPE                                  DATA   AGE
+secret/default-token-lgbjm     kubernetes.io/service-account-token   3      23h
+secret/mysql-quickstart-auth   Opaque                                2      20h
+
+NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-mysql-quickstart-0   Bound    pvc-716f627c-9aa2-47b6-aa64-a547aab6f55c   1Gi        RWO            standard       20h
+```
+
+From the above output, you can see that all mysql resources (`StatefulSet`, `Service`, etc.) are deleted except the `PVC` and `Secret`. You can recreate your mysql again using these resources.
+
+>You can also set the `terminationPolicy` to `Pause` (deprecated). Its behavior is the same as `Halt`, and `Pause` has now been replaced by `Halt`.
+
+**Delete:**
+
+If you want to delete the existing database along with the volumes used, but want to be able to restore the database from previously taken `snapshots` and `secrets`, then you might want to set the `MySQL` object `terminationPolicy` to `Delete`. In this setting, the `StatefulSet` and the volumes will be deleted. If you decide to restore the database, you can do so using the snapshots and the credentials.
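+
+As with `Halt` above, you don't have to open an editor to switch policies; a merge patch does the same thing. A minimal sketch using this tutorial's object:
+
+```bash
+# Switch the terminationPolicy of the running database to Delete
+kubectl patch -n demo mysql/mysql-quickstart -p '{"spec":{"terminationPolicy":"Delete"}}' --type="merge"
+```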
+
+When the [TerminationPolicy](/docs/v2024.1.31/guides/mysql/concepts/database/#specterminationpolicy) is set to `Delete` and the MySQL object is deleted, the KubeDB operator will delete the StatefulSet and its pods along with the PVCs, but leaves the `secret` and database backup data (`snapshots`) intact.
+
+Suppose we have a database with `terminationPolicy` set to `Delete`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete my mysql-quickstart -n demo
+mysql.kubedb.com "mysql-quickstart" deleted
+```
+
+Now, run the following command to get all mysql resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                           TYPE                                  DATA   AGE
+secret/default-token-lgbjm     kubernetes.io/service-account-token   3      24h
+secret/mysql-quickstart-auth   Opaque                                2      24h
+```
+
+From the above output, you can see that all mysql resources (`StatefulSet`, `Service`, `PVCs`, etc.) are deleted except the `Secret`. You can initialize your mysql using `snapshots` (if previously taken) and the `secret`.
+
+>If you don't set the `terminationPolicy`, then KubeDB sets the `terminationPolicy` to `Delete` by default.
+
+**WipeOut:**
+
+You can totally delete the `MySQL` database and relevant resources without any tracking by setting `terminationPolicy` to `WipeOut`. The KubeDB operator will delete all relevant resources of this `MySQL` database (i.e. `PVCs`, `Secrets`, `Snapshots`) when the `terminationPolicy` is set to `WipeOut`.
+
+Suppose we have a database with `terminationPolicy` set to `WipeOut`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete my mysql-quickstart -n demo
+mysql.kubedb.com "mysql-quickstart" deleted
+```
+
+Now, run the following command to get all mysql resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+No resources found in demo namespace.
+```
+
+From the above output, you can see that all mysql resources are deleted. There is no option to recreate/reinitialize your database if the `terminationPolicy` is set to `WipeOut`.
+
+>Be careful when you set the `terminationPolicy` to `WipeOut`, because there is no way to trace the database resources once the database is deleted.
+
+## Database Halted
+
+If you want to delete the MySQL resources (`StatefulSet`, `Service`, etc.) without deleting the `MySQL` object, `PVCs` and `Secret`, you have to set `spec.halted` to `true`. The KubeDB operator will then delete the MySQL-related resources except the `MySQL` object, `PVCs` and `Secret`.
+
+Suppose we have a database named `mysql-quickstart` running in our cluster. Now, we are going to set `spec.halted` to `true` in the `MySQL` object by running the `kubectl edit my -n demo mysql-quickstart` command.
+
+Run the following command to get the MySQL resources,
+
+```bash
+$ kubectl get my,sts,secret,svc,pvc -n demo
+NAME                                VERSION   STATUS   AGE
+mysql.kubedb.com/mysql-quickstart   8.0.35    Halted   22m
+
+NAME                           TYPE                                  DATA   AGE
+secret/default-token-lgbjm     kubernetes.io/service-account-token   3      27h
+secret/mysql-quickstart-auth   Opaque                                2      22m
+
+NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-mysql-quickstart-0   Bound    pvc-7ab0ebb0-bb2e-45c1-9af1-4f175672605b   1Gi        RWO            standard       22m
+```
+
+From the above output, you can see that the `MySQL` object, `PVCs` and `Secret` are still alive. You can then recreate your `MySQL` with the same configuration.
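+
+Here too, a merge patch is a non-interactive alternative to `kubectl edit` for toggling the field. A minimal sketch with this tutorial's object names:
+
+```bash
+# Halt the database: the StatefulSet and pods go away; the MySQL object, PVCs and Secret stay
+kubectl patch -n demo mysql/mysql-quickstart -p '{"spec":{"halted":true}}' --type="merge"
+
+# Resume it later by flipping the field back
+kubectl patch -n demo mysql/mysql-quickstart -p '{"spec":{"halted":false}}' --type="merge"
+```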
+
+>When you set `spec.halted` to `true` in the `MySQL` object, the `terminationPolicy` is also set to `Halt` by the KubeDB operator.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo mysql/mysql-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo mysql/mysql-quickstart
+
+kubectl delete ns demo
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid the additional hassles caused by some safety features that are great for a production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to delete everything created by KubeDB for a particular MySQL crd when you delete the crd. For more details about termination policy, please visit [here](/docs/v2024.1.31/guides/mysql/concepts/database/#specterminationpolicy).
+
+## Next Steps
+
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/dbgate.yaml b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/dbgate.yaml new file mode 100644 index 0000000000..10e5f0501f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/dbgate.yaml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: dbgate + namespace: demo + labels: + app: dbgate +spec: + replicas: 1 + selector: + matchLabels: + app: dbgate + template: + metadata: + labels: + app: dbgate + spec: + containers: + - name: dbgate + image: dbgate/dbgate:beta + ports: + - containerPort: 3000 + name: http +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: dbgate + name: dbgate + namespace: demo +spec: + ports: + - name: http + port: 3000 + protocol: TCP + targetPort: http + selector: + app: dbgate + type: LoadBalancer \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/phpmyadmin.yaml b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/phpmyadmin.yaml new file mode 100644 index 0000000000..6b20b9d6ff --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/phpmyadmin.yaml @@ -0,0 +1,46 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: myadmin + template: + metadata: + labels: + app: myadmin + spec: + containers: + - image: phpmyadmin/phpmyadmin:latest + imagePullPolicy: Always + name: phpmyadmin + ports: + - containerPort: 80 + name: http + protocol: TCP + env: + - name: PMA_ARBITRARY + value: '1' + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: myadmin + name: myadmin + namespace: demo +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + selector: + app: myadmin + type: LoadBalancer diff --git a/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/quickstart.yaml b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/quickstart.yaml new file mode 100644 index 0000000000..42ab02f949 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/quickstart/yamls/quickstart.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-quickstart + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/_index.md new file mode 100644 index 0000000000..9c38f1ea30 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure MySQL TLS/SSL +menu: + docs_v2024.1.31: + identifier: guides-mysql-reconfigure-tls + name: Reconfigure TLS/SSL + parent: guides-mysql + weight: 43 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/images/reconfigure-tls.jpg b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/images/reconfigure-tls.jpg new file mode 100644 index 0000000000..1f0b39027c Binary files /dev/null and 
b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/images/reconfigure-tls.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/index.md
new file mode 100644
index 0000000000..e127cd0f59
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/index.md
@@ -0,0 +1,66 @@
+---
+title: MySQL Reconfigure TLS Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-reconfigure-tls-overview
+    name: Overview
+    parent: guides-mysql-reconfigure-tls
+    weight: 11
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure TLS MySQL
+
+This guide gives an overview of how the KubeDB Ops Manager reconfigures the TLS configuration of a `MySQL` database, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/)
+
+## How Reconfiguring MySQL TLS Configuration Process Works
+
+The following diagram shows how the `KubeDB` Ops Manager reconfigures TLS of the `MySQL` database server. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="reconfigure tls" src="/docs/v2024.1.31/guides/mysql/reconfigure-tls/overview/images/reconfigure-tls.jpg">
+  <figcaption align="center">Fig: Reconfiguring TLS process of MySQL</figcaption>
+</figure>
+
+
+The Reconfiguring MySQL TLS process consists of the following steps:
+
+1. At first, a user creates a `MySQL` Custom Resource Object (CRO).
+
+2. `KubeDB` Community operator watches the `MySQL` CRO.
+
+3. When the operator finds a `MySQL` CR, it creates the required number of `StatefulSets` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `MySQL` database, the user creates a `MySQLOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `MySQLOpsRequest` CR.
+
+6. When it finds a `MySQLOpsRequest` CR, it pauses the `MySQL` object which is referred from the `MySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MySQL` object during the reconfiguring TLS process.
+
+7. Then the `KubeDB` Enterprise operator will add, remove, update or rotate the TLS configuration based on the Ops Request yaml.
+
+8. Then the `KubeDB` Enterprise operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `MySQLOpsRequest` CR.
+
+9. After the successful reconfiguring of the `MySQL` TLS, the `KubeDB` Enterprise operator resumes the `MySQL` object so that the `KubeDB` Community operator resumes its usual operations.
+
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring TLS of a MySQL database using the reconfigure-tls operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/index.md
new file mode 100644
index 0000000000..ff838ff613
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/index.md
@@ -0,0 +1,1224 @@
+---
+title: Reconfigure MySQL TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-reconfigure-tls-readme
+    name: Reconfigure MySQL TLS/SSL Encryption
+    parent: guides-mysql-reconfigure-tls
+    weight: 12
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure MySQL TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS/SSL, i.e. adding, removing, updating and rotating the TLS/SSL certificates, for an existing MySQL database via a MySQLOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [docs/guides/mysql/reconfigure-tls/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/yamls) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to a MySQL database
+
+Here, we are going to create a MySQL database without TLS and then reconfigure the database to use TLS.
+
+### Deploy MySQL without TLS
+
+In this section, we are going to deploy a MySQL database without TLS. In the next few sections, we will reconfigure TLS using the `MySQLOpsRequest` CRD. Below is the YAML of the `MySQL` CR that we are going to create,
+
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.35" + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/group-replication.yaml +mysql.kubedb.com/mysql created +``` + +
+ +
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.31-innodb" + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/innodb-cluster.yaml +mysql.kubedb.com/mysql created +``` + +
+ +
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: semi-sync-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/semi-sync.yaml +mysql.kubedb.com/mysql created +``` + +
+ + +
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete + +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/standalone.yaml +mysql.kubedb.com/mysql created +``` +
+ +
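+
+Whichever variant you created, you can also block until provisioning finishes instead of polling by hand. Below is a minimal sketch using `kubectl wait` against the `Ready` condition the operator writes to the object's status (shown for the object named `mysql`; adjust the name for the semi-sync variant):
+
+```bash
+# Wait up to 10 minutes for the MySQL object to report the Ready condition
+kubectl wait my/mysql -n demo --for=condition=Ready --timeout=10m
+```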
+ + + +Now, wait until `mysql` has status `Ready`. i.e, + +```bash +$ kubectl get my -n demo +NAME VERSION STATUS AGE +mysql 8.0.35 Ready 75s + +$ kubectl dba describe mysql mysql -n demo +Name: mysql +Namespace: demo +CreationTimestamp: Mon, 21 Nov 2022 16:18:44 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql","namespace":"demo"},"spec":{"storage":{"accessModes":["R... +Replicas: 1 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: WipeOut + +StatefulSet: + Name: mysql + CreationTimestamp: Mon, 21 Nov 2022 16:18:49 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Replicas: 824635546904 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mysql + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.238.135 + Port: primary 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.23:3306 + +Service: + Name: mysql-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 3306/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.23:3306 + +Auth Secret: + Name: mysql-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mysql + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=mysqls.kubedb.com + Annotations: + Type: kubernetes.io/basic-auth + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"MySQL","metadata":{"annotations":{},"name":"mysql","namespace":"demo"},"spec":{"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"8.0.35"}} + + Creation Timestamp: 2022-11-21T10:18:49Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mysql + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: mysqls.kubedb.com + Name: mysql + Namespace: demo + Spec: + Client Config: + Service: + Name: mysql + Path: / + Port: 3306 + Scheme: mysql + URL: tcp(mysql.demo.svc:3306)/ + Parameters: + API Version: appcatalog.appscode.com/v1alpha1 + Kind: StashAddon + Stash: + Addon: + Backup Task: + Name: mysql-backup-8.0.21 + Params: + Name: args + Value: --all-databases --set-gtid-purged=OFF + Restore Task: + Name: mysql-restore-8.0.21 + Secret: + Name: mysql-auth + Type: kubedb.com/mysql + Version: 8.0.35 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Phase Changed 1m KubeDB Operator phase changed from to Provisioning reason: + Normal Successful 1m KubeDB Operator Successfully created governing service + Normal Successful 1m KubeDB Operator Successfully created service for primary/standalone + Normal Successful 1m KubeDB Operator Successfully created StatefulSet + Normal Successful 1m KubeDB Operator Successfully 
created MySQL
+  Normal  Successful     1m    KubeDB Operator  Successfully created appbinding
+  Normal  Phase Changed  25s   KubeDB Operator  phase changed from Provisioning to Ready reason:
+
+```
+
+Now, we can connect to this database using the `mysql` client and verify that TLS is disabled.
+
+
+```bash
+$ kubectl get secrets -n demo mysql-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mysql-auth -o jsonpath='{.data.\password}' | base64 -d
+f8EyKG)mNMIMdS~a
+
+$ kubectl exec -it mysql-0 -n demo -- mysql -u root --password='f8EyKG)mNMIMdS~a' --host=mysql-0.mysql-pods.demo -e "show variables like '%require_secure_transport%';";
+Defaulted container "mysql" out of: mysql, mysql-init (init)
+mysql: [Warning] Using a password on the command line interface can be insecure.
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | OFF   |
++--------------------------+-------+
+
+$ kubectl exec -it mysql-0 -n demo -- mysql -u root --password='f8EyKG)mNMIMdS~a' --host=mysql-0.mysql-pods.demo -e "\s;";
+Defaulted container "mysql" out of: mysql, mysql-init (init)
+mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------
+mysql  Ver 8.0.35 for Linux on x86_64 (MySQL Community Server - GPL)
+
+Connection id:          91
+Current database:
+Current user:           root@mysql-0.mysql-pods.demo.svc.cluster.local
+SSL:                    Cipher in use is TLS_AES_256_GCM_SHA384
+Current pager:          stdout
+Using outfile:          ''
+Using delimiter:        ;
+Server version:         8.0.35 MySQL Community Server - GPL
+Protocol version:       10
+Connection:             mysql-0.mysql-pods.demo via TCP/IP
+Server characterset:    utf8mb4
+Db     characterset:    utf8mb4
+Client characterset:    latin1
+Conn.  characterset:    latin1
+TCP port:               3306
+Binary data as:         Hexadecimal
+Uptime:                 11 min 44 sec
+
+Threads: 2  Questions: 454  Slow queries: 0  Opens: 185  Flush tables: 3  Open tables: 104  Queries per second avg: 0.644
+
+
+```
+
+We can verify from the above output that `require_secure_transport` is `OFF`, i.e. the server does not enforce TLS for this database.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used to enable SSL/TLS in MySQL. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating a CA certificate and key using openssl.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now we are going to create a ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls my-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/my-ca created
+```
+
+Now, let's create an `Issuer` using the `my-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: my-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: my-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/issuer.yaml
+issuer.cert-manager.io/my-issuer created
+```
+
+### Create MySQLOpsRequest
+
+In order to add TLS to the database, we have to create a `MySQLOpsRequest` CRO with our created issuer.
Below is the YAML of the `MySQLOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: myops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mysql + tls: + requireSSL: true + issuerRef: + name: my-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - mysql + organizationalUnits: + - client +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mysql` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. +- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/mysql/concepts/database/#spectls). + +Let's create the `MySQLOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-add-tls.yaml +mysqlopsrequest.ops.kubedb.com/myops-add-tls created +``` + +#### Verify TLS Enabled Successfully + +Let's wait for `MySQLOpsRequest` to be `Successful`. Run the following command to watch `MySQLOpsRequest` CRO, + +```bash +$ kubectl get mysqlopsrequest -n demo +NAME TYPE STATUS AGE +myops-add-tls ReconfigureTLS Successful 91s +``` + +We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe mysqlopsrequest -n demo myops-add-tls +Name: myops-add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-11-22T04:09:32Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:tls: + .: + f:certificates: + f:issuerRef: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-11-22T04:09:32Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-11-22T04:09:34Z + Resource Version: 715635 + UID: 0bae4203-991b-4377-b38b-981648855638 +Spec: + Apply: IfReady + Database Ref: + Name: mysql + Tls: + Certificates: + Alias: client + Subject: + Organizational Units: + client + Organizations: + mysql + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: my-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2022-11-22T04:09:34Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/myops-add-tls + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-11-22T04:09:42Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: CertificateSynced + Last Transition Time: 2022-11-22T04:10:07Z + Message: Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-add-tls + Observed Generation: 1 + Reason: SuccessfullyRestartedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 
2022-11-22T04:10:16Z
+    Message:               Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-add-tls
+    Observed Generation:   1
+    Reason:                SuccessfullyReconfiguredTLS
+    Status:                True
+    Type:                  DBReady
+    Last Transition Time:  2022-11-22T04:10:21Z
+    Message:               Controller has successfully reconfigure the MySQL demo/myops-add-tls
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason      Age   From                        Message
+  ----    ------      ----  ----                        -------
+  Normal  Starting    16m   KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/myops-add-tls
+  Normal  Starting    16m   KubeDB Enterprise Operator  Pausing MySQL databse: demo/mysql
+  Normal  Successful  16m   KubeDB Enterprise Operator  Successfully paused MySQL database: demo/mysql for MySQLOpsRequest: myops-add-tls
+  Normal  Successful  16m   KubeDB Enterprise Operator  Successfully synced all certificates
+  Normal  Starting    16m   KubeDB Enterprise Operator  Restarting Pod: demo/mysql-0
+  Normal  Successful  16m   KubeDB Enterprise Operator  Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-add-tls
+  Normal  Successful  16m   KubeDB Enterprise Operator  Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-add-tls
+  Normal  Starting    16m   KubeDB Enterprise Operator  Resuming MySQL database: demo/mysql
+  Normal  Successful  16m   KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/mysql
+  Normal  Successful  16m   KubeDB Enterprise Operator  Controller has Successfully Reconfigured TLS
+
+```
+
+All TLS secrets are created by the KubeDB ops-manager operator. The default tls-secret name is formed as `{mysql-object-name}-{cert-alias}-cert`. You can list them in the `demo` namespace:
+
+```bash
+$ kubectl get secrets -n demo
+NAME                          TYPE                       DATA   AGE
+my-ca                         kubernetes.io/tls          2      22m
+mysql-auth                    kubernetes.io/basic-auth   2      22m
+mysql-client-cert             kubernetes.io/tls          3      18m
+mysql-metrics-exporter-cert   kubernetes.io/tls          3      18m
+mysql-server-cert             kubernetes.io/tls          3      18m
+```
+
+Now, let's exec into the database primary node and connect using the client certificates,
+
+```bash
+bash-4.4# ls /etc/mysql/certs
+ca.crt  client.crt  client.key  server.crt  server.key
+bash-4.4#
+bash-4.4#
+bash-4.4# openssl x509 -in /etc/mysql/certs/client.crt -inform PEM -subject -nameopt RFC2253 -noout
+subject=CN=root,OU=client,O=mysql
+bash-4.4#
+bash-4.4#
+bash-4.4#
+bash-4.4# mysql -uroot -p$MYSQL_ROOT_PASSWORD -h mysql.demo.svc --ssl-ca=/etc/mysql/certs/ca.crt --ssl-cert=/etc/mysql/certs/client.crt --ssl-key=/etc/mysql/certs/client.key
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 94
+Server version: 8.0.35 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> \s
+--------------
+mysql  Ver 8.0.35 for Linux on x86_64 (MySQL Community Server - GPL)
+
+Connection id:          94
+Current database:
+Current user:           root@10.244.0.1
+SSL:                    Cipher in use is TLS_AES_256_GCM_SHA384
+Current pager:          stdout
+Using outfile:          ''
+Using delimiter:        ;
+Server version:         8.0.35 MySQL Community Server - GPL
+Protocol version:       10
+Connection:             mysql.demo.svc via TCP/IP
+Server characterset:    utf8mb4
+Db     characterset:    utf8mb4
+Client characterset:    latin1
+Conn.  characterset:    latin1
+TCP port:               3306
+Binary data as:         Hexadecimal
+Uptime:                 13 min 42 sec
+
+Threads: 2  Questions: 522  Slow queries: 0  Opens: 167  Flush tables: 3  Open tables: 86  Queries per second avg: 0.635
+--------------
+
+mysql> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.00 sec)
+```
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it mysql-0 -n demo -- bash
+bash-4.4# openssl x509 -in /etc/mysql/certs/client.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Feb 20 04:09:37 2023 GMT
+```
+
+So, the certificate will expire on `Feb 20 04:09:37 2023 GMT`.
+
+### Create MySQLOpsRequest
+
+Now we are going to rotate the certificates using a MySQLOpsRequest. Below is the yaml of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: myops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: mysql
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `mysql` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this database.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-rotate.yaml
+mysqlopsrequest.ops.kubedb.com/myops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `MySQLOpsRequest` to be `Successful`. Run the following command to watch the `MySQLOpsRequest` CRO,
+
+```bash
+$ watch kubectl get mysqlopsrequest -n demo
+Every 2.0s: kubectl get mysqlopsrequest -n demo
+NAME           TYPE             STATUS       AGE
+myops-rotate   ReconfigureTLS   Successful   112s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe mysqlopsrequest -n demo myops-rotate +Name: myops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-11-22T04:39:37Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:tls: + .: + f:rotateCertificates: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-11-22T04:39:37Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-11-22T04:39:38Z + Resource Version: 718328 + UID: 89798e59-9868-46b9-a11e-d87ad4e9bd9f +Spec: + Apply: IfReady + Database Ref: + Name: mysql + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2022-11-22T04:39:38Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/myops-rotate + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-11-22T04:39:45Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: CertificateSynced + Last Transition Time: 2022-11-22T04:39:54Z + Message: Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-rotate + Observed Generation: 1 + Reason: SuccessfullyRestartedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 2022-11-22T04:40:08Z + Message: Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-rotate + Observed Generation: 1 + Reason: SuccessfullyReconfiguredTLS + Status: True + Type: DBReady + Last Transition Time: 2022-11-22T04:40:13Z + Message: Controller has successfully reconfigure the MySQL demo/myops-rotate + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 52s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/myops-rotate + Normal Starting 52s KubeDB Enterprise Operator Pausing MySQL databse: demo/mysql + Normal Successful 52s KubeDB Enterprise Operator Successfully paused MySQL database: demo/mysql for MySQLOpsRequest: myops-rotate + Normal Successful 45s KubeDB Enterprise Operator Successfully synced all certificates + Normal Starting 36s KubeDB Enterprise Operator Restarting Pod: demo/mysql-0 + Normal Successful 36s KubeDB Enterprise Operator Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-rotate + Normal Successful 22s KubeDB Enterprise Operator Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-rotate + Normal Starting 17s KubeDB Enterprise Operator Resuming MySQL database: demo/mysql + Normal Successful 17s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/mysql + Normal Successful 17s KubeDB Enterprise Operator Controller has Successfully Reconfigured TLS + +``` + +Now, let's check the expiration date of the certificate. 
+
+```bash
+$ kubectl exec -it mysql-0 -n demo -- bash
+bash-4.4# openssl x509 -in /etc/mysql/certs/client.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Feb 20 04:40:08 2023 GMT
+```
+
+As we can see from the above output, the certificate has been rotated successfully.
+
+## Change Issuer/ClusterIssuer
+
+Now, we are going to change the issuer of this database.
+
+- Let's create a new ca certificate and key using a different subject `CN=ca-updated,O=kubedb-updated`.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated"
+Generating a RSA private key
+..............................................................+++++
+......................................................................................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now we are going to create a new ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls mysql-new-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/mysql-new-ca created
+```
+
+Now, let's create a new `Issuer` using the `mysql-new-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: my-new-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: mysql-new-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/new-issuer.yaml
+issuer.cert-manager.io/my-new-issuer created
+```
+
+### Create MySQLOpsRequest
+
+In order to use the new issuer to issue new certificates, we have to create a `MySQLOpsRequest` CRO referencing the newly created issuer. Below is the YAML of the `MySQLOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: myops-change-issuer
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: mysql
+  tls:
+    issuerRef:
+      name: my-new-issuer
+      kind: Issuer
+      apiGroup: "cert-manager.io"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `mysql` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind, and API group.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-change-issuer.yaml
+mysqlopsrequest.ops.kubedb.com/myops-change-issuer created
+```
+
+#### Verify Issuer is changed successfully
+
+Let's wait for the `MySQLOpsRequest` to be `Successful`. Run the following command to watch the `MySQLOpsRequest` CRO,
+
+```bash
+$ watch kubectl get mysqlopsrequest -n demo
+Every 2.0s: kubectl get mysqlopsrequest -n demo
+NAME                  TYPE             STATUS       AGE
+myops-change-issuer   ReconfigureTLS   Successful   87s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe mysqlopsrequest -n demo myops-change-issuer +Name: myops-change-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-11-22T04:56:51Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:tls: + .: + f:issuerRef: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-11-22T04:56:51Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-11-22T04:56:51Z + Resource Version: 719824 + UID: bcc96807-7efb-45e9-add8-54f858ed18d4 +Spec: + Apply: IfReady + Database Ref: + Name: mysql + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: my-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2022-11-22T04:56:51Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/myops-change-issuer + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-11-22T04:56:57Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: CertificateSynced + Last Transition Time: 2022-11-22T04:57:06Z + Message: Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-change-issuer + Observed Generation: 1 + Reason: SuccessfullyRestartedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 2022-11-22T04:57:15Z + Message: Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-change-issuer + Observed Generation: 1 + Reason: SuccessfullyReconfiguredTLS + Status: True + Type: DBReady + Last Transition Time: 2022-11-22T04:57:19Z + Message: Controller has successfully reconfigure the MySQL demo/myops-change-issuer + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m16s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/myops-change-issuer + Normal Starting 2m16s KubeDB Enterprise Operator Pausing MySQL databse: demo/mysql + Normal Successful 2m16s KubeDB Enterprise Operator Successfully paused MySQL database: demo/mysql for MySQLOpsRequest: myops-change-issuer + Normal Successful 2m10s KubeDB Enterprise Operator Successfully synced all certificates + Normal Starting 2m1s KubeDB Enterprise Operator Restarting Pod: demo/mysql-0 + Normal Successful 2m1s KubeDB Enterprise Operator Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-change-issuer + Normal Successful 112s KubeDB Enterprise Operator Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-change-issuer + Normal Starting 108s KubeDB Enterprise Operator Resuming MySQL database: demo/mysql + Normal Successful 108s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/mysql + Normal Successful 108s KubeDB Enterprise Operator Controller has Successfully Reconfigured TLS + +``` + +Now, Let's exec into a database node and find out the ca subject to see if it matches the one we have 
provided.
+
+```bash
+$ kubectl exec -it mysql-0 -n demo -- bash
+bash-4.4# openssl x509 -in /etc/mysql/certs/ca.crt -inform PEM -subject -nameopt RFC2253 -noout
+subject=O=kubedb-updated,CN=ca-updated
+```
+
+We can see from the above output that the subject matches the subject of the new ca certificate that we have created. So, the issuer has been changed successfully.
+
+## Remove TLS from the Database
+
+Now, we are going to remove TLS from this database using a MySQLOpsRequest.
+
+### Create MySQLOpsRequest
+
+Below is the YAML of the `MySQLOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: myops-remove
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: mysql
+  tls:
+    remove: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `mysql` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.remove` specifies that we want to remove TLS from this database.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-remove.yaml
+mysqlopsrequest.ops.kubedb.com/myops-remove created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for the `MySQLOpsRequest` to be `Successful`. Run the following command to watch the `MySQLOpsRequest` CRO,
+
+```bash
+$ watch kubectl get mysqlopsrequest -n demo
+Every 2.0s: kubectl get mysqlopsrequest -n demo
+NAME           TYPE             STATUS       AGE
+myops-remove   ReconfigureTLS   Successful   105s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed.
+
+```bash
+$ kubectl describe mysqlopsrequest -n demo myops-remove
+Name:         myops-remove
+Namespace:    demo
+Labels:
+Annotations:
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MySQLOpsRequest
+Metadata:
+  Creation Timestamp:  2022-11-22T05:02:52Z
+  Generation:          1
+  Managed Fields:
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:apply:
+        f:databaseRef:
+        f:tls:
+          .:
+          f:remove:
+        f:type:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2022-11-22T05:02:52Z
+    API Version:  ops.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:         kubedb-ops-manager
+    Operation:       Update
+    Subresource:     status
+    Time:            2022-11-22T05:02:52Z
+  Resource Version:  720411
+  UID:               43adad9c-e9f6-4cd9-a19e-6ba848901c0c
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:  mysql
+  Tls:
+    Remove:  true
+  Type:      ReconfigureTLS
+Status:
+  Conditions:
+    Last Transition Time:  2022-11-22T05:02:52Z
+    Message:               Controller has started to Progress the MySQLOpsRequest: demo/myops-remove
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-11-22T05:03:08Z
+    Message:               Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-remove
+    Observed Generation:   1
+    Reason:                SuccessfullyRestartedStatefulSet
+    Status:                True
+    Type:                  RestartStatefulSet
+    Last Transition Time:  2022-11-22T05:03:18Z
+    Message:               Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-remove
+    Observed Generation:   1
+    Reason:                SuccessfullyReconfiguredTLS
+    Status:                True
+    Type:                  DBReady
+    Last Transition Time:  2022-11-22T05:03:27Z
+    Message:               Controller has successfully reconfigure the MySQL demo/myops-remove
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason      Age   From                        Message
+  ----    ------      ----  ----                        -------
+  Normal  Starting    90s   KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/myops-remove
+  Normal  Starting    90s   KubeDB Enterprise Operator  Pausing MySQL databse: demo/mysql
+  Normal  Successful  90s   KubeDB Enterprise Operator  Successfully paused MySQL database: demo/mysql for MySQLOpsRequest: myops-remove
+  Normal  Starting    74s   KubeDB Enterprise Operator  Restarting Pod: demo/mysql-0
+  Normal  Successful  74s   KubeDB Enterprise Operator  Successfully restarted MySQL pods for MySQLDBOpsRequest: demo/myops-remove
+  Normal  Successful  64s   KubeDB Enterprise Operator  Successfully reconfigured MySQL TLS for MySQLOpsRequest: demo/myops-remove
+  Normal  Starting    55s   KubeDB Enterprise Operator  Resuming MySQL database: demo/mysql
+  Normal  Successful  55s   KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/mysql
+  Normal  Successful  55s   KubeDB Enterprise Operator  Controller has Successfully Reconfigured TLS
+
+```
+
+Now, let's exec into the database primary node and check whether TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo mysql-0 -- mysql -u root -p'f8EyKG)mNMIMdS~a'
+
+mysql> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | OFF   |
++--------------------------+-------+
+
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
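+
+As an optional sanity check, the server should now also accept plain (non-TLS) client connections. A minimal sketch, reusing the root password retrieved earlier in this tutorial and the standard `--ssl-mode` flag of the `mysql` client:
+
+```bash
+# TLS has been removed, so an explicitly non-TLS connection should succeed
+$ kubectl exec -it -n demo mysql-0 -- mysql -uroot -p'f8EyKG)mNMIMdS~a' --ssl-mode=DISABLED -e "SELECT 1;"
+```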
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mysql -n demo mysql
+kubectl delete issuer -n demo my-issuer my-new-issuer
+kubectl delete mysqlopsrequest -n demo myops-add-tls myops-rotate myops-change-issuer myops-remove
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- [Quickstart MySQL](/docs/v2024.1.31/guides/mysql/quickstart/) with KubeDB Operator.
+- Initialize [MySQL with Script](/docs/v2024.1.31/guides/mysql/initialization/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Monitor your MySQL database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/).
+- Use [private Docker registry](/docs/v2024.1.31/guides/mysql/private-registry/) to deploy MySQL with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/mysql/cli/) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/group-replication.yaml
new file mode 100644
index 0000000000..89b078bbe3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/group-replication.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/innodb-cluster.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/innodb-cluster.yaml
new file mode 100644
index 0000000000..c1f76097c8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/innodb-cluster.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/issuer.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/issuer.yaml
new file mode 100644
index 0000000000..6ab5d25522
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: my-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: my-ca
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-add-tls.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-add-tls.yaml
new file mode 100644
index 0000000000..181ccd151e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-add-tls.yaml
@@
-0,0 +1,22 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: myops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mysql + tls: + requireSSL: true + issuerRef: + name: my-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - mysql + organizationalUnits: + - client \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-change-issuer.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-change-issuer.yaml new file mode 100644 index 0000000000..a5b19a2a82 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-change-issuer.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: myops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mysql + tls: + issuerRef: + name: my-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-remove.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-remove.yaml new file mode 100644 index 0000000000..83b33b6195 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: myops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mysql + tls: + remove: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-rotate.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-rotate.yaml new file mode 100644 index 0000000000..70a875bacf --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/myops-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: myops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mysql + tls: + rotateCertificates: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/mysql.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/mysql.yaml new file mode 100644 index 0000000000..4e9acf905d --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/mysql.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/new-issuer.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/new-issuer.yaml new file mode 100644 index 0000000000..20a54263e4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/new-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: my-new-issuer + namespace: demo +spec: + ca: + secretName: mysql-new-ca \ No newline at end of file diff --git 
a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/semi-sync.yaml new file mode 100644 index 0000000000..c0a6a4ec79 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/semi-sync.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/standalone.yaml new file mode 100644 index 0000000000..be6a74fc1e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure-tls/reconfigure/yamls/standalone.yaml @@ -0,0 +1,17 @@ + +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/_index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure/_index.md new file mode 100644 index 0000000000..7c54a8ad33 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure MySQL Configuration +menu: + docs_v2024.1.31: + identifier: guides-mysql-reconfigure + name: Reconfigure + parent: guides-mysql + weight: 41 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/index.md new file mode 100644 index 0000000000..965887ed23 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/index.md @@ -0,0 +1,65 @@ +--- +title: Reconfiguring MySQL +menu: + docs_v2024.1.31: + identifier: guides-mysql-reconfigure-overview + name: Overview + parent: guides-mysql-reconfigure + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +### Reconfiguring MySQL + +This guide will give an overview on how KubeDB Ops Manager reconfigures `MySQL`. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest) + +## How Reconfiguring MySQL Process Works + +The following diagram shows how KubeDB Ops Manager reconfigures `MySQL` database components. Open the image in a new tab to see the enlarged version. + +
+<figure align="center">
+  <img alt="Reconfiguring process of MySQL" src="/docs/v2024.1.31/guides/mysql/reconfigure/overview/reconfigure.jpg">
+  <figcaption align="center">Fig: Reconfiguring process of MySQL</figcaption>
+</figure>
+
+The Reconfiguring MySQL process consists of the following steps:
+
+1. At first, a user creates a `MySQL` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MySQL` CR.
+
+3. When the operator finds a `MySQL` CR, it creates the required number of `StatefulSets` and the other necessary resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the `MySQL` standalone or cluster, the user creates a `MySQLOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `MySQLOpsRequest` CR.
+
+6. When it finds a `MySQLOpsRequest` CR, it halts the `MySQL` object which is referenced by the `MySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MySQL` object during the reconfiguring process.
+
+7. Then the `KubeDB` Enterprise operator will replace the existing configuration with the new configuration provided, or merge the new configuration with the existing configuration, according to the `MySQLOpsRequest` CR.
+
+8. Then the `KubeDB` Enterprise operator will restart the related StatefulSet Pods so that they restart with the new configuration defined in the `MySQLOpsRequest` CR.
+
+9. After the successful reconfiguration of the `MySQL`, the `KubeDB` Enterprise operator resumes the `MySQL` object so that the `KubeDB` Community operator can resume its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring MySQL database components using `MySQLOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/reconfigure.jpg b/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/reconfigure.jpg
new file mode 100644
index 0000000000..4682a7c869
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/reconfigure/overview/reconfigure.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/index.md b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/index.md
new file mode 100644
index 0000000000..e4b0f2092c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/index.md
@@ -0,0 +1,603 @@
+---
+title: Reconfigure MySQL Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-reconfigure-reconfigure-steps
+    name: Cluster
+    parent: guides-mysql-reconfigure
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure MySQL Cluster Database
+
+This guide will show you how to use `KubeDB` Enterprise operator to reconfigure a MySQL Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQL Cluster](/docs/v2024.1.31/guides/mysql/clustering)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+  - [Reconfigure Overview](/docs/v2024.1.31/guides/mysql/reconfigure/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+Now, we are going to deploy a `MySQL` Cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MySQLOpsRequest` to reconfigure its configuration.
+
+### Prepare MySQL Cluster
+
+Now, we are going to deploy a `MySQL` Cluster database with version `8.0.35`.
+
+### Deploy MySQL
+
+At first, we will create a `my-config.cnf` file containing the required configuration settings.
+
+```ini
+$ cat my-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `max_connections` is set to `200`, whereas the default value is `151`. Likewise, `read_buffer_size` is set to `1048576`, whereas the default value is `131072`.
+
+Now, we will create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo my-configuration --from-file=./my-config.cnf
+secret/my-configuration created
+```
+
+In this section, we are going to create a MySQL object specifying the `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `MySQL` CR that we are going to create,
+
+
+**Group Replication:**
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + topology: + mode: GroupReplication + replicas: 3 + configSecret: + name: my-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/group-replication.yaml +mysql.kubedb.com/sample-mysql created +``` + +
+
+**MySQL InnoDB Cluster:**
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  replicas: 3
+  configSecret:
+    name: my-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/inndob-cluster.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+
+**MySQL Semi Sync Cluster:**
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + replicas: 3 + configSecret: + name: my-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/semi-sync.yaml +mysql.kubedb.com/sample-mysql created +``` + +
+
+**Standalone:**
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + configSecret: + name: my-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/stand-alone.yaml +mysql.kubedb.com/sample-mysql created +``` +
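+
+Whichever variant you deployed, you can confirm that the custom config secret is wired into the `MySQL` object before moving on. A quick sanity check (a sketch using the fields from the YAMLs above):
+
+```bash
+# Print the configSecret referenced by the MySQL object; it should be my-configuration
+$ kubectl get mysql -n demo sample-mysql -o jsonpath='{.spec.configSecret.name}'
+my-configuration
+```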
+
+Now, wait until `sample-mysql` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+sample-mysql   8.0.35    Ready    5m49s
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+First, we need to get the username and password to connect to the mysql instance,
+
+```bash
+$ kubectl get secrets -n demo sample-mysql-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-mysql-auth -o jsonpath='{.data.\password}' | base64 -d
+86TwLJ!2Kpq*vv1y
+```
+
+Now, let's connect to the database and inspect the running configuration,
+
+```bash
+$ kubectl exec -it -n demo sample-mysql-0 -- bash
+bash-4.4# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 112
+Server version: 8.0.35 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 200   |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+mysql> show variables like 'read_buffer_size';
++------------------+---------+
+| Variable_name    | Value   |
++------------------+---------+
+| read_buffer_size | 1048576 |
++------------------+---------+
+1 row in set (0.00 sec)
+
+mysql>
+
+```
+
+As we can see, the database has started with the custom configuration: the value of `max_connections` has been set to `200` and `read_buffer_size` has been set to `1048576`.
+
+### Reconfigure using new config secret
+
+Now we will reconfigure this database to set `max_connections` to `250` and `read_buffer_size` to `122880`.
+
+First, we will create a new file `new-my-config.cnf` containing the required configuration settings.
+
+```ini
+$ cat new-my-config.cnf
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-my-configuration --from-file=./new-my-config.cnf
+secret/new-my-configuration created
+```
+
+#### Create MySQLOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `MySQLOpsRequest` CR. The `MySQLOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: myops-reconfigure-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mysql
+  configuration:
+    configSecret:
+      name: new-my-configuration
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mysql` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.configSecret.name` specifies the name of the new secret.
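+
+Before applying the ops request, you can optionally double-check that the new secret carries the intended settings. A small sketch, decoding the key created from `new-my-config.cnf` above:
+
+```bash
+# Decode the config file stored in the new secret
+$ kubectl get secret -n demo new-my-configuration -o jsonpath='{.data.new-my-config\.cnf}' | base64 -d
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
+```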
+ +Let's create the `MySQLOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/reconfigure-using-secret.yaml +mysqlopsrequest.ops.kubedb.com/myops-reconfigure-config created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of `MySQL` object. + +Let's wait for `MySQLOpsRequest` to be `Successful`. Run the following command to watch `MySQLOpsRequest` CR, + +```bash +$ kubectl get mysqlopsrequest --all-namespaces +NAMESPACE NAME TYPE STATUS AGE +demo myops-reconfigure-config Reconfigure Successful 3m8s +``` + +We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe mysqlopsrequest -n demo myops-reconfigure-config +Name: myops-reconfigure-config +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-11-23T09:09:20Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:configSecret: + f:databaseRef: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-11-23T09:09:20Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-11-23T09:09:20Z + Resource Version: 786443 + UID: 253ff2e3-0647-4926-bfb9-ef44b3b8a31d +Spec: + Apply: IfReady + Configuration: + Config Secret: + Name: new-my-configuration + Database Ref: + Name: sample-mysql + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2022-11-23T09:09:20Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/myops-reconfigure-config + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-11-23T09:13:10Z + Message: Successfully reconfigured MySQL pod for MySQLOpsRequest: demo/myops-reconfigure-config + Observed Generation: 1 + Reason: SuccessfullyDBReconfigured + Status: True + Type: Reconfigure + Last Transition Time: 2022-11-23T09:13:10Z + Message: Controller has successfully reconfigure the MySQL demo/myops-reconfigure-config + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 30m KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/myops-reconfigure-config + Normal Starting 30m KubeDB Enterprise Operator Pausing MySQL databse: demo/sample-mysql + Normal Successful 30m KubeDB Enterprise Operator Successfully paused MySQL database: demo/sample-mysql for MySQLOpsRequest: myops-reconfigure-config + Normal Starting 30m KubeDB Enterprise Operator Restarting Pod: sample-mysql-1/demo + Normal Starting 29m KubeDB Enterprise Operator Restarting Pod: sample-mysql-2/demo + Normal Starting 28m KubeDB Enterprise Operator Restarting Pod: sample-mysql-0/demo + Normal Successful 27m KubeDB Enterprise 
Operator  Successfully reconfigured MySQL pod for MySQLOpsRequest: demo/myops-reconfigure-config
+  Normal  Starting    27m   KubeDB Enterprise Operator  Reconfiguring MySQL
+  Normal  Successful  27m   KubeDB Enterprise Operator  Successfully reconfigure the MySQL object
+  Normal  Starting    27m   KubeDB Enterprise Operator  Resuming MySQL database: demo/sample-mysql
+  Normal  Successful  27m   KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/sample-mysql
+  Normal  Successful  27m   KubeDB Enterprise Operator  Controller has Successfully reconfigure the of MySQL: demo/sample-mysql
+
+```
+
+Now let's connect to a mysql instance and run a mysql internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-mysql-0 -- bash
+
+bash-4.4# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 279
+Server version: 8.0.35 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql>
+mysql>
+mysql> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 250   |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+mysql> show variables like 'read_buffer_size';
++------------------+--------+
+| Variable_name    | Value  |
++------------------+--------+
+| read_buffer_size | 122880 |
++------------------+--------+
+1 row in set (0.00 sec)
+
+mysql>
+
+```
+
+As we can see, the configuration has changed: the value of `max_connections` has been changed from `200` to `250`, and `read_buffer_size` has been changed from `1048576` to `122880`. So the reconfiguration of the database was successful.
+
+### Remove Custom Configuration
+
+We can also remove existing custom configuration using a `MySQLOpsRequest`. Provide `true` to the field `spec.configuration.removeCustomConfig` and make an ops request to remove the existing custom configuration.
+
+#### Create MySQLOpsRequest
+
+Let's create a `MySQLOpsRequest` having `spec.configuration.removeCustomConfig` set to `true`,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: myops-reconfigure-remove
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sample-mysql
+  configuration:
+    removeCustomConfig: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-mysql` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.removeCustomConfig` is a bool field that should be `true` when you want to remove the existing custom configuration.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/reconfigure/reconfigure-steps/yamls/reconfigure-remove.yaml
+mysqlopsrequest.ops.kubedb.com/myops-reconfigure-remove created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of the `MySQL` object.
+
+Let's wait for the `MySQLOpsRequest` to be `Successful`.
Run the following command to watch the `MySQLOpsRequest` CR,
+
+```bash
+$ kubectl get mysqlopsrequest --all-namespaces
+NAMESPACE   NAME                       TYPE          STATUS       AGE
+demo        myops-reconfigure-remove   Reconfigure   Successful   2m1s
+```
+
+Now let's connect to a mysql instance and run a mysql internal command to verify that the configuration has reverted.
+
+```bash
+$ kubectl exec -it -n demo sample-mysql-0 -- bash
+bash-4.4# mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 279
+Server version: 8.0.35 MySQL Community Server - GPL
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql>
+mysql>
+mysql> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name   | Value |
++-----------------+-------+
+| max_connections | 151   |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+mysql> show variables like 'read_buffer_size';
++------------------+--------+
+| Variable_name    | Value  |
++------------------+--------+
+| read_buffer_size | 131072 |
++------------------+--------+
+1 row in set (0.00 sec)
+
+mysql>
+
+```
+
+As we can see, the configuration has reverted to its default values. So the removal of the existing custom configuration using `MySQLOpsRequest` was successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete mysql -n demo sample-mysql
+$ kubectl delete mysqlopsrequest -n demo myops-reconfigure-config myops-reconfigure-remove
+$ kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/group-replication.yaml
new file mode 100644
index 0000000000..29e3b8594f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/group-replication.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  topology:
+    mode: GroupReplication
+  replicas: 3
+  configSecret:
+    name: my-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/inndob-cluster.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/inndob-cluster.yaml
new file mode 100644
index 0000000000..c2204fefb5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/inndob-cluster.yaml
@@ -0,0 +1,24 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  replicas: 3
+  configSecret:
+    name: my-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git
a/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/semi-sync.yaml new file mode 100644 index 0000000000..2ef9bea1b2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/semi-sync.yaml @@ -0,0 +1,25 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + replicas: 3 + configSecret: + name: my-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/stand-alone.yaml b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/stand-alone.yaml new file mode 100644 index 0000000000..2e3e54314a --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/reconfigure/reconfigure-steps/yamls/stand-alone.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + configSecret: + name: my-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/_index.md b/content/docs/v2024.1.31/guides/mysql/scaling/_index.md new file mode 100644 index 0000000000..87e16bf273 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Scaling MySQL +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling + name: Scaling MySQL + parent: guides-mysql + weight: 43 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/_index.md new file mode 100644 index 0000000000..e201d2a7d8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Horizontal Scaling +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-horizontal + name: Horizontal Scaling + parent: guides-mysql-scaling + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/index.md new file mode 100644 index 0000000000..f0adac4b1f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/index.md @@ -0,0 +1,553 @@ +--- +title: Horizontal Scaling MySQL Cluster +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-horizontal-cluster + name: MySQL Cluster + parent: 
guides-mysql-scaling-horizontal
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Horizontal Scale MySQL Group Replication
+
+This guide will show you how to use `KubeDB` Ops Manager to increase/decrease the number of members of a `MySQL` Group Replication.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+### Apply Horizontal Scaling on MySQL Group Replication
+
+Here, we are going to deploy a `MySQL` group replication using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+#### Prepare Group Replication
+
+At first, we are going to deploy a group replication server with 3 members. Then, we are going to add two additional members through horizontal scaling. Finally, we will remove 1 member from the cluster again via horizontal scaling.
+
+**Find supported MySQL Version:**
+
+When you have installed `KubeDB`, it has created `MySQLVersion` CR for all supported `MySQL` versions. Let's check the supported MySQL versions,
+
+```bash
+$ kubectl get mysqlversion
+NAME            VERSION   DISTRIBUTION   DB_IMAGE                    DEPRECATED   AGE
+5.7.35-v1       5.7.35    Official       mysql:5.7.35                             4d2h
+5.7.44          5.7.44    Official       mysql:5.7.44                             4d2h
+8.0.17          8.0.17    Official       mysql:8.0.17                             4d2h
+8.0.35          8.0.35    Official       mysql:8.0.35                             4d2h
+8.0.31-innodb   8.0.35    MySQL          mysql/mysql-server:8.0.35                4d2h
+8.0.3-v4        8.0.3     Official       mysql:8.0.3                              4d2h
+```
+
+The versions above that do not show `DEPRECATED` `true` are supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Here, we are going to create a MySQL Group Replication using `MySQL` `8.0.35`.
+
+**Deploy MySQL Cluster:**
+
+
+**Group Replication:**
+
+In this section, we are going to deploy a MySQL group replication with 3 members. Then, in the next section we will scale up the cluster using horizontal scaling. Below is the YAML of the `MySQL` cr that we are going to create,
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+    group:
+      name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` cr we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls/group-replication.yaml
+mysql.kubedb.com/my-group created
+```
+
+
+**MySQL InnoDB Cluster:**
+
+In this section, we are going to deploy a MySQL InnoDB cluster with 3 members. Then, in the next section we will scale up the cluster using horizontal scaling. Below is the YAML of the `MySQL` cr that we are going to create,
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  replicas: 3
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` cr we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls/innodb.yaml
+mysql.kubedb.com/my-group created
+```
+
+
+**MySQL Semi Sync Cluster:**
+
+In this section, we are going to deploy a MySQL semi-sync cluster with 3 members. Then, in the next section we will scale up the cluster using horizontal scaling. Below is the YAML of the `MySQL` cr that we are going to create,
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-group
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1
+      sourceTimeout: 23h
+      errantTransactionRecoveryPolicy: PseudoTransaction
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` cr we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls/semi-sync.yaml
+mysql.kubedb.com/my-group created
+```
+
+
+**Wait for the cluster to be ready:**
+
+`KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator will create a new StatefulSet, Services, Secrets, etc. A secret called `my-group-auth` (format: {mysql-object-name}-auth) will be created storing the password for the mysql superuser.
+Now, watch the `MySQL` object as it reaches the `Running` state, and also watch the `StatefulSet` and its pods as they are created and reach the `Running` state,
+
+
+```bash
+$ watch -n 3 kubectl get my -n demo my-group
+Every 3.0s: kubectl get my -n demo my-group                 suaas-appscode: Tue Jun 30 22:43:57 2020
+
+NAME       VERSION   STATUS    AGE
+my-group   8.0.35    Running   16m
+
+$ watch -n 3 kubectl get sts -n demo my-group
+Every 3.0s: kubectl get sts -n demo my-group                suaas-appscode: Tue Jun 30 22:44:35 2020
+
+NAME       READY   AGE
+my-group   3/3     16m
+
+$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group
+Every 3.0s: kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com   suaas-appscode: Tue Jun 30 22:45:33 2020
+
+NAME         READY   STATUS    RESTARTS   AGE
+my-group-0   2/2     Running   0          17m
+my-group-1   2/2     Running   0          14m
+my-group-2   2/2     Running   0          11m
+```
+
+Let's verify that the StatefulSet's pods have joined the group replication cluster,
+
+```bash
+$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d
+sWfUMoqRpOJyomgb
+
+$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password=sWfUMoqRpOJyomgb --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members"
+mysql: [Warning] Using a password on the command line interface can be insecure.
++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+
+| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST                   | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+
+| group_replication_applier | 596be47b-baef-11ea-859a-02c946ef4fe7 | my-group-1.my-group-pods.demo | 3306        | ONLINE       | SECONDARY   | 8.0.23         |
+| group_replication_applier | 815974c2-baef-11ea-bd7e-a695cbdbd6cc | my-group-2.my-group-pods.demo | 3306        | ONLINE       | SECONDARY   | 8.0.23         |
+| group_replication_applier | ec61cef2-baee-11ea-adb0-9a02630bae5d | my-group-0.my-group-pods.demo | 3306        | ONLINE       | PRIMARY     | 8.0.23         |
++---------------------------+--------------------------------------+-------------------------------+-------------+--------------+-------------+----------------+
+```
+
+So, we can see that our group replication cluster has 3 members. Now, we are ready to apply horizontal scaling to this group replication.
+
+#### Scale Up
+
+Here, we are going to add 2 members to our group replication using horizontal scaling.
+
+**Create MySQLOpsRequest:**
+
+To scale up your cluster, you have to create a `MySQLOpsRequest` cr with your desired number of members after scaling.
Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 5 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-group` `MySQL` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.member` specifies the expected number of members after the scaling. + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_up.yaml +mysqlopsrequest.ops.kubedb.com/my-scale-up created +``` + +**Verify Scale-Up Succeeded:** + +If everything goes well, `KubeDB` Ops Manager will scale up the StatefulSet's replicas. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `MySQL` object. + +First, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-scale-up +Every 3.0s: kubectl get myops -n demo my-scale-up suaas-appscode: Sat Jul 25 15:49:42 2020 + +NAME TYPE STATUS AGE +my-scale-up HorizontalScaling Successful 2m55s +``` + +You can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL` group replication is scaled up. + +```bash +$ kubectl describe myops -n demo my-scale-up +Name: my-scale-up +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2021-03-10T11:18:39Z + Generation: 2 + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-10T11:20:45Z + Resource Version: 1088850 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mysqlopsrequests/my-scale-up + UID: f60f5bbc-3086-4b86-bf74-baf0b39ff358 +Spec: + Database Ref: + Name: my-group + Horizontal Scaling: + Member: 5 + Stateful Set Ordinal: 0 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-10T11:18:39Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-scale-up + Observed Generation: 2 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2021-03-10T11:18:40Z + Message: Horizontal scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-up + Observed Generation: 2 + Reason: HorizontalScalingStarted + Status: True + Type: Scaling + Last Transition Time: 2021-03-10T11:20:45Z + Message: Horizontal scaling Up performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-up + Observed Generation: 2 + Reason: SuccessfullyPerformedHorizontalScaling + Status: True + Type: ScalingUp + Last Transition Time: 2021-03-10T11:20:45Z + Message: Controller has successfully scaled the MySQL demo/my-scale-up + Observed Generation: 2 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 28m KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-scale-up + Normal Starting 28m KubeDB Enterprise Operator Pausing MySQL database: demo/my-group + Normal Successful 28m KubeDB Enterprise Operator Successfully 
paused MySQL database: demo/my-group for MySQLOpsRequest: my-scale-up + Normal Starting 28m KubeDB Enterprise Operator Horizontal scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-up + Normal Successful 26m KubeDB Enterprise Operator Horizontal scaling Up performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-up + Normal Starting 26m KubeDB Enterprise Operator Resuming MySQL database: demo/my-group + Normal Successful 26m KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-group + Normal Successful 26m KubeDB Enterprise Operator Controller has Successfully scaled the MySQL database: demo/my-group +``` + +Now, we are going to verify whether the number of members has increased to match the desired state. Let's check, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +Y28qkWFQ8QHVzq2h + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password=Y28qkWFQ8QHVzq2h --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +| group_replication_applier | 4b76f5c8-baff-11ea-9848-425294afbbbf | my-group-3.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | 73c1f150-baff-11ea-9394-4a8c424ea5c2 | my-group-4.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | 9f6c694c-bafd-11ea-8ad4-822669614bde | my-group-0.my-group-pods.demo | 3306 | ONLINE | PRIMARY | 8.0.23 | +| group_replication_applier | c9d82f09-bafd-11ea-ab3a-764d326534a6 | my-group-1.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | eff81073-bafd-11ea-9f3d-ca1e99c33106 | my-group-2.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +``` + +You can see above that our `MySQL` group replication now has a total of 5 members. This verifies that we have successfully scaled up. + +#### Scale Down + +Here, we are going to remove 1 member from our group replication using horizontal scaling. + +**Create MySQLOpsRequest:** + +To scale down your cluster, you have to create a `MySQLOpsRequest` cr with your desired number of members after scaling. 
Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 4 +``` + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_down.yaml +mysqlopsrequest.ops.kubedb.com/my-scale-down created +``` + +**Verify Scale-down Succeeded:** + +If everything goes well, `KubeDB` Ops Manager will scale down the StatefulSet's replicas. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `MySQL` object. + +Now, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-scale-down +Every 3.0s: kubectl get myops -n demo my-scale-down suaas-appscode: Sat Jul 25 15:49:42 2020 + +NAME TYPE STATUS AGE +my-scale-down HorizontalScaling Successful 2m55s +``` + +You can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL` group replication is scaled down. + +```bash +$ kubectl describe myops -n demo my-scale-down +Name: my-scale-down +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2021-03-10T11:48:42Z + Generation: 2 + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-10T11:49:18Z + Resource Version: 1094487 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mysqlopsrequests/my-scale-down + UID: 7b9471ed-87b1-4c62-939b-dbb8a7641554 +Spec: + Database Ref: + Name: my-group + Horizontal Scaling: + Member: 4 + Stateful Set Ordinal: 0 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-10T11:48:42Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-scale-down + Observed Generation: 2 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2021-03-10T11:48:43Z + Message: Horizontal scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-down + Observed Generation: 2 + Reason: HorizontalScalingStarted + Status: True + Type: Scaling + Last Transition Time: 2021-03-10T11:49:18Z + Message: Horizontal scaling down performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-down + Observed Generation: 2 + Reason: SuccessfullyPerformedHorizontalScaling + Status: True + Type: ScalingDown + Last Transition Time: 2021-03-10T11:49:18Z + Message: Controller has successfully scaled the MySQL demo/my-scale-down + Observed Generation: 2 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 4 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 97s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-scale-down + Normal Starting 97s KubeDB Enterprise Operator Pausing MySQL database: demo/my-group + Normal Successful 97s KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-group for MySQLOpsRequest: my-scale-down + Normal Starting 96s KubeDB Enterprise Operator Horizontal scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-down + Normal Successful 61s KubeDB Enterprise Operator 
Horizontal scaling down performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-down + Normal Starting 61s KubeDB Enterprise Operator Resuming MySQL database: demo/my-group + Normal Successful 61s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-group + Normal Successful 61s KubeDB Enterprise Operator Controller has Successfully scaled the MySQL database: demo/my-group +``` + +Now, we are going to verify whether the number of members has decreased to match the desired state. Let's check, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +Y28qkWFQ8QHVzq2h + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password=Y28qkWFQ8QHVzq2h --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +| group_replication_applier | 533602d0-ce5b-11ea-b866-5ad2598e5303 | my-group-1.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | 7d429240-ce5b-11ea-9fe2-0aaa5a845ec8 | my-group-2.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | c498302f-ce5b-11ea-96a3-72980d437abc | my-group-3.my-group-pods.demo | 3306 | ONLINE | SECONDARY | 8.0.23 | +| group_replication_applier | dfb1633a-ce5a-11ea-a9c8-6e4ef86119d0 | my-group-0.my-group-pods.demo | 3306 | ONLINE | PRIMARY | 8.0.23 | ++---------------------------+--------------------------------------+------------------------------+-------------+--------------+-------------+----------------+ +``` + +You can see above that our `MySQL` group replication now has a total of 4 members. This verifies that we have successfully scaled down. 
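 + +As a quick additional check, the `MySQL` object's `spec.replicas` field should now report the reduced member count. A one-line sketch using standard `kubectl` output formatting: + +```bash +# The Ops Manager updates spec.replicas after a successful scale operation +$ kubectl get my -n demo my-group -o jsonpath='{.spec.replicas}' +4 +``` +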
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-group +kubectl delete myops -n demo my-scale-up +kubectl delete myops -n demo my-scale-down +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/group-replication.yaml new file mode 100644 index 0000000000..a18eb3cfef --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/group-replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/innodb.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/innodb.yaml new file mode 100644 index 0000000000..6ee3aa67a6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/innodb.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_down.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_down.yaml new file mode 100644 index 0000000000..0bf32e33ae --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_down.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 4 + + diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_up.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_up.yaml new file mode 100644 index 0000000000..8823466bfd --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/scale_up.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: my-group + horizontalScaling: + member: 5 + + diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/semi-sync.yaml new file mode 100644 index 0000000000..8887e3a612 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/cluster/yamls/semi-sync.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + 
replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 23h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/images/my-horizontal_scaling.png b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/images/my-horizontal_scaling.png new file mode 100644 index 0000000000..f6a970af92 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/images/my-horizontal_scaling.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/index.md b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/index.md new file mode 100644 index 0000000000..3c5d494455 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/index.md @@ -0,0 +1,65 @@ +--- +title: MySQL Horizontal Scaling Overview +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-horizontal-overview + name: Overview + parent: guides-mysql-scaling-horizontal + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Horizontal Scaling Overview + +This guide will give you an overview of how `KubeDB` Ops Manager scales up/down the number of members of a `MySQL` group replication. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + +## How Horizontal Scaling Process Works + +The following diagram shows how `KubeDB` Ops Manager scales up the number of members of a `MySQL` group replication. Open the image in a new tab to see the enlarged version. + +
<figure align="center"> +  <img alt="Horizontal scaling process of MySQL group replication" src="/docs/v2024.1.31/guides/mysql/scaling/horizontal-scaling/overview/images/my-horizontal_scaling.png"> +<figcaption align="center">Fig: Horizontal scaling process of MySQL group replication</figcaption> +</figure>
+ +The horizontal scaling process consists of the following steps: + +1. At first, a user creates a `MySQL` cr. + +2. `KubeDB` community operator watches for the `MySQL` cr. + +3. When it finds one, it creates a `StatefulSet` and the related necessary resources such as Secrets, Services, etc. + +4. Then, in order to scale the cluster, the user creates a `MySQLOpsRequest` cr with the desired number of members after scaling. + +5. `KubeDB` Ops Manager watches for `MySQLOpsRequest`. + +6. When it finds one, it halts the `MySQL` object so that the `KubeDB` community operator doesn't perform any operation on the `MySQL` during the scaling process. + +7. Then the `KubeDB` Ops Manager will scale the StatefulSet replicas to reach the expected number of members for the group replication. + +8. After successful scaling of the StatefulSet's replicas, the `KubeDB` Ops Manager updates the `spec.replicas` field of the `MySQL` object to reflect the updated cluster state. + +9. After successful scaling of the `MySQL` replicas, the `KubeDB` Ops Manager resumes the `MySQL` object so that the `KubeDB` community operator can resume its usual operations. + +In the next doc, we are going to show a step-by-step guide on scaling a MySQL group replication using horizontal scaling. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..04fc66dd1c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Vertical Scaling +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-vertical + name: Vertical Scaling + parent: guides-mysql-scaling + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/index.md new file mode 100644 index 0000000000..8fb28ffba2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/index.md @@ -0,0 +1,411 @@ +--- +title: Vertical Scaling MySQL Cluster +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-vertical-cluster + name: Cluster + parent: guides-mysql-scaling-vertical + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale MySQL Cluster + +This guide will show you how to use `KubeDB` Ops Manager to update the resources of the members of a `MySQL` Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+ +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/guides/mysql/scaling/vertical-scaling/cluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/cluster/yamls) directory of [kubedb/doc](https://github.com/kubedb/docs) repository. + +### Apply Vertical Scaling on MySQL Group Replication + +Here, we are going to deploy a `MySQL` group replication using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it. + +#### Prepare MySQL Cluster + +At first, we are going to deploy a cluster using a supported `MySQL` version. Then, we are going to update the resources of the members through vertical scaling. + +**Find supported MySQL Version:** + +When you have installed `KubeDB`, it has created `MySQLVersion` CR for all supported `MySQL` versions. Let's check the supported MySQL versions, + +```bash +$ kubectl get mysqlversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +5.7.35-v1 5.7.35 Official mysql:5.7.35 4d2h +5.7.44 5.7.44 Official mysql:5.7.44 4d2h +8.0.17 8.0.17 Official mysql:8.0.17 4d2h +8.0.3-v4 8.0.3 Official mysql:8.0.3 4d2h +8.0.31-innodb 8.0.31 MySQL mysql/mysql-server:8.0.31 4d2h +8.0.35 8.0.35 Official mysql:8.0.35 4d2h +``` + +The versions above that do not show `DEPRECATED` as `true` are supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Here, we are going to create a MySQL Group Replication using the non-deprecated `MySQL` version `8.0.35`. + +**Deploy MySQL Cluster:** + + + + +
+
+ +In this section, we are going to deploy a MySQL group replication with 3 members. Then, in the next section we will update the resources of the members using vertical scaling. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/cluster/yamls/group-replication.yaml +mysql.kubedb.com/my-group created +``` +
+ +
+In this section, we are going to deploy a MySQL InnoDB Cluster with 3 members. Then, in the next section we will update the resources of the members using vertical scaling. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/cluster/yamls/innodb.yaml +mysql.kubedb.com/my-group created +``` + +
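 + +The InnoDB Cluster variant also deploys MySQL Router pods alongside the database pods. As a rough, assumption-based check (the exact pod names KubeDB assigns to the router may differ), you can look for them with: + +```bash +# Sketch: router pods typically carry "router" in their name +$ kubectl get pods -n demo | grep -i router +``` +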
+ +
+ +In this section, we are going to deploy a MySQL semi-sync cluster with 3 members. Then, in the next section we will update the resources of the members using vertical scaling. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 24h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/cluster/yamls/semi-sync.yaml +mysql.kubedb.com/my-group created +``` + +
+ +
+ + + +**Wait for the cluster to be ready:** + +The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator creates a new StatefulSet along with the necessary Services, Secrets, etc. +Now, watch the `MySQL` object until it reaches the `Running` state, and watch the `StatefulSet` and its pods until they are created and `Running`, + +```bash +$ watch -n 3 kubectl get my -n demo my-group +Every 3.0s: kubectl get my -n demo my-group suaas-appscode: Tue Jun 30 22:43:57 2020 + +NAME VERSION STATUS AGE +my-group 8.0.35 Running 16m + +$ watch -n 3 kubectl get sts -n demo my-group +Every 3.0s: kubectl get sts -n demo my-group suaas-appscode: Tue Jun 30 22:44:35 2020 + +NAME READY AGE +my-group 3/3 16m + +$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group +Every 3.0s: kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com suaas-appscode: Tue Jun 30 22:45:33 2020 + +NAME READY STATUS RESTARTS AGE +my-group-0 2/2 Running 0 17m +my-group-1 2/2 Running 0 14m +my-group-2 2/2 Running 0 11m +``` + +Let's check one of the StatefulSet's pod containers resources, + +```bash +$ kubectl get pod -n demo my-group-0 -o json | jq '.spec.containers[1].resources' +{} +``` + +You can see that the Pod's containers have no resource requests or limits set, so the scheduler is free to place the Pod on any node with available capacity by default. Now, we are ready to apply vertical scaling to this cluster. + +#### Vertical Scaling + +Here, we are going to update the resources of the database cluster to the desired values through a vertical scaling operation. + +**Create MySQLOpsRequest:** + +In order to update the resources of your database cluster, you have to create a `MySQLOpsRequest` cr with your desired resources after scaling. Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-group + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-group + verticalScaling: + mysql: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-group` `MySQL` database. +- `spec.type` specifies that we are performing `VerticalScaling` on our database. +- `spec.verticalScaling.mysql` specifies the expected mysql container resources after scaling. + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/cluster/yamls/my-scale-group.yaml +mysqlopsrequest.ops.kubedb.com/my-scale-group created +``` + +**Verify MySQL Group Replication resources updated successfully:** + +If everything goes well, `KubeDB` Ops Manager will update the resources of the StatefulSet's `Pod` containers. After the scaling process completes successfully, the `KubeDB` Ops Manager updates the resources of the `MySQL` object. + +First, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-scale-group +Every 3.0s: kubectl get myops -n demo my-sc... 
suaas-appscode: Wed Aug 12 16:49:21 2020 + +NAME TYPE STATUS AGE +my-scale-group VerticalScaling Successful 4m53s +``` + +You can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the resources of the members of the `MySQL` group replication are updated. + +```bash +$ kubectl describe myops -n demo my-scale-group +Name: my-scale-group +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2021-03-10T10:54:24Z + Generation: 2 + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-10T10:54:24Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-10T10:56:19Z + Resource Version: 1083654 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mysqlopsrequests/my-scale-group + UID: 92ace405-666c-45da-aaa1-fd3d079c187d +Spec: + Database Ref: + Name: my-group + Stateful Set Ordinal: 0 + Type: VerticalScaling + Vertical Scaling: + Mysql: + Limits: + Cpu: 0.7 + Memory: 1200Mi + Requests: + Cpu: 0.7 + Memory: 1200Mi +Status: + Conditions: + Last Transition Time: 2021-03-10T10:54:24Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-scale-group + Observed Generation: 2 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2021-03-10T10:54:24Z + Message: Vertical scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-group + Observed Generation: 2 + Reason: VerticalScalingStarted + Status: True + Type: Scaling + Last Transition Time: 2021-03-10T10:56:19Z + Message: Vertical scaling performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-group + Observed Generation: 2 + Reason: SuccessfullyPerformedVerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2021-03-10T10:56:19Z + Message: Controller has successfully scaled the MySQL demo/my-scale-group + Observed Generation: 2 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 7m46s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-scale-group + Normal Starting 7m46s KubeDB Enterprise Operator Pausing MySQL database: demo/my-group + Normal Successful 7m46s KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-group for MySQLOpsRequest: my-scale-group + Normal Starting 7m46s KubeDB Enterprise Operator Vertical scaling started in MySQL: demo/my-group for MySQLOpsRequest: my-scale-group + Normal Starting 7m41s KubeDB Enterprise Operator Restarting Pod: demo/my-group-1 + Normal Starting 7m1s KubeDB Enterprise Operator Restarting Pod: demo/my-group-2 + Normal Starting 6m31s KubeDB Enterprise Operator Restarting Pod (master): demo/my-group-0 + Normal Successful 5m51s KubeDB Enterprise Operator Vertical scaling performed successfully in MySQL: demo/my-group for MySQLOpsRequest: my-scale-group + Normal Starting 5m51s KubeDB Enterprise Operator Resuming MySQL database: demo/my-group + Normal Successful 5m51s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-group + Normal Successful 5m51s KubeDB Enterprise Operator Controller has Successfully scaled the MySQL database: demo/my-group +``` + +Now, we are going to verify whether the resources of the members of the cluster have 
been updated to match the desired state. Let's check, + +```bash +$ kubectl get pod -n demo my-group-0 -o json | jq '.spec.containers[1].resources' +{ + "limits": { + "cpu": "700m", + "memory": "1200Mi" + }, + "requests": { + "cpu": "700m", + "memory": "1200Mi" + } +} +``` + +The above output verifies that we have successfully updated the resources of the `MySQL` group replication. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-group +kubectl delete myops -n demo my-scale-group +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/group-replication.yaml new file mode 100644 index 0000000000..a18eb3cfef --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/group-replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/innodb.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/innodb.yaml new file mode 100644 index 0000000000..6ee3aa67a6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/innodb.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/my-scale-group.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/my-scale-group.yaml new file mode 100644 index 0000000000..427fdce4c3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/my-scale-group.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-group + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-group + verticalScaling: + mysql: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/semi-sync.yaml new file mode 100644 index 0000000000..6f89afb8b8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/cluster/yamls/semi-sync.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 24h + errantTransactionRecoveryPolicy: 
PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/images/my-vertical_scaling.png b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/images/my-vertical_scaling.png new file mode 100644 index 0000000000..0075b0c78e Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/images/my-vertical_scaling.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/index.md b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/index.md new file mode 100644 index 0000000000..1ef66410c5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/index.md @@ -0,0 +1,65 @@ +--- +title: MySQL Vertical Scaling Overview +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-vertical-overview + name: Overview + parent: guides-mysql-scaling-vertical + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scaling MySQL + +This guide will give you an overview of how `KubeDB` Ops Manager updates the resources (for example, CPU and memory) of the `MySQL` database server. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + +## How Vertical Scaling Process Works + +The following diagram shows how the `KubeDB` Ops Manager updates the resources of the `MySQL` database server. Open the image in a new tab to see the enlarged version. + +
<figure align="center"> +  <img alt="Vertical scaling process of MySQL" src="/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/images/my-vertical_scaling.png"> +<figcaption align="center">Fig: Vertical scaling process of MySQL</figcaption> +</figure>
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `MySQL` cr. + +2. `KubeDB` community operator watches for the `MySQL` cr. + +3. When it finds one, it creates a `StatefulSet` and the related necessary resources such as Secrets, Services, etc. + +4. Then, in order to update the resources (for example, `CPU` and `Memory`) of the `MySQL` database, the user creates a `MySQLOpsRequest` cr. + +5. `KubeDB` Ops Manager watches for `MySQLOpsRequest`. + +6. When it finds one, it halts the `MySQL` object so that the `KubeDB` community operator doesn't perform any operation on the `MySQL` during the scaling process. + +7. Then the `KubeDB` Ops Manager will update the resources of the StatefulSet replicas to reach the desired state. + +8. After successfully updating the resources of the StatefulSet's replicas, the `KubeDB` Ops Manager updates the `MySQL` object resources to reflect the updated state. + +9. After successfully updating the `MySQL` resources, the `KubeDB` Ops Manager resumes the `MySQL` object so that the `KubeDB` community operator resumes its usual operations. + +In the next doc, we are going to show a step-by-step guide on updating the resources of a MySQL database using the vertical scaling operation. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/index.md b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/index.md new file mode 100644 index 0000000000..4476eac975 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/index.md @@ -0,0 +1,304 @@ +--- +title: Vertical Scaling MySQL standalone +menu: + docs_v2024.1.31: + identifier: guides-mysql-scaling-vertical-standalone + name: Standalone + parent: guides-mysql-scaling-vertical + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale MySQL Standalone + +This guide will show you how to use `KubeDB` Ops Manager to update the resources of a standalone MySQL database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + - [Vertical Scaling Overview](/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/guides/mysql/scaling/vertical-scaling/standalone/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/standalone/yamls) directory of [kubedb/doc](https://github.com/kubedb/docs) repository. 
+ +### Apply Vertical Scaling on Standalone + +Here, we are going to deploy a `MySQL` standalone using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it. + +#### Prepare MySQL Standalone + +At first, we are going to deploy a standalone using a supported `MySQL` version. Then, we are going to update the resources of the database server through vertical scaling. + +**Find supported MySQL Version:** + +When you have installed `KubeDB`, it has created `MySQLVersion` CR for all supported `MySQL` versions. Let's check the supported MySQL versions, + +```bash +$ kubectl get mysqlversion +NAME VERSION DB_IMAGE DEPRECATED AGE +5.7.25-v2 5.7.25 kubedb/mysql:5.7.25-v2 3h55m +5.7.44 5.7.44 kubedb/mysql:5.7.44 3h55m +8.0.14-v2 8.0.14 kubedb/mysql:8.0.14-v2 3h55m +8.0.20-v1 8.0.20 kubedb/mysql:8.0.20-v1 3h55m +8.0.35 8.0.35 kubedb/mysql:8.0.35 3h55m +8.0.3-v2 8.0.3 kubedb/mysql:8.0.3-v2 3h55m +``` + +The versions above that do not show `DEPRECATED` as `true` are supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Here, we are going to create a standalone using the non-deprecated `MySQL` version `8.0.35`. + +**Deploy MySQL Standalone:** + +In this section, we are going to deploy a MySQL standalone. Then, in the next section, we will update the resources of the database server using vertical scaling. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/standalone/yamls/standalone.yaml +mysql.kubedb.com/my-standalone created +``` + +**Check Standalone Ready to Scale:** + +The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator creates a new StatefulSet along with the necessary Services, Secrets, etc. +Now, watch the `MySQL` object until it reaches the `Running` state, and watch the `StatefulSet` and its pod until they are created and `Running`, + +```bash +$ watch -n 3 kubectl get my -n demo my-standalone +Every 3.0s: kubectl get my -n demo my-standalone suaas-appscode: Wed Jul 1 17:48:14 2020 + +NAME VERSION STATUS AGE +my-standalone 8.0.35 Running 2m58s + +$ watch -n 3 kubectl get sts -n demo my-standalone +Every 3.0s: kubectl get sts -n demo my-standalone suaas-appscode: Wed Jul 1 17:48:52 2020 + +NAME READY AGE +my-standalone 1/1 3m36s + +$ watch -n 3 kubectl get pod -n demo my-standalone-0 +Every 3.0s: kubectl get pod -n demo my-standalone-0 suaas-appscode: Wed Jul 1 17:50:18 2020 + +NAME READY STATUS RESTARTS AGE +my-standalone-0 1/1 Running 0 5m1s +``` + +Let's check the above Pod's container resources, + +```bash +$ kubectl get pod -n demo my-standalone-0 -o json | jq '.spec.containers[].resources' +{} +``` + +You can see that the Pod's container has no resource requests or limits set, so the scheduler is free to place the Pod on any node with available capacity by default. + +We are now ready to apply vertical scaling to this standalone database. 
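 + +If you would rather start from explicit resources instead of empty ones, they can be declared up front in the `MySQL` cr itself. A minimal, hypothetical sketch, assuming the v1alpha2 API's `spec.podTemplate.spec.resources` field (verify the exact field path against your KubeDB version): + +```yaml +# Hypothetical snippet (not part of this tutorial's YAMLs): +# initial container resources declared on the MySQL cr +spec: + podTemplate: + spec: + resources: + requests: + cpu: "500m" + memory: "1Gi" +``` +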
+ +#### Vertical Scaling + +Here, we are going to update the resources of the standalone to the desired values. + +**Create MySQLOpsRequest:** + +In order to update the resources of your database, you have to create a `MySQLOpsRequest` cr with your desired resources after scaling. Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-standalone + verticalScaling: + mysql: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-standalone` `MySQL` database. +- `spec.type` specifies that we are performing `VerticalScaling` on our database. +- `spec.verticalScaling.mysql` specifies the expected mysql container resources after scaling. + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/scaling/vertical-scaling/standalone/yamls/my-scale-standalone.yaml +mysqlopsrequest.ops.kubedb.com/my-scale-standalone created +``` + +**Verify MySQL Standalone resources updated successfully:** + +If everything goes well, `KubeDB` Ops Manager will update the resources of the StatefulSet's `Pod` containers. After the scaling process completes successfully, the `KubeDB` Ops Manager updates the resources of the `MySQL` object. + +First, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-scale-standalone +Every 3.0s: kubectl get myops -n demo my-sc... suaas-appscode: Wed Aug 12 17:21:42 2020 + +NAME TYPE STATUS AGE +my-scale-standalone VerticalScaling Successful 2m15s +``` + +We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the standalone's resources are updated. 
+```bash +$ kubectl describe myops -n demo my-scale-standalone +Name: my-scale-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2021-03-10T10:42:05Z + Generation: 2 + Operation: Update + Time: 2021-03-10T10:42:05Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-10T10:42:31Z + Resource Version: 1080528 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/mysqlopsrequests/my-scale-standalone + UID: f6371eeb-b6e3-4d9b-ba15-0dbc6a92385c +Spec: + Database Ref: + Name: my-standalone + Stateful Set Ordinal: 0 + Type: VerticalScaling + Vertical Scaling: + Mysql: + Limits: + Cpu: 0.7 + Memory: 1200Mi + Requests: + Cpu: 0.7 + Memory: 1200Mi +Status: + Conditions: + Last Transition Time: 2021-03-10T10:42:05Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-scale-standalone + Observed Generation: 2 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2021-03-10T10:42:05Z + Message: Vertical scaling started in MySQL: demo/my-standalone for MySQLOpsRequest: my-scale-standalone + Observed Generation: 2 + Reason: VerticalScalingStarted + Status: True + Type: Scaling + Last Transition Time: 2021-03-10T10:42:30Z + Message: Vertical scaling performed successfully in MySQL: demo/my-standalone for MySQLOpsRequest: my-scale-standalone + Observed Generation: 2 + Reason: SuccessfullyPerformedVerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2021-03-10T10:42:31Z + Message: Controller has successfully scaled the MySQL demo/my-scale-standalone + Observed Generation: 2 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 40s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-scale-standalone + Normal Starting 40s KubeDB Enterprise Operator Pausing MySQL database: demo/my-standalone + Normal Successful 40s KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-standalone for MySQLOpsRequest: my-scale-standalone + Normal Starting 40s KubeDB Enterprise Operator Vertical scaling started in MySQL: demo/my-standalone for MySQLOpsRequest: my-scale-standalone + Normal Starting 35s KubeDB Enterprise Operator Restarting Pod (master): demo/my-standalone-0 + Normal Successful 15s KubeDB Enterprise Operator Vertical scaling performed successfully in MySQL: demo/my-standalone for MySQLOpsRequest: my-scale-standalone + Normal Starting 14s KubeDB Enterprise Operator Resuming MySQL database: demo/my-standalone + Normal Successful 14s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-standalone + Normal Successful 14s KubeDB Enterprise Operator Controller has Successfully scaled the MySQL database: demo/my-standalone + +``` + +Now, we are going to verify whether the resources of the standalone have been updated to match the desired state. Let's check, + +```bash +$ kubectl get pod -n demo my-standalone-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "700m", + "memory": "1200Mi" + }, + "requests": { + "cpu": "700m", + "memory": "1200Mi" + } +} +``` + +The above output verifies that we have successfully updated the resources of the standalone. 
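 + +You can also confirm that the Ops Manager propagated the new resources back to the `MySQL` object itself. A quick sketch, assuming the v1alpha2 API keeps container resources under `spec.podTemplate.spec.resources` (the exact field path may vary between KubeDB versions): + +```bash +# Sketch: read the resources recorded on the MySQL cr after scaling +$ kubectl get my -n demo my-standalone -o jsonpath='{.spec.podTemplate.spec.resources}' +``` +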
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-standalone +kubectl delete myops -n demo my-scale-standalone +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/my-scale-standalone.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/my-scale-standalone.yaml new file mode 100644 index 0000000000..583a49c3bb --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/my-scale-standalone.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-scale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: my-standalone + verticalScaling: + mysql: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/standalone.yaml new file mode 100644 index 0000000000..56ef5f11e0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/scaling/vertical-scaling/standalone/yamls/standalone.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/_index.md b/content/docs/v2024.1.31/guides/mysql/schema-manager/_index.md new file mode 100644 index 0000000000..5800291ce5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/_index.md @@ -0,0 +1,22 @@ +--- +title: MySQL Schema Manager +menu: + docs_v2024.1.31: + identifier: guides-mysql-schema-manager + name: Schema Manager + parent: guides-mysql + weight: 55 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/index.md b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/index.md new file mode 100644 index 0000000000..d25d997fd5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/index.md @@ -0,0 +1,438 @@ +--- +title: Deploy MySQLDatabase +menu: + docs_v2024.1.31: + identifier: deploy-mysqldatabase + name: Deploy MySQLDatabase + parent: guides-mysql-schema-manager + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Create Database with MySQL Schema Manager + +This guide will show you how to create a database with MySQL Schema Manager using KubeDB Ops Manager. 
+ +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README). +- Install `KubeVault` in your cluster following the steps [here](https://kubevault.com/docs/latest/setup/install/kubevault/). + +- You should be familiar with the following `KubeDB` and `KubeVault` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLDatabase](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/) + - [Schema Manager Overview](/docs/v2024.1.31/guides/mysql/schema-manager/overview/) + - [KubeVault Overview](https://kubevault.com/docs/latest/concepts/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/guides/mysql/schema-manager/deploy-mysqldatabase/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/schema-manager/deploy-mysqldatabase/yamls) directory of [kubedb/doc](https://github.com/kubedb/docs) repository. + +## Deploy MySQL Server and Vault Server + +First, we are going to deploy a `MySQL Server` using the `KubeDB` operator. We are also going to deploy a `Vault Server` using the `KubeVault` operator. + +### Deploy MySQL Server + +In this section, we are going to deploy a MySQL Server. Let's deploy it using the following yaml, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Mi + allowedSchemas: + namespaces: + from: Selector + selector: + matchLabels: + app: schemaManager + terminationPolicy: WipeOut +``` + +Here, + +- `spec.version` is the name of the MySQLVersion CR. Here, we are using MySQL version `8.0.35`. +- `spec.storageType` specifies the type of storage that will be used for MySQL. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create the MySQL using an `EmptyDir` volume. +- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests. +- `spec.allowedSchemas` specifies the namespaces allowed for the `Schema Manager`. +- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MySQL CR. `WipeOut` means that the database will be deleted without restrictions. It can also be `Halt`, `Delete` or `DoNotTerminate`. Learn more about these [here](https://kubedb.com/docs/latest/guides/mysql/concepts/database/#specterminationpolicy). + +Let’s save this yaml configuration into `mysql-server.yaml`. Then create the above `MySQL` CR, + +```bash +$ kubectl apply -f mysql-server.yaml +mysql.kubedb.com/mysql-server created +``` + +### Deploy Vault Server + +In this section, we are going to deploy a Vault Server. 
Let's deploy it using the following YAML:
+
+```yaml
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.9.2
+  replicas: 1
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+      - mysql
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is a required field that specifies the original version of Vault that has been used to build the Docker image specified in the `spec.vault.image` field.
+- `spec.replicas` specifies the number of Vault nodes to deploy. It has to be a positive number.
+- `spec.allowedSecretEngines` defines the types of secret engines and the allowed namespaces from which a `SecretEngine` can be attached to the `VaultServer`.
+- `spec.unsealer` is an optional field that specifies the `Unsealer` configuration. The `Unsealer` handles automatic initialization and unsealing of Vault.
+- `spec.backend` is a required field that specifies the Vault backend storage configuration. The KubeVault operator generates the storage configuration according to this `spec.backend`.
+- `spec.authMethods` is an optional field that specifies the list of auth methods to enable in Vault.
+- `spec.terminationPolicy` is an optional field that gives the flexibility either to nullify (reject) the delete operation of the VaultServer CRD or to specify which resources the KubeVault operator should keep or delete when you delete the VaultServer CRD.
+
+Let's save this YAML configuration into `vault.yaml` and then create the above `VaultServer` CR,
+
+```bash
+$ kubectl apply -f vault.yaml
+vaultserver.kubevault.com/vault created
+```
+
+### Create Separate Namespace For Schema Manager
+
+In this section, we are going to create a new `Namespace`, and we will allow only this namespace for our `Schema Manager`. Let's create it using the following YAML:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: demox
+  labels:
+    app: schemaManager
+```
+
+Let's save this YAML configuration into `namespace.yaml` and then create the above `Namespace`,
+
+```bash
+$ kubectl apply -f namespace.yaml
+namespace/demox created
+```
+
+### Deploy Schema Manager
+
+Here, we are going to deploy the `Schema Manager` in the new `Namespace` that we have created above. Let's deploy it using the following YAML:
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MySQLDatabase
+metadata:
+  name: schema-manager
+  namespace: demox
+spec:
+  database:
+    serverRef:
+      name: mysql-server
+      namespace: demo
+    config:
+      name: demo_user
+      characterSet: utf8
+      encryption: disabled
+      readOnly: 0
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+      - kind: ServiceAccount
+        name: "tester"
+        namespace: "demox"
+    defaultTTL: "5m"
+  deletionPolicy: "Delete"
+```
+Here,
+
+- `spec.database` is a required field specifying the database server reference and the desired database configuration.
+- `spec.vaultRef` is a required field that specifies which KubeVault server to use for user management.
+- `spec.accessPolicy` is a required field that specifies the access permissions, such as which service account or cluster user has access, and for how long.
+- `spec.deletionPolicy` is a required field that gives the flexibility either to `nullify` (reject) the delete operation or to specify which resources KubeDB should keep or delete when you delete the CRD.
+
+Let's save this YAML configuration into `schema-manager.yaml` and apply it,
+
+```bash
+$ kubectl apply -f schema-manager.yaml
+mysqldatabase.schema.kubedb.com/schema-manager created
+```
+
+Let's check the `STATUS` of the `Schema Manager`,
+
+```bash
+$ kubectl get mysqldatabase -A
+NAMESPACE   NAME             DB_SERVER      DB_NAME     STATUS    AGE
+demox       schema-manager   mysql-server   demo_user   Current   27s
+```
+Here,
+
+> In the `STATUS` section, `Current` means that the current `Secret` of the `Schema Manager` is valid, and it will automatically become `Expired` once it reaches the `defaultTTL` limit that we've defined in the above YAML.
+
+Now, let's get the secret name from the `Schema Manager` object and retrieve the login credentials for connecting to the database,
+
+```bash
+$ kubectl get mysqldatabase schema-manager -n demox -o=jsonpath='{.status.authSecret.name}'
+schema-manager-mysql-req-o2j0jk
+
+$ kubectl view-secret schema-manager-mysql-req-o2j0jk -n demox -a
+password=bCfsp77bWztyZwH-i4F6
+username=v-kubernetes-k8s.dc833e-txGUfwPa
+```
+
+### Insert Sample Data
+
+Here, we are going to connect to the database with the login credentials and insert some sample data into it.
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql --user='v-kubernetes-k8s.dc833e-txGUfwPa' --password='bCfsp77bWztyZwH-i4F6'
+
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 287
+Server version: 8.0.35 MySQL Community Server - GPL
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| demo_user          |
+| information_schema |
++--------------------+
+2 rows in set (0.01 sec)
+
+mysql> USE demo_user;
+Database changed
+
+mysql> CREATE TABLE random(name varchar(20));
+Query OK, 0 rows affected (0.02 sec)
+
+mysql> INSERT INTO random(name) value('KubeDB');
+Query OK, 1 row affected (0.00 sec)
+
+mysql> INSERT INTO random(name) value('KubeVault');
+Query OK, 1 row affected (0.01 sec)
+
+mysql> SELECT * FROM random;
++-----------+
+| name      |
++-----------+
+| KubeDB    |
+| KubeVault |
++-----------+
+2 rows in set (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+Now, let's check the `STATUS` of the `Schema Manager` again,
+
+```bash
+$ kubectl get mysqldatabase -A
+NAMESPACE   NAME             DB_SERVER      DB_NAME     STATUS    AGE
+demox       schema-manager   mysql-server   demo_user   Expired   5m35s
+```
+
+Here, we can see that the `STATUS` of the `schema-manager` is `Expired` because it has exceeded `defaultTTL: "5m"`, which means the current `Secret` of the `Schema Manager` isn't valid anymore. Now, if we try to connect and log in with the credentials that we acquired before from the `Schema Manager`, it won't work.
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql --user='v-kubernetes-k8s.dc833e-txGUfwPa' --password='bCfsp77bWztyZwH-i4F6'
+ERROR 1045 (28000): Access denied for user 'v-kubernetes-k8s.dc833e-txGUfwPa'@'localhost' (using password: YES)
+```
+> We can't connect to the database with the expired login credentials. Even a session that is already connected loses access once the credentials expire.
+
+## Alter Database
+
+In this section, we are going to alter the database by changing some of its characteristics. For this demonstration, we have to log in as the database admin.
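+
+The admin (root) credentials live in a Secret created by KubeDB. A minimal sketch of retrieving the password, assuming the default `{db-name}-auth` secret naming convention:
+
+```bash
+# Decode the root password from the auth secret created alongside the server
+$ kubectl get secret mysql-server-auth -n demo \
+    -o jsonpath='{.data.password}' | base64 -d
+```
+
+Inside the database pod, the same credentials are also exposed through the `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD` environment variables, which is what the session below relies on.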
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 2358
+Server version: 8.0.35 MySQL Community Server - GPL
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| demo_user          |
+| information_schema |
+| kubedb_system      |
+| mysql              |
+| performance_schema |
+| sys                |
++--------------------+
+6 rows in set (0.00 sec)
+
+# Check the existing characteristics
+
+mysql> SHOW CREATE DATABASE demo_user;
++-----------+----------------------------------------------------------------------------------------------------------+
+| Database  | Create Database                                                                                          |
++-----------+----------------------------------------------------------------------------------------------------------+
+| demo_user | CREATE DATABASE `demo_user` /*!40100 DEFAULT CHARACTER SET utf8mb3 */ /*!80016 DEFAULT ENCRYPTION='N' */ |
++-----------+----------------------------------------------------------------------------------------------------------+
+1 row in set (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+Now, let's change the `spec.database.config.characterSet` to `big5`.
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MySQLDatabase
+metadata:
+  name: schema-manager
+  namespace: demox
+spec:
+  database:
+    serverRef:
+      name: mysql-server
+      namespace: demo
+    config:
+      name: demo_user
+      characterSet: big5
+      encryption: disabled
+      readOnly: 0
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+      - kind: ServiceAccount
+        name: "tester"
+        namespace: "demox"
+    defaultTTL: "5m"
+  deletionPolicy: "Delete"
+```
+
+Save this YAML configuration and apply it,
+
+```bash
+$ kubectl apply -f schema-manager.yaml
+mysqldatabase.schema.kubedb.com/schema-manager configured
+```
+
+Now, let's check the modified characteristics of our database.
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 2358
+Server version: 8.0.35 MySQL Community Server - GPL
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| demo_user          |
+| information_schema |
+| kubedb_system      |
+| mysql              |
+| performance_schema |
+| sys                |
++--------------------+
+6 rows in set (0.00 sec)
+
+# Check the modified characteristics
+
+mysql> SHOW CREATE DATABASE demo_user;
++-----------+-------------------------------------------------------------------------------------------------------+
+| Database  | Create Database                                                                                       |
++-----------+-------------------------------------------------------------------------------------------------------+
+| demo_user | CREATE DATABASE `demo_user` /*!40100 DEFAULT CHARACTER SET big5 */ /*!80016 DEFAULT ENCRYPTION='N' */ |
++-----------+-------------------------------------------------------------------------------------------------------+
+1 row in set (0.00 sec)
+```
+Here, we can see that the `spec.database.config.characterSet` has been changed to `big5`. So the database has been altered successfully.
+
+> Note: When the Schema Manager is deleted, the associated database and user will also be deleted.
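+
+Since `deletionPolicy` is set to `"Delete"`, you can observe this cascade yourself. A minimal sketch, as a hypothetical follow-up rather than part of the original walkthrough:
+
+```bash
+# Deleting the Schema Manager object should drop the demo_user database
+# and revoke the Vault-managed user along with it.
+$ kubectl delete mysqldatabase schema-manager -n demox
+```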
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete ns demox +$ kubectl delete ns demo +``` + + +## Next Steps + +- Detail concepts of [MySQLDatabase object](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/). +- Go through the concepts of [KubeVault](https://kubevault.com/docs/latest/guides). +- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/). +- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/). diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/mysql-server.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/mysql-server.yaml new file mode 100644 index 0000000000..810c9bec70 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/mysql-server.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Mi + allowedSchemas: + namespaces: + from: Selector + selector: + matchLabels: + app: schemaManager + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/namespace.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/namespace.yaml new file mode 100644 index 0000000000..d1662e68ff --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/namespace.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demox + labels: + app: schemaManager \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/schema-manager.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/schema-manager.yaml new file mode 100644 index 0000000000..5aa3ddbb13 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/schema-manager.yaml @@ -0,0 +1,25 @@ +apiVersion: schema.kubedb.com/v1alpha1 +kind: MySQLDatabase +metadata: + name: schema-manager + namespace: demox +spec: + database: + serverRef: + name: mysql-server + namespace: demo + config: + name: demo_user + characterSet: utf8 + encryption: disabled + readOnly: 0 + vaultRef: + name: vault + namespace: demo + accessPolicy: + subjects: + - kind: ServiceAccount + name: "tester" + namespace: "demox" + defaultTTL: "5m" + deletionPolicy: "Delete" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/vault.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/vault.yaml new file mode 100644 index 0000000000..34f3b1460c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/deploy-mysqldatabase/yamls/vault.yaml @@ -0,0 +1,31 @@ +apiVersion: kubevault.com/v1alpha1 +kind: VaultServer +metadata: + name: vault + namespace: demo +spec: + version: 1.9.2 + replicas: 1 + allowedSecretEngines: + namespaces: + from: All + secretEngines: + - mysql + unsealer: + secretShares: 5 + secretThreshold: 3 + mode: + kubernetesSecret: + secretName: vault-keys + backend: + raft: + path: "/vault/data" + storage: + storageClassName: "standard" + resources: + requests: + storage: 1Gi + 
  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/index.md b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/index.md
new file mode 100644
index 0000000000..deb1565df2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/index.md
@@ -0,0 +1,355 @@
+---
+title: Initializing with Script
+menu:
+  docs_v2024.1.31:
+    identifier: mysql-initializing-with-script
+    name: Initializing with Script
+    parent: guides-mysql-schema-manager
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Initializing with Script
+
+This guide will show you how to create a database and initialize it with a script using the MySQL `Schema Manager` in KubeDB Ops Manager.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install `KubeVault` in your cluster following the steps [here](https://kubevault.com/docs/latest/setup/install/kubevault/).
+
+- You should be familiar with the following `KubeDB` and `KubeVault` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQLDatabase](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/)
+  - [Schema Manager Overview](/docs/v2024.1.31/guides/mysql/schema-manager/overview/)
+  - [KubeVault Overview](https://kubevault.com/docs/latest/concepts/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/guides/mysql/schema-manager/initializing-with-script/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/schema-manager/initializing-with-script/yamls) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Deploy MySQL Server and Vault Server
+
+Here, we are going to deploy a `MySQL` server using the `KubeDB` operator and a `Vault` server using the `KubeVault` operator.
+
+### Deploy MySQL Server
+
+In this section, we are going to deploy a MySQL Server. Let's deploy it using the following YAML:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 200Mi
+  allowedSchemas:
+    namespaces:
+      from: Selector
+      selector:
+        matchLabels:
+          app: schemaManager
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is the name of the MySQLVersion CR. Here, we are using MySQL version `8.0.35`.
+- `spec.storageType` specifies the type of storage that will be used for MySQL. It can be `Durable` or `Ephemeral`.
The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the MySQL database using an `EmptyDir` volume.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. So each member will get a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.allowedSchemas` specifies the namespaces from which a `Schema Manager` is allowed to access this server.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the MySQL CR. `WipeOut` means that the database will be deleted without restrictions. It can also be `Halt`, `Delete` or `DoNotTerminate`. Learn more about these [here](https://kubedb.com/docs/latest/guides/mysql/concepts/database/#specterminationpolicy).
+
+Let's save this YAML configuration into `mysql-server.yaml` and then create the above `MySQL` CR,
+
+```bash
+$ kubectl apply -f mysql-server.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+### Deploy Vault Server
+
+In this section, we are going to deploy a Vault Server. Let's deploy it using the following YAML:
+
+```yaml
+apiVersion: kubevault.com/v1alpha1
+kind: VaultServer
+metadata:
+  name: vault
+  namespace: demo
+spec:
+  version: 1.9.2
+  replicas: 1
+  allowedSecretEngines:
+    namespaces:
+      from: All
+    secretEngines:
+      - mysql
+  unsealer:
+    secretShares: 5
+    secretThreshold: 3
+    mode:
+      kubernetesSecret:
+        secretName: vault-keys
+  backend:
+    raft:
+      path: "/vault/data"
+      storage:
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: 1Gi
+  authMethods:
+    - type: kubernetes
+      path: kubernetes
+  terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` is a required field that specifies the original version of Vault that has been used to build the Docker image specified in the `spec.vault.image` field.
+- `spec.replicas` specifies the number of Vault nodes to deploy. It has to be a positive number.
+- `spec.allowedSecretEngines` defines the types of secret engines and the allowed namespaces from which a `SecretEngine` can be attached to the `VaultServer`.
+- `spec.unsealer` is an optional field that specifies the `Unsealer` configuration. The `Unsealer` handles automatic initialization and unsealing of Vault.
+- `spec.backend` is a required field that specifies the Vault backend storage configuration. The KubeVault operator generates the storage configuration according to this `spec.backend`.
+- `spec.authMethods` is an optional field that specifies the list of auth methods to enable in Vault.
+- `spec.terminationPolicy` is an optional field that gives the flexibility either to nullify (reject) the delete operation of the VaultServer CRD or to specify which resources the KubeVault operator should keep or delete when you delete the VaultServer CRD.
+
+Let's save this YAML configuration into `vault.yaml` and then create the above `VaultServer` CR,
+
+```bash
+$ kubectl apply -f vault.yaml
+vaultserver.kubevault.com/vault created
+```
+### Create Separate Namespace For Schema Manager
+
+In this section, we are going to create a new `Namespace`, and we will allow only this namespace for our `Schema Manager`.
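+
+Note that the Schema Manager object later in this guide also references a ServiceAccount named `script-tester` through its `accessPolicy`. If it does not already exist, it can be created once the namespace below is in place; a minimal sketch, assuming no additional RBAC is needed for this demo:
+
+```bash
+# Hypothetical prerequisite: the ServiceAccount referenced by accessPolicy
+kubectl create serviceaccount script-tester -n demox
+```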
Let's create the `Namespace` using the following YAML:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: demox
+  labels:
+    app: schemaManager
+```
+
+Let's save this YAML configuration into `namespace.yaml` and then create the above `Namespace`,
+
+```bash
+$ kubectl apply -f namespace.yaml
+namespace/demox created
+```
+
+### SQL Script with ConfigMap
+
+In this section, we are going to create a `ConfigMap` that contains the initialization script for our database.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: scripter
+  namespace: demox
+data:
+  script.sql: |-
+    use demo_script;
+    create table Product(Name varchar(50),Title varchar(50));
+    insert into Product(Name,Title) value('KubeDB','Database Management Solution');
+    insert into Product(Name,Title) value('Stash','Backup and Recovery Solution');
+```
+
+```bash
+$ kubectl apply -f configmap.yaml
+configmap/scripter created
+```
+
+### Deploy Schema Manager Initialized with Script
+
+Here, we are going to deploy the `Schema Manager` in the new `Namespace` that we have created above. Let's deploy it using the following YAML:
+
+```yaml
+apiVersion: schema.kubedb.com/v1alpha1
+kind: MySQLDatabase
+metadata:
+  name: schema-script
+  namespace: demox
+spec:
+  database:
+    serverRef:
+      name: mysql-server
+      namespace: demo
+    config:
+      name: demo_script
+  vaultRef:
+    name: vault
+    namespace: demo
+  accessPolicy:
+    subjects:
+      - kind: ServiceAccount
+        name: "script-tester"
+        namespace: "demox"
+    defaultTTL: "5m"
+  init:
+    initialized: false
+    script:
+      scriptPath: "etc/config"
+      configMap:
+        name: scripter
+  deletionPolicy: "Delete"
+```
+Here,
+
+- `spec.database` is a required field specifying the database server reference and the desired database configuration.
+- `spec.vaultRef` is a required field that specifies which KubeVault server to use for user management.
+- `spec.accessPolicy` is a required field that specifies the access permissions, such as which service account or cluster user has access, and for how long.
+- `spec.init` is an optional field containing the information of a script or a snapshot with which the database should be initialized during creation.
+- `spec.deletionPolicy` is a required field that gives the flexibility either to `nullify` (reject) the delete operation or to specify which resources KubeDB should keep or delete when you delete the CRD.
+
+Let's save this YAML configuration into `schema-script.yaml` and apply it,
+
+```bash
+$ kubectl apply -f schema-script.yaml
+mysqldatabase.schema.kubedb.com/schema-script created
+```
+
+Let's check the `STATUS` of the `Schema Manager`,
+
+```bash
+$ kubectl get mysqldatabase -A
+NAMESPACE   NAME            DB_SERVER      DB_NAME       STATUS    AGE
+demox       schema-script   mysql-server   demo_script   Current   21s
+```
+Here,
+
+> In the `STATUS` section, `Current` means that the current `Secret` of the `Schema Manager` is valid, and it will automatically become `Expired` once it reaches the `defaultTTL` limit that we've defined in the above YAML.
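+
+If you want to observe the transition from `Current` to `Expired` as it happens, you can watch the object; a minimal sketch, assuming default kubectl watch behavior:
+
+```bash
+# Stream status changes of the Schema Manager object until interrupted
+$ kubectl get mysqldatabase schema-script -n demox -w
+```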
+
+Now, let's get the secret name from the `schema-script` object and retrieve the login credentials for connecting to the database,
+
+```bash
+$ kubectl get mysqldatabase schema-script -n demox -o=jsonpath='{.status.authSecret.name}'
+schema-script-mysql-req-s85fuw
+
+$ kubectl view-secret schema-script-mysql-req-s85fuw -n demox -a
+password=DueiiR-JyGpa3rejG2Zd
+username=v-kubernetes-k8s.dc833e-yb9r7uhs
+```
+
+### Verify Initialization
+
+Here, we are going to connect to the database with the login credentials and verify the database initialization,
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql --user='v-kubernetes-k8s.dc833e-yb9r7uhs' --password='DueiiR-JyGpa3rejG2Zd'
+
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 287
+Server version: 8.0.35 MySQL Community Server - GPL
+
+mysql> SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| demo_script        |
+| information_schema |
++--------------------+
+2 rows in set (0.00 sec)
+
+mysql> USE demo_script;
+Reading table information for completion of table and column names
+You can turn off this feature to get a quicker startup with -A
+
+Database changed
+mysql> SHOW TABLES;
++-----------------------+
+| Tables_in_demo_script |
++-----------------------+
+| Product               |
++-----------------------+
+1 row in set (0.00 sec)
+
+mysql> SELECT * FROM Product;
++--------+------------------------------+
+| Name   | Title                        |
++--------+------------------------------+
+| KubeDB | Database Management Solution |
+| Stash  | Backup and Recovery Solution |
++--------+------------------------------+
+2 rows in set (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+Now, let's check the `STATUS` of the `Schema Manager` again,
+
+```bash
+$ kubectl get mysqldatabase -A
+NAMESPACE   NAME            DB_SERVER      DB_NAME       STATUS    AGE
+demox       schema-script   mysql-server   demo_script   Expired   5m27s
+```
+
+Here, we can see that the `STATUS` of the `schema-script` object is `Expired` because it has exceeded `defaultTTL: "5m"`, which means the current `Secret` of the `Schema Manager` isn't valid anymore. Now, if we try to connect and log in with the credentials that we acquired before, it won't work.
+
+```bash
+$ kubectl exec -it mysql-server-0 -n demo -c mysql -- bash
+bash-4.4# mysql --user='v-kubernetes-k8s.dc833e-yb9r7uhs' --password='DueiiR-JyGpa3rejG2Zd'
+ERROR 1045 (28000): Access denied for user 'v-kubernetes-k8s.dc833e-yb9r7uhs'@'localhost' (using password: YES)
+```
+> We can't connect to the database with the expired login credentials. Even a session that is already connected loses access once the credentials expire.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete ns demox
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MySQLDatabase object](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/).
+- Go through the concepts of [KubeVault](https://kubevault.com/docs/latest/guides).
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
+- Detail concepts of [MySQLVersion object](/docs/v2024.1.31/guides/mysql/concepts/catalog/).
diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/configmap.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/configmap.yaml new file mode 100644 index 0000000000..dfb54ca4ba --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/configmap.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: scripter + namespace: demox +data: + script.sql: |- + use demo_script; + create table Product(Name varchar(50),Title varchar(50)); + insert into Product(Name,Title) value('KubeDB','Database Management Solution'); + insert into Product(Name,Title) value('Stash','Backup and Recovery Solution'); \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/mysql-server.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/mysql-server.yaml new file mode 100644 index 0000000000..810c9bec70 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/mysql-server.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Mi + allowedSchemas: + namespaces: + from: Selector + selector: + matchLabels: + app: schemaManager + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/namespace.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/namespace.yaml new file mode 100644 index 0000000000..d1662e68ff --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/namespace.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: demox + labels: + app: schemaManager \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/schema-script.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/schema-script.yaml new file mode 100644 index 0000000000..8e4757e35b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/schema-script.yaml @@ -0,0 +1,28 @@ +apiVersion: schema.kubedb.com/v1alpha1 +kind: MySQLDatabase +metadata: + name: schema-script + namespace: demox +spec: + database: + serverRef: + name: mysql-server + namespace: demo + config: + name: demo_script + vaultRef: + name: vault + namespace: demo + accessPolicy: + subjects: + - kind: ServiceAccount + name: "script-tester" + namespace: "demox" + defaultTTL: "5m" + init: + initialized: false + script: + scriptPath: "etc/config" + configMap: + name: scripter + deletionPolicy: "Delete" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/vault.yaml b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/vault.yaml new file mode 100644 index 0000000000..34f3b1460c --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/initializing-with-script/yamls/vault.yaml @@ -0,0 +1,31 @@ +apiVersion: kubevault.com/v1alpha1 +kind: VaultServer +metadata: + name: vault + namespace: demo +spec: + version: 1.9.2 + 
replicas: 1 + allowedSecretEngines: + namespaces: + from: All + secretEngines: + - mysql + unsealer: + secretShares: 5 + secretThreshold: 3 + mode: + kubernetesSecret: + secretName: vault-keys + backend: + raft: + path: "/vault/data" + storage: + storageClassName: "standard" + resources: + requests: + storage: 1Gi + authMethods: + - type: kubernetes + path: kubernetes + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/images/mysql-schema-manager-diagram.svg b/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/images/mysql-schema-manager-diagram.svg new file mode 100644 index 0000000000..f3ee25adfb --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/images/mysql-schema-manager-diagram.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/index.md b/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/index.md new file mode 100644 index 0000000000..b27e974d18 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/schema-manager/overview/index.md @@ -0,0 +1,66 @@ +--- +title: MySQL Schema Manager Overview +menu: + docs_v2024.1.31: + identifier: mysql-schema-manager-overview + name: Overview + parent: guides-mysql-schema-manager + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLDatabase](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase/) + + +## What is Schema Manager + +`Schema Manager` is a Kubernetes operator developed by AppsCode that implements multi-tenancy inside KubeDB provisioned database servers like MySQL, MariaDB, PosgreSQL and MongoDB etc. With `Schema Manager` one can create database into specific database server. An user will also be created with KubeVault and assigned to that database. Using the newly created user credential one can access the database and run operations into it. One may pass the database server reference, configuration, user access policy through a single yaml and `Schema Manager` will do all the task above mentioned. `Schema Manager` also allows initializing the database and restore snapshot while bootstrap. + + +## How MySQL Schema Manager Process Works + +The following diagram shows how MySQL Schema Manager process worked. Open the image in a new tab to see the enlarged version. + +
+ MySQL Schema Mananger Diagram +
Fig: Process of MySQL Schema Manager
+
+ +The process consists of the following steps: + +1. At first the user will deploy a `MySQLDatabase` object. + +2. Once a `MySQLDatabase` object is deployed to the cluster, the `Schema Manager` operator first verifies the object by checking the `Double-OptIn`. + +3. After the `Double-OptIn` verification `Schema Manager` operator checks in the `MySQL` server if the target database is already present or not. If the database already present there, then the `MySQLDatabase` object will be immediately denied. + +4. Once everything is ok in the `MySQL` server side, then the target database will be created and an entry for that will be entered in the `kubedb_system` database. + +5. After successful database creation, the `Vault` server creates a user in the `MySQL` server. The user gets all the privileges on our target database and its credentials are served with a secret. + +6. The user credentials secret reference is patched with the `MySQLDatabase` object yaml in the `.status.authSecret.name` field. + +7. If there is any `init script` associated with the `MySQLDatabase` object, it will be executed in this step with the `Schema Manager` operator. + +8. The user can also provide a `snapshot` reference for initialization. In that case `Schema Manager` operator fetches necessary `appbinding`, `secrets`, `repository` and then the `Stash` operator takes action with all the information. + +In the next doc, we are going to show a step by step guide of using MySQL Schema Manager with KubeDB. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/tls/_index.md b/content/docs/v2024.1.31/guides/mysql/tls/_index.md new file mode 100644 index 0000000000..1f2bdc3b50 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/_index.md @@ -0,0 +1,22 @@ +--- +title: TLS/SSL Encryption +menu: + docs_v2024.1.31: + identifier: guides-mysql-tls + name: TLS/SSL Encryption + parent: guides-mysql + weight: 45 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/index.md b/content/docs/v2024.1.31/guides/mysql/tls/configure/index.md new file mode 100644 index 0000000000..86924605f3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/index.md @@ -0,0 +1,525 @@ +--- +title: TLS/SSL (Transport Encryption) +menu: + docs_v2024.1.31: + identifier: guides-mysql-tls-configure + name: MySQL TLS/SSL Configuration + parent: guides-mysql-tls + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Configure TLS/SSL in MySQL + +`KubeDB` supports providing TLS/SSL encryption (via, `requireSSL` mode) for `MySQL`. This tutorial will show you how to use `KubeDB` to deploy a `MySQL` database with TLS/SSL configuration. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). 
+ +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates. + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/guides/mysql/tls/configure/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +### Deploy MySQL database with TLS/SSL configuration + +As pre-requisite, at first, we are going to create an Issuer/ClusterIssuer. This Issuer/ClusterIssuer is used to create certificates. Then we are going to deploy a MySQL standalone and a group replication that will be configured with these certificates by `KubeDB` operator. + +### Create Issuer/ClusterIssuer + +Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer, + +- Start off by generating our ca-certificates using openssl, + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mysql/O=kubedb" +``` + +- create a secret using the certificate files we have just generated, + +```bash +kubectl create secret tls my-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/my-ca created +``` + +Now, we are going to create an `Issuer` using the `my-ca` secret that hols the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create, + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mysql-issuer + namespace: demo +spec: + ca: + secretName: my-ca +``` + +Let’s create the `Issuer` cr we have shown above, + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls/issuer.yaml +issuer.cert-manager.io/mysql-issuer created +``` + +## Deploy MySQL with TLS/SSL configuration + + + + +
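+
+Whichever topology you choose below, make sure the `Issuer` is in a `Ready` state first; a minimal check, not part of the original flow, assuming cert-manager's standard status columns:
+
+```bash
+# The READY column should report True before deploying the database
+kubectl get issuer mysql-issuer -n demo
+```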
+
+ +Now, we are going to deploy a `MySQL` group replication with TLS/SSL configuration. Below is the YAML for MySQL group replication that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut +``` + +**Deploy MySQL group replication:** + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls/group-replication.yaml +mysql.kubedb.com/some-mysql created +``` + +
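+
+One detail worth noting: the value of `spec.topology.group.name` is used as MySQL's `group_replication_group_name`, which must be a valid UUID. A minimal sketch of generating a fresh one, assuming `uuidgen` is available locally:
+
+```bash
+# Generate a UUID to use as spec.topology.group.name
+uuidgen
+```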
+ +
+ +Now, we are going to deploy a `MySQL` Innodb with TLS/SSL configuration. Below is the YAML for MySQL innodb cluster that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.31-innodb" + replicas: 3 + topology: + mode: InnoDBCluster + innoDBCluster: + router: + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut +``` + +**Deploy MySQL Innodb Cluster:** + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls/innodb.yaml +mysql.kubedb.com/some-mysql created +``` +
+ +
+Now, we are going to deploy a `MySQL` Semi sync cluster with TLS/SSL configuration. Below is the YAML for MySQL semi-sync cluster that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 24h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut +``` + +**Deploy MySQL Semi-sync:** + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls/semi-sync.yaml +mysql.kubedb.com/some-mysql created +``` + +
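+
+Once the cluster is up, you can check that the semi-synchronous replication plugins are active; a minimal sketch, not part of the original walkthrough, assuming MySQL's standard `Rpl_semi_sync_*` status variables:
+
+```bash
+# Query semi-sync status variables on the primary pod
+$ kubectl exec -it -n demo some-mysql-0 -c mysql -- \
+    bash -c "mysql -uroot -p\$MYSQL_ROOT_PASSWORD -e \"SHOW STATUS LIKE 'Rpl_semi_sync%'\""
+```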
+ + +
+ +Now, we are going to deploy a stand alone `MySQL` with TLS/SSL configuration. Below is the YAML for stand alone MySQL that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut +``` + +**Deploy Stand Alone MySQL:** + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/tls/configure/yamls/standalone.yaml +mysql.kubedb.com/some-mysql created +``` +
+ +
+ + + +**Wait for the database to be ready :** + +Now, watch `MySQL` is going to `Running` state and also watch `StatefulSet` and its pod is created and going to `Running` state, + +```bash +$ watch -n 3 kubectl get my -n demo some-mysql +Every 3.0s: kubectl get my -n demo some-mysql suaas-appscode: Thu Aug 13 19:02:15 2020 + +NAME VERSION STATUS AGE +some-mysql 8.0.35 Running 9m41s + +$ watch -n 3 kubectl get sts -n demo some-mysql +Every 3.0s: kubectl get sts -n demo some-mysql suaas-appscode: Thu Aug 13 19:02:42 2020 + +NAME READY AGE +some-mysql 3/3 9m51s + +$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=some-mysql +Every 3.0s: kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com suaas-appscode: Thu Aug 13 19:03:02 2020 + +NAME READY STATUS RESTARTS AGE +some-mysql-0 2/2 Running 0 10m +some-mysql-1 2/2 Running 0 4m4s +some-mysql-2 2/2 Running 0 2m3s +``` + +**Verify tls-secrets created successfully :** + +If everything goes well, you can see that our tls-secrets will be created which contains server, client, exporter certificate. Server tls-secret will be used for server configuration and client tls-secret will be used for a secure connection. + +All tls-secret are created by `KubeDB` Ops Manager. Default tls-secret name formed as _{mysql-object-name}-{cert-alias}-cert_. + +Let's check the tls-secrets have created, + +```bash +$ kubectl get secrets -n demo | grep "some-mysql" +some-mysql-client-cert kubernetes.io/tls 3 13m +some-mysql-auth Opaque 2 13m +some-mysql-metrics-exporter-cert kubernetes.io/tls 3 13m +some-mysql-metrics-exporter-config Opaque 1 13m +some-mysql-server-cert kubernetes.io/tls 3 13m +some-mysql-token-49sjm kubernetes.io/service-account-token 3 13m +``` + +**Verify MySQL Standalone configured to TLS/SSL:** + +Now, we are going to connect to the database for verifying the `MySQL` group replication has configured with TLS/SSL encryption. + +Let's exec into the pod to verify TLS/SSL configuration, + +```bash +$ kubectl exec -it -n demo some-mysql-0 -c mysql -- bash +root@my-group-0:/# ls /etc/mysql/certs/ +ca.crt client.crt client.key server.crt server.key + +root@my-group-0:/# mysql -u${MYSQL_ROOT_USERNAME} -p{MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 27 +Server version: 8.0.23 MySQL Community Server - GPL + +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
+ +mysql> SHOW VARIABLES LIKE '%ssl%'; ++---------------------------------------------------+-----------------------------+ +| Variable_name | Value | ++---------------------------------------------------+-----------------------------+ +| admin_ssl_ca | | +| admin_ssl_capath | | +| admin_ssl_cert | | +| admin_ssl_cipher | | +| admin_ssl_crl | | +| admin_ssl_crlpath | | +| admin_ssl_key | | +| group_replication_recovery_ssl_ca | | +| group_replication_recovery_ssl_capath | | +| group_replication_recovery_ssl_cert | | +| group_replication_recovery_ssl_cipher | | +| group_replication_recovery_ssl_crl | | +| group_replication_recovery_ssl_crlpath | | +| group_replication_recovery_ssl_key | | +| group_replication_recovery_ssl_verify_server_cert | OFF | +| group_replication_recovery_use_ssl | ON | +| group_replication_ssl_mode | REQUIRED | +| have_openssl | YES | +| have_ssl | YES | +| mysqlx_ssl_ca | | +| mysqlx_ssl_capath | | +| mysqlx_ssl_cert | | +| mysqlx_ssl_cipher | | +| mysqlx_ssl_crl | | +| mysqlx_ssl_crlpath | | +| mysqlx_ssl_key | | +| ssl_ca | /etc/mysql/certs/ca.crt | +| ssl_capath | /etc/mysql/certs | +| ssl_cert | /etc/mysql/certs/server.crt | +| ssl_cipher | | +| ssl_crl | | +| ssl_crlpath | | +| ssl_fips_mode | OFF | +| ssl_key | /etc/mysql/certs/server.key | ++---------------------------------------------------+-----------------------------+ +34 rows in set (0.02 sec) + + +mysql> SHOW VARIABLES LIKE '%require_secure_transport%'; ++--------------------------+-------+ +| Variable_name | Value | ++--------------------------+-------+ +| require_secure_transport | ON | ++--------------------------+-------+ +1 row in set (0.00 sec) + +mysql> exit +Bye +``` + +The above output shows that the `MySQL` server is configured to TLS/SSL. You can also see that the `.crt` and `.key` files are stored in the `/etc/ mysql/certs/` directory for client and server. + +**Verify secure connection for SSL required user:** + +Now, you can create an SSL required user that will be used to connect to the database with a secure connection. + +Let's connect to the database server with a secure connection, + +```bash +# creating SSL required user +$ kubectl exec -it -n demo some-mysql-0 -c mysql -- bash + +root@my-group-0:/# mysql -uroot -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 27 +Server version: 8.0.23 MySQL Community Server - GPL + +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> CREATE USER 'mysql_user'@'localhost' IDENTIFIED BY 'pass' REQUIRE SSL; +Query OK, 0 rows affected (0.01 sec) + +mysql> FLUSH PRIVILEGES; +Query OK, 0 rows affected (0.00 sec) + +mysql> exit +Bye + +# accessing database server with newly created user +root@my-group-0:/# mysql -umysql_user -ppass +mysql: [Warning] Using a password on the command line interface can be insecure. +ERROR 1045 (28000): Access denied for user 'mysql_user'@'localhost' (using password: YES) + +# accessing the database server newly created user with ssl-mode=disable +root@my-group-0:/# mysql -umysql_user -ppass --ssl-mode=disabled +mysql: [Warning] Using a password on the command line interface can be insecure. 
+
+ERROR 1045 (28000): Access denied for user 'mysql_user'@'localhost' (using password: YES)
+
+# accessing the database server with the newly created user and client certificates
+root@my-group-0:/# mysql -umysql_user -ppass --ssl-ca=/etc/mysql/certs/ca.crt --ssl-cert=/etc/mysql/certs/client.crt --ssl-key=/etc/mysql/certs/client.key
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 384
+Server version: 5.7.29-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+You are enforcing ssl connection via unix socket. Please consider
+switching ssl off as it does not make connection via unix socket
+any more secure.
+mysql> exit
+Bye
+```
+
+From the above output, you can see that we can access the database securely only by using the client certificate; otherwise, it shows "Access denied". Our client certificates are stored in the `/etc/mysql/certs/` directory.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete my -n demo some-mysql
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MySQL object](/docs/v2024.1.31/guides/mysql/concepts/database/).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/group-replication.yaml b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/group-replication.yaml
new file mode 100644
index 0000000000..764c855824
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/group-replication.yaml
@@ -0,0 +1,36 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: some-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+    group:
+      name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: mysql-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/innodb.yaml b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/innodb.yaml
new file mode 100644
index 0000000000..8facf010d2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/innodb.yaml
@@ -0,0 +1,37 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: some-mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  replicas: 3
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: mysql-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/issuer.yaml b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/issuer.yaml new file mode 100644 index 0000000000..9ec9f3bbd8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mysql-issuer + namespace: demo +spec: + ca: + secretName: my-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/semi-sync.yaml new file mode 100644 index 0000000000..22ccfdefa1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/semi-sync.yaml @@ -0,0 +1,38 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: SemiSync + semiSync: + sourceWaitForReplicaCount: 1 + sourceTimeout: 24h + errantTransactionRecoveryPolicy: PseudoTransaction + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/standalone.yaml new file mode 100644 index 0000000000..e181459de8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/configure/yamls/standalone.yaml @@ -0,0 +1,31 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: some-mysql + namespace: demo +spec: + version: "8.0.35" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: mysql-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/tls/overview/images/my-tls-ssl.png b/content/docs/v2024.1.31/guides/mysql/tls/overview/images/my-tls-ssl.png new file mode 100644 index 0000000000..70e62dd240 Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/tls/overview/images/my-tls-ssl.png differ diff --git a/content/docs/v2024.1.31/guides/mysql/tls/overview/index.md b/content/docs/v2024.1.31/guides/mysql/tls/overview/index.md new file mode 100644 index 0000000000..45daff21db --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/tls/overview/index.md @@ -0,0 +1,81 @@ +--- +title: MySQL TLS/SSL Encryption Overview +menu: + docs_v2024.1.31: + identifier: guides-mysql-tls-overview + name: Overview + parent: guides-mysql-tls + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). 
+
+# MySQL TLS/SSL Encryption
+
+**Prerequisite :** To configure TLS/SSL in `MySQL`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following CRs of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**MySQL CRD Specification:**
+
+KubeDB uses the following CR fields to enable SSL/TLS encryption in `MySQL`.
+
+- `spec:`
+  - `requireSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [mysql concept](/docs/v2024.1.31/guides/mysql/concepts/database/#).
+
+When `requireSSL` is set, users must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt`, and `tls.key`, are used to configure the `MySQL` server, exporter, etc.
+
+## How TLS/SSL Is Configured in MySQL
+
+The following figure shows how `KubeDB` enterprise is used to configure TLS/SSL in MySQL. Open the image in a new tab to see the enlarged version.
+ <img alt="Deploy MySQL with TLS/SSL" src="images/my-tls-ssl.png"> +<figcaption align="center">Fig: Deploy MySQL with TLS/SSL</figcaption> +</figure>
+ + +The process of deploying MySQL with TLS/SSL configuration consists of the following steps: + +1. At first, a user creates an `Issuer/ClusterIssuer` cr. + +2. Then the user creates a `MySQL` cr. + +3. `KubeDB` community operator watches for the `MySQL` cr. + +4. When it finds one, it creates `Secret`, `Service`, etc. for the `MySQL` database. + +5. `KubeDB` Ops Manager watches for `MySQL` (5c), `Issuer/ClusterIssuer` (5b), `Secret` and `Service` (5a). + +6. When it finds all the resources (`MySQL`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` using the `tls.issuerRef` and `tls.certificates` field specifications from the `MySQL` cr. + +7. `cert-manager` watches for certificates. + +8. When it finds one, it creates certificate secrets `tls-secrets` (server, client, exporter secrets, etc.) that hold the actual self-signed certificates. + +9. `KubeDB` community operator watches for the certificate secrets `tls-secrets`. + +10. When it finds all the tls-secrets, it creates a `StatefulSet` so that the MySQL server is configured with TLS/SSL. + +In the next doc, we are going to show a step-by-step guide on how to configure a `MySQL` database with TLS/SSL. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/_index.md b/content/docs/v2024.1.31/guides/mysql/update-version/_index.md new file mode 100644 index 0000000000..33ede84072 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating MySQL +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating + name: UpdateVersion MySQL + parent: guides-mysql + weight: 42 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/_index.md b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/_index.md new file mode 100644 index 0000000000..0b32f9f1f2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating MySQL major version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-major + name: Major version + parent: guides-mysql-updating + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/index.md b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/index.md new file mode 100644 index 0000000000..7c5f78f881 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/index.md @@ -0,0 +1,393 @@ +--- +title: Updating MySQL group replication major version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-major-group + name: Group Replication + parent: guides-mysql-updating-major + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +---
+ +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update major version of MySQL Group Replication + +This guide will show you how to use `KubeDB` Ops Manager to update the major version of `MySQL` Group Replication. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + - [Updating Overview](/docs/v2024.1.31/guides/mysql/update-version/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mysql/update-version/majorversion/group-replication/yamls](/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository. + +### Apply Version Update on Group Replication + +Here, we are going to deploy a `MySQL` group replication using a version supported by the `KubeDB` operator. Then we are going to apply a version update on it. + +#### Prepare Group Replication + +At first, we are going to deploy a group replication using a supported `MySQL` version from which it is possible to update to another version. In the next two sections, we are going to find out the supported versions and the version update constraints. + +**Find supported MySQL Version:** + +When you install `KubeDB`, it creates a `MySQLVersion` CR for every supported `MySQL` version. Let's check the supported `MySQL` versions, + +```bash +$ kubectl get mysqlversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +5.7.35-v1 5.7.35 Official mysql:5.7.35 13d +5.7.44 5.7.44 Official mysql:5.7.44 13d +8.0.17 8.0.17 Official mysql:8.0.17 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.31-innodb 8.0.35 MySQL mysql/mysql-server:8.0.35 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.3-v4 8.0.3 Official mysql:8.0.3 13d + +``` + +Every version above that does not show `DEPRECATED` as `true` is supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Now, we are going to select a non-deprecated version from `MySQLVersion` for the `MySQL` group replication from which it will be possible to update to another version. In the next section, we are going to verify the version update constraints. + +**Check update Constraints:** + +A database version update constraint shows whether it is possible to update from one version to another.
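+ +If you only need the constraint fields rather than the whole object, you can also query them directly with a JSONPath expression. This is a minimal sketch; it assumes your `kubectl` context points at the cluster where the KubeDB catalog is installed: + +```bash +# Print only the update constraints of the 5.7.44 catalog entry +$ kubectl get mysqlversion 5.7.44 -o jsonpath='{.spec.updateConstraints}{"\n"}' +```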
Let's check the version update constraints of `MySQL` `5.7.44`, + +```bash +$ kubectl get mysqlversion 5.7.44 -o yaml | kubectl neat +apiVersion: catalog.kubedb.com/v1alpha1 +kind: MySQLVersion +metadata: + annotations: + meta.helm.sh/release-name: kubedb-catalog + meta.helm.sh/release-namespace: kubedb + creationTimestamp: "2022-06-16T13:52:58Z" + generation: 1 + labels: + app.kubernetes.io/instance: kubedb-catalog + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: kubedb-catalog + app.kubernetes.io/version: v2022.03.28 + helm.sh/chart: kubedb-catalog-v2022.03.28 + name: 5.7.44 + resourceVersion: "1092465" + uid: 4cc87fc8-efd7-4e69-bb12-4454a2b1bf06 +spec: + coordinator: + image: kubedb/mysql-coordinator:v0.5.0 + db: + image: mysql:5.7.44 + distribution: Official + exporter: + image: kubedb/mysqld-exporter:v0.13.1 + initContainer: + image: kubedb/mysql-init:5.7-v2 + podSecurityPolicies: + databasePolicyName: mysql-db + replicationModeDetector: + image: kubedb/replication-mode-detector:v0.13.0 + stash: + addon: + backupTask: + name: mysql-backup-5.7.25 + restoreTask: + name: mysql-restore-5.7.25 + updateConstraints: + denylist: + groupReplication: + - < 5.7.44 + standalone: + - < 5.7.44 + version: 5.7.44 + +``` + +The `spec.updateConstraints.denylist` of `5.7.44` above shows that updating to a version below `5.7.44` is not allowed for both group replication and standalone. That means it is possible to update to any version above `5.7.44`. Here, we are going to create a `MySQL` group replication using MySQL `5.7.44`. Then we are going to update this version to `8.0.35`. + +**Deploy MySQL Group Replication:** + +In this section, we are going to deploy a MySQL group replication with 3 members. Then, in the next section, we will update the version of the members by applying a `MySQLOpsRequest`. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/majorversion/group-replication/yamls/group_replication.yaml +mysql.kubedb.com/my-group created +``` + +**Wait for the cluster to be ready:** + +The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator creates a new StatefulSet, Services, Secrets, etc. A secret called `my-group-auth` (format: `{mysql-object-name}-auth`) will be created to store the password for the mysql superuser.
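+ +As an alternative to the `watch` commands below, you can block until the database reports readiness. This is a small sketch; it assumes the operator sets a `Ready` condition on the `MySQL` object's status, as recent KubeDB releases do: + +```bash +# Wait up to 10 minutes for the MySQL object to report a Ready condition +$ kubectl wait --for=condition=Ready my/my-group -n demo --timeout=10m +```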
+Now, watch the `MySQL` object as it goes to the `Running` state, and also watch the `StatefulSet` and its pods as they are created and go to the `Running` state, + +```bash +$ watch -n 3 kubectl get my -n demo my-group + +NAME VERSION STATUS AGE +my-group 5.7.44 Running 5m52s + +$ watch -n 3 kubectl get sts -n demo my-group + +NAME READY AGE +my-group 3/3 7m12s + +$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group + +NAME READY STATUS RESTARTS AGE +my-group-0 2/2 Running 0 11m +my-group-1 2/2 Running 0 9m53s +my-group-2 2/2 Running 0 6m48s +``` + +Let's verify the `MySQL`, the `StatefulSet`, and its `Pod` image versions, + +```bash +$ kubectl get my -n demo my-group -o=jsonpath='{.spec.version}{"\n"}' +5.7.44 + +$ kubectl get sts -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.template.spec.containers[1].image' +"kubedb/mysql:5.7.44" + +$ kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.containers[1].image' +"kubedb/mysql:5.7.44" +"kubedb/mysql:5.7.44" +"kubedb/mysql:5.7.44" +``` + +Let's also verify that the StatefulSet's pods have joined the group replication, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +7gUARa&Jkg.ypJE8 + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='7gUARa&Jkg.ypJE8' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+ +| group_replication_applier | b0e71e0c-f849-11ec-a315-46392c50e39c | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | +| group_replication_applier | b34b16d7-f849-11ec-9362-a2f432876ee4 | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | +| group_replication_applier | b5542a4a-f849-11ec-9a75-3e8abd17fee6 | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+ + +``` + +We are ready to apply a version update on this `MySQL` group replication. + +#### UpdateVersion + +Here, we are going to update the `MySQL` group replication from `5.7.44` to `8.0.35`. + +**Create MySQLOpsRequest:** + +To update your database cluster, you have to create a `MySQLOpsRequest` cr with your desired version that is supported by `KubeDB`. Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "8.0.35" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-group` MySQL database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version, `8.0.35`, after the update.
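+ +Optionally, before creating the object you can ask the API server to validate the manifest against the `MySQLOpsRequest` CRD schema without persisting it. This sketch simply adds kubectl's server-side dry run to the same command used below: + +```bash +# Validate the ops request without actually creating it +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/majorversion/group-replication/yamls/update_major_version_group.yaml --dry-run=server +mysqlopsrequest.ops.kubedb.com/my-update-major-group created (server dry run) +```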
+ +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/majorversion/group-replication/yamls/update_major_version_group.yaml +mysqlopsrequest.ops.kubedb.com/my-update-major-group created +``` + +> Note: During a major version update of a MySQL group replication, a new StatefulSet is created by the `KubeDB` Ops Manager and the old one is deleted. The name of the newly created StatefulSet is formed as follows: `<original-statefulset-name>-<suffix>`. +Here, `<suffix>` is a positive integer that starts at 1. It's determined as follows: +for the first major version update of the group replication, the suffix will be 1; +for the second major version update, the suffix will be 2; +and so on. + +**Verify MySQL version updated successfully:** + +If everything goes well, `KubeDB` Ops Manager will create a new `StatefulSet` named `my-group-1` with the desired updated version and delete the old one. + +At first, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-update-major-group + +NAME TYPE STATUS AGE +my-update-major-group UpdateVersion Successful 5m26s +``` + +You can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL` group replication is updated with new images and the `StatefulSet` is created with a new image. + +```bash +$ kubectl describe myops -n demo my-update-major-group +Name: my-update-major-group +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-06-30T07:55:16Z + Manager: kubedb-ops-manager + Operation: Update + Time: 2022-06-30T07:55:16Z + Resource Version: 1708721 + UID: 460319fc-8dc4-45d7-8958-fa84e24d5f51 +Spec: + Database Ref: + Name: my-group + Type: UpdateVersion + UpdateVersion: + Target Version: 8.0.35 +Status: + Conditions: + Last Transition Time: 2022-06-30T07:55:16Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-update-major-group + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-30T07:55:16Z + Message: MySQL version updateFunc stated for MySQLOpsRequest: demo/my-update-major-group + Observed Generation: 1 + Reason: DatabaseVersionupdatingStarted + Status: True + Type: updating + Last Transition Time: 2022-06-30T07:59:16Z + Message: Image successfully updated in MySQL: demo/my-group for MySQLOpsRequest: my-update-major-group + Observed Generation: 1 + Reason: SuccessfullyUpdatedDatabaseVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2022-06-30T07:59:16Z + Message: Controller has successfully updated the MySQL demo/my-update-major-group + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 5m24s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-update-major-group + Normal Starting 5m24s KubeDB Enterprise Operator Pausing MySQL databse: demo/my-group + Normal Successful 5m24s KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-group for MySQLOpsRequest: my-update-major-group + Normal Starting 5m24s KubeDB Enterprise Operator Updating MySQL images: demo/my-group for MySQLOpsRequest: my-update-major-group + Normal Starting 5m19s KubeDB Enterprise Operator Restarting Pod: my-group-1/demo + Normal Starting 3m59s KubeDB Enterprise Operator Restarting Pod: my-group-2/demo + Normal Starting 2m39s KubeDB Enterprise Operator Restarting Pod: my-group-0/demo + Normal Successful 84s KubeDB Enterprise Operator Image successfully updated in MySQL: demo/my-group for MySQLOpsRequest: my-update-major-group + Normal Starting 84s KubeDB Enterprise Operator Resuming MySQL database: demo/my-group + Normal Successful 84s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-group + Normal Successful 84s KubeDB Enterprise Operator Controller has Successfully updated the version of MySQL : demo/my-group +``` + +Now, we are going to verify whether the `MySQL`, the `StatefulSet`, and its `Pod`s have been updated with the new image. Let's check, + +```bash +$ kubectl get my -n demo my-group -o=jsonpath='{.spec.version}{"\n"}' +8.0.35 + +$ kubectl get sts -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.template.spec.containers[1].image' +"kubedb/mysql:8.0.35" + +$ kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.containers[1].image' +"kubedb/mysql:8.0.35" +"kubedb/mysql:8.0.35" +"kubedb/mysql:8.0.35" +``` + +Let's also check that the StatefulSet pods have joined the `MySQL` group replication, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +7gUARa&Jkg.ypJE8 + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='7gUARa&Jkg.ypJE8' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | b0e71e0c-f849-11ec-a315-46392c50e39c | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | XCom | +| group_replication_applier | b34b16d7-f849-11ec-9362-a2f432876ee4 | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | b5542a4a-f849-11ec-9a75-3e8abd17fee6 | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + +``` + +You can see above that our `MySQL` group replication now has updated members. It verifies that we have successfully updated our cluster.
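+ +As a final sanity check, you can ask the server itself which version it is now running. This short sketch reuses the root password read from the `my-group-auth` secret above: + +```bash +# Should print 8.0.35 after a successful update +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='7gUARa&Jkg.ypJE8' -e "SELECT VERSION();" +```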
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-group +kubectl delete myops -n demo my-update-major-group +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/group_replication.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/group_replication.yaml new file mode 100644 index 0000000000..e90a9c9b4e --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/upgrade_major_version_group.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/upgrade_major_version_group.yaml new file mode 100644 index 0000000000..a9b98708d2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/group-replication/yamls/upgrade_major_version_group.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "8.0.35" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/index.md b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/index.md new file mode 100644 index 0000000000..0e9ecfe191 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/index.md @@ -0,0 +1,332 @@ +--- +title: Updating MySQL standalone major version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-major-standalone + name: Standalone + parent: guides-mysql-updating-major + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update major version of MySQL Standalone + +This guide will show you how to use `KubeDB` Ops Manager to update the major version of `MySQL` standalone. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+ +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + - [Updating Overview](/docs/v2024.1.31/guides/mysql/update-version/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mysql/update-version/majorversion/standalone/yamls](/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository. + +### Apply Version Update on Standalone + +Here, we are going to deploy a `MySQL` standalone using a version supported by the `KubeDB` operator. Then we are going to apply a version update on it. + +#### Prepare Standalone + +At first, we are going to deploy a standalone using a supported `MySQL` version from which it is possible to update to another version. In the next two sections, we are going to find out the supported versions and the version update constraints. + +**Find supported MySQLVersion:** + +When you install `KubeDB`, it creates a `MySQLVersion` CR for every supported `MySQL` version. Let's check the supported versions, + +```bash +$ kubectl get mysqlversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +5.7.35-v1 5.7.35 Official mysql:5.7.35 13d +5.7.44 5.7.44 Official mysql:5.7.44 13d +8.0.17 8.0.17 Official mysql:8.0.17 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.31-innodb 8.0.35 MySQL mysql/mysql-server:8.0.35 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.3-v4 8.0.3 Official mysql:8.0.3 13d +``` + +Every version above that does not show `DEPRECATED` as `true` is supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Now, we are going to select a non-deprecated version from `MySQLVersion` for the `MySQL` standalone from which it will be possible to update to another version. In the next section, we are going to verify the version update constraints. + +**Check update Constraints:** + +A database version update constraint shows whether it is possible to update from one version to another.
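+ +If you want to see what the constraint fields mean without leaving the terminal, `kubectl explain` can print the schema documentation straight from the CRD. This is a sketch; it assumes the `MySQLVersion` CRD installed on your cluster publishes field descriptions: + +```bash +$ kubectl explain mysqlversion.spec.updateConstraints +```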
Let's check the version update constraints of `MySQL` `5.7.44`, + +```bash +$ kubectl get mysqlversion 5.7.44 -o yaml | kubectl neat +apiVersion: catalog.kubedb.com/v1alpha1 +kind: MySQLVersion +metadata: + annotations: + meta.helm.sh/release-name: kubedb-catalog + meta.helm.sh/release-namespace: kubedb + creationTimestamp: "2022-06-16T13:52:58Z" + generation: 1 + labels: + app.kubernetes.io/instance: kubedb-catalog + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: kubedb-catalog + app.kubernetes.io/version: v2022.03.28 + helm.sh/chart: kubedb-catalog-v2022.03.28 + name: 5.7.44 + resourceVersion: "1092465" + uid: 4cc87fc8-efd7-4e69-bb12-4454a2b1bf06 +spec: + coordinator: + image: kubedb/mysql-coordinator:v0.5.0 + db: + image: mysql:5.7.44 + distribution: Official + exporter: + image: kubedb/mysqld-exporter:v0.13.1 + initContainer: + image: kubedb/mysql-init:5.7-v2 + podSecurityPolicies: + databasePolicyName: mysql-db + replicationModeDetector: + image: kubedb/replication-mode-detector:v0.13.0 + stash: + addon: + backupTask: + name: mysql-backup-5.7.25 + restoreTask: + name: mysql-restore-5.7.25 + updateConstraints: + denylist: + groupReplication: + - < 5.7.44 + standalone: + - < 5.7.44 + version: 5.7.44 + +``` + +The `spec.updateConstraints.denylist` above shows that updating to a version below `5.7.44` is not allowed for both standalone and group replication. That means it is possible to update to any version above `5.7.44`. Here, we are going to create a `MySQL` standalone using MySQL `5.7.44`. Then we are going to update this version to `8.0.35`. + +**Deploy MySQL standalone:** + +In this section, we are going to deploy a MySQL standalone. Then, in the next section, we will update the version of the database by applying a `MySQLOpsRequest`. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "5.7.44" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/majorversion/standalone/yamls/standalone.yaml +mysql.kubedb.com/my-standalone created +``` + +**Wait for the database to be ready:** + +The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator creates a new StatefulSet, Services, Secrets, etc. A secret called `my-standalone-auth` (format: `{mysql-object-name}-auth`) will be created to store the password for the mysql superuser.
+Now, watch the `MySQL` object as it goes to the `Running` state, and also watch the `StatefulSet` and its pod as they are created and go to the `Running` state, + +```bash +$ watch -n 3 kubectl get my -n demo my-standalone + +NAME VERSION STATUS AGE +my-standalone 5.7.44 Running 3m + +$ watch -n 3 kubectl get sts -n demo my-standalone + +NAME READY AGE +my-standalone 1/1 3m42s + +$ watch -n 3 kubectl get pod -n demo my-standalone-0 + +NAME READY STATUS RESTARTS AGE +my-standalone-0 1/1 Running 0 5m23s +``` + +Let's verify the `MySQL`, the `StatefulSet`, and its `Pod` image versions, + +```bash +$ kubectl get my -n demo my-standalone -o=jsonpath='{.spec.version}{"\n"}' +5.7.44 + +$ kubectl get sts -n demo my-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +kubedb/my:5.7.44 + +$ kubectl get pod -n demo my-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +kubedb/my:5.7.44 +``` + +We are ready to apply a version update on this `MySQL` standalone. + +#### UpdateVersion + +Here, we are going to update the `MySQL` standalone from `5.7.44` to `8.0.35`. + +**Create MySQLOpsRequest:** + +To update the standalone, you have to create a `MySQLOpsRequest` cr with your desired version that is supported by `KubeDB`. Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-standalone + namespace: demo +spec: + databaseRef: + name: my-standalone + type: UpdateVersion + updateVersion: + targetVersion: "8.0.35" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-standalone` MySQL database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version, `8.0.35`, after the update. + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/majorversion/standalone/yamls/update_major_version_standalone.yaml +mysqlopsrequest.ops.kubedb.com/my-update-major-standalone created +``` + +**Verify MySQL version updated successfully:** + +If everything goes well, `KubeDB` Ops Manager will update the image of the `MySQL`, the `StatefulSet`, and its `Pod`. + +At first, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-update-major-standalone + +NAME TYPE STATUS AGE +my-update-major-standalone UpdateVersion Successful 3m57s +``` + +We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL`, the `StatefulSet`, and its `Pod` have been updated with a new image.
+ +```bash +$ kubectl describe myops -n demo my-update-major-standalone +Name: my-update-major-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-06-30T07:55:16Z + Manager: kubedb-ops-manager + Operation: Update + Time: 2022-06-30T07:55:16Z + Resource Version: 1708721 + UID: 460319fc-8dc4-45d7-8958-fa84e24d5f51 +Spec: + Database Ref: + Name: my-standalone + Type: UpdateVersion + UpdateVersion: + TargetVersion: 8.0.35 +Status: + Conditions: + Last Transition Time: 2022-06-30T07:55:16Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-update-major-standalone + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-30T07:55:16Z + Message: MySQL version updateFunc stated for MySQLOpsRequest: demo/my-update-major-standalone + Observed Generation: 1 + Reason: DatabaseVersionupdatingStarted + Status: True + Type: updating + Last Transition Time: 2022-06-30T07:59:16Z + Message: Image successfully updated in MySQL: demo/my-standalone for MySQLOpsRequest: my-update-major-standalone + Observed Generation: 1 + Reason: SuccessfullyUpdatedDatabaseVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2022-06-30T07:59:16Z + Message: Controller has successfully updated the MySQL demo/my-update-major-standalone + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 8m47s KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-update-major-standalone + Normal Starting 8m47s KubeDB Enterprise Operator Pausing MySQL databse: demo/my-standalone + Normal Successful 8m47s KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-standalone for MySQLOpsRequest: my-update-major-standalone + Normal Starting 6m2s KubeDB Enterprise Operator Restarting Pod: my-standalone-0/demo + Normal Successful 4m47s KubeDB Enterprise Operator Image successfully updated in MySQL: demo/my-standalone for MySQLOpsRequest: my-update-major-standalone + Normal Starting 4m47s KubeDB Enterprise Operator Resuming MySQL database: demo/my-standalone + Normal Successful 4m47s KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-standalone + Normal Successful 4m47s KubeDB Enterprise Operator Controller has Successfully updated the version of MySQL : demo/my-standalone +``` + +Now, we are going to verify whether the `MySQL`, the `StatefulSet`, and its `Pod` have been updated with the new image. Let's check, + +```bash +$ kubectl get my -n demo my-standalone -o=jsonpath='{.spec.version}{"\n"}' +8.0.35 + +$ kubectl get sts -n demo my-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +mysql:8.0.35 + +$ kubectl get pod -n demo my-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +mysql:8.0.35 +``` + +You can see above that our `MySQL` standalone has been updated with the new version. It verifies that we have successfully updated our standalone.
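+ +You can also confirm the new version from inside the server itself. This is a short sketch; it reads the root password from the `my-standalone-auth` secret mentioned earlier: + +```bash +# Read the superuser password from the secret, then query the server version +$ MYSQL_PASSWORD=$(kubectl get secrets -n demo my-standalone-auth -o jsonpath='{.data.\password}' | base64 -d) +$ kubectl exec -it -n demo my-standalone-0 -- mysql -u root --password="$MYSQL_PASSWORD" -e "SELECT VERSION();" +```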
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-standalone +kubectl delete myops -n demo my-update-major-standalone +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/standalone.yaml new file mode 100644 index 0000000000..b06ee2cd5b --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/standalone.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-standalone + namespace: demo +spec: + version: "5.7.44" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/upgrade_major_version_standalone.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/upgrade_major_version_standalone.yaml new file mode 100644 index 0000000000..8e1477480f --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/majorversion/standalone/yamls/upgrade_major_version_standalone.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-major-standalone + namespace: demo +spec: + databaseRef: + name: my-standalone + type: UpdateVersion + updateVersion: + targetVersion: "8.0.35" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/_index.md b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/_index.md new file mode 100644 index 0000000000..2883c976a0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating MySQL minor version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-minor + name: Minor version + parent: guides-mysql-updating + weight: 30 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/index.md b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/index.md new file mode 100644 index 0000000000..36a2b5be00 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/index.md @@ -0,0 +1,392 @@ +--- +title: Updating MySQL group replication minor version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-minor-group + name: Group Replication + parent: guides-mysql-updating-minor + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update minor version of MySQL Group Replication + +This guide will show you how to use `KubeDB` Ops Manager to update the minor version of `MySQL` Group Replication.
+ +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/) + - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/) + - [Updating Overview](/docs/v2024.1.31/guides/mysql/update-version/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mysql/update-version/minorversion/group-replication/yamls](/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository. + +### Apply Version Update on Group Replication + +Here, we are going to deploy a `MySQL` group replication using a version supported by the `KubeDB` operator. Then we are going to apply a version update on it. + +#### Prepare Group Replication + +At first, we are going to deploy a group replication using a supported `MySQL` version from which it is possible to update to another version. In the next two sections, we are going to find out the supported versions and the version update constraints. + +**Find supported MySQL Version:** + +When you install `KubeDB`, it creates a `MySQLVersion` CR for every supported `MySQL` version. Let's check the supported `MySQL` versions, + +```bash +$ kubectl get mysqlversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +5.7.35-v1 5.7.35 Official mysql:5.7.35 13d +5.7.44 5.7.44 Official mysql:5.7.44 13d +8.0.17 8.0.17 Official mysql:8.0.17 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.31-innodb 8.0.35 MySQL mysql/mysql-server:8.0.35 13d +8.0.35 8.0.35 Official mysql:8.0.35 13d +8.0.3-v4 8.0.3 Official mysql:8.0.3 13d + +``` + +Every version above that does not show `DEPRECATED` as `true` is supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Now, we are going to select a non-deprecated version from `MySQLVersion` for the `MySQL` group replication from which it will be possible to update to another version. In the next section, we are going to verify the version update constraints. + +**Check update Constraints:** + +A database version update constraint shows whether it is possible to update from one version to another.
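+ +To compare the deny lists of several catalog entries at a glance, you can render them as a table with kubectl custom columns. This is a minimal sketch; the column expressions assume the `spec.updateConstraints.denylist` layout shown below: + +```bash +$ kubectl get mysqlversion -o custom-columns='NAME:.metadata.name,GROUP-DENYLIST:.spec.updateConstraints.denylist.groupReplication,STANDALONE-DENYLIST:.spec.updateConstraints.denylist.standalone' +```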
Let's check the version update constraints of `MySQL` `8.0.35`, + +```bash +$ kubectl get mysqlversion 8.0.35 -o yaml +apiVersion: catalog.kubedb.com/v1alpha1 +kind: MySQLVersion +metadata: + annotations: + meta.helm.sh/release-name: kubedb-catalog + meta.helm.sh/release-namespace: kubedb + creationTimestamp: "2022-06-16T13:52:58Z" + generation: 1 + labels: + app.kubernetes.io/instance: kubedb-catalog + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: kubedb-catalog + app.kubernetes.io/version: v2022.03.28 + helm.sh/chart: kubedb-catalog-v2022.03.28 + name: 8.0.35 + resourceVersion: "1092466" + uid: fa68b792-a8b3-47a3-a32e-66a47f79c177 +spec: + coordinator: + image: kubedb/mysql-coordinator:v0.5.0 + db: + image: mysql:8.0.35 + distribution: Official + exporter: + image: kubedb/mysqld-exporter:v0.13.1 + initContainer: + image: kubedb/mysql-init:8.0.26-v1 + podSecurityPolicies: + databasePolicyName: mysql-db + replicationModeDetector: + image: kubedb/replication-mode-detector:v0.13.0 + stash: + addon: + backupTask: + name: mysql-backup-8.0.21 + restoreTask: + name: mysql-restore-8.0.21 + updateConstraints: + denylist: + groupReplication: + - < 8.0.35 + standalone: + - < 8.0.35 + version: 8.0.35 + +``` + +The `spec.updateConstraints.denylist` of `8.0.35` above shows that updating to a version below `8.0.35` is not allowed for both group replication and standalone. That means it is possible to update to any version above `8.0.35`. Here, we are going to create a `MySQL` group replication using MySQL `8.0.35`. Then we are going to update this version to `8.0.35`. + +**Deploy MySQL Group Replication:** + +In this section, we are going to deploy a MySQL group replication with 3 members. Then, in the next section, we will update the version of the members by applying a `MySQLOpsRequest`. Below is the YAML of the `MySQL` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `MySQL` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/minorversion/group-replication/yamls/group_replication.yaml +mysql.kubedb.com/my-group created +``` + +**Wait for the cluster to be ready:** + +The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator creates a new StatefulSet, Services, Secrets, etc. A secret called `my-group-auth` (format: `{mysql-object-name}-auth`) will be created to store the password for the mysql superuser.
+Now, watch the `MySQL` object as it goes to the `Running` state, and also watch the `StatefulSet` and its pods as they are created and go to the `Running` state, + +```bash +$ watch -n 3 kubectl get my -n demo my-group + +NAME VERSION STATUS AGE +my-group 8.0.35 Ready 5m + +$ watch -n 3 kubectl get sts -n demo my-group + +NAME READY AGE +my-group 3/3 7m12s + +$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group + +NAME READY STATUS RESTARTS AGE +my-group-0 2/2 Running 0 11m +my-group-1 2/2 Running 0 9m53s +my-group-2 2/2 Running 0 6m48s +``` + +Let's verify the `MySQL`, the `StatefulSet`, and its `Pod` image versions, + +```bash +$ kubectl get my -n demo my-group -o=jsonpath='{.spec.version}{"\n"}' +8.0.35 + +$ kubectl get sts -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.template.spec.containers[1].image' +"mysql:8.0.35" + +$ kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.containers[1].image' +"mysql:8.0.35" +"mysql:8.0.35" +"mysql:8.0.35" +``` + +Let's also verify that the StatefulSet's pods have joined the group replication, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +XbUHi_Cp&SLSXTmo + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='XbUHi_Cp&SLSXTmo' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | 6e7f3cc4-f84d-11ec-adcd-d23a2a3ef58a | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 70c60c5b-f84d-11ec-821b-4af781e22a9f | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 71fdc498-f84d-11ec-a6f3-b2ee89425e4f | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | XCom | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + +``` + +We are ready to apply a version update on this `MySQL` group replication. + +#### UpdateVersion + +Here, we are going to update the `MySQL` group replication from `8.0.35` to `8.0.35`. + +**Create MySQLOpsRequest:** + +To update your database cluster, you have to create a `MySQLOpsRequest` cr with your desired version that is supported by `KubeDB`.
Below is the YAML of the `MySQLOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-minor-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "8.0.35" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the operation on the `my-group` MySQL database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version, `8.0.35`, after the update. + +Let's create the `MySQLOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/minorversion/group-replication/yamls/update_minor_version_group.yaml +mysqlopsrequest.ops.kubedb.com/my-update-minor-group created +``` + +**Verify MySQL version updated successfully:** + +If everything goes well, `KubeDB` Ops Manager will update the image of the `MySQL`, the `StatefulSet`, and its `Pod`. + +At first, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` cr, + +```bash +$ watch -n 3 kubectl get myops -n demo my-update-minor-group +NAME TYPE STATUS AGE +my-update-minor-group UpdateVersion Successful 5m26s +``` + +You can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL` group replication is updated with the new version and the `StatefulSet` is updated with a new image. + +```bash +$ kubectl describe myops -n demo my-update-minor-group + +Name: my-update-minor-group +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: MySQLOpsRequest +Metadata: + Creation Timestamp: 2022-06-30T08:26:36Z + Manager: kubedb-ops-manager + Operation: Update + Time: 2022-06-30T08:26:36Z + Resource Version: 1712998 + UID: 3a84eb20-ba5f-4ec6-969a-bfdb7b072b5a +Spec: + Database Ref: + Name: my-group + Type: UpdateVersion + UpdateVersion: + TargetVersion: 8.0.35 +Status: + Conditions: + Last Transition Time: 2022-06-30T08:26:36Z + Message: Controller has started to Progress the MySQLOpsRequest: demo/my-update-minor-group + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-30T08:26:36Z + Message: MySQL version updateFunc stated for MySQLOpsRequest: demo/my-update-minor-group + Observed Generation: 1 + Reason: DatabaseVersionupdatingStarted + Status: True + Type: updating + Last Transition Time: 2022-06-30T08:31:26Z + Message: Image successfully updated in MySQL: demo/my-group for MySQLOpsRequest: my-update-minor-group + Observed Generation: 1 + Reason: SuccessfullyUpdatedDatabaseVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2022-06-30T08:31:27Z + Message: Controller has successfully updated the MySQL demo/my-update-minor-group + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 33m KubeDB Enterprise Operator Start processing for MySQLOpsRequest: demo/my-update-minor-group + Normal Starting 33m KubeDB Enterprise Operator Pausing MySQL databse: demo/my-group + Normal Successful 33m KubeDB Enterprise Operator Successfully paused MySQL database: demo/my-group for MySQLOpsRequest: my-update-minor-group + Normal Starting 33m KubeDB Enterprise Operator updating MySQL images: demo/my-group for MySQLOpsRequest: my-update-minor-group + Normal Starting 33m KubeDB Enterprise Operator Restarting Pod: my-group-1/demo + Normal Starting 32m KubeDB Enterprise Operator Restarting Pod: my-group-2/demo + Normal Starting 30m KubeDB Enterprise Operator Restarting Pod: my-group-0/demo + Normal Successful 29m KubeDB Enterprise Operator Image successfully updated in MySQL: demo/my-group for MySQLOpsRequest: my-update-minor-group + Normal Starting 29m KubeDB Enterprise Operator Resuming MySQL database: demo/my-group + Normal Successful 29m KubeDB Enterprise Operator Successfully resumed MySQL database: demo/my-group + Normal Successful 29m KubeDB Enterprise Operator Controller has Successfully updated the version of MySQL : demo/my-group + + +``` + +Now, we are going to verify whether the `MySQL`, the `StatefulSet`, and its `Pod`s have been updated with the new image. Let's check, + +```bash +$ kubectl get my -n demo my-group -o=jsonpath='{.spec.version}{"\n"}' +8.0.35 + +$ kubectl get sts -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.template.spec.containers[1].image' +"mysql:8.0.35" + +$ kubectl get pod -n demo -l app.kubernetes.io/name=mysqls.kubedb.com,app.kubernetes.io/instance=my-group -o json | jq '.items[].spec.containers[1].image' +"mysql:8.0.35" +"mysql:8.0.35" +"mysql:8.0.35" +``` + +Let's also check that the StatefulSet pods have joined the `MySQL` group replication, + +```bash +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo my-group-auth -o jsonpath='{.data.\password}' | base64 -d +XbUHi_Cp&SLSXTmo + +$ kubectl exec -it -n demo my-group-0 -c mysql -- mysql -u root --password='XbUHi_Cp&SLSXTmo' --host=my-group-0.my-group-pods.demo -e "select * from performance_schema.replication_group_members" +mysql: [Warning] Using a password on the command line interface can be insecure. ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ +| group_replication_applier | 6e7f3cc4-f84d-11ec-adcd-d23a2a3ef58a | my-group-1.my-group-pods.demo.svc | 3306 | ONLINE | PRIMARY | 8.0.35 | XCom | +| group_replication_applier | 70c60c5b-f84d-11ec-821b-4af781e22a9f | my-group-2.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | +| group_replication_applier | 71fdc498-f84d-11ec-a6f3-b2ee89425e4f | my-group-0.my-group-pods.demo.svc | 3306 | ONLINE | SECONDARY | 8.0.35 | XCom | ++---------------------------+--------------------------------------+-----------------------------------+-------------+--------------+-------------+----------------+----------------------------+ + +``` + +You can see above that our `MySQL` group replication now has updated members. It verifies that we have successfully updated our cluster.
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete my -n demo my-group +kubectl delete myops -n demo my-update-minor-group +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/group_replication.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/group_replication.yaml new file mode 100644 index 0000000000..a18eb3cfef --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/group_replication.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: my-group + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + group: + name: "dc002fc3-c412-4d18-b1d4-66c1fbfbbc9b" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/upgrade_minor_version_group.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/upgrade_minor_version_group.yaml new file mode 100644 index 0000000000..f2daf51f89 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/group-replication/yamls/upgrade_minor_version_group.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: MySQLOpsRequest +metadata: + name: my-update-minor-group + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: my-group + updateVersion: + targetVersion: "8.0.35" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/index.md b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/index.md new file mode 100644 index 0000000000..070707d817 --- /dev/null +++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/index.md @@ -0,0 +1,333 @@ +--- +title: Updating MySQL standalone minor version +menu: + docs_v2024.1.31: + identifier: guides-mysql-updating-minor-standalone + name: Standalone + parent: guides-mysql-updating-minor + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Update minor version of MySQL Standalone + +This guide will show you how to use `KubeDB` Ops Manager to update the minor version of `MySQL` standalone. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/)
+  - [Updating Overview](/docs/v2024.1.31/guides/mysql/update-version/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/guides/mysql/update-version/minorversion/standalone/yamls](/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+### Apply Version updating on Standalone
+
+Here, we are going to deploy a `MySQL` standalone using a version supported by the `KubeDB` operator. Then we are going to apply the update on it.
+
+#### Prepare Standalone
+
+At first, we are going to deploy a standalone using a supported `MySQL` version from which updating to another version is possible. In the next two sections, we are going to find out the supported versions and the version update constraints.
+
+**Find supported MySQLVersion:**
+
+When you have installed `KubeDB`, it has created a `MySQLVersion` CR for all supported `MySQL` versions. Let's check the supported versions,
+
+```bash
+$ kubectl get mysqlversion
+NAME            VERSION   DISTRIBUTION   DB_IMAGE                    DEPRECATED   AGE
+5.7.35-v1       5.7.35    Official       mysql:5.7.35                             13d
+5.7.44          5.7.44    Official       mysql:5.7.44                             13d
+8.0.17          8.0.17    Official       mysql:8.0.17                             13d
+8.0.35          8.0.35    Official       mysql:8.0.35                             13d
+8.0.31-innodb   8.0.35    MySQL          mysql/mysql-server:8.0.35                13d
+8.0.3-v4        8.0.3     Official       mysql:8.0.3                              13d
+
+```
+
+Any version above that does not show `DEPRECATED` as `true` is supported by `KubeDB` for `MySQL`. You can use any non-deprecated version. Now, we are going to select a non-deprecated version from `MySQLVersion` for the `MySQL` standalone from which it will be possible to update to another version. In the next section, we are going to verify the version update constraints.
+
+**Check update Constraints:**
+
+A database version update constraint specifies whether it is possible to update from one version to another.
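+
+If you only want the constraint fields without the rest of the object, a `jq` query like the following should work (a quick sketch; it assumes the `5.7.44` `MySQLVersion` shown in the list above):
+
+```bash
+$ kubectl get mysqlversion 5.7.44 -o json | jq '.spec.updateConstraints'
+{
+  "denylist": {
+    "groupReplication": [
+      "< 5.7.44"
+    ],
+    "standalone": [
+      "< 5.7.44"
+    ]
+  }
+}
+```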
Let's check the full version update constraints of `MySQL` `5.7.44`,
+
+```bash
+$ kubectl get mysqlversion 5.7.44 -o yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: MySQLVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb-catalog
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2022-06-16T13:52:58Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb-catalog
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2022.03.28
+    helm.sh/chart: kubedb-catalog-v2022.03.28
+  name: 5.7.44
+  resourceVersion: "1092465"
+  uid: 4cc87fc8-efd7-4e69-bb12-4454a2b1bf06
+spec:
+  coordinator:
+    image: kubedb/mysql-coordinator:v0.5.0
+  db:
+    image: mysql:5.7.44
+  distribution: Official
+  exporter:
+    image: kubedb/mysqld-exporter:v0.13.1
+  initContainer:
+    image: kubedb/mysql-init:5.7-v2
+  podSecurityPolicies:
+    databasePolicyName: mysql-db
+  replicationModeDetector:
+    image: kubedb/replication-mode-detector:v0.13.0
+  stash:
+    addon:
+      backupTask:
+        name: mysql-backup-5.7.25
+      restoreTask:
+        name: mysql-restore-5.7.25
+  updateConstraints:
+    denylist:
+      groupReplication:
+      - < 5.7.44
+      standalone:
+      - < 5.7.44
+  version: 5.7.44
+
+```
+
+The above `spec.updateConstraints.denylist` shows that updating to a version below `5.7.44` is not possible for both standalone and group replication. That means it is only possible to update to version `5.7.44` or above. Here, we are going to create a `MySQL` standalone using MySQL `5.7.44`. Then we are going to update it to version `5.7.44`.
+
+**Deploy MySQL standalone:**
+
+In this section, we are going to deploy a MySQL standalone. Then, in the next section, we will update the version of the database using a `MySQLOpsRequest`. Below is the YAML of the `MySQL` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-standalone
+  namespace: demo
+spec:
+  version: "5.7.44"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/minorversion/standalone/yamls/standalone.yaml
+mysql.kubedb.com/my-standalone created
+```
+
+**Wait for the database to be ready:**
+
+The `KubeDB` operator watches for `MySQL` objects using the Kubernetes API. When a `MySQL` object is created, the `KubeDB` operator will create a new StatefulSet, Services, Secrets, etc. A secret called `my-standalone-auth` (format: {mysql-object-name}-auth) will be created storing the password for the MySQL superuser.
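+
+For example, you can read the generated superuser credentials back from that secret as shown below (a sketch; the decoded password is illustrative and will differ in your cluster):
+
+```bash
+$ kubectl get secret -n demo my-standalone-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secret -n demo my-standalone-auth -o jsonpath='{.data.\password}' | base64 -d
+G8GfFDXGXqQxde8R
+```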
+Now, watch as the `MySQL` object goes to the `Running` state, and also watch the `StatefulSet` and its pod being created and going to the `Running` state,
+
+```bash
+$ watch -n 3 kubectl get my -n demo my-standalone
+
+NAME            VERSION   STATUS    AGE
+my-standalone   5.7.44    Running   3m
+
+$ watch -n 3 kubectl get sts -n demo my-standalone
+
+NAME            READY   AGE
+my-standalone   1/1     3m42s
+
+$ watch -n 3 kubectl get pod -n demo my-standalone-0
+
+NAME              READY   STATUS    RESTARTS   AGE
+my-standalone-0   1/1     Running   0          5m23s
+```
+
+Let's verify the `MySQL`, the `StatefulSet`, and its `Pod` image version,
+
+```bash
+$ kubectl get my -n demo my-standalone -o=jsonpath='{.spec.version}{"\n"}'
+5.7.44
+
+$ kubectl get sts -n demo my-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+mysql:5.7.44
+
+$ kubectl get pod -n demo my-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+mysql:5.7.44
+```
+
+We are now ready to apply the update on this `MySQL` standalone.
+
+#### UpdateVersion
+
+Here, we are going to update the `MySQL` standalone to version `5.7.44`.
+
+**Create MySQLOpsRequest:**
+
+To update the standalone, you have to create a `MySQLOpsRequest` CR with your desired version that is supported by `KubeDB`. Below is the YAML of the `MySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: my-update-minor-standalone
+  namespace: demo
+spec:
+  databaseRef:
+    name: my-standalone
+  type: UpdateVersion
+  updateVersion:
+    targetVersion: "5.7.44"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `my-standalone` MySQL database.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version `5.7.44` after updating.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/update-version/minorversion/standalone/yamls/update_minor_version_standalone.yaml
+mysqlopsrequest.ops.kubedb.com/my-update-minor-standalone created
+```
+
+**Verify MySQL version updated successfully:**
+
+If everything goes well, `KubeDB` Ops Manager will update the image of the `MySQL`, the `StatefulSet`, and its `Pod`.
+
+At first, we will wait for the `MySQLOpsRequest` to be successful. Run the following command to watch the `MySQLOpsRequest` CR,
+
+```bash
+$ watch -n 3 kubectl get myops -n demo my-update-minor-standalone
+
+NAME                         TYPE            STATUS       AGE
+my-update-minor-standalone   UpdateVersion   Successful   3m57s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we shall see that the `MySQL`, the `StatefulSet`, and its `Pod` have been updated with the new image.
+
+```bash
+$ kubectl describe myops -n demo my-update-minor-standalone
+Name:         my-update-minor-standalone
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MySQLOpsRequest
+Metadata:
+  Manager:           kubedb-ops-manager
+  Operation:         Update
+  Time:              2022-06-30T09:05:14Z
+  Resource Version:  1717990
+  UID:               3f5bceed-74ba-4fbe-a8a5-229aed60212d
+Spec:
+  Database Ref:
+    Name:  my-standalone
+  Type:    UpdateVersion
+  UpdateVersion:
+    TargetVersion:  5.7.44
+Status:
+  Conditions:
+    Last Transition Time:  2022-06-30T09:05:14Z
+    Message:               Controller has started to Progress the MySQLOpsRequest: demo/my-update-minor-standalone
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-06-30T09:05:14Z
+    Message:               MySQL version updateFunc stated for MySQLOpsRequest: demo/my-update-minor-standalone
+    Observed Generation:   1
+    Reason:                DatabaseVersionupdatingStarted
+    Status:                True
+    Type:                  updating
+    Last Transition Time:  2022-06-30T09:05:19Z
+    Message:               Image successfully updated in MySQL: demo/my-standalone for MySQLOpsRequest: my-update-minor-standalone
+    Observed Generation:   1
+    Reason:                SuccessfullyUpdatedDatabaseVersion
+    Status:                True
+    Type:                  UpdateVersion
+    Last Transition Time:  2022-06-30T09:11:15Z
+    Message:               Controller has successfully updated the MySQL demo/my-update-minor-standalone
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     2
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    7m8s   KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/my-update-minor-standalone
+  Normal  Starting    7m8s   KubeDB Enterprise Operator  Pausing MySQL databse: demo/my-standalone
+  Normal  Successful  7m8s   KubeDB Enterprise Operator  Successfully paused MySQL database: demo/my-standalone for MySQLOpsRequest: my-update-minor-standalone
+  Normal  Starting    7m8s   KubeDB Enterprise Operator  updating MySQL images: demo/my-standalone for MySQLOpsRequest: my-update-minor-standalone
+  Normal  Starting    7m3s   KubeDB Enterprise Operator  Restarting Pod: my-standalone-0/demo
+  Normal  Starting    67s    KubeDB Enterprise Operator  Resuming MySQL database: demo/my-standalone
+  Normal  Successful  67s    KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/my-standalone
+  Normal  Successful  67s    KubeDB Enterprise Operator  Controller has Successfully updated the version of MySQL : demo/my-standalone
+
+```
+
+Now, we are going to verify whether the `MySQL`, the `StatefulSet`, and its `Pod` have been updated with the new image. Let's check,
+
+```bash
+$ kubectl get my -n demo my-standalone -o=jsonpath='{.spec.version}{"\n"}'
+5.7.44
+
+$ kubectl get sts -n demo my-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+mysql:5.7.44
+
+$ kubectl get pod -n demo my-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+mysql:5.7.44
+```
+
+You can see above that our `MySQL` standalone has been updated with the new version. It verifies that we have successfully updated our standalone.
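+
+As an extra sanity check, you can also ask the server itself for its version. This is a sketch that reuses the superuser credentials from the `my-standalone-auth` secret (the password shown is illustrative):
+
+```bash
+$ kubectl exec -it -n demo my-standalone-0 -c mysql -- mysql -u root --password='G8GfFDXGXqQxde8R' -e "SELECT VERSION();"
+mysql: [Warning] Using a password on the command line interface can be insecure.
++-----------+
+| VERSION() |
++-----------+
+| 5.7.44    |
++-----------+
+```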
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete my -n demo my-standalone
+kubectl delete myops -n demo my-update-minor-standalone
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/standalone.yaml
new file mode 100644
index 0000000000..b06ee2cd5b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/standalone.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: my-standalone
+  namespace: demo
+spec:
+  version: "5.7.44"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/upgrade_minor_version_standalone.yaml b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/upgrade_minor_version_standalone.yaml
new file mode 100644
index 0000000000..a37ead33ae
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/update-version/minorversion/standalone/yamls/upgrade_minor_version_standalone.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: my-update-minor-standalone
+  namespace: demo
+spec:
+  databaseRef:
+    name: my-standalone
+  type: UpdateVersion
+  updateVersion:
+    targetVersion: "5.7.44"
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/overview/images/my-updating.png b/content/docs/v2024.1.31/guides/mysql/update-version/overview/images/my-updating.png
new file mode 100644
index 0000000000..106bb432cd
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/update-version/overview/images/my-updating.png differ
diff --git a/content/docs/v2024.1.31/guides/mysql/update-version/overview/index.md b/content/docs/v2024.1.31/guides/mysql/update-version/overview/index.md
new file mode 100644
index 0000000000..5503050b8a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/update-version/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: Updating MySQL Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-updating-overview
+    name: Overview
+    parent: guides-mysql-updating
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Updating MySQL Version Overview
+
+This guide will give you an overview of how `KubeDB` Ops Manager updates the version of a `MySQL` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/database/)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest/)
+
+## How the Update Process Works
+
+The following diagram shows how `KubeDB` Ops Manager updates the version of `MySQL`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+ Stash Backup Flow +
Fig: updating Process of MySQL
+
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `MySQL` CR.
+
+2. `KubeDB` community operator watches for the `MySQL` CR.
+
+3. When it finds one, it creates a `StatefulSet` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to update the version of the `MySQL` database, the user creates a `MySQLOpsRequest` CR with the desired version.
+
+5. `KubeDB` Ops Manager watches for `MySQLOpsRequest`.
+
+6. When it finds one, it halts the `MySQL` object so that the `KubeDB` community operator doesn't perform any operation on the `MySQL` during the updating process.
+
+7. By looking at the target version from the `MySQLOpsRequest` CR, `KubeDB` Ops Manager takes one of the following steps:
+    - either updates the images of the `StatefulSet` for updating between patch/minor versions,
+    - or creates a new `StatefulSet` using the targeted image for updating between major versions.
+
+8. After successfully updating the `StatefulSet` and its `Pod` images, the `KubeDB` Ops Manager updates the image of the `MySQL` object to reflect the updated cluster state.
+
+9. After the `MySQL` object has been updated successfully, the `KubeDB` Ops Manager resumes the `MySQL` object so that the `KubeDB` community operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on updating a MySQL database using the update operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/_index.md b/content/docs/v2024.1.31/guides/mysql/volume-expansion/_index.md
new file mode 100644
index 0000000000..1a29adaaa7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/_index.md
@@ -0,0 +1,22 @@
+---
+title: MySQL Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-volume-expansion
+    name: MySQL Volume Expansion
+    parent: guides-mysql
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/images/volume-expansion.jpg b/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/images/volume-expansion.jpg
new file mode 100644
index 0000000000..f3b95000f1
Binary files /dev/null and b/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/images/volume-expansion.jpg differ
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/index.md b/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/index.md
new file mode 100644
index 0000000000..a6f905e294
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: MySQL Volume Expansion Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-volume-expansion-overview
+    name: Overview
+    parent: guides-mysql-volume-expansion
+    weight: 11
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MySQL Volume Expansion
+
+This guide will give an overview of how KubeDB Ops Manager expands the volume of `MySQL`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+
+## How the Volume Expansion Process Works
+
+The following diagram shows how KubeDB Ops Manager expands the volumes of `MySQL` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Volume Expansion process of MySQL +
Fig: Volume Expansion process of MySQL
+
+
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `MySQL` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `MySQL` CR.
+
+3. When the operator finds a `MySQL` CR, it creates the required `StatefulSet` and related necessary stuff like secrets, services, etc.
+
+4. The StatefulSet creates Persistent Volumes according to the Volume Claim Template provided in the StatefulSet configuration. These Persistent Volumes will be expanded by the `KubeDB` Enterprise operator.
+
+5. Then, in order to expand the volume of the `MySQL` database, the user creates a `MySQLOpsRequest` CR with the desired information.
+
+6. `KubeDB` Enterprise operator watches the `MySQLOpsRequest` CR.
+
+7. When it finds a `MySQLOpsRequest` CR, it pauses the `MySQL` object which is referred from the `MySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `MySQL` object during the volume expansion process.
+
+8. Then the `KubeDB` Enterprise operator will expand the persistent volume to reach the expected size defined in the `MySQLOpsRequest` CR.
+
+9. After the successful expansion of the volume of the related StatefulSet Pods, the `KubeDB` Enterprise operator updates the new volume size in the `MySQL` object to reflect the updated state.
+
+10. After the successful Volume Expansion of the `MySQL`, the `KubeDB` Enterprise operator resumes the `MySQL` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on Volume Expansion of various MySQL databases using the `MySQLOpsRequest` CRD.
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/group_replication.yaml b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/group_replication.yaml
new file mode 100644
index 0000000000..687bb3e9e3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/group_replication.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/innodb.yaml b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/innodb.yaml
new file mode 100644
index 0000000000..d1d4bea6bd
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/innodb.yaml
@@ -0,0 +1,22 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  replicas: 3
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/semi-sync.yaml b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/semi-sync.yaml
new file mode 100644
index 0000000000..17aa78f487
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/semi-sync.yaml
@@ -0,0 +1,23 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1
+      sourceTimeout: 23h
+      errantTransactionRecoveryPolicy: PseudoTransaction
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/standalone.yaml b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/standalone.yaml
new file mode 100644
index 0000000000..6a3dbbcb67
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/expamples/standalone.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/index.md b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/index.md
new file mode 100644
index 0000000000..45fbf9c742
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/mysql/volume-expansion/volume-expansion/index.md
@@ -0,0 +1,383 @@
+---
+title: MySQL Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: guides-mysql-volume-expansion-volume-expansion
+    name: MySQL Volume Expansion
+    parent: guides-mysql-volume-expansion
+    weight: 12
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# MySQL Volume Expansion
+
+This guide will show you how to use the `KubeDB` Enterprise operator to expand the volume of a MySQL database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [MySQL](/docs/v2024.1.31/guides/mysql/concepts/mysqldatabase)
+  - [MySQLOpsRequest](/docs/v2024.1.31/guides/mysql/concepts/opsrequest)
+  - [Volume Expansion Overview](/docs/v2024.1.31/guides/mysql/volume-expansion/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Expand Volume of MySQL
+
+Here, we are going to deploy a `MySQL` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MySQLOpsRequest` to expand its volume. The process of expanding a MySQL `standalone` is the same as for a MySQL cluster.
+ +### Prepare MySQL Database + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 69s +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 37s + +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We will use this storage class. You can install topolvm from [here](https://github.com/topolvm/topolvm). + +Now, we are going to deploy a `MySQL` database of 3 replicas with version `8.0.35`. + +### Deploy MySQL + +In this section, we are going to deploy a MySQL Cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `MySQLOpsRequest` CRD. Below is the YAML of the `MySQL` CR that we are going to create, + + + + +
+
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: sample-mysql + namespace: demo +spec: + version: "8.0.35" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + +``` + +Let's create the `MySQL` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/volume-expansion/volume-expansion/example/group_replication.yaml +mysql.kubedb.com/sample-mysql created +``` +
+ +
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.31-innodb"
+  replicas: 3
+  topology:
+    mode: InnoDBCluster
+    innoDBCluster:
+      router:
+        replicas: 1
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/volume-expansion/volume-expansion/example/innodb.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+ +
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  replicas: 3
+  topology:
+    mode: SemiSync
+    semiSync:
+      sourceWaitForReplicaCount: 1
+      sourceTimeout: 23h
+      errantTransactionRecoveryPolicy: PseudoTransaction
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/volume-expansion/volume-expansion/example/semi-sync.yaml
+mysql.kubedb.com/sample-mysql created
+```
+
+ +
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: sample-mysql
+  namespace: demo
+spec:
+  version: "8.0.35"
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `MySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/volume-expansion/volume-expansion/example/standalone.yaml
+mysql.kubedb.com/sample-mysql created
+```
+ +
+
+Now, wait until `sample-mysql` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+sample-mysql   8.0.35    Ready    5m4s
+```
+
+Let's check the volume size from the StatefulSet, and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo sample-mysql -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS          REASON   AGE
+pvc-331335d1-c8e0-4b73-9dab-dae57920e997   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-0   topolvm-provisioner            63s
+pvc-b90179f8-c40a-4273-ad77-74ca8470b782   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-1   topolvm-provisioner            62s
+pvc-f72411a4-80d5-4d32-b713-cb30ec662180   1Gi        RWO            Delete           Bound    demo/data-sample-mysql-2   topolvm-provisioner            62s
+```
+
+You can see that the StatefulSet has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `MySQLOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the MySQL cluster.
+
+#### Create MySQLOpsRequest
+
+In order to expand the volume of the database, we have to create a `MySQLOpsRequest` CR with our desired volume size. Below is the YAML of the `MySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MySQLOpsRequest
+metadata:
+  name: my-online-volume-expansion
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: sample-mysql
+  volumeExpansion:
+    mode: "Online"
+    mysql: 2Gi
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `sample-mysql` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.mysql` specifies the desired volume size.
+- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`). The StorageClass `topolvm-provisioner` supports `Online` volume expansion.
+
+> **Note:** If the StorageClass you are using doesn't support `Online` volume expansion, try offline volume expansion by using `spec.volumeExpansion.mode: "Offline"`.
+
+Let's create the `MySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mysql/volume-expansion/volume-expansion/example/online-volume-expansion.yaml
+mysqlopsrequest.ops.kubedb.com/my-online-volume-expansion created
+```
+
+#### Verify MySQL volume expanded successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the volume size of the `MySQL` object and the related `StatefulSets` and `Persistent Volumes`.
+
+Let's wait for the `MySQLOpsRequest` to be `Successful`. Run the following command to watch the `MySQLOpsRequest` CR,
+
+```bash
+$ kubectl get mysqlopsrequest -n demo
+NAME                         TYPE              STATUS       AGE
+my-online-volume-expansion   VolumeExpansion   Successful   96s
+```
+
+We can see from the above output that the `MySQLOpsRequest` has succeeded. If we describe the `MySQLOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mysqlopsrequest -n demo my-online-volume-expansion
+Name:         my-online-volume-expansion
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         MySQLOpsRequest
+Metadata:
+  UID:  09a119aa-4f2a-4cb4-b620-2aa3a514df11
+Spec:
+  Database Ref:
+    Name:  sample-mysql
+  Type:    VolumeExpansion
+  Volume Expansion:
+    mysql:  2Gi
+    Mode:   Online
+Status:
+  Conditions:
+    Last Transition Time:  2022-01-07T06:38:29Z
+    Message:               Controller has started to Progress the MySQLOpsRequest: demo/my-online-volume-expansion
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-01-07T06:39:49Z
+    Message:               Online Volume Expansion performed successfully in MySQL pod for MySQLOpsRequest: demo/my-online-volume-expansion
+    Observed Generation:   1
+    Reason:                SuccessfullyVolumeExpanded
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2022-01-07T06:39:49Z
+    Message:               Controller has successfully expand the volume of MySQL demo/my-online-volume-expansion
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    2m1s   KubeDB Enterprise Operator  Start processing for MySQLOpsRequest: demo/my-online-volume-expansion
+  Normal  Starting    2m1s   KubeDB Enterprise Operator  Pausing MySQL databse: demo/sample-mysql
+  Normal  Successful  2m1s   KubeDB Enterprise Operator  Successfully paused MySQL database: demo/sample-mysql for MySQLOpsRequest: my-online-volume-expansion
+  Normal  Successful  41s    KubeDB Enterprise Operator  Online Volume Expansion performed successfully in MySQL pod for MySQLOpsRequest: demo/my-online-volume-expansion
+  Normal  Starting    41s    KubeDB Enterprise Operator  Updating MySQL storage
+  Normal  Successful  41s    KubeDB Enterprise Operator  Successfully Updated MySQL storage
+  Normal  Starting    41s    KubeDB Enterprise Operator  Resuming MySQL database: demo/sample-mysql
+  Normal  Successful  41s    KubeDB Enterprise Operator  Successfully resumed MySQL database: demo/sample-mysql
+  Normal  Successful  41s    KubeDB Enterprise Operator  Controller has Successfully expand the volume of MySQL: demo/sample-mysql
+
+```
+
+Now, we are going to verify from the `StatefulSet`, and the `Persistent Volumes`, whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo sample-mysql -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS          REASON   AGE
+pvc-331335d1-c8e0-4b73-9dab-dae57920e997   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-0   topolvm-provisioner            12m
+pvc-b90179f8-c40a-4273-ad77-74ca8470b782   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-1   topolvm-provisioner            12m
+pvc-f72411a4-80d5-4d32-b713-cb30ec662180   2Gi        RWO            Delete           Bound    demo/data-sample-mysql-2   topolvm-provisioner            12m
+```
+
+The above output verifies that we have successfully expanded the volume of the MySQL database.
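+
+You can also read the expanded size straight from the PVCs. This is a minimal sketch; the claim names follow the `data-<database-name>-<ordinal>` pattern visible in the `CLAIM` column above:
+
+```bash
+$ kubectl get pvc -n demo data-sample-mysql-0 -o jsonpath='{.status.capacity.storage}{"\n"}'
+2Gi
+```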
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete mysql -n demo sample-mysql +$ kubectl delete mysqlopsrequest -n demo my-online-volume-expansion +``` diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/README.md b/content/docs/v2024.1.31/guides/percona-xtradb/README.md new file mode 100644 index 0000000000..481a532dab --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/README.md @@ -0,0 +1,58 @@ +--- +title: PerconaXtraDB +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-overview + name: PerconaXtraDB + parent: guides-perconaxtradb + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/percona-xtradb/ +aliases: +- /docs/v2024.1.31/guides/percona-xtradb/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +## Supported PerconaXtraDB Features + +| Features | Availability | +| ------------------------------------------------------- | :----------: | +| Clustering | ✓ | +| Persistent Volume | ✓ | +| Instant Backup | ✓ | +| Scheduled Backup | ✓ | +| Initialize using Snapshot | ✓ | +| Custom Configuration | ✓ | +| Using Custom docker image | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Using Prometheus operator | ✓ | + +## Life Cycle of a PerconaXtraDB Object + +

+  lifecycle +

+
+## User Guide
+
+- [Quickstart PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview) with KubeDB Operator.
+- Detail concepts of [PerconaXtraDB object](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb).
+- Detail concepts of [PerconaXtraDBVersion object](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version).
+- Create [PerconaXtraDB Cluster](/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster).
+- Create [PerconaXtraDB with Custom Configuration](/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file).
+- Use [Custom RBAC](/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac).
+- Use [private Docker registry](/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart) to deploy PerconaXtraDB with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/_index.md
new file mode 100644
index 0000000000..76fa96474e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/_index.md
@@ -0,0 +1,22 @@
+---
+title: PerconaXtraDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb
+    name: PerconaXtraDB
+    parent: guides
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/_index.md
new file mode 100644
index 0000000000..1fa653575b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/_index.md
@@ -0,0 +1,22 @@
+---
+title: Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling
+    name: Autoscaling
+    parent: guides-perconaxtradb
+    weight: 47
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/_index.md
new file mode 100644
index 0000000000..81db5d305c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/_index.md
@@ -0,0 +1,22 @@
+---
+title: Compute Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling-compute
+    name: Compute Autoscaling
+    parent: guides-perconaxtradb-autoscaling
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/pxas-compute.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/pxas-compute.yaml
new file mode 100644
index 0000000000..086145d0f3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/pxas-compute.yaml
@@ -0,0 +1,24 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-compute
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    perconaxtradb:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 250m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      containerControlledValues: "RequestsAndLimits"
+      controlledResources: ["cpu", "memory"]
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/sample-pxc.yaml
new file mode 100644
index 0000000000..7c419484a1
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/examples/sample-pxc.yaml
@@ -0,0 +1,26 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/index.md
new file mode 100644
index 0000000000..3c3c559796
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/cluster/index.md
@@ -0,0 +1,498 @@
+---
+title: PerconaXtraDB Cluster Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling-compute-cluster
+    name: Cluster
+    parent: guides-perconaxtradb-autoscaling-compute
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Autoscaling the Compute Resource of a PerconaXtraDB Cluster Database
+
+This guide will show you how to use `KubeDB` to autoscale the compute resources, i.e., CPU and memory, of a PerconaXtraDB cluster database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community, Ops-Manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Autoscaling of Cluster Database
+
+Here, we are going to deploy a `PerconaXtraDB` Cluster using a version supported by the `KubeDB` operator. Then we are going to apply `PerconaXtraDBAutoscaler` to set up autoscaling.
+
+#### Deploy PerconaXtraDB Cluster
+
+In this section, we are going to deploy a PerconaXtraDB Cluster with version `8.0.26`. Then, in the next section, we will set up autoscaling for this database using the `PerconaXtraDBAutoscaler` CRD. Below is the YAML of the `PerconaXtraDB` CR that we are going to create,
+> If you want to autoscale a PerconaXtraDB `Standalone`, just remove `spec.replicas` from the yaml below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
+```
+
+Let's create the `PerconaXtraDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/autoscaler/compute/cluster/examples/sample-pxc.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait until `sample-pxc` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    14m
+```
+
+Let's check the Pod's container resources,
+
+```bash
+$ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the PerconaXtraDB resources,
+
+```bash
+$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the PerconaXtraDB.
+
+We are now ready to apply the `PerconaXtraDBAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a PerconaXtraDBAutoscaler Object.
+
+#### Create PerconaXtraDBAutoscaler Object
+
+In order to set up compute resource autoscaling for this database cluster, we have to create a `PerconaXtraDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `PerconaXtraDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-compute
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    perconaxtradb:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 250m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      containerControlledValues: "RequestsAndLimits"
+      controlledResources: ["cpu", "memory"]
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `sample-pxc` database.
+- `spec.compute.perconaxtradb.trigger` specifies that compute autoscaling is enabled for this database.
+- `spec.compute.perconaxtradb.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling can be initiated.
+- `spec.compute.perconaxtradb.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+If the difference between the current & recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.perconaxtradb.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.perconaxtradb.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.perconaxtradb.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.perconaxtradb.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions.apply` has two supported values: `IfReady` & `Always`.
+Use `IfReady` if you want to process the opsReq only when the database is Ready. And use `Always` if you want to process the execution of the opsReq irrespective of the database state.
+- `spec.opsRequestOptions.timeout` specifies the maximum time for each step of the opsRequest (e.g. `3m`).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+
+Let's create the `PerconaXtraDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/autoscaler/compute/cluster/examples/pxas-compute.yaml
+perconaxtradbautoscaler.autoscaling.kubedb.com/px-as-compute created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `perconaxtradbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get perconaxtradbautoscaler -n demo
+NAME            AGE
+px-as-compute   5m56s
+
+$ kubectl describe perconaxtradbautoscaler px-as-compute -n demo
+Name:         px-as-compute
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         PerconaXtraDBAutoscaler
+Metadata:
+  Creation Timestamp:  2022-09-16T11:26:58Z
+  Generation:          1
+  Managed Fields:
+    ...
+  Resource Version:  846645
+  UID:               44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58
+Spec:
+  Compute:
+    Perconaxtradb:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:                     1
+        Memory:                  1Gi
+      Min Allowed:
+        Cpu:                     250m
+        Memory:                  400Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  sample-pxc
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  3m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             10000
+        Index:              46
+        Weight:             555
+      Reference Timestamp:  2022-09-16T00:00:00Z
+      Total Weight:         2.648440345821337
+    First Sample Start:     2022-09-16T11:26:48Z
+    Last Sample Start:      2022-09-16T11:32:52Z
+    Last Update Time:       2022-09-16T11:33:02Z
+    Memory Histogram:
+      Bucket Weights:
+        Index:              1
+        Weight:             10000
+      Reference Timestamp:  2022-09-17T00:00:00Z
+      Total Weight:         1.391848625060675
+    Ref:
+      Container Name:     px-coordinator
+      Vpa Object Name:    sample-pxc
+    Total Samples Count:  19
+    Version:              v3
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             10000
+        Index:              3
+        Weight:             556
+      Reference Timestamp:  2022-09-16T00:00:00Z
+      Total Weight:         2.648440345821337
+    First Sample Start:     2022-09-16T11:26:48Z
+    Last Sample Start:      2022-09-16T11:32:52Z
+    Last Update Time:       2022-09-16T11:33:02Z
+    Memory Histogram:
+      Reference Timestamp:  2022-09-17T00:00:00Z
+    Ref:
+      Container Name:     perconaxtradb
+      Vpa Object Name:    sample-pxc
+    Total Samples Count:  19
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2022-09-16T11:27:07Z
+    Message:               Successfully created mariaDBOpsRequest demo/pxops-sample-pxc-6xc1kc
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2022-09-16T11:27:02Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  perconaxtradb
+        Lower Bound:
+          Cpu:     250m
+          Memory:  400Mi
+        Target:
+          Cpu:     250m
+          Memory:  400Mi
+        Uncapped Target:
+          Cpu:     25m
+          Memory:  262144k
+        Upper Bound:
+          Cpu:     1
+          Memory:  1Gi
+    Vpa Name:      sample-pxc
+Events:            <none>
+```
+
+So, the `perconaxtradbautoscaler` resource is created successfully.
+
+We can verify from the above output that `status.vpas` contains the `RecommendationProvided` condition set to true. At the same time, `status.vpas.recommendation.containerRecommendations` contains the actual generated recommendation.
+
+Our autoscaler operator continuously watches the generated recommendations and creates a `perconaxtradbopsrequest` based on them if the database pod resources need to be scaled up or down.
+
+Let's watch the `perconaxtradbopsrequest` in the demo namespace to see if any `perconaxtradbopsrequest` object is created. After some time you'll see that a `perconaxtradbopsrequest` will be created based on the recommendation.
+
+```bash
+$ kubectl get perconaxtradbopsrequest -n demo
+NAME                      TYPE              STATUS        AGE
+pxops-sample-pxc-6xc1kc   VerticalScaling   Progressing   7s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get perconaxtradbopsrequest -n demo
+NAME                          TYPE              STATUS       AGE
+pxops-vpa-sample-pxc-z43wc8   VerticalScaling   Successful   3m32s
+```
+
+We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest`, we will get an overview of the steps that were followed to scale the database.
+ +```bash +$ kubectl describe perconaxtradbopsrequest -n demo pxops-vpa-sample-pxc-z43wc8 +Name: pxops-sample-pxc-6xc1kc +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: PerconaXtraDBOpsRequest +Metadata: + Creation Timestamp: 2022-09-16T11:27:07Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58"}: + f:spec: + .: + f:apply: + f:databaseRef: + .: + f:name: + f:timeout: + f:type: + f:verticalScaling: + .: + f:perconaxtradb: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-09-16T11:27:07Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-09-16T11:27:07Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: PerconaXtraDBAutoscaler + Name: px-as-compute + UID: 44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58 + Resource Version: 846324 + UID: c2b30107-c6d3-44bb-adf3-135edc5d615b +Spec: + Apply: IfReady + Database Ref: + Name: sample-pxc + Timeout: 2m0s + Type: VerticalScaling + Vertical Scaling: + Mariadb: + Limits: + Cpu: 250m + Memory: 400Mi + Requests: + Cpu: 250m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-09-16T11:27:07Z + Message: Controller has started to Progress the PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-09-16T11:30:42Z + Message: Successfully restarted PerconaXtraDB pods for PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 2022-09-16T11:30:47Z + Message: Vertical scale successful for PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc + Observed Generation: 1 + Reason: SuccessfullyPerformedVerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-09-16T11:30:47Z + Message: Controller has successfully scaled the PerconaXtraDB demo/pxops-sample-pxc-6xc1kc + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 8m48s KubeDB Enterprise Operator Start processing for PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc + Normal Starting 8m48s KubeDB Enterprise Operator Pausing PerconaXtraDB databse: demo/sample-pxc + Normal Successful 8m48s KubeDB Enterprise Operator Successfully paused PerconaXtraDB database: demo/sample-pxc for PerconaXtraDBOpsRequest: pxops-sample-pxc-6xc1kc + Normal Starting 8m43s KubeDB Enterprise Operator Restarting Pod: demo/sample-pxc-0 + Normal Starting 7m33s KubeDB Enterprise Operator Restarting Pod: demo/sample-pxc-1 + Normal Starting 6m23s KubeDB Enterprise Operator Restarting Pod: demo/sample-pxc-2 + Normal Successful 5m13s KubeDB Enterprise Operator Successfully restarted PerconaXtraDB pods for PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc + Normal Successful 5m8s KubeDB Enterprise Operator Vertical scale successful for 
PerconaXtraDBOpsRequest: demo/pxops-sample-pxc-6xc1kc
+  Normal  Starting    5m8s   KubeDB Enterprise Operator  Resuming PerconaXtraDB database: demo/sample-pxc
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Successfully resumed PerconaXtraDB database: demo/sample-pxc
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Controller has Successfully scaled the PerconaXtraDB database: demo/sample-pxc
+```
+
+Now, we are going to verify from the Pod and the PerconaXtraDB YAML whether the resources of the cluster database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the PerconaXtraDB cluster database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete perconaxtradb -n demo sample-pxc
+kubectl delete perconaxtradbautoscaler -n demo px-as-compute
+kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/images/pxas-compute.png b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/images/pxas-compute.png
new file mode 100644
index 0000000000..dedce14be2
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/images/pxas-compute.png differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/index.md
new file mode 100644
index 0000000000..e64426cb81
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/compute/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: PerconaXtraDB Compute Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling-compute-overview
+    name: Overview
+    parent: guides-perconaxtradb-autoscaling-compute
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e., CPU and memory, using the `perconaxtradbautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Auto Scaling process of PerconaXtraDB +
Fig: Auto Scaling process of PerconaXtraDB
+
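+
+Before walking through the steps, it may help to see the kind of object that drives this process. The following is a minimal sketch of a `PerconaXtraDBAutoscaler` with only the compute section filled in; the field names follow the sample in the [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler) concept doc, while the resource values here are illustrative assumptions, not recommendations:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-compute          # illustrative name
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc           # the PerconaXtraDB object to autoscale
+  compute:
+    perconaxtradb:
+      trigger: "On"                # enable compute autoscaling
+      podLifeTimeThreshold: 5m     # minimum pod lifetime before autoscaling kicks in
+      minAllowed:                  # floor for recommendations (assumed values)
+        cpu: 250m
+        memory: 400Mi
+      maxAllowed:                  # ceiling for recommendations (assumed values)
+        cpu: "1"
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+```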
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, the user creates a `PerconaXtraDB` Custom Resource Object (CRO).
+
+2. `KubeDB` Community operator watches the `PerconaXtraDB` CRO.
+
+3. When the operator finds a `PerconaXtraDB` CRO, it creates the required number of `StatefulSets` along with related resources such as secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the CPU & Memory resources of the `PerconaXtraDB` database, the user creates a `PerconaXtraDBAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `PerconaXtraDBAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator utilizes a modified version of the official Kubernetes [VPA-Recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg) for different components of the database, as specified in the `perconaxtradbautoscaler` CRO.
+It generates recommendations based on resource usage and stores them in the `status` section of the autoscaler CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `PerconaXtraDBOpsRequest` CRO to scale the database to match the recommendation provided by the VPA object.
+
+8. `KubeDB Ops-Manager operator` watches the `PerconaXtraDBOpsRequest` CRO.
+
+9. Lastly, the `KubeDB Ops-Manager operator` will scale the database component vertically as specified in the `PerconaXtraDBOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling a PerconaXtraDB database using the `PerconaXtraDBAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..490f4c41a2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/_index.md
@@ -0,0 +1,22 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling-storage
+    name: Storage Autoscaling
+    parent: guides-perconaxtradb-autoscaling
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/pxas-storage.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/pxas-storage.yaml
new file mode 100644
index 0000000000..8606219610
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/pxas-storage.yaml
@@ -0,0 +1,14 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc
+  storage:
+    perconaxtradb:
+      trigger: "On"
+      usageThreshold: 20
+      scalingThreshold: 20
+      expansionMode: "Online"
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/sample-pxc.yaml
new file mode 100644
index 0000000000..cba8b8ed46
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/examples/sample-pxc.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/index.md new file mode 100644 index 0000000000..ea010a1a56 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/cluster/index.md @@ -0,0 +1,329 @@ +--- +title: PerconaXtraDB Cluster Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-autoscaling-storage-cluster + name: Cluster + parent: guides-perconaxtradb-autoscaling-storage + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Storage Autoscaling of a PerconaXtraDB Cluster + +This guide will show you how to use `KubeDB` to autoscale the storage of a PerconaXtraDB Replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community, Enterprise and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) + +- You must have a `StorageClass` that supports volume expansion. + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) + - [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest) + - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Storage Autoscaling of Cluster Database + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 79m +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 78m +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm) + +Now, we are going to deploy a `PerconaXtraDB` replicaset using a supported version by `KubeDB` operator. Then we are going to apply `PerconaXtraDBAutoscaler` to set up autoscaling. 
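+
+You can also check the same thing programmatically. This is an optional sanity check (it only assumes `kubectl` access to the cluster); the `allowVolumeExpansion` field is what the `ALLOWVOLUMEEXPANSION` column above reflects:
+
+```bash
+# Should print "true" for a storage class that supports volume expansion
+$ kubectl get storageclass topolvm-provisioner -o jsonpath='{.allowVolumeExpansion}'
+true
+```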
+
+#### Deploy PerconaXtraDB Cluster
+
+In this section, we are going to deploy a PerconaXtraDB cluster with version `8.0.26`. Then, in the next section we will set up autoscaling for this database using the `PerconaXtraDBAutoscaler` CRD. Below is the YAML of the `PerconaXtraDB` CR that we are going to create,
+
+> If you want to autoscale a standalone PerconaXtraDB, just remove `spec.replicas` from the YAML below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "topolvm-provisioner"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `PerconaXtraDB` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/autoscaler/storage/cluster/examples/sample-pxc.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait until `sample-pxc` has status `Ready`. i.e,
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    3m46s
+```
+
+Let's check the volume size from the statefulset and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS          REASON   AGE
+pvc-43266d76-f280-4cca-bd78-d13660a84db9   1Gi        RWO            Delete           Bound    demo/data-sample-pxc-2   topolvm-provisioner            57s
+pvc-4a509b05-774b-42d9-b36d-599c9056af37   1Gi        RWO            Delete           Bound    demo/data-sample-pxc-0   topolvm-provisioner            58s
+pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d   1Gi        RWO            Delete           Bound    demo/data-sample-pxc-1   topolvm-provisioner            57s
+```
+
+You can see the statefulset has 1Gi storage, and the capacity of each persistent volume is also 1Gi.
+
+We are now ready to apply the `PerconaXtraDBAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a PerconaXtraDBAutoscaler Object.
+
+#### Create PerconaXtraDBAutoscaler Object
+
+In order to set up storage autoscaling for this database, we have to create a `PerconaXtraDBAutoscaler` CRO with our desired configuration. Below is the YAML of the `PerconaXtraDBAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc
+  storage:
+    perconaxtradb:
+      trigger: "On"
+      usageThreshold: 20
+      scalingThreshold: 20
+      expansionMode: "Online"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `sample-pxc` database.
+- `spec.storage.perconaxtradb.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.perconaxtradb.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `20%` then storage autoscaling will be triggered.
+- `spec.storage.perconaxtradb.scalingThreshold` specifies the scaling threshold; the storage will be expanded by `20%` of the current amount.
+- `spec.storage.perconaxtradb.expansionMode` specifies the expansion mode of the volume expansion `PerconaXtraDBOpsRequest` created by `PerconaXtraDBAutoscaler`. Since topolvm-provisioner supports online volume expansion, `expansionMode` is set to `"Online"` here.
+
+Let's create the `PerconaXtraDBAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/autoscaler/storage/cluster/examples/pxas-storage.yaml
+perconaxtradbautoscaler.autoscaling.kubedb.com/px-as-st created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `perconaxtradbautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get perconaxtradbautoscaler -n demo
+NAME       AGE
+px-as-st   33s
+
+$ kubectl describe perconaxtradbautoscaler px-as-st -n demo
+Name:         px-as-st
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         PerconaXtraDBAutoscaler
+Metadata:
+  Creation Timestamp:  2022-01-14T06:08:02Z
+  Generation:          1
+  Managed Fields:
+    ...
+  Resource Version:  24009
+  UID:               4f45a3b3-fc72-4d04-b52c-a770944311f6
+Spec:
+  Database Ref:
+    Name:  sample-pxc
+  Storage:
+    Perconaxtradb:
+      Scaling Threshold:  20
+      Trigger:            On
+      Usage Threshold:    20
+Events:                   <none>
+```
+
+So, the `perconaxtradbautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the database pod and fill the database volume (`/var/lib/mysql`) using the following commands:
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -- bash
+root@sample-pxc-0:/ df -h /var/lib/mysql
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164 1014M  357M  658M  36% /var/lib/mysql
+root@sample-pxc-0:/ dd if=/dev/zero of=/var/lib/mysql/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.340877 s, 1.5 GB/s
+root@sample-pxc-0:/ df -h /var/lib/mysql
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/57cd4330-784f-42c1-bf8e-e743241df164 1014M  857M  158M  85% /var/lib/mysql
+```
+
+So, from the above output we can see that the storage usage has reached 85%, which exceeds the `usageThreshold` of 20%.
+
+Let's watch the `perconaxtradbopsrequest` in the demo namespace to see if any `perconaxtradbopsrequest` object is created. After some time you'll see that a `perconaxtradbopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ kubectl get perconaxtradbopsrequest -n demo
+NAME                     TYPE              STATUS        AGE
+mops-sample-pxc-xojkua   VolumeExpansion   Progressing   15s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get perconaxtradbopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+mops-sample-pxc-xojkua   VolumeExpansion   Successful   97s
+```
+
+We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe perconaxtradbopsrequest -n demo mops-sample-pxc-xojkua
+Name:         mops-sample-pxc-xojkua
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=sample-pxc
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=perconaxtradbs.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         PerconaXtraDBOpsRequest
+Metadata:
+  Creation Timestamp:  2022-01-14T06:13:10Z
+  Generation:          1
+  Managed Fields: ...
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  PerconaXtraDBAutoscaler
+    Name:                  px-as-st
+    UID:                   4f45a3b3-fc72-4d04-b52c-a770944311f6
+  Resource Version:        25557
+  UID:                     90763a49-a03f-407c-a233-fb20c4ab57d7
+Spec:
+  Database Ref:
+    Name:  sample-pxc
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Perconaxtradb:  1594884096
+Status:
+  Conditions:
+    Last Transition Time:  2022-01-14T06:13:10Z
+    Message:               Controller has started to Progress the PerconaXtraDBOpsRequest: demo/mops-sample-pxc-xojkua
+    Observed Generation:   1
+    Reason:                OpsRequestProgressingStarted
+    Status:                True
+    Type:                  Progressing
+    Last Transition Time:  2022-01-14T06:14:25Z
+    Message:               Volume Expansion performed successfully in PerconaXtraDB pod for PerconaXtraDBOpsRequest: demo/mops-sample-pxc-xojkua
+    Observed Generation:   1
+    Reason:                SuccessfullyVolumeExpanded
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2022-01-14T06:14:25Z
+    Message:               Controller has successfully expand the volume of PerconaXtraDB demo/mops-sample-pxc-xojkua
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     3
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    2m58s  KubeDB Enterprise Operator  Start processing for PerconaXtraDBOpsRequest: demo/mops-sample-pxc-xojkua
+  Normal  Starting    2m58s  KubeDB Enterprise Operator  Pausing PerconaXtraDB databse: demo/sample-pxc
+  Normal  Successful  2m58s  KubeDB Enterprise Operator  Successfully paused PerconaXtraDB database: demo/sample-pxc for PerconaXtraDBOpsRequest: mops-sample-pxc-xojkua
+  Normal  Successful  103s   KubeDB Enterprise Operator  Volume Expansion performed successfully in PerconaXtraDB pod for PerconaXtraDBOpsRequest: demo/mops-sample-pxc-xojkua
+  Normal  Starting    103s   KubeDB Enterprise Operator  Updating PerconaXtraDB storage
+  Normal  Successful  103s   KubeDB Enterprise Operator  Successfully Updated PerconaXtraDB storage
+  Normal  Starting    103s   KubeDB Enterprise Operator  Resuming PerconaXtraDB database: demo/sample-pxc
+  Normal  Successful  103s   KubeDB Enterprise Operator  Successfully resumed PerconaXtraDB database: demo/sample-pxc
+  Normal  Successful  103s   KubeDB Enterprise Operator  Controller has Successfully expand the volume of PerconaXtraDB: demo/sample-pxc
+```
+
+Now, let's verify from the `StatefulSet` and from the `PersistentVolume`s that the volume of the database has been expanded to meet the desired state:
+
+```bash
+$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1594884096"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS          REASON   AGE
+pvc-43266d76-f280-4cca-bd78-d13660a84db9   2Gi        RWO            Delete           Bound    demo/data-sample-pxc-2   topolvm-provisioner            23m
+pvc-4a509b05-774b-42d9-b36d-599c9056af37   2Gi        RWO            Delete           Bound    demo/data-sample-pxc-0   topolvm-provisioner            24m
+pvc-c27eee12-cd86-4410-b39e-b1dd735fc14d   2Gi        RWO            Delete           Bound    demo/data-sample-pxc-1   topolvm-provisioner            23m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the PerconaXtraDB cluster database.
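+
+If you later want to keep the autoscaler object around but temporarily stop it from creating further ops requests, one possible approach (a sketch based on the `trigger` field described in the [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler) concept doc) is to switch the trigger to `Off`:
+
+```bash
+# Sketch: disable the storage trigger without deleting the autoscaler
+$ kubectl patch perconaxtradbautoscaler -n demo px-as-st --type=merge \
+    -p '{"spec":{"storage":{"perconaxtradb":{"trigger":"Off"}}}}'
+```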
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete perconaxtradb -n demo sample-pxc
+kubectl delete perconaxtradbautoscaler -n demo px-as-st
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/images/pxas-storage.jpeg b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/images/pxas-storage.jpeg
new file mode 100644
index 0000000000..5a7316550a
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/images/pxas-storage.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/index.md
new file mode 100644
index 0000000000..b135506e5c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/autoscaler/storage/overview/index.md
@@ -0,0 +1,66 @@
+---
+title: PerconaXtraDB Storage Autoscaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-autoscaling-storage-overview
+    name: Overview
+    parent: guides-perconaxtradb-autoscaling-storage
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB Storage Autoscaling
+
+This guide gives an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `perconaxtradbautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBAutoscaler](/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the storage of `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Storage Autoscaling process of PerconaXtraDB +
Fig: Storage Autoscaling process of PerconaXtraDB
+
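+
+Before the step-by-step walkthrough, here is a minimal sketch of the object that drives this process. It mirrors the example used later in the cluster guide; the threshold values are illustrative:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: PerconaXtraDBAutoscaler
+metadata:
+  name: px-as-st
+  namespace: demo
+spec:
+  databaseRef:
+    name: sample-pxc           # the PerconaXtraDB object to autoscale
+  storage:
+    perconaxtradb:
+      trigger: "On"            # enable storage autoscaling
+      usageThreshold: 20       # expand once disk usage crosses 20%
+      scalingThreshold: 20     # grow by 20% of the current size
+      expansionMode: "Online"  # requires a storage class with online expansion
+```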
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `PerconaXtraDB` CR. + +3. When the operator finds a `PerconaXtraDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration. This Persistent Volume will be expanded by the `KubeDB` Enterprise operator. + +5. Then, in order to set up storage autoscaling of the `PerconaXtraDB` database the user creates a `PerconaXtraDBAutoscaler` CRO with desired configuration. + +6. `KubeDB` Autoscaler operator watches the `PerconaXtraDBAutoscaler` CRO. + +7. `KubeDB` Autoscaler operator continuously watches persistent volumes of the databases to check if it exceeds the specified usage threshold. + +8. If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `PerconaXtraDBOpsRequest` to expand the storage of the database. +9. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CRO. +10. Then the `KubeDB` Enterprise operator will expand the storage of the database component as specified on the `PerconaXtraDBOpsRequest` CRO. + +In the next docs, we are going to show a step-by-step guide on Autoscaling storage of various PerconaXtraDB database components using `PerconaXtraDBAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/clustering/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/_index.md new file mode 100644 index 0000000000..5164f90d73 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/_index.md @@ -0,0 +1,22 @@ +--- +title: PerconaXtraDB Clustering +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-clustering + name: PerconaXtraDB Clustering + parent: guides-perconaxtradb + weight: 30 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/examples/demo-1.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/examples/demo-1.yaml new file mode 100644 index 0000000000..d9d321ba58 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/examples/demo-1.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/index.md new file mode 100644 index 0000000000..b51842f1cf --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/index.md @@ -0,0 +1,582 @@ +--- +title: PerconaXtraDB Galera Cluster Guide +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-clustering-galeracluster + name: PerconaXtraDB Galera Cluster Guide + parent: guides-perconaxtradb-clustering + weight: 20 
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB - PerconaXtraDB Cluster
+
+This tutorial will show you how to create a 3-node PerconaXtraDB Cluster using KubeDB.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [perconaxtradb galera cluster concept](/docs/v2024.1.31/guides/percona-xtradb/clustering/overview) to learn about PerconaXtraDB Galera Cluster.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: The yaml files used in this tutorial are stored in the [docs/guides/percona-xtradb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy PerconaXtraDB Cluster
+
+The following is an example `PerconaXtraDB` object which creates a multi-master PerconaXtraDB group with three members.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/clustering/galera-cluster/examples/demo-1.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Here,
+
+- `spec.replicas` is the number of nodes in the cluster.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+KubeDB operator watches for `PerconaXtraDB` objects using the Kubernetes API. When a `PerconaXtraDB` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching PerconaXtraDB object name. KubeDB operator will also create a governing service for the StatefulSet with the name `<database-name>-pods`.
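+
+Each pod also gets a stable DNS record through that governing service (e.g. `sample-pxc-0.sample-pxc-pods.demo.svc`). A quick way to inspect the per-pod records, assuming a throwaway `busybox` pod is acceptable in your cluster, is:
+
+```bash
+# Sketch: resolve the headless (governing) service; expect one A record per member
+$ kubectl run -n demo dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
+    nslookup sample-pxc-pods.demo.svc
+```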
+ +```bash +$ kubectl get perconaxtradb -n demo sample-pxc -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"PerconaXtraDB","metadata":{"annotations":{},"name":"sample-pxc","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","version":"8.0.26"}} + creationTimestamp: "2022-12-20T05:15:56Z" + finalizers: + - kubedb.com + generation: 4 + name: sample-pxc + namespace: demo + resourceVersion: "8919" + uid: 5202f646-1f14-4008-9034-cddd481a0ea3 +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: sample-pxc-auth + autoOps: {} + coordinator: + resources: {} + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: sample-pxc + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: perconaxtradbs.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: sample-pxc + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: perconaxtradbs.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + securityContext: + fsGroup: 1001 + runAsGroup: 1001 + runAsUser: 1001 + serviceAccountName: sample-pxc + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + systemUserSecrets: + monitorUserSecret: + name: sample-pxc-monitor + replicationUserSecret: + name: sample-pxc-replication + terminationPolicy: WipeOut + version: 8.0.26 +status: + conditions: + - lastTransitionTime: "2022-12-20T05:15:56Z" + message: 'The KubeDB operator has started the provisioning of PerconaXtraDB: demo/sample-pxc' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-12-20T05:17:30Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-12-20T05:19:02Z" + message: database sample-pxc/demo is ready + observedGeneration: 4 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-12-20T05:18:12Z" + message: database sample-pxc/demo is accepting connection + observedGeneration: 4 + reason: AcceptingConnection + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-12-20T05:19:07Z" + message: 'The PerconaXtraDB: demo/sample-pxc is successfully provisioned.' 
+ observedGeneration: 4 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 4 + phase: Ready + +$ kubectl get sts,svc,secret,pvc,pv,pod -n demo +NAME READY AGE +statefulset.apps/sample-pxc 3/3 7m5s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/sample-pxc ClusterIP 10.96.207.41 3306/TCP 7m11s +service/sample-pxc-pods ClusterIP None 3306/TCP 7m11s + +NAME TYPE DATA AGE +secret/default-token-bbgjp kubernetes.io/service-account-token 3 7m19s +secret/sample-pxc-auth kubernetes.io/basic-auth 2 7m11s +secret/sample-pxc-monitor kubernetes.io/basic-auth 2 7m11s +secret/sample-pxc-replication kubernetes.io/basic-auth 2 7m11s +secret/sample-pxc-token-gbzg6 kubernetes.io/service-account-token 3 7m11s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/data-sample-pxc-0 Bound pvc-cb4f41de-1ead-4124-98a7-e3e950c8d10f 1Gi RWO standard 7m5s +persistentvolumeclaim/data-sample-pxc-1 Bound pvc-3c2887f5-5a7c-4df3-b7ca-34a6bcf91904 1Gi RWO standard 7m5s +persistentvolumeclaim/data-sample-pxc-2 Bound pvc-521f81f1-6261-4252-a2a8-32bfe472000e 1Gi RWO standard 7m5s + +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +persistentvolume/pvc-3c2887f5-5a7c-4df3-b7ca-34a6bcf91904 1Gi RWO Delete Bound demo/data-sample-pxc-1 standard 7m3s +persistentvolume/pvc-521f81f1-6261-4252-a2a8-32bfe472000e 1Gi RWO Delete Bound demo/data-sample-pxc-2 standard 7m1s +persistentvolume/pvc-cb4f41de-1ead-4124-98a7-e3e950c8d10f 1Gi RWO Delete Bound demo/data-sample-pxc-0 standard 7m2s + +NAME READY STATUS RESTARTS AGE +pod/sample-pxc-0 2/2 Running 0 7m5s +pod/sample-pxc-1 2/2 Running 0 7m5s +pod/sample-pxc-2 2/2 Running 0 7m5s + +``` + +## Connect with PerconaXtraDB database + +Once the database is in running state we can connect to each of three nodes. We will use login credentials `MYSQL_ROOT_USERNAME` and `MYSQL_ROOT_PASSWORD` saved as container's environment variable. + +```bash +# First Node +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 133 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT 1; ++---+ +| 1 | ++---+ +| 1 | ++---+ +1 row in set (0.00 sec) + +mysql> quit; +Bye + + +# Second Node +$ kubectl exec -it -n demo sample-pxc-1 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. 
+Your MySQL connection id is 123 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT 1; ++---+ +| 1 | ++---+ +| 1 | ++---+ +1 row in set (0.00 sec) + +mysql> quit; +Bye + + +# Third Node +$ kubectl exec -it -n demo sample-pxc-2 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 139 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT 1; ++---+ +| 1 | ++---+ +| 1 | ++---+ +1 row in set (0.00 sec) + +mysql> quit; +Bye + +``` + +## Check the Cluster Status + +Now, we are ready to check newly created cluster status. Connect and run the following commands from any of the hosts and you will get the same result, that is the cluster size is three. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 231 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show status like 'wsrep_cluster_size'; ++--------------------+-------+ +| Variable_name | Value | ++--------------------+-------+ +| wsrep_cluster_size | 3 | ++--------------------+-------+ +1 row in set (0.00 sec) + +mysql> quit; +Bye + +``` + +## Data Availability + +In a PerconaXtraDB Galera Cluster, Each member can read and write. In this section, we will insert data from any nodes, and we will see whether we can get the data from every other members. + +> Read the comment written for the following commands. They contain the instructions and explanations of the commands. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. 
Commands end with ; or \g. +Your MySQL connection id is 260 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> CREATE DATABASE playground; +Query OK, 1 row affected (0.01 sec) + +mysql> CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id)); +Query OK, 0 rows affected (0.02 sec) + +mysql> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 2, 'blue'); +Query OK, 1 row affected (0.00 sec) + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +1 row in set (0.00 sec) + +mysql> quit; +Bye +bash-4.4$ exit +exit + +$ kubectl exec -it -n demo sample-pxc-2 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 253 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | ++----+-------+-------+-------+ +1 row in set (0.00 sec) + +mysql> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 4, 'red'); +Query OK, 1 row affected (0.00 sec) + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | +| 6 | slide | 4 | red | ++----+-------+-------+-------+ +2 rows in set (0.00 sec) + +mysql> quit; +Bye +bash-4.4$ exit +exit + +$ kubectl exec -it -n demo sample-pxc-2 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 283 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
+ +mysql> INSERT INTO playground.equipment (type, quant, color) VALUES ('slide', 4, 'red'); +Query OK, 1 row affected (0.00 sec) + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | +| 6 | slide | 4 | red | +| 9 | slide | 4 | red | ++----+-------+-------+-------+ +3 rows in set (0.00 sec) + +mysql> quit; +Bye +bash-4.4$ exit +exit +``` + +## Automatic Failover + +To test automatic failover, we will force the one of three pods to restart and check if it can rejoin the cluster. + +> Read the comment written for the following commands. They contain the instructions and explanations of the commands. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 332 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | +| 6 | slide | 4 | red | +| 9 | slide | 4 | red | ++----+-------+-------+-------+ +3 rows in set (0.00 sec) + +mysql> quit; +Bye +bash-4.4$ exit +exit + + +# Forcefully delete Node 1 +~ $ kubectl delete pod -n demo sample-pxc-0 +pod "sample-pxc-0" deleted + +# Wait for sample-pxc-0 to restart +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 49 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> SELECT * FROM playground.equipment; ++----+-------+-------+-------+ +| id | type | quant | color | ++----+-------+-------+-------+ +| 1 | slide | 2 | blue | +| 6 | slide | 4 | red | +| 9 | slide | 4 | red | ++----+-------+-------+-------+ +3 rows in set (0.00 sec) + +# Check cluster size +mysql [(none)]> show status like 'wsrep_cluster_size'; ++--------------------+-------+ +| Variable_name | Value | ++--------------------+-------+ +| wsrep_cluster_size | 3 | ++--------------------+-------+ +1 row in set (0.002 sec) + +mysql> quit +Bye +``` + +## Cleaning up + +Clean what we created in this tutorial. 
+ +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +perconaxtradb.kubedb.com "sample-pxc" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/clustering/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/overview/index.md new file mode 100644 index 0000000000..b125ff90d1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/clustering/overview/index.md @@ -0,0 +1,54 @@ +--- +title: PerconaXtraDB Galera Cluster Overview +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-clustering-overview + name: PerconaXtraDB Galera Cluster Overview + parent: guides-perconaxtradb-clustering + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# PerconaXtraDB Galera Cluster + +Here we'll discuss some concepts about PerconaXtraDB Galera Cluster. + +## Galera Clustering + +PerconaXtraDB Galera Cluster is a `virtually synchronous` multi-master cluster for PerconaXtraDB. The Server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client connects directly to the DBMS and experiences behavior that is similar to native PerconaXtraDB in most cases. The wsrep API (write set replication API) defines the interface between Galera replication and PerconaXtraDB. + +Ref: [About Galera Replication](https://galeracluster.com/library/documentation/tech-desc-introduction.html) + +## PerconaXtraDB Galera Cluster Features + +- Virtually synchronous replication +- Active-active multi-master topology +- Read and write to any cluster node +- Automatic membership control, failed nodes drop from the cluster +- Automatic node joining +- True parallel replication, on row level +- Direct client connections, native PerconaXtraDB look & feel + +Ref: [Common Operations of PerconaXtraDB Galera Cluster and Group Replication?](https://www.percona.com/blog/2020/04/28/group-replication-and-percona-xtradb-cluster-overview-of-common-operations/) + +### Limitations + +There are some limitations in PerconaXtraDB Galera Cluster that are listed [here](https://docs.percona.com/percona-xtradb-cluster/8.0/limitation.html). + +## Next Steps + +- [Deploy PerconaXtraDB Galera Cluster](/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster) using KubeDB. 
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/_index.md new file mode 100644 index 0000000000..7dd3771b1a --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: PerconaXtraDB Concepts +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-concepts + name: Concepts + parent: guides-perconaxtradb + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/appbinding/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/appbinding/index.md new file mode 100644 index 0000000000..dbea5e523c --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/appbinding/index.md @@ -0,0 +1,152 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-concepts-appbinding + name: AppBinding + parent: guides-perconaxtradb-concepts + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 
+ +An `AppBinding` object created by `KubeDB` for PerconaXtraDB database is shown below, + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: sample-pxc + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: perconaxtradbs.kubedb.com + name: sample-pxc + namespace: demo +spec: + clientConfig: + caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVVUg1V24wOSt6MnR6RU5ESnF4N1AxZFg5aWM4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNVEF5TURrdwpPVEkxTWpCYUZ3MHlNakF5TURrd09USTFNakJhTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM3ZDl5YUtMQ3UKYy9NclRBb0NkV1VORld3ckdqbVdvUEVTRWNMR0pjT0JVSTZ5NXZ5QXVGMG1TakZvNzR3SEdSbWRmS2ExMWh0Ygo4TWZ2UFNwWXVGWFpUSi9GbnkwNnU2ekZMVm5xV2h3MUdiZ2ZCUE5XK0w1ZGkzZmVjanBEZmtLbTcrd0ZUVnNmClVzWGVVcUR0VHFpdlJHVUQ5aURUTzNTUmdheVI5U0J0RnRxcHRtV0YrODFqZGlSS2pRTVlCVGJ2MDRueW9UdHUKK0hJZlFjbE40Q1p3NzJPckpUdFdiYnNiWHVlMU5RZU9nQzJmSVhkZEF0WEkxd3lOT04zckxuTFF1SUIrakVLSQpkZTlPandKSkJhSFVzRVZEbllnYlJLSTdIcVdFdk5kL29OU2VZRXF2TXk3K1hwTFV0cDBaVXlxUDV2cC9PSXJ3CmlVMWVxZGNZMzJDcEFnTUJBQUdqVXpCUk1CMEdBMVVkRGdRV0JCUlNnNDVpazFlT3lCU1VKWHkvQllZVDVLck8KeWpBZkJnTlZIU01FR0RBV2dCUlNnNDVpazFlT3lCU1VKWHkvQllZVDVLck95akFQQmdOVkhSTUJBZjhFQlRBRApBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCNTlhNlFGQm1YMTh1b1dzQ3dGT1Y0R25GYnVBL2NoaVN6CkFwRVVhcjI1L2RNK2RpT0RVNkJuMXM3Wmpqem45WU9aVVJJL3UyRGRhdmNnQUNYR2FXWHJhSS84UUYxZDB3OGsKNXFlRmMxQWE5UEhFeEsxVm1xb21MV2xhMkdTOW1EbFpEOEtueDdjU3lpRmVqRXJYdWtld1B1VXA0dUUzTjAraApwQjQ0MDVRU2d4VVc3SmVhamFQdTNXOHgyRUFKMnViTkdMVEk5L0x4V1Z0YkxGcUFoSFphbGRjaXpOSHdTUGYzCkdMWEo3YTBWTW1JY0NuMWh5a0k2UkNrUTRLSE9tbDNOcXRRS2F5RnhUVHVpdzRiZ3k3czA1UnNzRlVUaWN1VmcKc3hnMjFVQUkvYW9WaXpQOVpESGE2TmV0YnpNczJmcmZBeHhBZk9pWDlzN1JuTmM0WHd4VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: sample-pxc + port: 3306 + scheme: mysql + secret: + name: sample-pxc-auth + type: kubedb.com/perconaxtradb + version: 8.0.26 +``` + +Here, we are going to describe the sections of an `AppBinding` crd. + +### AppBinding `Spec` + +An `AppBinding` object has the following fields in the `spec` section: + +#### spec.type + +`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object. + +This field follows the following format: `/`. The above AppBinding is pointing to a `perconaxtradb` resource under `kubedb.com` group. + +Here, the variables are parsed as follows: + +| Variable | Usage | +| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- | +| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). | +| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `perconaxtradb`). | +| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/perconaxtradb`). 
|
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                           |
+| ---------- | ----------------------------------------------- |
+| `username` | Username of the target database.                |
+| `password` | Password for the user specified by `username`.  |
+
+PerconaXtraDB :
+
+| Key        | Usage                                           |
+| ---------- | ----------------------------------------------- |
+| `username` | Username of the target database.                |
+| `password` | Password for the user specified by `username`.  |
+
+MongoDB :
+
+| Key        | Usage                                           |
+| ---------- | ----------------------------------------------- |
+| `username` | Username of the target database.                |
+| `password` | Password for the user specified by `username`.  |
+
+Elasticsearch :
+
+| Key              | Usage                   |
+| ---------------- | ----------------------- |
+| `ADMIN_USERNAME` | Admin username          |
+| `ADMIN_PASSWORD` | Password for admin user |
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+  Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
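+
+Putting these fields together, a manually created `AppBinding` for a database running outside the cluster might look like the following sketch (the host, names, and secret here are placeholders, not values taken from this guide):
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: external-pxc           # hypothetical name
+  namespace: demo
+spec:
+  type: kubedb.com/perconaxtradb
+  clientConfig:
+    # URL form, since the database lives outside the cluster; credentials are
+    # intentionally NOT embedded in the URL -- they come from spec.secret
+    url: mysql://external-pxc.example.com:3306
+  secret:
+    name: external-pxc-auth    # must contain the `username` and `password` keys
+```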
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler/index.md new file mode 100644 index 0000000000..8c57fd530d --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/autoscaler/index.md @@ -0,0 +1,107 @@ +--- +title: PerconaXtraDBAutoscaler CRD +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-concepts-autoscaler + name: PerconaXtraDBAutoscaler + parent: guides-perconaxtradb-concepts + weight: 26 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# PerconaXtraDBAutoscaler + +## What is PerconaXtraDBAutoscaler + +`PerconaXtraDBAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [PerconaXtraDB](https://docs.percona.com/percona-xtradb-cluster/8.0//) compute resources and storage of database components in a Kubernetes native way. + +## PerconaXtraDBAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `PerconaXtraDBAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `PerconaXtraDBAutoscaler` CROs for autoscaling different components of database is given below: + +**Sample `PerconaXtraDBAutoscaler` for PerconaXtraDB:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: PerconaXtraDBAutoscaler +metadata: + name: px-as + namespace: demo +spec: + databaseRef: + name: sample-pxc + compute: + perconaxtradb: + trigger: "On" + podLifeTimeThreshold: 5m + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + storage: + perconaxtradb: + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 + expansionMode: "Online" +``` + +Here, we are going to describe the various sections of a `PerconaXtraDBAutoscaler` crd. + +A `PerconaXtraDBAutoscaler` object has the following fields in the `spec` section. + +### spec.databaseRef + +`spec.databaseRef` is a required field that point to the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) object for which the autoscaling will be performed. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) object. + +### spec.compute + +`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the database components. This field consists of the following sub-field: + +- `spec.compute.perconaxtradb` indicates the desired compute autoscaling configuration for a PerconaXtraDB standalone or cluster. + +All of them has the following sub-fields: + +- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled. +- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum. +- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum. 
+- `controlledResources` specifies which types of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
+- `InMemoryScalingThreshold` specifies the percentage of memory that will be passed as `inMemorySizeGB` for the in-memory database engine, which is only available for the Percona variant.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-field:
+
+- `spec.storage.perconaxtradb` indicates the desired storage autoscaling configuration for a PerconaXtraDB standalone or cluster.
+
+It has the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, then storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` specifies the mode of volume expansion when the storage autoscaler performs a volume expansion OpsRequest. The default value is `Online`.
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/index.md
new file mode 100644
index 0000000000..52f5935098
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/index.md
@@ -0,0 +1,361 @@
+---
+title: PerconaXtraDBOpsRequest CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-concepts-perconaxtradbopsrequest
+    name: PerconaXtraDBOpsRequest
+    parent: guides-perconaxtradb-concepts
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDBOpsRequest
+
+## What is PerconaXtraDBOpsRequest
+
+`PerconaXtraDBOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for [PerconaXtraDB](https://docs.percona.com/percona-xtradb-cluster/8.0//) administrative operations, such as updating the database version, horizontal scaling, vertical scaling, etc., in a Kubernetes native way.
+
+## PerconaXtraDBOpsRequest CRD Specifications
+
+Like any official Kubernetes resource, a `PerconaXtraDBOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+ +Here, some sample `PerconaXtraDBOpsRequest` CRs for different administrative operations is given below: + +**Sample `PerconaXtraDBOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-version-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: sample-pxc + updateVersion: + targetVersion: 8.0.26 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `PerconaXtraDBOpsRequest` Objects for Horizontal Scaling of database cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-scale-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-pxc + horizontalScaling: + member : 5 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `PerconaXtraDBOpsRequest` Objects for Vertical Scaling of the database cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-scale-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: sample-pxc + verticalScaling: + perconaxtradb: + resources: + requests: + memory: "600Mi" + cpu: "0.1" + limits: + memory: "600Mi" + cpu: "0.1" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `PerconaXtraDBOpsRequest` Objects for Reconfiguring PerconaXtraDB Database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-reconfigure + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + inlineConfig: | + max_connections = 300 + read_buffer_size = 1234567 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `PerconaXtraDBOpsRequest` Objects for Volume Expansion of PerconaXtraDB:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-volume-expansion + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: sample-pxc + volumeExpansion: + mode: "Online" + perconaxtradb: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `PerconaXtraDBOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-recon-tls-add + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-pxc + tls: + requireSSL: true + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: 
px-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: px-recon-tls-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    rotateCertificates: true
+```
+
+Here, we are going to describe the various sections of a `PerconaXtraDBOpsRequest` crd.
+
+A `PerconaXtraDBOpsRequest` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) object for which the administrative operations will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) object.
+
+### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `PerconaXtraDBOpsRequest`.
+
+- `Upgrade` / `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `Restart`
+
+> You can perform only one type of operation with a single `PerconaXtraDBOpsRequest` CR. For example, if you want to update your database and scale up its replicas, you have to create two separate `PerconaXtraDBOpsRequest` CRs. First, create a `PerconaXtraDBOpsRequest` for updating. Once it is completed, you can create another `PerconaXtraDBOpsRequest` for scaling. You should not create two `PerconaXtraDBOpsRequest` CRs simultaneously.
+
+### spec.updateVersion
+
+If you want to update your PerconaXtraDB version, you have to specify the `spec.updateVersion` section, which holds the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [PerconaXtraDBVersion](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version/) CR that contains the PerconaXtraDB version information to which you want to update.
+
+> You can only update between PerconaXtraDB versions. KubeDB does not support downgrade for PerconaXtraDB.
+
+### spec.horizontalScaling
+
+If you want to scale up or scale down your PerconaXtraDB cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-field:
+
+- `spec.horizontalScaling.member` indicates the desired number of nodes for the PerconaXtraDB cluster after scaling. For example, if your cluster currently has 4 nodes, and you want to add 2 additional nodes, then you have to specify 6 in the `spec.horizontalScaling.member` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.member` field.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `PerconaXtraDB` resources, such as `cpu` and `memory`, that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.perconaxtradb` indicates the desired resources for the PerconaXtraDB standalone or cluster after scaling.
+- `spec.verticalScaling.exporter` indicates the desired resources for the `exporter` container.
+- `spec.verticalScaling.coordinator` indicates the desired resources for the `coordinator` container.
+
+Each of them has the following structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature the storage class must support volume expansion.
+
+If you want to expand the volume of your PerconaXtraDB standalone or cluster, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.perconaxtradb` indicates the desired size for the persistent volume of a PerconaXtraDB.
+- `spec.volumeExpansion.mode` indicates the mode of volume expansion. It can be `Online` or `Offline` based on the storage class.
+
+The size field refers to the Quantity type of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    perconaxtradb: "2Gi"
+```
+
+This will expand the volume size of all the perconaxtradb nodes to 2Gi.
+
+### spec.configuration
+
+If you want to reconfigure your running PerconaXtraDB cluster with new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `configSecret` points to a secret in the same namespace of a PerconaXtraDB resource, which contains the new custom configurations. If there was any configSecret set before in the database, this secret will replace it.
+- `inlineConfig` contains the new custom config as a string which will be merged with the previous configuration.
+- `removeCustomConfig` removes all the custom configs of the PerconaXtraDB server.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this database.
+- `spec.tls.remove` specifies that we want to remove TLS from this database.
+
+### PerconaXtraDBOpsRequest `Status`
+
+`.status` describes the current state and progress of a `PerconaXtraDBOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `PerconaXtraDBOpsRequest`.
It can have the following three values:
+
+| Phase      | Meaning                                                                                   |
+| ---------- | ----------------------------------------------------------------------------------------- |
+| Successful | KubeDB has successfully performed the operation requested in the PerconaXtraDBOpsRequest   |
+| Failed     | KubeDB has failed to perform the operation requested in the PerconaXtraDBOpsRequest        |
+| Denied     | KubeDB has denied the operation requested in the PerconaXtraDBOpsRequest                   |
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `PerconaXtraDBOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `PerconaXtraDBOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. PerconaXtraDBOpsRequest has the following types of conditions:
+
+| Type               | Meaning                                                                       |
+| ------------------ | ----------------------------------------------------------------------------- |
+| `Progressing`      | Specifies that the operation is now in the progressing state                  |
+| `Successful`       | Specifies that the operation on the database was successful                   |
+| `Failed`           | Specifies that the operation on the database failed                           |
+| `ScaleDownCluster` | Specifies that the scale down operation of the cluster is in progress         |
+| `ScaleUpCluster`   | Specifies that the scale up operation of the cluster is in progress           |
+| `VolumeExpansion`  | Specifies that the volume expansion operation of the database is in progress  |
+| `Reconfigure`      | Specifies that the reconfiguration of the cluster nodes is in progress        |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+  - `status` will be `True` if the current transition succeeded.
+  - `status` will be `False` if the current transition failed.
+  - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version/index.md
new file mode 100644
index 0000000000..a877105eec
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version/index.md
@@ -0,0 +1,117 @@
+---
+title: PerconaXtraDBVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-concepts-perconaxtradbversion
+    name: PerconaXtraDBVersion
+    parent: guides-perconaxtradb-concepts
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDBVersion
+
+## What is PerconaXtraDBVersion
+
+`PerconaXtraDBVersion` is a Kubernetes `Custom Resource Definition` (CRD).
It provides a declarative configuration to specify the docker images to be used for the [PerconaXtraDB](https://docs.percona.com/percona-xtradb-cluster/8.0/) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `PerconaXtraDBVersion` custom resource will be created automatically for every supported PerconaXtraDB version. You have to specify the name of the `PerconaXtraDBVersion` crd in the `spec.version` field of the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) crd. Then, KubeDB will use the docker images specified in the `PerconaXtraDBVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## PerconaXtraDBVersion Specification
+
+As with all other Kubernetes objects, a PerconaXtraDBVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: PerconaXtraDBVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2022-12-19T09:39:14Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2022.12.13-rc.0
+    helm.sh/chart: kubedb-catalog-v2022.12.13-rc.0
+  name: 8.0.26
+  resourceVersion: "1611"
+  uid: 38161f93-0501-4caf-98a5-4d8d168951ca
+spec:
+  coordinator:
+    image: kubedb/percona-xtradb-coordinator:v0.3.0-rc.0
+  db:
+    image: percona/percona-xtradb-cluster:8.0.26
+  exporter:
+    image: prom/mysqld-exporter:v0.13.0
+  initContainer:
+    image: kubedb/percona-xtradb-init:0.2.0
+  podSecurityPolicies:
+    databasePolicyName: percona-xtradb-db
+  stash:
+    addon:
+      backupTask:
+        name: perconaxtradb-backup-5.7
+      restoreTask:
+        name: perconaxtradb-restore-5.7
+  version: 8.0.26
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `PerconaXtraDBVersion` crd. You have to specify this name in the `spec.version` field of the [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) crd.
+
+We follow this convention for naming the PerconaXtraDBVersion crd:
+
+- Name format: `{Original PerconaXtraDB image version}-{modification tag}`
+
+We modify the original PerconaXtraDB docker image to support additional features. An image with a higher modification tag will have more features than an image with a lower modification tag. Hence, it is recommended to use the PerconaXtraDBVersion crd with the highest modification tag to take advantage of the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the PerconaXtraDB database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the database and other respective resources for this version.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image that the KubeDB operator will use to create the StatefulSet for the expected PerconaXtraDB database.
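+
+If you want to check which versions are available in your cluster and which db image a version points to, a quick check like the following should work (the version name `8.0.26` is just an example):
+
+```bash
+# List all PerconaXtraDBVersion resources installed by the catalog
+$ kubectl get perconaxtradbversion
+
+# Print the db image used by a specific version
+$ kubectl get perconaxtradbversion 8.0.26 -o jsonpath='{.spec.db.image}'
+```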
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image which will be used to remove the `lost+found` directory and mount an `EmptyDir` data volume.
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running.
+
+### spec.stash
+
+`spec.stash` is an optional field that specifies the name of the task for Stash backup and restore. Learn more about the [Stash PerconaXtraDB addon](https://stash.run/docs/v2022.12.11/addons/percona-xtradb/).
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/index.md
new file mode 100644
index 0000000000..5f11e911ac
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/index.md
@@ -0,0 +1,339 @@
+---
+title: PerconaXtraDB CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-concepts-perconaxtradb
+    name: PerconaXtraDB
+    parent: guides-perconaxtradb-concepts
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB
+
+## What is PerconaXtraDB
+
+`PerconaXtraDB` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [PerconaXtraDB](https://docs.percona.com/percona-xtradb-cluster/8.0//) in a Kubernetes native way. You only need to describe the desired database configuration in a PerconaXtraDB object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## PerconaXtraDB Spec
+
+As with all other Kubernetes objects, a PerconaXtraDB needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example PerconaXtraDB object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: px-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+    requireSSL: true
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+  terminationPolicy: WipeOut
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [PerconaXtraDBVersion](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `PerconaXtraDBVersion` resources,
+
+- `8.0.26`, `8.0.28`
+
+### spec.replicas
+
+`spec.replicas` is the number of replicas in the database cluster. The default value of the Percona XtraDB Cluster size is 3.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `perconaxtradb` root user. If not set, the KubeDB operator creates a new Secret `{perconaxtradb-object-name}-auth` for storing the password for the `perconaxtradb` root user for each PerconaXtraDB object. If you want to use an existing secret, please specify that when creating the PerconaXtraDB object using `spec.authSecret.name`.
+
+This secret contains a `user` key and a `password` key, which contain the username and password respectively for the `perconaxtradb` root user. Here, the value of the `user` key is fixed to be `root`.
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+Example:
+
+```bash
+kubectl create secret generic perconaxtradb-auth -n demo \
+    --from-literal=user=root \
+    --from-literal=password=6q8u_2jMOW-OOZXk
+secret/perconaxtradb-auth created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  user: cm9vdA==
+kind: Secret
+metadata:
+  name: perconaxtradb-auth
+  namespace: demo
+type: Opaque
+```
+
+### spec.systemUserSecrets
+
+`spec.systemUserSecrets` points to the secrets of the system users inside the PerconaXtraDB cluster. Currently, KubeDB is using the `monitor` and `replication` system user secrets. In the listing below, `sample-pxc-monitor` and `sample-pxc-replication` are the system user secrets under the `sample-pxc` PerconaXtraDB object.
+
+```bash
+$ kubectl get secret -n demo
+NAME                     TYPE                                  DATA   AGE
+default-token-r556j      kubernetes.io/service-account-token   3      157m
+sample-pxc-auth          kubernetes.io/basic-auth              2      157m
+sample-pxc-monitor       kubernetes.io/basic-auth              2      157m
+sample-pxc-replication   kubernetes.io/basic-auth              2      157m
+sample-pxc-token-p25ww   kubernetes.io/service-account-token   3      141m
+```
+
+### spec.storageType
+
+`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the PerconaXtraDB database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field.
+
+### spec.storage
+
+If you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.init
+
+`spec.init` is an optional section that can be used to initialize a newly created PerconaXtraDB database. PerconaXtraDB databases can be initialized in one of two ways:
+
+- Initialize from Script
+- Initialize from Stash Restore
+
+### spec.monitor
+
+PerconaXtraDB managed by KubeDB can be monitored with the builtin Prometheus and Prometheus operator out-of-the-box.
+
+### spec.requireSSL
+
+`spec.requireSSL` specifies whether the client connections require SSL. If `spec.requireSSL` is `true`, then the server permits only TCP/IP connections that use SSL, or connections that use a socket file (on Unix) or shared memory (on Windows). The server rejects any non-secure connection attempt.
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for the PerconaXtraDB.
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+  - `apiGroup` is the group name of the resource being referenced. The value for `Issuer` or `ClusterIssuer` is "cert-manager.io" (cert-manager v0.12.0 and later).
+  - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
+
+  - `alias` represents the identifier of the certificate. It has the following possible values:
+    - `server` is used for server certificate identification.
+    - `client` is used for client certificate identification.
+    - `metrics-exporter` is used for metrics exporter certificate identification.
+  - `secretName` (optional) specifies the k8s secret name that holds the certificates. If the user does not specify this field, the default secret name will be created in the following format: `<database-name>-<alias>-cert`.
+  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields,
+    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+    - `organizationalUnits` (optional) are the list of different organizational unit names to be used on the Certificate.
+    - `countries` (optional) are the list of country names to be used on the Certificate.
+    - `localities` (optional) are the list of locality names to be used on the Certificate.
+    - `provinces` (optional) are the list of province names to be used on the Certificate.
+    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
+    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
+    - `serialNumber` (optional) is a serial number to be used on the Certificate.
+    You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+  - `duration` (optional) is the period during which the certificate is valid.
+  - `renewBefore` (optional) specifies how long before expiry a certificate should be renewed.
+  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+  - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+  - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for PerconaXtraDB. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47).
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the PerconaXtraDB database.
+
+KubeDB accepts the following fields to be set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+Usage of some of the fields of `spec.podTemplate` is described below.
+
+#### spec.podTemplate.spec.args
+
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments for database installation.
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the PerconaXtraDB docker image. To know about the supported environment variables, please visit [here](https://hub.docker.com/_/perconaxtradb/).
+
+Note that KubeDB does not allow the `MYSQL_ROOT_PASSWORD`, `MYSQL_ALLOW_EMPTY_PASSWORD`, `MYSQL_RANDOM_ROOT_PASSWORD`, and `MYSQL_ONETIME_PASSWORD` environment variables to be set in `spec.podTemplate.spec.env`. If you want to set the root password, please use `spec.authSecret` as described earlier.
+
+If you try to set any of the forbidden environment variables, e.g. `MYSQL_ROOT_PASSWORD`, in the PerconaXtraDB crd, the KubeDB operator will reject the request with the following error:
+
+```bash
+Error from server (Forbidden): error when creating "./perconaxtradb.yaml": admission webhook "perconaxtradb.validators.kubedb.com" denied the request: environment variable MYSQL_ROOT_PASSWORD is forbidden to use in PerconaXtraDB spec
+```
+
+Also, note that KubeDB does not allow updating the environment variables, as updating them does not have any effect once the database is created. If you try to update environment variables, the KubeDB operator will reject the request with the following error:
+
+```bash
+Error from server (BadRequest): error when applying patch:
+...
+for: "./perconaxtradb.yaml": admission webhook "perconaxtradb.validators.kubedb.com" denied the request: precondition failed for:
+...At least one of the following was changed:
+    apiVersion
+    kind
+    name
+    namespace
+    spec.authSecret
+    spec.init
+    spec.storageType
+    spec.storage
+    spec.podTemplate.spec.nodeSelector
+    spec.podTemplate.spec.env
+```
+
+#### spec.podTemplate.spec.imagePullSecrets
+
+`KubeDB` provides the flexibility of deploying the PerconaXtraDB database from a private Docker registry. `spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling the docker image if you are using a private docker registry.
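+
+As a minimal sketch (the secret name `my-registry-secret` is an assumption, not something created in this guide), the relevant part of the spec would look like:
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      # Reference an existing kubernetes.io/dockerconfigjson secret
+      imagePullSecrets:
+        - name: my-registry-secret
+```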
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+ `serviceAccountName` is an optional field supported by the KubeDB operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.
+
+ If this field is left empty, the KubeDB operator will create a service account with a name matching the PerconaXtraDB crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+ If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+ If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplate
+
+You can also provide a template for the services created by the KubeDB operator for the PerconaXtraDB database through `spec.serviceTemplate`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to be set in `spec.serviceTemplate`:
+
+- metadata:
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.
+
+### spec.halted
+
+`spec.halted` is an optional field. Suppose you want to delete the `PerconaXtraDB` resources (`StatefulSet`, `Service`, etc.) except the `PerconaXtraDB` object, `PVCs`, and `Secret`; then you need to set `spec.halted` to `true`. If you set `spec.halted` to `true`, then the `terminationPolicy` in the `PerconaXtraDB` object will be set to `Halt` by default.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `PerconaXtraDB` crd, or to decide which resources KubeDB should keep or delete when you delete the `PerconaXtraDB` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
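+
+For example, if a database was created with a stricter policy, you can usually switch it before deletion. A hedged sketch using `kubectl patch` (the object name `sample-pxc` is assumed):
+
+```bash
+# Loosen the policy on an existing object so a later delete wipes everything
+$ kubectl patch -n demo perconaxtradb/sample-pxc \
+    -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+```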
+ +Following table show what KubeDB does when you delete PerconaXtraDB crd for different termination policies, + +| Behavior | DoNotTerminate | Halt | Delete | WipeOut | +| ----------------------------------- | :------------: | :------: | :------: | :------: | +| 1. Block Delete operation | ✓ | ✗ | ✗ | ✗ | +| 2. Delete StatefulSet | ✗ | ✓ | ✓ | ✓ | +| 3. Delete Services | ✗ | ✓ | ✓ | ✓ | +| 4. Delete PVCs | ✗ | ✗ | ✓ | ✓ | +| 5. Delete Secrets | ✗ | ✗ | ✗ | ✓ | +| 6. Delete Snapshots | ✗ | ✗ | ✗ | ✓ | + +If you don't specify `spec.terminationPolicy` KubeDB uses `Delete` termination policy by default. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/configuration/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/_index.md new file mode 100755 index 0000000000..e6d9bab79d --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/_index.md @@ -0,0 +1,22 @@ +--- +title: Run PerconaXtraDB with Custom Configuration +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-configuration + name: Custom Configuration + parent: guides-perconaxtradb + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/examples/px-custom.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/examples/px-custom.yaml new file mode 100644 index 0000000000..0e17eb0539 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/examples/px-custom.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + configSecret: + name: px-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/index.md new file mode 100644 index 0000000000..1b5f8b34ca --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-config-file/index.md @@ -0,0 +1,204 @@ +--- +title: Run PerconaXtraDB with Custom Configuration +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-configuration-usingconfigfile + name: Config File + parent: guides-perconaxtradb-configuration + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom Configuration File + +KubeDB supports providing custom configuration for PerconaXtraDB. This tutorial will show you how to use KubeDB to run a PerconaXtraDB database with custom configuration. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. 
If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+
+  $ kubectl get ns demo
+  NAME   STATUS   AGE
+  demo   Active   5s
+  ```
+
+> Note: YAML files used in this tutorial are stored in [this](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/configuration/using-config-file/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+PerconaXtraDB allows configuring the database via a configuration file. The default configuration for PerconaXtraDB can be found in the `/etc/my.cnf` file. KubeDB adds a custom configuration directory, `/etc/mysql/custom.conf.d`, if it's enabled. When PerconaXtraDB starts, it will look for custom configuration files in the `/etc/mysql/custom.conf.d` directory. If configuration files exist, the PerconaXtraDB instance will use the combined startup settings from `/etc/my.cnf` and the `*.cnf` files in the `/etc/mysql/conf.d` and `/etc/mysql/custom.conf.d` directories. This custom configuration will override the existing defaults.
+
+At first, you have to create a config file with a `.cnf` extension with your desired configuration. Then you have to put this file into a [volume](https://kubernetes.io/docs/concepts/storage/volumes/). You have to specify this volume in the `spec.configSecret` section while creating the PerconaXtraDB crd. KubeDB will mount this volume into the `/etc/mysql/custom.conf.d` directory of the database pod.
+
+In this tutorial, we will configure [max_connections](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_connections/) and [read_buffer_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_read_buffer_size) via a custom config file. We will use a Secret as the volume source.
+
+## Custom Configuration
+
+At first, let's create a `px-config.cnf` file setting the `max_connections` and `read_buffer_size` parameters.
+
+```bash
+$ cat <<EOF > px-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+EOF
+
+$ cat px-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `read_buffer_size` is set to 1048576 bytes (1 MiB).
+
+Now, create a Secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo px-configuration --from-file=./px-config.cnf
+secret/px-configuration created
+```
+
+Verify the Secret has the configuration file.
+
+```bash
+$ kubectl get secret -n demo px-configuration -o yaml
+apiVersion: v1
+stringData:
+  px-config.cnf: |
+    [mysqld]
+    max_connections = 200
+    read_buffer_size = 1048576
+kind: Secret
+metadata:
+  name: px-configuration
+  namespace: demo
+  ...
+```
+
+Now, create the PerconaXtraDB crd specifying the `spec.configSecret` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/configuration/using-config-file/examples/px-custom.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Below is the YAML for the PerconaXtraDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  configSecret:
+    name: px-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret, etc. If everything goes well, we will see that a pod with the name `sample-pxc-0` has been created.
+
+Check that the statefulset's pods are running:
+
+```bash
+$ kubectl get pod -n demo
+NAME           READY   STATUS    RESTARTS   AGE
+sample-pxc-0   2/2     Running   0          75m
+sample-pxc-1   2/2     Running   0          95m
+sample-pxc-2   2/2     Running   0          95m
+
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    96m
+```
+
+We can see the database is in the `Ready` phase, so it can accept connections.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+> Read the comments written for the following commands. They contain instructions and explanations of the commands.
+
+```bash
+# Connecting to the database
+$ kubectl exec -it -n demo sample-pxc-0 -- bash
+Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init)
+bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 1390
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+ +mysql> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 200 | ++-----------------+-------+ +1 row in set (0.01 sec) + + +# value of `read_buffer_size` is same as provided +mysql> show variables like 'read_buffer_size'; ++------------------+---------+ +| Variable_name | Value | ++------------------+---------+ +| read_buffer_size | 1048576 | ++------------------+---------+ +1 row in set (0.001 sec) + +mysql> exit +Bye +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +perconaxtradb.kubedb.com "sample-pxc" deleted +$ kubectl delete ns demo +namespace "demo" deleted +``` diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/examples/md-misc-config.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/examples/md-misc-config.yaml new file mode 100644 index 0000000000..b7948ac580 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/examples/md-misc-config.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + env: + - name: MYSQL_DATABASE + value: mdDB + args: + - --character-set-server=utf8mb4 + resources: + requests: + memory: "1Gi" + cpu: "250m" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/index.md new file mode 100644 index 0000000000..ae9f866629 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/configuration/using-pod-template/index.md @@ -0,0 +1,196 @@ +--- +title: Run PerconaXtraDB with Custom PodTemplate +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-configuration-usingpodtemplate + name: Customize PodTemplate + parent: guides-perconaxtradb-configuration + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Run PerconaXtraDB with Custom PodTemplate + +KubeDB supports providing custom configuration for PerconaXtraDB via [PodTemplate](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#specpodtemplate). This tutorial will show you how to use KubeDB to run a PerconaXtraDB database with custom configuration using PodTemplate. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [this](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/configuration/using-pod-template/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the PerconaXtraDB database.
+
+KubeDB accepts the following fields to be set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+
+Read about the fields in detail in the [PodTemplate concept](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#specpodtemplate).
+
+## CRD Configuration
+
+Below is the YAML for the PerconaXtraDB created in this example. Here, [`spec.podTemplate.spec.env`](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#specpodtemplatespecenv) specifies environment variables and [`spec.podTemplate.spec.args`](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#specpodtemplatespecargs) provides extra arguments for the [PerconaXtraDB Docker Image](https://hub.docker.com/_/perconaxtradb/).
+
+In this tutorial, an initial database `mdDB` will be created by providing the env `MYSQL_DATABASE`, while the server character set will be set to `utf8mb4` by adding an extra arg.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      env:
+        - name: MYSQL_DATABASE
+          value: mdDB
+      args:
+        - --character-set-server=utf8mb4
+      resources:
+        requests:
+          memory: "1Gi"
+          cpu: "250m"
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/configuration/using-pod-template/examples/md-misc-config.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secret, etc. If everything goes well, we will see that pods with the name prefix `sample-pxc` have been created.
+
+Check that the statefulset's pods are running:
+
+```bash
+$ kubectl get pod -n demo
+NAME           READY   STATUS    RESTARTS   AGE
+sample-pxc-0   2/2     Running   0          3m30s
+sample-pxc-1   2/2     Running   0          3m30s
+sample-pxc-2   2/2     Running   0          3m30s
+```
+
+Check the perconaxtradb CR status to see if the database is ready:
+
+```bash
+$ kubectl get perconaxtradb --all-namespaces
+NAMESPACE   NAME         VERSION   STATUS   AGE
+demo        sample-pxc   8.0.26    Ready    4m8s
+```
+
+Once we see `[Note] mysqld: ready for connections.` in the log, the database is ready.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -- bash
+Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init)
+bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 110
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| kubedb_system      |
+| mdDB               |
+| mysql              |
+| performance_schema |
+| sys                |
++--------------------+
+6 rows in set (0.01 sec)
+
+# Check character_set_server
+mysql> show variables like 'char%';
++--------------------------+---------------------------------------------+
+| Variable_name            | Value                                       |
++--------------------------+---------------------------------------------+
+| character_set_client     | latin1                                      |
+| character_set_connection | latin1                                      |
+| character_set_database   | utf8mb4                                     |
+| character_set_filesystem | binary                                      |
+| character_set_results    | latin1                                      |
+| character_set_server     | utf8mb4                                     |
+| character_set_system     | utf8mb3                                     |
+| character_sets_dir       | /usr/share/percona-xtradb-cluster/charsets/ |
++--------------------------+---------------------------------------------+
+8 rows in set (0.01 sec)
+
+mysql> quit;
+Bye
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo sample-pxc
+perconaxtradb.kubedb.com "sample-pxc" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/_index.md
new file mode 100755
index 0000000000..421178eea4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run PerconaXtraDB with Custom RBAC resources
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-customrbac
+    name: Custom RBAC
+    parent: guides-perconaxtradb
+    weight: 50
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db-2.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db-2.yaml
new file mode 100644
index 0000000000..721bab4d9a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db-2.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: another-perconaxtradb
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: px-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+
storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db.yaml new file mode 100644 index 0000000000..c6109cc41f --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + storageType: Durable + podTemplate: + spec: + serviceAccountName: px-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-role.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-role.yaml new file mode 100644 index 0000000000..80aee3520c --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: px-custom-role + namespace: demo +rules: +- apiGroups: + - policy + resourceNames: + - perconaxtra-db + resources: + - podsecuritypolicies + verbs: + - use \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/index.md new file mode 100644 index 0000000000..f4568fb5d7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/custom-rbac/using-custom-rbac/index.md @@ -0,0 +1,259 @@ +--- +title: Run PerconaXtraDB with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-customrbac-usingcustomrbac + name: Custom RBAC + parent: guides-perconaxtradb-customrbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom RBAC resources + +KubeDB (version 0.13.0 and higher) supports finer user control over role based access permissions provided to a PerconaXtraDB instance. This tutorial will show you how to use KubeDB to run PerconaXtraDB instance with custom RBAC resources. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for PerconaXtraDB. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in PerconaXtraDB crd. If this field is left empty, the KubeDB operator will create a service account name matching PerconaXtraDB crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + +If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + +If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. + +This guide will show you how to create custom `Service Account`, `Role`, and `RoleBinding` for a PerconaXtraDB instance named `quick-postges` to provide the bare minimum access permissions. + +## Custom RBAC for PerconaXtraDB + +At first, let's create a `Service Acoount` in `demo` namespace. + +```bash +$ kubectl create serviceaccount -n demo px-custom-serviceaccount +serviceaccount/px-custom-serviceaccount created +``` + +It should create a service account. + +```bash +$ kubectl get serviceaccount -n demo px-custom-serviceaccount -o yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + creationTimestamp: "2021-03-18T04:38:59Z" + name: px-custom-serviceaccount + namespace: demo + resourceVersion: "84669" + selfLink: /api/v1/namespaces/demo/serviceaccounts/px-custom-serviceaccount + uid: 788bd6c6-3eae-4797-b6ca-5722ef64c9dc +secrets: +- name: px-custom-serviceaccount-token-jnhvd +``` + +Now, we need to create a role that has necessary access permissions for the PerconaXtraDB instance named `sample-pxc`. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-role.yaml +role.rbac.authorization.k8s.io/px-custom-role created +``` + +Below is the YAML for the Role we just created. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: px-custom-role + namespace: demo +rules: +- apiGroups: + - policy + resourceNames: + - perconaxtra-db + resources: + - podsecuritypolicies + verbs: + - use +``` + +This permission is required for PerconaXtraDB pods running on PSP enabled clusters. + +Now create a `RoleBinding` to bind this `Role` with the already created service account. + +```bash +$ kubectl create rolebinding px-custom-rolebinding --role=px-custom-role --serviceaccount=demo:px-custom-serviceaccount --namespace=demo +rolebinding.rbac.authorization.k8s.io/px-custom-rolebinding created +``` + +It should bind `px-custom-role` and `px-custom-serviceaccount` successfully. + +SO, All required resources for RBAC are created. 
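+
+For reference, a declarative equivalent of the `RoleBinding` created by the imperative command above would look like the following sketch (the names match the resources created in this tutorial):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: px-custom-rolebinding
+  namespace: demo
+roleRef:
+  # bind the Role created earlier in this tutorial
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: px-custom-role
+subjects:
+# grant the Role to the custom service account
+- kind: ServiceAccount
+  name: px-custom-serviceaccount
+  namespace: demo
+```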
+
+```bash
+$ kubectl get serviceaccount,role,rolebindings -n demo
+NAME                                      SECRETS   AGE
+serviceaccount/default                    1         38m
+serviceaccount/px-custom-serviceaccount   1         36m
+
+NAME                                            CREATED AT
+role.rbac.authorization.k8s.io/px-custom-role   2021-03-18T05:13:27Z
+
+NAME                                                          ROLE                  AGE
+rolebinding.rbac.authorization.k8s.io/px-custom-rolebinding   Role/px-custom-role   79s
+```
+
+Now, create a PerconaXtraDB crd, setting the `spec.podTemplate.spec.serviceAccountName` field to `px-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Below is the YAML for the PerconaXtraDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  replicas: 3
+  version: "8.0.26"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: px-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secrets, etc. If everything goes well, we should see that a pod with the name `sample-pxc-0` has been created.
+
+Check that the statefulset's pods are running:
+
+```bash
+$ kubectl get pod -n demo
+NAME           READY   STATUS    RESTARTS   AGE
+sample-pxc-0   2/2     Running   0          84m
+sample-pxc-1   2/2     Running   0          84m
+sample-pxc-2   2/2     Running   0          84m
+```
+
+Check the PerconaXtraDB custom resource to see if the database cluster is ready:
+
+```bash
+$ kubectl get perconaxtradb --all-namespaces
+NAMESPACE   NAME         VERSION   STATUS   AGE
+demo        sample-pxc   8.0.26    Ready    83m
+```
+
+## Reusing Service Account
+
+An existing service account can be reused for another PerconaXtraDB instance. No new access permission is required to run the new PerconaXtraDB instance.
+
+Now, create a PerconaXtraDB crd `another-perconaxtradb` using the existing service account name `px-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/custom-rbac/using-custom-rbac/examples/px-custom-db-2.yaml
+perconaxtradb.kubedb.com/another-perconaxtradb created
+```
+
+Below is the YAML for the PerconaXtraDB crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: another-perconaxtradb
+  namespace: demo
+spec:
+  replicas: 3
+  version: "8.0.26"
+  storageType: Durable
+  podTemplate:
+    spec:
+      serviceAccountName: px-custom-serviceaccount
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, StatefulSet, services, secrets, etc. If everything goes well, we should see that a pod with the name `another-perconaxtradb-0` has been created.
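+
+You can also confirm that the new pod was created with the custom service account (a quick sanity check; the output assumes the setup above):
+
+```bash
+# print only the service account name used by the pod
+$ kubectl get pod -n demo another-perconaxtradb-0 -o jsonpath='{.spec.serviceAccountName}'
+px-custom-serviceaccount
+```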
+
+Check that the statefulset's pod is running:
+
+```bash
+$ kubectl get pod -n demo another-perconaxtradb-0
+NAME                      READY   STATUS    RESTARTS   AGE
+another-perconaxtradb-0   2/2     Running   0          37s
+```
+
+Check the PerconaXtraDB custom resource to see if the database cluster is ready:
+
+```bash
+$ kubectl get perconaxtradb --all-namespaces
+NAMESPACE   NAME                    VERSION   STATUS   AGE
+demo        another-perconaxtradb   8.0.26    Ready    83m
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo sample-pxc
+perconaxtradb.kubedb.com "sample-pxc" deleted
+$ kubectl delete perconaxtradb -n demo another-perconaxtradb
+perconaxtradb.kubedb.com "another-perconaxtradb" deleted
+$ kubectl delete -n demo role px-custom-role
+role.rbac.authorization.k8s.io "px-custom-role" deleted
+$ kubectl delete -n demo rolebinding px-custom-rolebinding
+rolebinding.rbac.authorization.k8s.io "px-custom-rolebinding" deleted
+$ kubectl delete sa -n demo px-custom-serviceaccount
+serviceaccount "px-custom-serviceaccount" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/images/perconaxtradb-lifecycle.svg b/content/docs/v2024.1.31/guides/percona-xtradb/images/perconaxtradb-lifecycle.svg
new file mode 100644
index 0000000000..33614fdb59
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/images/perconaxtradb-lifecycle.svg
@@ -0,0 +1,562 @@
+<!-- SVG markup (562 lines, image/svg+xml: PerconaXtraDB lifecycle diagram) not recoverable from this extract -->
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/_index.md
new file mode 100755
index 0000000000..455b0428b8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/_index.md
@@ -0,0 +1,22 @@
+---
+title: Monitoring PerconaXtraDB
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-monitoring
+    name: Monitoring
+    parent: guides-perconaxtradb
+    weight: 120
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/builtin-prom-px.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/builtin-prom-px.yaml
new file mode 100644
index 0000000000..6f7a7b89e6
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/builtin-prom-px.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: builtin-prom-px
+  namespace: demo
+spec:
+  version: "8.0.26"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/prom-config.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/prom-config.yaml
new file mode 100644
index 0000000000..45aee6317a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/examples/prom-config.yaml
@@ -0,0 +1,68 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only the desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/images/built-prom.png b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/images/built-prom.png
new file mode 100644
index 0000000000..c40435214a
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/images/built-prom.png differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/index.md
new file mode 100644
index 0000000000..8f6ce5db84
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus/index.md
@@ -0,0 +1,364 @@
+---
+title: Monitor PerconaXtraDB using Builtin Prometheus Discovery
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-monitoring-builtinprometheus
+    name: Builtin Prometheus
+    parent: guides-perconaxtradb-monitoring
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PerconaXtraDB with builtin Prometheus
+
+This tutorial will show you how to monitor a PerconaXtraDB database using the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [docs/guides/percona-xtradb/monitoring/builtin-prometheus/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/monitoring/builtin-prometheus/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy PerconaXtraDB with Monitoring Enabled
+
+At first, let's deploy a PerconaXtraDB database with monitoring enabled. Below is the PerconaXtraDB object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: builtin-prom-px
+  namespace: demo
+spec:
+  version: "8.0.26"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the PerconaXtraDB crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/monitoring/builtin-prometheus/examples/builtin-prom-px.yaml
+perconaxtradb.kubedb.com/builtin-prom-px created
+```
+
+Now, wait for the database to go into `Ready` state.
+
+```bash
+$ kubectl get perconaxtradb -n demo builtin-prom-px
+NAME              VERSION   STATUS   AGE
+builtin-prom-px   8.0.26    Ready    76s
+```
+
+KubeDB will create a separate stats service with name `{PerconaXtraDB crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-px"
+NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-px         ClusterIP   10.106.32.194   <none>        3306/TCP    2m3s
+builtin-prom-px-pods    ClusterIP   None            <none>        3306/TCP    2m3s
+builtin-prom-px-stats   ClusterIP   10.109.106.92   <none>        56790/TCP   2m2s
+```
+
+Here, the `builtin-prom-px-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-px-stats
+Name:              builtin-prom-px-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/instance=builtin-prom-px
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=perconaxtradbs.kubedb.com
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/instance=builtin-prom-px,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=perconaxtradbs.kubedb.com
+Type:              ClusterIP
+IP:                10.109.106.92
+Port:              metrics  56790/TCP
+TargetPort:        metrics/TCP
+Endpoints:         10.244.0.34:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to the [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_config is used to filter only the desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path: <path>" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add the service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add the service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add the stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only the desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/monitoring/builtin-prometheus/examples/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions for Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-5dff66b455-cz9td   1/1     Running   0          42s
+```
+
+Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-5dff66b455-cz9td` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-5dff66b455-cz9td 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `builtin-prom-px-stats` service as one of the targets.
+
+<p align="center">
+  <img alt="Prometheus Target" src="images/built-prom.png">
+</p>
+
+Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `PerconaXtraDB` database `builtin-prom-px` through the stats service `builtin-prom-px-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+kubectl delete perconaxtradb -n demo builtin-prom-px
+
+kubectl delete -n monitoring deployment.apps/prometheus
+
+kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+kubectl delete -n monitoring serviceaccount/prometheus
+kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+kubectl delete ns demo
+kubectl delete ns monitoring
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/images/database-monitoring-overview.svg b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/images/database-monitoring-overview.svg
new file mode 100644
index 0000000000..395eefb334
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/images/database-monitoring-overview.svg
@@ -0,0 +1 @@
+<!-- SVG markup (database monitoring overview diagram) not recoverable from this extract -->
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/index.md
new file mode 100644
index 0000000000..d391e4fd49
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview/index.md
@@ -0,0 +1,118 @@
+---
+title: PerconaXtraDB Monitoring Overview
+description: PerconaXtraDB Monitoring Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-monitoring-overview
+    name: Overview
+    parent: guides-perconaxtradb-monitoring
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PerconaXtraDB with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the Database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+<p align="center">
+  <img alt="Database Monitoring Flow" src="images/database-monitoring-overview.svg">
+</p>
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar into the database pod. It also creates a dedicated stats service with name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                          |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                            |
+
+## Sample Configuration
+
+A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: databases
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  configSecret: # configure Redis to use password for authentication
+    name: redis-config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+        - --redis.password=$(REDIS_PASSWORD)
+        env:
+        - name: REDIS_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: _name_of_secret_with_redis_password
+              key: password # key with the password
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Assume that the above Redis is configured to use basic authentication. So, the exporter image also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.
+
+Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the database namespace and this `ServiceMonitor` will have the `release: prometheus` label.
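+
+Since this overview belongs to the PerconaXtraDB guides, the same `spec.monitor` section carries over to a PerconaXtraDB object unchanged. A minimal sketch follows (the database name and the `release: prometheus` label are illustrative; a full worked example lives in the Prometheus operator guide linked below):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator    # or prometheus.io/builtin
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus        # must match the Prometheus serviceMonitorSelector
+        interval: 10s
+  terminationPolicy: WipeOut
+```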
+
+## Next Steps
+
+- Learn how to monitor `PerconaXtraDB` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/percona-xtradb/monitoring/builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator).
+- Learn how to monitor `Elasticsearch` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Learn how to monitor `PostgreSQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Learn how to monitor `MySQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Learn how to monitor `MongoDB` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Learn how to monitor `Redis` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Learn how to monitor `Memcached` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/examples/prom-operator-px.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/examples/prom-operator-px.yaml
new file mode 100644
index 0000000000..d763906c87
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/examples/prom-operator-px.yaml
@@ -0,0 +1,22 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: coreos-prom-px
+  namespace: demo
+spec:
+  version: "8.0.26"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/images/prom-end.png b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/images/prom-end.png
new file mode 100644
index 0000000000..15d055fd9c
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/images/prom-end.png differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/index.md
new file mode 100644
index 0000000000..babc9ffd39
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/monitoring/prometheus-operator/index.md
@@ -0,0 +1,309 @@
+---
+title: Monitor PerconaXtraDB using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-monitoring-prometheusoperator
+    name: Prometheus Operator
+    parent: guides-perconaxtradb-monitoring
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PerconaXtraDB Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes-native way to deploy and configure a Prometheus server. This tutorial will show you how to use Prometheus operator to monitor a PerconaXtraDB database deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/percona-xtradb/monitoring/overview).
+
+- To keep database resources isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in the [/docs/guides/percona-xtradb/monitoring/prometheus-operator/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/monitoring/prometheus-operator/examples) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the PerconaXtraDB crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE   NAME         VERSION   REPLICAS   AGE
+default     prometheus             1          2m19s
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in the **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `default` namespace.
+
+```yaml
+$ kubectl get prometheus -n default prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"default"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorNamespaceSelector":{"matchLabels":{"prometheus":"prometheus"}},"serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: "2020-08-25T04:02:07Z"
+  generation: 1
+  labels:
+    prometheus: prometheus
+  ...
+    manager: kubectl
+    operation: Update
+    time: "2020-08-25T04:02:07Z"
+  name: prometheus
+  namespace: default
+  resourceVersion: "2087"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus
+  uid: 972a50cb-b751-418b-b2bc-e0ecc9232730
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorNamespaceSelector:
+    matchLabels:
+      prometheus: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+- The `spec.serviceMonitorSelector` field specifies which ServiceMonitors should be included. The above label `release: prometheus` is used by this selector to select `ServiceMonitors`. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of the PerconaXtraDB crd.
+- The `spec.serviceMonitorNamespaceSelector` field specifies that the `ServiceMonitors` can be selected outside the Prometheus namespace by Prometheus using a namespace selector. The above label `prometheus: prometheus` is used to select the namespace where the `ServiceMonitor` is created.
+
+### Add Label to database namespace
+
+KubeDB creates a `ServiceMonitor` in the database namespace `demo`. We need to add a label to the `demo` namespace. Prometheus will select this namespace by using its `spec.serviceMonitorNamespaceSelector` field.
+
+Let's add the label `prometheus: prometheus` to the `demo` namespace,
+
+```bash
+$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}'
+namespace/demo patched
+```
+
+## Deploy PerconaXtraDB with Monitoring Enabled
+
+At first, let's deploy a PerconaXtraDB database with monitoring enabled. Below is the PerconaXtraDB object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: coreos-prom-px
+  namespace: demo
+spec:
+  version: "8.0.26"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database at a 10-second interval.
+
+Let's create the PerconaXtraDB object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/monitoring/prometheus-operator/examples/prom-operator-px.yaml
+perconaxtradb.kubedb.com/coreos-prom-px created
+```
+
+Now, wait for the database to go into `Ready` state.
+
+```bash
+$ kubectl get perconaxtradb -n demo coreos-prom-px
+NAME             VERSION   STATUS   AGE
+coreos-prom-px   8.0.26    Ready    59s
+```
+
+KubeDB will create a separate stats service with name `{PerconaXtraDB crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-px"
+NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-px         ClusterIP   10.99.96.226    <none>        3306/TCP    107s
+coreos-prom-px-pods    ClusterIP   None            <none>        3306/TCP    107s
+coreos-prom-px-stats   ClusterIP   10.101.190.67   <none>        56790/TCP   107s
+```
+
+Here, the `coreos-prom-px-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```bash
+$ kubectl describe svc -n demo coreos-prom-px-stats
+Name:              coreos-prom-px-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/instance=coreos-prom-px
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=perconaxtradbs.kubedb.com
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/instance=coreos-prom-px,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=perconaxtradbs.kubedb.com
+Type:              ClusterIP
+IP:                10.101.190.67
+Port:              metrics  56790/TCP
+TargetPort:        metrics/TCP
+Endpoints:         10.244.0.31:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `demo` namespace that selects the endpoints of the `coreos-prom-px-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME                   AGE
+coreos-prom-px-stats   4m8s
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the PerconaXtraDB crd.
+
+```bash
+$ kubectl get servicemonitor -n demo coreos-prom-px-stats -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2021-03-19T10:09:03Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: coreos-prom-px
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: perconaxtradbs.kubedb.com
+    release: prometheus
+  managedFields:
+    ...
+  name: coreos-prom-px-stats
+  namespace: demo
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: Service
+    name: coreos-prom-px-stats
+    uid: 08260a99-0984-4d90-bf68-34080ad0ee5b
+  resourceVersion: "241637"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/demo/servicemonitors/coreos-prom-px-stats
+  uid: 4f022d98-d2d8-490f-9548-f6367d03ae1f
+spec:
+  endpoints:
+  - bearerTokenSecret:
+      key: ""
+    honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: metrics
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/instance: coreos-prom-px
+      app.kubernetes.io/managed-by: kubedb.com
+      app.kubernetes.io/name: perconaxtradbs.kubedb.com
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the PerconaXtraDB crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-px-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n default -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          16m
+prometheus-prometheus-1   3/3     Running   1          16m
+prometheus-prometheus-2   3/3     Running   1          16m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n default prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `coreos-prom-px-stats` service as one of the targets.
+
+<p align="center">
+  <img alt="Prometheus Target" src="images/prom-end.png">
+</p>
+
+Check the `endpoint` and `service` labels. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+# cleanup database
+kubectl delete perconaxtradb -n demo coreos-prom-px
+
+# cleanup Prometheus resources
+kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus.yaml
+
+kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml
+
+# cleanup Prometheus operator resources
+kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml
+
+# delete namespace
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/_index.md
new file mode 100755
index 0000000000..6ed5621b1f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run PerconaXtraDB using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-privateregistry
+    name: Private Registry
+    parent: guides-perconaxtradb
+    weight: 60
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/examples/demo.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/examples/demo.yaml
new file mode 100644
index 0000000000..471654b9c3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/examples/demo.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: px-pvt-reg
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      imagePullSecrets:
+      - name: pxregistrykey
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/index.md
new file mode 100644
index 0000000000..d4e3a33df7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/private-registry/quickstart/index.md
@@ -0,0 +1,153 @@
+---
+title: Run PerconaXtraDB using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-privateregistry-quickstart
+    name: Quickstart
+    parent: guides-perconaxtradb-privateregistry
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Deploy PerconaXtraDB from private Docker registry
+
+The KubeDB operator supports using a private Docker registry. This tutorial will show you how to use KubeDB to run a PerconaXtraDB database using private Docker images.
+
+## Before You Begin
+
+- Read [concept of PerconaXtraDB Version Catalog](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb-version) to learn the details of the `PerconaXtraDBVersion` object.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial, we will use a private repository on [Docker Hub](https://hub.docker.com/).
+
+- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/u/kubedb) into your private registry. For perconaxtradb, push the `DB_IMAGE`, `EXPORTER_IMAGE`, and `INITCONTAINER_IMAGE` of the following PerconaXtraDBVersions, where `deprecated` is not true, to your private registry.
+
+```bash
+$ kubectl get perconaxtradbversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image,INITCONTAINER_IMAGE:.spec.initContainer.image,DEPRECATED:.spec.deprecated
+NAME     VERSION   DB_IMAGE                                EXPORTER_IMAGE                 INITCONTAINER_IMAGE                DEPRECATED
+8.0.26   8.0.26    percona/percona-xtradb-cluster:8.0.26   prom/mysqld-exporter:v0.13.0   kubedb/percona-xtradb-init:0.2.0
+8.0.28   8.0.28    percona/percona-xtradb-cluster:8.0.28   prom/mysqld-exporter:v0.13.0   kubedb/percona-xtradb-init:0.2.0
+```
+
+Docker hub repositories:
+
+- [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+- [percona/percona-xtradb-cluster](https://hub.docker.com/r/percona/percona-xtradb-cluster)
+- [kubedb/mysqld-exporter](https://hub.docker.com/r/kubedb/mysqld-exporter)
+
+- Update the KubeDB catalog for your private Docker registry, e.g. (the image names follow the listing above):
+
+  ```yaml
+  apiVersion: catalog.kubedb.com/v1alpha1
+  kind: PerconaXtraDBVersion
+  metadata:
+    name: 8.0.26
+  spec:
+    db:
+      image: PRIVATE_REGISTRY/percona-xtradb-cluster:8.0.26
+    exporter:
+      image: PRIVATE_REGISTRY/mysqld-exporter:v0.13.0
+    initContainer:
+      image: PRIVATE_REGISTRY/percona-xtradb-init:0.2.0
+    podSecurityPolicies:
+      databasePolicyName: perconaxtra-db
+    version: 8.0.26
+  ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Create ImagePullSecret
+
+An ImagePullSecret is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry and the credentials for logging in to it.
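+
+For reference, the secret created by the command below is an ordinary `kubernetes.io/dockerconfigjson` Secret; a declarative sketch (the base64 payload is elided and must be filled in from your own Docker config):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: pxregistrykey
+  namespace: demo
+type: kubernetes.io/dockerconfigjson
+data:
+  # base64-encoded Docker config JSON carrying the registry credentials
+  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
+```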
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo pxregistrykey \
+  --docker-server=DOCKER_REGISTRY_SERVER \
+  --docker-username=DOCKER_USER \
+  --docker-email=DOCKER_EMAIL \
+  --docker-password=DOCKER_PASSWORD
+secret/pxregistrykey created
+```
+
+If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
+
+NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
+
+## Install KubeDB operator
+
+When installing the KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to the appropriate values. Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in the cluster so that it points to the DOCKER_REGISTRY you wish to pull images from.
+
+## Deploy PerconaXtraDB database from Private Registry
+
+While deploying `PerconaXtraDB` from a private repository, you have to add the `pxregistrykey` secret in the `PerconaXtraDB` `spec.podTemplate.spec.imagePullSecrets` field.
+Below is the PerconaXtraDB CRD object we will create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: px-pvt-reg
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      imagePullSecrets:
+      - name: pxregistrykey
+  terminationPolicy: WipeOut
+```
+
+Now run the command to deploy this `PerconaXtraDB` object:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/private-registry/quickstart/examples/demo.yaml
+perconaxtradb.kubedb.com/px-pvt-reg created
+```
+
+To check if the images were pulled successfully from the repository, see if the `PerconaXtraDB` is in `Running` state:
+
+```bash
+$ kubectl get pods -n demo
+NAME           READY   STATUS    RESTARTS   AGE
+px-pvt-reg-0   1/1     Running   0          56s
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo px-pvt-reg
+perconaxtradb.kubedb.com "px-pvt-reg" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md
new file mode 100755
index 0000000000..510b4f46c8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: PerconaXtraDB Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-quickstart
+    name: Quickstart
+    parent: guides-perconaxtradb
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml
new file mode 100644
index 0000000000..37f2888e48
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo px-pvt-reg
+perconaxtradb.kubedb.com "px-pvt-reg" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md
new file mode 100755
index 0000000000..510b4f46c8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: PerconaXtraDB Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-quickstart
+    name: Quickstart
+    parent: guides-perconaxtradb
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml
new file mode 100644
index 0000000000..37f2888e48
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/images/perconaxtradb-lifecycle.svg b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/images/perconaxtradb-lifecycle.svg
new file mode 100644
index 0000000000..33614fdb59
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/images/perconaxtradb-lifecycle.svg
@@ -0,0 +1,562 @@
+(562 lines of SVG markup for the perconaxtradb-lifecycle diagram omitted)
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/index.md
new file mode 100644
index 0000000000..9804ad4d0a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/quickstart/overview/index.md
@@ -0,0 +1,466 @@
+---
+title: PerconaXtraDB Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-quickstart-overview
+    name: Overview
+    parent: guides-perconaxtradb-quickstart
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB QuickStart
+
+This tutorial will show you how to use KubeDB to run a PerconaXtraDB database.
+

+<p align="center">
+  <img alt="lifecycle" src="images/perconaxtradb-lifecycle.svg">
+</p>
+
+> Note: The yaml files used in this tutorial are stored [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/quickstart/overview/examples).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in your cluster.
+
+```bash
+$ kubectl get storageclasses
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  6h22m
+```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Find Available PerconaXtraDBVersion
+
+When you install KubeDB, it creates a `PerconaXtraDBVersion` custom resource for every supported PerconaXtraDB version. Check them by using the following command,
+
+```bash
+$ kubectl get perconaxtradbversions
+NAME     VERSION   DB_IMAGE                                DEPRECATED   AGE
+8.0.26   8.0.26    percona/percona-xtradb-cluster:8.0.26                6m1s
+8.0.28   8.0.28    percona/percona-xtradb-cluster:8.0.28                6m1s
+```
+
+## Create a PerconaXtraDB database
+
+KubeDB implements a `PerconaXtraDB` CRD to define the specification of a PerconaXtraDB database. Below is the `PerconaXtraDB` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/quickstart/overview/examples/sample-pxc.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Here,
+
+- `spec.version` is the name of the PerconaXtraDBVersion CRD where the docker images are specified. In this tutorial, a PerconaXtraDB `8.0.26` database is going to be created.
+- `spec.storageType` specifies the type of storage that will be used for the PerconaXtraDB database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the PerconaXtraDB database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of the PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `PerconaXtraDB` CR, or to decide which resources KubeDB should keep or delete when you delete the `PerconaXtraDB` CR. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
+ +> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in `storage.resources.requests` field. Don't specify limits here. PVC does not get resized automatically. + +KubeDB operator watches for `PerconaXtraDB` objects using Kubernetes api. When a `PerconaXtraDB` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching PerconaXtraDB object name. KubeDB operator will also create a governing service for StatefulSets with the name `kubedb`, if one is not already present. + +```bash +$ kubectl describe -n demo perconaxtradb sample-pxc +Name: sample-pxc +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: PerconaXtraDB +Metadata: + Creation Timestamp: 2022-12-19T09:54:09Z + Finalizers: + kubedb.com + Generation: 4 + ... + Resource Version: 4309 + UID: 75511bbb-d24f-41a9-9b1c-4bfffd1f5289 +Spec: + Allowed Schemas: + Namespaces: + From: Same + Auth Secret: + Name: sample-pxc-auth + Auto Ops: + Coordinator: + Resources: + Health Checker: + Failure Threshold: 1 + Period Seconds: 10 + Timeout Seconds: 10 + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: sample-pxc + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: perconaxtradbs.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: sample-pxc + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: perconaxtradbs.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Security Context: + Fs Group: 1001 + Run As Group: 1001 + Run As User: 1001 + Service Account Name: sample-pxc + Replicas: 3 + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Storage Type: Durable + System User Secrets: + Monitor User Secret: + Name: sample-pxc-monitor + Replication User Secret: + Name: sample-pxc-replication + Termination Policy: Delete + Version: 8.0.26 +Status: + Conditions: + Last Transition Time: 2022-12-19T09:54:09Z + Message: The KubeDB operator has started the provisioning of PerconaXtraDB: demo/sample-pxc + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2022-12-19T09:56:53Z + Message: All desired replicas are ready. + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2022-12-19T10:00:03Z + Message: database sample-pxc/demo is ready + Observed Generation: 4 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2022-12-19T09:59:13Z + Message: database sample-pxc/demo is accepting connection + Observed Generation: 4 + Reason: AcceptingConnection + Status: True + Type: AcceptingConnection + Last Transition Time: 2022-12-19T10:00:19Z + Message: The PerconaXtraDB: demo/sample-pxc is successfully provisioned. + Observed Generation: 4 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Observed Generation: 4 + Phase: Ready +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PhaseChanged 6m42s KubeDB Operator Phase changed from to Provisioning. 
+  Normal  Successful     6m42s  KubeDB Operator  Successfully created governing service
+  Normal  Successful     6m42s  KubeDB Operator  Successfully created Service
+  Normal  Successful     6m32s  KubeDB Operator  Successfully created StatefulSet demo/sample-pxc
+  Normal  Successful     6m32s  KubeDB Operator  Successfully created PerconaXtraDB
+  Normal  Successful     6m32s  KubeDB Operator  Successfully created appbinding
+  Normal  PhaseChanged   51s    KubeDB Operator  Phase changed from NotReady to Provisioning.
+  Normal  PhaseChanged   32s    KubeDB Operator  Phase changed from Provisioning to Ready.
+
+
+$ kubectl get statefulset -n demo
+NAME         READY   AGE
+sample-pxc   1/1     27m
+
+$ kubectl get pvc -n demo
+NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+data-sample-pxc-0   Bound    pvc-10651900-d975-467f-80ff-9c4755bdf917   1Gi        RWO            standard       27m
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
+pvc-10651900-d975-467f-80ff-9c4755bdf917   1Gi        RWO            Delete           Bound    demo/data-sample-pxc-0   standard                27m
+
+$ kubectl get service -n demo
+NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+sample-pxc        ClusterIP   10.105.207.172   <none>        3306/TCP   28m
+sample-pxc-pods   ClusterIP   None             <none>        3306/TCP   28m
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the PerconaXtraDB object status:
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    9m32s
+```
+
+## Connect with PerconaXtraDB database
+
+KubeDB operator has created a new Secret called `sample-pxc-auth` for storing the password for the `perconaxtradb` superuser. This secret contains a `username` key which contains the *username* for the PerconaXtraDB superuser and a `password` key which contains the *password* for the PerconaXtraDB superuser.
+
+If you want to use an existing secret, please specify that when creating the PerconaXtraDB object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`, and also make sure to use `root` as the value of `username`.
+
+Now, we need the `username` and `password` to connect to this database with the `kubectl exec` command. In this example, the `sample-pxc-auth` secret holds the username and password.
+
+```bash
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\password}' | base64 -d
+w*yOU$b53dTbjsjJ
+```
+
+We will exec into the pod `sample-pxc-0` and connect to the database using the `username` and `password`.
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -- mysql -u root --password='w*yOU$b53dTbjsjJ'
+
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| kubedb_system      |
+| mysql              |
+| performance_schema |
+| sys                |
++--------------------+
+5 rows in set (0.00 sec)
+
+```
+
+## Database TerminationPolicy
+
+This field is used to regulate the deletion process of the related resources when the `PerconaXtraDB` object is deleted. Users can set the value of this field according to their needs. The available options and their use case scenarios are described below:
+
+**DoNotTerminate:**
+
+When `terminationPolicy` is set to `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. If you create a database with `terminationPolicy` `DoNotTerminate` and try to delete it, you will see this:
+
+```bash
+$ kubectl delete perconaxtradb sample-pxc -n demo
+Error from server (BadRequest): admission webhook "perconaxtradb.validators.kubedb.com" denied the request: perconaxtradb "sample-pxc" can't be halted. To delete, change spec.terminationPolicy
+```
+
+Now, run `kubectl edit perconaxtradb sample-pxc -n demo` to set `spec.terminationPolicy` to `Halt` (which deletes the perconaxtradb object and keeps PVCs, snapshots and Secrets intact) or remove this field (which defaults to `Delete`). Then you will be able to delete/halt the database.
+
+**Halt:**
+
+Suppose you want to reuse your database volume and credentials to deploy your database in the future using the same configuration. But right now, you just want to delete the database while keeping the database volumes and credentials. In this scenario, you must set the `PerconaXtraDB` object's `terminationPolicy` to `Halt`.
+
+When the `terminationPolicy` is set to `Halt` and the PerconaXtraDB object is deleted, the KubeDB operator deletes the StatefulSet and its pods but leaves the `PVCs`, `secrets` and database backup data (`snapshots`) intact. You can set the `terminationPolicy` to `Halt` in an existing database using the `edit` command for testing.
+
+At first, run `kubectl edit perconaxtradb sample-pxc -n demo` to set `spec.terminationPolicy` to `Halt`. Then delete the perconaxtradb object,
+
+```bash
+$ kubectl delete perconaxtradb sample-pxc -n demo
+perconaxtradb.kubedb.com "sample-pxc" deleted
+```
+
+Now, run the following command to get all perconaxtradb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                          TYPE                                  DATA   AGE
+secret/default-token-w2pgw    kubernetes.io/service-account-token   3      31m
+secret/sample-pxc-auth        kubernetes.io/basic-auth              2      39s
+
+NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-sample-pxc-0   Bound    pvc-7502c222-2b02-4363-9027-91ab0e7b76dc   1Gi        RWO            standard       39s
+```
+
+From the above output, you can see that all perconaxtradb resources (`StatefulSet`, `Service`, etc.) are deleted except the `PVC` and `Secret`. You can recreate your perconaxtradb again using these resources.
+
+**Delete:**
+
+If you want to delete the existing database along with the volumes used, but want to restore the database from previously taken `snapshots` and `secrets`, then you might want to set the `PerconaXtraDB` object's `terminationPolicy` to `Delete`. In this setting, the `StatefulSet` and the volumes will be deleted. If you decide to restore the database, you can do so using the snapshots and the credentials.
+
+When the `terminationPolicy` is set to `Delete` and the PerconaXtraDB object is deleted, the KubeDB operator deletes the StatefulSet and its pods along with the PVCs, but leaves the `secret` and database backup data (`snapshots`) intact.
+
+Suppose we have a database with `terminationPolicy` set to `Delete`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete perconaxtradb sample-pxc -n demo
+perconaxtradb.kubedb.com "sample-pxc" deleted
+```
+
+Now, run the following command to get all perconaxtradb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+NAME                             TYPE                                  DATA   AGE
+secret/default-token-r556j       kubernetes.io/service-account-token   3      20m
+secret/sample-pxc-auth           kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-monitor        kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-replication    kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-token-p25ww    kubernetes.io/service-account-token   3      4m5s
+```
+
+From the above output, you can see that all perconaxtradb resources (`StatefulSet`, `Service`, `PVCs`, etc.) are deleted except the `Secret`.
+
+>If you don't set the `terminationPolicy`, then KubeDB sets the `terminationPolicy` to `Delete` by default.
+
+**WipeOut:**
+
+You can totally delete the `PerconaXtraDB` database and relevant resources without any trace by setting `terminationPolicy` to `WipeOut`. The KubeDB operator will delete all relevant resources of this `PerconaXtraDB` database (i.e., `PVCs`, `Secrets`, `Snapshots`) when the `terminationPolicy` is set to `WipeOut`.
+
+Suppose we have a database with `terminationPolicy` set to `WipeOut`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete perconaxtradb sample-pxc -n demo
+perconaxtradb.kubedb.com "sample-pxc" deleted
+```
+
+Now, run the following command to get all perconaxtradb resources in the `demo` namespace,
+
+```bash
+$ kubectl get sts,svc,secret,pvc -n demo
+No resources found in demo namespace.
+```
+
+From the above output, you can see that all perconaxtradb resources are deleted. There is no option to recreate/reinitialize your database if `terminationPolicy` is set to `WipeOut`.
+
+>Be careful when you set the `terminationPolicy` to `WipeOut`, because there is no way to trace the database resources once they are deleted.
+
+## Database Halted
+
+If you want to delete the PerconaXtraDB resources (`StatefulSet`, `Service`, etc.) without deleting the `PerconaXtraDB` object, `PVCs` and `Secret`, you have to set `spec.halted` to `true`. The KubeDB operator will then delete the PerconaXtraDB related resources except the `PerconaXtraDB` object, `PVCs` and `Secret`.
+
+Suppose we have the database `sample-pxc` running in our cluster. Now, we are going to set `spec.halted` to `true` in the `PerconaXtraDB` object by running the `kubectl edit perconaxtradb -n demo sample-pxc` command.
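+
+If you prefer a one-liner over `kubectl edit`, the same change can be applied with a merge patch; a sketch using the `sample-pxc` database from this tutorial:
+
+```bash
+# Set spec.halted=true without opening an editor
+$ kubectl patch -n demo perconaxtradb sample-pxc --type merge -p '{"spec":{"halted":true}}'
+perconaxtradb.kubedb.com/sample-pxc patched
+```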
+
+Run the following command to get PerconaXtraDB resources,
+
+```bash
+$ kubectl get perconaxtradb,sts,secret,svc,pvc -n demo
+NAME                                  VERSION   STATUS   AGE
+perconaxtradb.kubedb.com/sample-pxc   8.0.26    Halted   22m
+
+NAME                             TYPE                                  DATA   AGE
+secret/default-token-r556j       kubernetes.io/service-account-token   3      20m
+secret/sample-pxc-auth           kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-monitor        kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-replication    kubernetes.io/basic-auth              2      20m
+secret/sample-pxc-token-p25ww    kubernetes.io/service-account-token   3      4m5s
+
+NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-sample-pxc-0   Bound    pvc-11f7b634-689e-457e-ba41-157a51090475   1Gi        RWO            standard       3m46s
+persistentvolumeclaim/data-sample-pxc-1   Bound    pvc-84dce4b5-35df-4a06-bfea-b0530d83ebb0   1Gi        RWO            standard       3m46s
+persistentvolumeclaim/data-sample-pxc-2   Bound    pvc-85a35a7c-dfb8-4ca2-96a6-21c9e0b892db   1Gi        RWO            standard       3m46s
+```
+
+From the above output, you can see that the `PerconaXtraDB` object, `PVCs` and `Secret` are still alive. You can then recreate your `PerconaXtraDB` with the same configuration.
+
+>When you set `spec.halted` to `true` in the `PerconaXtraDB` object, the `terminationPolicy` is also set to `Halt` by the KubeDB operator.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo perconaxtradb/sample-pxc
+
+kubectl delete ns demo
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for a production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to delete everything created by KubeDB for a particular PerconaXtraDB crd when you delete the crd.
+
+## Next Steps
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/_index.md new file mode 100644 index 0000000000..8487ae0bf9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure PerconaXtraDB TLS/SSL +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-reconfigure-tls + name: Reconfigure TLS/SSL + parent: guides-perconaxtradb + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/issuer.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/issuer.yaml new file mode 100644 index 0000000000..9662bdd2db --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: px-issuer + namespace: demo +spec: + ca: + secretName: px-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-add-tls.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-add-tls.yaml new file mode 100644 index 0000000000..df2d52969a --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-add-tls.yaml @@ -0,0 +1,24 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-pxc + tls: + requireSSL: true + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: px-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-remove-tls.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-remove-tls.yaml new file mode 100644 index 0000000000..9df1a25b04 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-remove-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-remove-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-pxc + tls: + remove: true diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-rotate-tls.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-rotate-tls.yaml new file mode 100644 index 0000000000..faa21da6bf --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-rotate-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-rotate-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: sample-pxc + tls: + rotateCertificates: true diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-update-tls.yaml 
b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-update-tls.yaml
new file mode 100644
index 0000000000..58138defbd
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-update-tls.yaml
@@ -0,0 +1,17 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: pxops-update-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        emailAddresses:
+          - "kubedb@appscode.com"
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/sample-pxc.yaml
new file mode 100644
index 0000000000..8c3f62cc2a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/examples/sample-pxc.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/index.md
new file mode 100644
index 0000000000..d961882eff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/cluster/index.md
@@ -0,0 +1,583 @@
+---
+title: Reconfigure PerconaXtraDB TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-reconfigure-tls-cluster
+    name: Reconfigure PerconaXtraDB TLS/SSL Encryption
+    parent: guides-perconaxtradb-reconfigure-tls
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure PerconaXtraDB TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring, i.e., adding, removing, updating and rotating the TLS/SSL certificates of an existing PerconaXtraDB database via a PerconaXtraDBOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes Cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.6.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+## Add TLS to a PerconaXtraDB Cluster
+
+Here, we are going to create a PerconaXtraDB database without TLS and then reconfigure the database to use TLS.
+> **Note:** The steps for reconfiguring TLS of a PerconaXtraDB `Standalone` are the same as for a PerconaXtraDB `Cluster`.
+
+### Deploy PerconaXtraDB without TLS
+
+In this section, we are going to deploy a PerconaXtraDB Cluster database without TLS. In the next few sections, we will reconfigure TLS using the `PerconaXtraDBOpsRequest` CRD. Below is the YAML of the `PerconaXtraDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `PerconaXtraDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/sample-pxc.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait until `sample-pxc` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    9m17s
+```
+
+```bash
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\password}' | base64 -d
+U6(h_pYrekLZ2OOd
+
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash
+root@sample-pxc-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 108
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show variables like '%ssl%';
++---------------------+-----------------------------+
+| Variable_name       | Value                       |
++---------------------+-----------------------------+
+| have_openssl        | YES                         |
+| have_ssl            | DISABLED                    |
+| ssl_ca              |                             |
+| ssl_capath          |                             |
+| ssl_cert            |                             |
+| ssl_cipher          |                             |
+| ssl_crl             |                             |
+| ssl_crlpath         |                             |
+| ssl_key             |                             |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020 |
++---------------------+-----------------------------+
+10 rows in set (0.001 sec)
+
+```
+
+We can verify from the above output that TLS is disabled for this database.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
Following the steps below, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=perconaxtradb/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+kubectl create secret tls px-ca \
+  --cert=ca.crt \
+  --key=ca.key \
+  --namespace=demo
+secret/px-ca created
+```
+
+Now, we are going to create an `Issuer` using the `px-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: px-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: px-ca
+```
+
+Let's create the `Issuer` CR we have shown above,
+
+```bash
+kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/issuer.yaml
+issuer.cert-manager.io/px-issuer created
+```
+
+### Create PerconaXtraDBOpsRequest
+
+In order to add TLS to the database, we have to create a `PerconaXtraDBOpsRequest` CRO with our created issuer. Below is the YAML of the `PerconaXtraDBOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: pxops-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    requireSSL: true
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: px-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-pxc` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `requireSSL` specifies that the clients connecting to the server are required to use a secured connection.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#spectls).
+
+Let's create the `PerconaXtraDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-add-tls.yaml
+perconaxtradbopsrequest.ops.kubedb.com/pxops-add-tls created
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch the `PerconaXtraDBOpsRequest` CRO,
+
+```bash
+$ kubectl get perconaxtradbopsrequest --all-namespaces
+NAMESPACE   NAME            TYPE             STATUS       AGE
+demo        pxops-add-tls   ReconfigureTLS   Successful   6m6s
+```
+
+We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded.
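+
+You can also confirm that cert-manager has issued the certificates by checking the generated TLS secrets. A quick sketch; the `sample-pxc-server-cert` secret name is the one shown in the certificate spec later in this guide:
+
+```bash
+# The issued certificates are stored in kubernetes.io/tls secrets
+$ kubectl get secret -n demo sample-pxc-server-cert
+```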
+
+Now, we are going to connect to the database to verify that the `PerconaXtraDB` server has been configured with TLS/SSL encryption.
+
+Let's exec into the pod to verify the TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash
+root@sample-pxc-0:/ ls /etc/mysql/certs/client
+ca.crt  tls.crt  tls.key
+root@sample-pxc-0:/ ls /etc/mysql/certs/server
+ca.crt  tls.crt  tls.key
+root@sample-pxc-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 58
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show variables like '%ssl%';
++---------------------+---------------------------------+
+| Variable_name       | Value                           |
++---------------------+---------------------------------+
+| have_openssl        | YES                             |
+| have_ssl            | YES                             |
+| ssl_ca              | /etc/mysql/certs/server/ca.crt  |
+| ssl_capath          | /etc/mysql/certs/server         |
+| ssl_cert            | /etc/mysql/certs/server/tls.crt |
+| ssl_cipher          |                                 |
+| ssl_crl             |                                 |
+| ssl_crlpath         |                                 |
+| ssl_key             | /etc/mysql/certs/server/tls.key |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020     |
++---------------------+---------------------------------+
+10 rows in set (0.005 sec)
+
+mysql> show variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | ON    |
++--------------------------+-------+
+1 row in set (0.005 sec)
+
+mysql> quit;
+Bye
+```
+
+We can see from the above output that `have_ssl` is set to `YES`. So, TLS has been enabled successfully for this database.
+
+> Note: Adding or updating TLS with `requireSSL: true` will cause downtime of the database while the `PerconaXtraDBOpsRequest` is in `Progressing` status.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash
+root@sample-pxc-0:/ apt update
+root@sample-pxc-0:/ apt install openssl
+root@sample-pxc-0:/ openssl x509 -in /etc/mysql/certs/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Apr 13 05:18:43 2022 GMT
+```
+
+So, the certificate will expire on `Apr 13 05:18:43 2022 GMT`.
+
+### Create PerconaXtraDBOpsRequest
+
+Now we are going to rotate the certificate using a PerconaXtraDBOpsRequest. Below is the yaml of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: pxops-rotate-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-pxc` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
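+
+If you would rather not install openssl inside the pod, the same expiry check can be done from outside against the certificate secret. A sketch, reusing the `sample-pxc-server-cert` secret name from the certificate spec shown later in this guide:
+
+```bash
+# Decode the issued server certificate and print its expiry date
+$ kubectl get secret -n demo sample-pxc-server-cert \
+    -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
+```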
+ +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-rotate-tls.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-rotate-tls created +``` + +#### Verify Certificate Rotated Successfully + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CRO, + +```bash +$ kubectl get perconaxtradbopsrequest --all-namespaces +NAMESPACE NAME TYPE STATUS AGE +demo pxops-rotate-tls ReconfigureTLS Successful 3m +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. Now, let's check the expiration date of the certificate. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash +root@sample-pxc-0:/ apt update +root@sample-pxc-0:/ apt install openssl +root@sample-pxc-0:/# openssl x509 -in /etc/mysql/certs/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Apr 13 06:04:50 2022 GMT +``` + +As we can see from the above output, the certificate has been rotated successfully. + +## Update Certificate + +Now, we are going to update the server certificate. + +- Let's describe the server certificate `sample-pxc-server-cert` +```bash +$ kubectl describe certificate -n demo sample-pxc-server-cert +Name: sample-pxc-server-cert +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=sample-pxc + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=perconaxtradbs.kubedb.com +Annotations: +API Version: cert-manager.io/v1 +Kind: Certificate +Metadata: + Creation Timestamp: 2022-01-13T05:18:42Z + Generation: 1 + ... + Owner References: + API Version: kubedb.com/v1alpha2 + Block Owner Deletion: true + Controller: true + Kind: PerconaXtraDB + Name: sample-pxc + UID: ed8f45c7-7caf-4890-8a9c-b8437b6ca48b + Resource Version: 241340 + UID: 3343e971-395d-46df-9536-47194eb96dcc +Spec: + Common Name: sample-pxc.demo.svc + Dns Names: + *.sample-pxc-pods.demo.svc + *.sample-pxc-pods.demo.svc.cluster.local + *.sample-pxc.demo.svc + localhost + sample-pxc + sample-pxc.demo.svc + Ip Addresses: + 127.0.0.1 + Issuer Ref: + Group: cert-manager.io + Kind: Issuer + Name: px-issuer + Secret Name: sample-pxc-server-cert + Subject: + Organizations: + kubedb:server + Usages: + digital signature + key encipherment + server auth + client auth +Status: + Conditions: + Last Transition Time: 2022-01-13T05:18:43Z + Message: Certificate is up to date and has not expired + Observed Generation: 1 + Reason: Ready + Status: True + Type: Ready + Not After: 2022-04-13T06:04:50Z + Not Before: 2022-01-13T06:04:50Z + Renewal Time: 2022-03-14T06:04:50Z + Revision: 6 +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-pxc-server-cert-8tnj5" + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-pxc-server-cert-fw6sk" + Normal Requested 22m cert-manager Created new CertificateRequest resource "sample-pxc-server-cert-cvphm" + Normal Requested 20m cert-manager Created new CertificateRequest resource "sample-pxc-server-cert-nvhp6" + Normal Requested 19m cert-manager Created new CertificateRequest resource "sample-pxc-server-cert-p5287" + Normal Reused 19m (x5 over 22m) cert-manager Reusing private key stored in existing Secret resource "sample-pxc-server-cert" + Normal 
Issuing    19m (x6 over 65m)  cert-manager  The certificate has been successfully issued
+```
+
+We want to add `subject` and `emailAddresses` in the spec of the server certificate.
+
+### Create PerconaXtraDBOpsRequest
+
+Below is the YAML of the `PerconaXtraDBOpsRequest` CRO that we are going to create to update the server certificate,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: pxops-update-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        emailAddresses:
+          - "kubedb@appscode.com"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-pxc` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the changes that we want in the certificate objects.
+- `spec.tls.certificates[].alias` specifies the certificate type, which is one of these: `server`, `client`, `metrics-exporter`.
+
+Let's create the `PerconaXtraDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-update-tls.yaml
+perconaxtradbopsrequest.ops.kubedb.com/pxops-update-tls created
+```
+
+#### Verify certificate is updated successfully
+
+Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch the `PerconaXtraDBOpsRequest` CRO,
+
+```bash
+$ watch kubectl get perconaxtradbopsrequest -n demo
+Every 2.0s: kubectl get perconaxtradbopsrequest -n demo
+NAME               TYPE             STATUS       AGE
+pxops-update-tls   ReconfigureTLS   Successful   7m
+
+```
+
+We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded.
+
+Now, let's exec into a database node and inspect the certificate subject to see if it matches the one we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash
+root@sample-pxc-0:/ apt update
+root@sample-pxc-0:/ apt install openssl
+root@sample-pxc-0:/ openssl x509 -in /etc/mysql/certs/server/tls.crt -inform PEM -subject -email -nameopt RFC2253 -noout
+subject=CN=sample-pxc.demo.svc,O=kubedb:server
+kubedb@appscode.com
+```
+
+We can see from the above output that the subject name and email address match the ones we have provided. So, the certificate has been updated successfully.
+
+## Remove TLS from the Database
+
+Now, we are going to remove TLS from this database using a PerconaXtraDBOpsRequest.
+
+### Create PerconaXtraDBOpsRequest
+
+Below is the YAML of the `PerconaXtraDBOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PerconaXtraDBOpsRequest
+metadata:
+  name: pxops-remove-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sample-pxc
+  tls:
+    remove: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `sample-pxc` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.remove` specifies that we want to remove TLS from this database.
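+
+After creating the request below, you can also block until it finishes instead of polling with `kubectl get`. A sketch; `--for=jsonpath` needs a reasonably recent kubectl (roughly v1.23 or later):
+
+```bash
+# Wait until the ops request reports the Successful phase
+$ kubectl wait -n demo perconaxtradbopsrequest/pxops-remove-tls \
+    --for=jsonpath='{.status.phase}'=Successful --timeout=10m
+```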
+
+Let's create the `PerconaXtraDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure-tls/cluster/examples/pxops-remove-tls.yaml
+perconaxtradbopsrequest.ops.kubedb.com/pxops-remove-tls created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch the `PerconaXtraDBOpsRequest` CRO,
+
+```bash
+$ kubectl get perconaxtradbopsrequest --all-namespaces
+NAMESPACE   NAME               TYPE             STATUS       AGE
+demo        pxops-remove-tls   ReconfigureTLS   Successful   6m27s
+```
+
+We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest`, we will get an overview of the steps that were followed.
+
+Now, let's exec into the database and find out whether TLS is disabled or not.
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash
+root@sample-pxc-0:/ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD}
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 108
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show variables like '%ssl%';
++---------------------+-----------------------------+
+| Variable_name       | Value                       |
++---------------------+-----------------------------+
+| have_openssl        | YES                         |
+| have_ssl            | DISABLED                    |
+| ssl_ca              |                             |
+| ssl_capath          |                             |
+| ssl_cert            |                             |
+| ssl_cipher          |                             |
+| ssl_crl             |                             |
+| ssl_crlpath         |                             |
+| ssl_key             |                             |
+| version_ssl_library | OpenSSL 1.1.1f  31 Mar 2020 |
++---------------------+-----------------------------+
+10 rows in set (0.001 sec)
+
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo --all
+$ kubectl delete issuer -n demo --all
+$ kubectl delete perconaxtradbopsrequest -n demo --all
+$ kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/images/reconfigure-tls.jpeg b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/images/reconfigure-tls.jpeg
new file mode 100644
index 0000000000..d60d6c3525
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/images/reconfigure-tls.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/index.md
new file mode 100644
index 0000000000..f68d2d0703
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure-tls/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring TLS of PerconaXtraDB Database
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-reconfigure-tls-overview
+    name: Overview
+    parent: guides-perconaxtradb-reconfigure-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring TLS of PerconaXtraDB Database
+
+This guide will give an overview of how the KubeDB Ops Manager reconfigures the TLS configuration of a `PerconaXtraDB` database, i.e., adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+
+## How Reconfiguring PerconaXtraDB TLS Configuration Process Works
+
+The following diagram shows how the KubeDB Ops Manager reconfigures TLS of a `PerconaXtraDB` database. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Reconfiguring TLS process of PerconaXtraDB" src="images/reconfigure-tls.jpeg">
+  <figcaption align="center">Fig: Reconfiguring TLS process of PerconaXtraDB</figcaption>
+</figure>
+
+The Reconfiguring PerconaXtraDB TLS process consists of the following steps:
+
+1. At first, a user creates a `PerconaXtraDB` Custom Resource Object (CRO).
+
+2. `KubeDB` Community operator watches the `PerconaXtraDB` CRO.
+
+3. When the operator finds a `PerconaXtraDB` CR, it creates the required number of `StatefulSets` and related resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `PerconaXtraDB` database, the user creates a `PerconaXtraDBOpsRequest` CR with the desired information.
+
+5. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR.
+
+6. When it finds a `PerconaXtraDBOpsRequest` CR, it pauses the `PerconaXtraDB` object which is referred to in the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the reconfiguring TLS process.
+
+7. Then the `KubeDB` Enterprise operator will add, remove, update or rotate the TLS configuration based on the Ops Request yaml.
+
+8. Then the `KubeDB` Enterprise operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `PerconaXtraDBOpsRequest` CR.
+
+9. After the successful reconfiguring of the `PerconaXtraDB` TLS, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring the TLS configuration of a PerconaXtraDB database using the `PerconaXtraDBOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/_index.md
new file mode 100644
index 0000000000..4feec8fa85
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/_index.md
@@ -0,0 +1,22 @@
+---
+title: Reconfigure
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-reconfigure
+    name: Reconfigure
+    parent: guides-perconaxtradb
+    weight: 46
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/new-px-config.cnf b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/new-px-config.cnf
new file mode 100644
index 0000000000..7e27973b35
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/new-px-config.cnf
@@ -0,0 +1,3 @@
+[mysqld]
+max_connections = 250
+read_buffer_size = 122880
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/px-config.cnf b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/px-config.cnf
new file mode 100644
index 0000000000..ccd87f160c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/px-config.cnf
@@ -0,0 +1,3 @@
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/pxops-reconfigure-apply-config.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/pxops-reconfigure-apply-config.yaml
new file mode 100644
index 0000000000..36df25e4a4
--- /dev/null
+++
b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/pxops-reconfigure-apply-config.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-apply-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + applyConfig: + new-md-config.cnf: | + [mysqld] + max_connections = 230 + read_buffer_size = 1064960 + innodb-config.cnf: | + [mysqld] + innodb_log_buffer_size = 17408000 diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-remove.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-remove.yaml new file mode 100644 index 0000000000..5c259174e7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-remove.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-remove + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + removeCustomConfig: true diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-using-secret.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-using-secret.yaml new file mode 100644 index 0000000000..d0aaed8fd2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-using-secret.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + configSecret: + name: new-px-configuration diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/sample-pxc-config.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/sample-pxc-config.yaml new file mode 100644 index 0000000000..6cec245abd --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/examples/sample-pxc-config.yaml @@ -0,0 +1,20 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + configSecret: + name: px-configuration + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/index.md new file mode 100644 index 0000000000..f39839132f --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/cluster/index.md @@ -0,0 +1,647 @@ +--- +title: Reconfigure PerconaXtraDB Cluster +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-reconfigure-cluster + name: Cluster + parent: guides-perconaxtradb-reconfigure + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). 
+
+# Reconfigure PerconaXtraDB Cluster Database
+
+This guide will show you how to use the `KubeDB` Enterprise operator to reconfigure a PerconaXtraDB Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDB Cluster](/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+  - [Reconfigure Overview](/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+Now, we are going to deploy a `PerconaXtraDB` Cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `PerconaXtraDBOpsRequest` to reconfigure its configuration.
+
+### Prepare PerconaXtraDB Cluster
+
+Now, we are going to deploy a `PerconaXtraDB` Cluster database with version `8.0.26`.
+
+### Deploy PerconaXtraDB
+
+At first, we will create a `px-config.cnf` file containing the required configuration settings.
+
+```ini
+$ cat px-config.cnf
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```
+
+Here, `max_connections` is set to `200`, whereas the default value is `151`. Likewise, `read_buffer_size` is set to `1048576`, whereas its default value is `131072`.
+
+Now, we will create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo px-configuration --from-file=./px-config.cnf
+secret/px-configuration created
+```
+
+In this section, we are going to create a PerconaXtraDB object specifying the `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `PerconaXtraDB` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  configSecret:
+    name: px-configuration
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `PerconaXtraDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure/cluster/examples/sample-pxc-config.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait until `sample-pxc` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    71s
+```
+
+First, we need to get the username and password to connect to a perconaxtradb instance,
+
+```bash
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\password}' | base64 -d
+nrKuxni0wDSMrgwy
+```
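+
+You can also decode the secret to double-check exactly what configuration the pods were given; the data key matches the file name the secret was created from:
+
+```bash
+# Print the custom config stored in the px-configuration secret
+$ kubectl get secret -n demo px-configuration -o jsonpath='{.data.px-config\.cnf}' | base64 -d
+[mysqld]
+max_connections = 200
+read_buffer_size = 1048576
+```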
+ +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 3699 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 200 | ++-----------------+-------+ +1 row in set (0.00 sec) + + +# value of `read_buffer_size` is same as provided +mysql> show variables like 'read_buffer_size'; ++------------------+---------+ +| Variable_name | Value | ++------------------+---------+ +| read_buffer_size | 1048576 | ++------------------+---------+ +1 row in set (0.00 sec) + + +mysql> exit +Bye +``` + +As we can see from the configuration of the running perconaxtradb, the value of `max_connections` has been set to `200` and `read_buffer_size` has been set to `1048576`. + +### Reconfigure using new config secret + +Now we will reconfigure this database to set `max_connections` to `250` and `read_buffer_size` to `122880`. + +Now, we will create a new file `new-px-config.cnf` containing the required configuration settings. + +```ini +$ cat new-px-config.cnf +[mysqld] +max_connections = 250 +read_buffer_size = 122880 +``` + +Then, we will create a new secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo new-px-configuration --from-file=./new-px-config.cnf +secret/new-px-configuration created +``` + +#### Create PerconaXtraDBOpsRequest + +Now, we will use this secret to replace the previous secret using a `PerconaXtraDBOpsRequest` CR. The `PerconaXtraDBOpsRequest` yaml is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + configSecret: + name: new-px-configuration +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring `sample-pxc` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.configSecret.name` specifies the name of the new secret. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-using-secret.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-reconfigure-config created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of the `PerconaXtraDB` object. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`.
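+ +If you'd rather block until the request finishes instead of polling, a JSONPath-based `kubectl wait` should also work here (a convenience sketch; it assumes `kubectl` v1.23+ and relies on the `status.phase: Successful` field visible in the describe output below): + +```bash +# Block until the ops request reports phase=Successful (or time out after 10 minutes). +$ kubectl wait perconaxtradbopsrequest/pxops-reconfigure-config -n demo \ + --for=jsonpath='{.status.phase}'=Successful --timeout=10m +```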
Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ kubectl get perconaxtradbopsrequest --all-namespaces +NAMESPACE NAME TYPE STATUS AGE +demo pxops-reconfigure-config Reconfigure Successful 3m8s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe perconaxtradbopsrequest -n demo pxops-reconfigure-config +Name: pxops-reconfigure-config +Namespace: demo +Labels: <none> +Annotations: <none> +API Version: ops.kubedb.com/v1alpha1 +Kind: PerconaXtraDBOpsRequest +Metadata: + Creation Timestamp: 2022-06-10T04:43:50Z + Generation: 1 + Resource Version: 1123451 + UID: 27a73fc6-1d25-4019-8975-f7d4daf782b7 +Spec: + Configuration: + Config Secret: + Name: new-px-configuration + Database Ref: + Name: sample-pxc + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2022-06-10T04:43:50Z + Message: Controller has started to Progress the PerconaXtraDBOpsRequest: demo/pxops-reconfigure-config + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-10T04:47:25Z + Message: Successfully restarted PerconaXtraDB pods for PerconaXtraDBOpsRequest: demo/pxops-reconfigure-config + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSetPods + Last Transition Time: 2022-06-10T04:47:30Z + Message: Successfully reconfigured PerconaXtraDB for PerconaXtraDBOpsRequest: demo/pxops-reconfigure-config + Observed Generation: 1 + Reason: SuccessfullyDBReconfigured + Status: True + Type: DBReady + Last Transition Time: 2022-06-10T04:47:30Z + Message: Controller has successfully reconfigure the PerconaXtraDB demo/pxops-reconfigure-config + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful + +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 3699 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+ +mysql> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 250 | ++-----------------+-------+ +1 row in set (0.00 sec) + + +# value of `read_buffer_size` is same as provided +mysql> show variables like 'read_buffer_size'; ++------------------+--------+ +| Variable_name | Value | ++------------------+--------+ +| read_buffer_size | 122880 | ++------------------+--------+ +1 row in set (0.00 sec) + + +mysql> exit +Bye +``` + + +As we can see, the configuration has changed: the value of `max_connections` has been changed from `200` to `250` and the `read_buffer_size` has been changed from `1048576` to `122880`. So the reconfiguration of the database is successful. + + +### Reconfigure Existing Config Secret + +Now, we will create a new `PerconaXtraDBOpsRequest` to reconfigure our existing secret `new-px-configuration` by modifying our `new-px-config.cnf` file using `applyConfig`. The `PerconaXtraDBOpsRequest` yaml is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-apply-config + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + applyConfig: + new-px-config.cnf: | + [mysqld] + max_connections = 230 + read_buffer_size = 1064960 + innodb-config.cnf: | + [mysqld] + innodb_log_buffer_size = 17408000 +``` +> Note: You can modify multiple fields of your current configuration using `applyConfig`. If you don't have any secrets then `applyConfig` will create a secret for you. Here, we modified the values of our two existing fields, `max_connections` and `read_buffer_size`, and also added a new field, `innodb_log_buffer_size`, to our configuration. + +Here, +- `spec.databaseRef.name` specifies that we are reconfiguring `sample-pxc` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.applyConfig` contains the configuration for the existing or newly created secret. + +Before applying this yaml, we are going to check the existing value of our new field, + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 3699 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show variables like 'innodb_log_buffer_size'; ++------------------------+----------+ +| Variable_name | Value | ++------------------------+----------+ +| innodb_log_buffer_size | 16777216 | ++------------------------+----------+ +1 row in set (0.00 sec) + +mysql> exit +Bye +``` + +Here, we can see the default value for `innodb_log_buffer_size` is `16777216`.
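+ +You can also double-check what is currently stored in the config secret before merging new values into it (the backslash escapes the dot in the key name for JSONPath); this should print the file we stored earlier: + +```bash +# Print the current contents of the custom config secret. +$ kubectl get secret -n demo new-px-configuration \ + -o jsonpath='{.data.new-px-config\.cnf}' | base64 -d +[mysqld] +max_connections = 250 +read_buffer_size = 122880 +```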
+ +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure/cluster/examples/pxops-reconfigure-apply-config.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-reconfigure-apply-config created +``` + + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of the `PerconaXtraDB` object. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ kubectl get perconaxtradbopsrequest pxops-reconfigure-apply-config -n demo +NAME TYPE STATUS AGE +pxops-reconfigure-apply-config Reconfigure Successful 4m59s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe perconaxtradbopsrequest -n demo pxops-reconfigure-apply-config +Name: pxops-reconfigure-apply-config +Namespace: demo +Labels: <none> +Annotations: <none> +API Version: ops.kubedb.com/v1alpha1 +Kind: PerconaXtraDBOpsRequest +Metadata: + Creation Timestamp: 2022-06-10T09:13:49Z + Generation: 1 + Resource Version: 14120 + UID: eb8d5df5-a0ce-4011-890c-c18c0200b5ac +Spec: + Configuration: + Apply Config: + innodb-config.cnf: [mysqld] +innodb_log_buffer_size = 17408000 + + new-px-config.cnf: [mysqld] +max_connections = 230 +read_buffer_size = 1064960 + + Database Ref: + Name: sample-pxc + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2022-06-10T09:13:49Z + Message: Controller has started to Progress the PerconaXtraDBOpsRequest: demo/pxops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-06-10T09:13:49Z + Message: Successfully prepared user provided custom config secret + Observed Generation: 1 + Reason: PrepareSecureCustomConfig + Status: True + Type: PrepareCustomConfig + Last Transition Time: 2022-06-10T09:17:24Z + Message: Successfully restarted PerconaXtraDB pods for PerconaXtraDBOpsRequest: demo/pxops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSetPods + Last Transition Time: 2022-06-10T09:17:29Z + Message: Successfully reconfigured PerconaXtraDB for PerconaXtraDBOpsRequest: demo/pxops-reconfigure-apply-config + Observed Generation: 1 + Reason: SuccessfullyDBReconfigured + Status: True + Type: DBReady + Last Transition Time: 2022-06-10T09:17:29Z + Message: Controller has successfully reconfigure the PerconaXtraDB demo/pxops-reconfigure-apply-config + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 3699 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. +# value of `max_connections` is same as provided +mysql> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 230 | ++-----------------+-------+ +1 row in set (0.001 sec) + +# value of `read_buffer_size` is same as provided +mysql> show variables like 'read_buffer_size'; ++------------------+---------+ +| Variable_name | Value | ++------------------+---------+ +| read_buffer_size | 1064960 | ++------------------+---------+ +1 row in set (0.001 sec) + +# value of `innodb_log_buffer_size` is same as provided +mysql> show variables like 'innodb_log_buffer_size'; ++------------------------+----------+ +| Variable_name | Value | ++------------------------+----------+ +| innodb_log_buffer_size | 17408000 | ++------------------------+----------+ +1 row in set (0.001 sec) + +mysql> exit +Bye +``` + +As we can see from the above output, the configuration has been changed: the value of `max_connections` has been changed from `250` to `230`, the `read_buffer_size` has been changed from `122880` to `1064960`, and `innodb_log_buffer_size` has been changed from `16777216` to `17408000`. So the reconfiguration of the `sample-pxc` database is successful. + + +### Remove Custom Configuration + +We can also remove the existing custom configuration using a `PerconaXtraDBOpsRequest`. Set the field `spec.configuration.removeCustomConfig` to `true` and make an Ops Request to remove the existing custom configuration. + +#### Create PerconaXtraDBOpsRequest + +Let's create a `PerconaXtraDBOpsRequest` with `spec.configuration.removeCustomConfig` set to `true`, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-reconfigure-remove + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: sample-pxc + configuration: + removeCustomConfig: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring the `sample-pxc` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.removeCustomConfig` is a bool field that should be `true` when you want to remove the existing custom configuration. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/reconfigure/cluster/examples/reconfigure-remove.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-reconfigure-remove created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Enterprise operator will update the `configSecret` of the `PerconaXtraDB` object. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`.
Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ kubectl get perconaxtradbopsrequest --all-namespaces +NAMESPACE NAME TYPE STATUS AGE +demo pxops-reconfigure-remove Reconfigure Successful 2m1s +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 3699 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +# value of `max_connections` is default +mysql> show variables like 'max_connections'; ++-----------------+-------+ +| Variable_name | Value | ++-----------------+-------+ +| max_connections | 151 | ++-----------------+-------+ +1 row in set (0.001 sec) + +# value of `read_buffer_size` is default +mysql> show variables like 'read_buffer_size'; ++------------------+---------+ +| Variable_name | Value | ++------------------+---------+ +| read_buffer_size | 131072 | ++------------------+---------+ +1 row in set (0.001 sec) + +# value of `innodb_log_buffer_size` is default +mysql> show variables like 'innodb_log_buffer_size'; ++------------------------+----------+ +| Variable_name | Value | ++------------------------+----------+ +| innodb_log_buffer_size | 16777216 | ++------------------------+----------+ +1 row in set (0.001 sec) + +mysql> exit +Bye +``` + +As we can see, the configuration has been reset to its default values. So the removal of the existing custom configuration using `PerconaXtraDBOpsRequest` is successful.
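+ +As a final sanity check, the custom config reference should be gone from the database spec as well; assuming the operator clears `spec.configSecret` on removal, the following should print nothing: + +```bash +# An empty result means no custom config secret is referenced anymore. +$ kubectl get perconaxtradb -n demo sample-pxc -o jsonpath='{.spec.configSecret}' +```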
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +$ kubectl delete perconaxtradbopsrequest -n demo pxops-reconfigure-config pxops-reconfigure-apply-config pxops-reconfigure-remove +$ kubectl delete ns demo +``` diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/images/reconfigure.jpeg b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/images/reconfigure.jpeg new file mode 100644 index 0000000000..aacf42cefe Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/images/reconfigure.jpeg differ diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/index.md new file mode 100644 index 0000000000..30ccb48fc6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/index.md @@ -0,0 +1,65 @@ +--- +title: Reconfiguring PerconaXtraDB +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-reconfigure-overview + name: Overview + parent: guides-perconaxtradb-reconfigure + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfiguring PerconaXtraDB + +This guide will give an overview on how KubeDB Ops Manager reconfigures `PerconaXtraDB`. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest) + +## How Reconfiguring PerconaXtraDB Process Works + +The following diagram shows how KubeDB Ops Manager reconfigures `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version. + +
+<figure align="center"> +  <img alt="Reconfiguring process of PerconaXtraDB" src="/docs/v2024.1.31/guides/percona-xtradb/reconfigure/overview/images/reconfigure.jpeg"> +<figcaption align="center">Fig: Reconfiguring process of PerconaXtraDB</figcaption> +</figure> + +The Reconfiguring PerconaXtraDB process consists of the following steps: + +1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `PerconaXtraDB` CR. + +3. When the operator finds a `PerconaXtraDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the `PerconaXtraDB` standalone or cluster, the user creates a `PerconaXtraDBOpsRequest` CR with the desired information. + +5. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR. + +6. When it finds a `PerconaXtraDBOpsRequest` CR, it halts the `PerconaXtraDB` object which is referred from the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the reconfiguring process. + +7. Then the `KubeDB` Enterprise operator will replace the existing configuration with the new configuration provided or merge the new configuration with the existing configuration according to the `PerconaXtraDBOpsRequest` CR. + +8. Then the `KubeDB` Enterprise operator will restart the related StatefulSet Pods so that they restart with the new configuration defined in the `PerconaXtraDBOpsRequest` CR. + +9. After the successful reconfiguring of the `PerconaXtraDB`, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on reconfiguring PerconaXtraDB database components using `PerconaXtraDBOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/_index.md new file mode 100644 index 0000000000..68f8f1e6b4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Scaling PerconaXtraDB +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling + name: Scaling + parent: guides-perconaxtradb + weight: 43 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/_index.md new file mode 100644 index 0000000000..2e56c1da0e --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Horizontal Scaling +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-horizontal + name: Horizontal Scaling + parent: guides-perconaxtradb-scaling + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-downscale.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-downscale.yaml new file mode 100644 index 0000000000..e7602843bc --- /dev/null +++
b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-downscale.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-scale-horizontal-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-pxc + horizontalScaling: + member: 3 diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-upscale.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-upscale.yaml new file mode 100644 index 0000000000..916f8b79db --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-upscale.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-scale-horizontal-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-pxc + horizontalScaling: + member: 5 diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/sample-pxc.yaml new file mode 100644 index 0000000000..b932b93cb1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/sample-pxc.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/index.md new file mode 100644 index 0000000000..59b51b2628 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/cluster/index.md @@ -0,0 +1,283 @@ +--- +title: Horizontal Scaling PerconaXtraDB +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-horizontal-cluster + name: Cluster + parent: guides-perconaxtradb-scaling-horizontal + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Horizontal Scale PerconaXtraDB + +This guide will show you how to use `KubeDB` Enterprise operator to scale the cluster of a PerconaXtraDB database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+ +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/) + - [PerconaXtraDB Cluster](/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster/) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/) + - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Apply Horizontal Scaling on Cluster + +Here, we are going to deploy a `PerconaXtraDB` cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it. + +### Prepare PerconaXtraDB Cluster Database + +Now, we are going to deploy a `PerconaXtraDB` cluster with version `8.0.26`. + +### Deploy PerconaXtraDB Cluster + +In this section, we are going to deploy a PerconaXtraDB cluster. Then, in the next section we will scale the database using `PerconaXtraDBOpsRequest` CRD. Below is the YAML of the `PerconaXtraDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `PerconaXtraDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/sample-pxc.yaml +perconaxtradb.kubedb.com/sample-pxc created +``` + +Now, wait until `sample-pxc` has status `Ready`. i.e, + +```bash +$ kubectl get perconaxtradb -n demo +NAME VERSION STATUS AGE +sample-pxc 8.0.26 Ready 2m36s +``` + +Let's check the number of replicas this database has from the PerconaXtraDB object, and the number of pods the StatefulSet has, + +```bash +$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.replicas' +3 +$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.replicas' +3 +``` + +We can see from both commands that the database has 3 replicas in the cluster. + +Also, we can verify the replicas of the cluster using an internal perconaxtradb command by exec-ing into a replica. + +First, we need to get the username and password to connect to a perconaxtradb instance, +```bash +$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo sample-pxc-auth -o jsonpath='{.data.\password}' | base64 -d +nrKuxni0wDSMrgwy +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the number of replicas, + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash +root@sample-pxc-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';" ++--------------------+-------+ +| Variable_name | Value | ++--------------------+-------+ +| wsrep_cluster_size | 3 | ++--------------------+-------+ + +``` + +We can see from the above output that the cluster has 3 nodes. + +We are now ready to apply the `PerconaXtraDBOpsRequest` CR to scale this database.
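+ +Tip: the same cluster-size check can be run non-interactively, which is handy for re-checking after each scaling step below. Note that the environment variable must be expanded inside the pod rather than by your local shell, hence the inner `bash -c`: + +```bash +# Run the wsrep_cluster_size query without opening an interactive session. +$ kubectl exec -n demo sample-pxc-0 -c perconaxtradb -- \ + bash -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "show status like \"wsrep_cluster_size\";"' +```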
+ +### Scale Up Replicas + +Here, we are going to scale up the replicas of the cluster to meet the desired number of replicas after scaling. + +#### Create PerconaXtraDBOpsRequest + +In order to scale up the replicas of the cluster of the database, we have to create a `PerconaXtraDBOpsRequest` CR with our desired replicas. Below is the YAML of the `PerconaXtraDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-scale-horizontal-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-pxc + horizontalScaling: + member: 5 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `sample-pxc` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.member` specifies the desired replicas after scaling. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-upscale.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-scale-horizontal-up created +``` + +#### Verify Cluster replicas scaled up successfully + +If everything goes well, `KubeDB` Enterprise operator will update the replicas of `PerconaXtraDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ watch kubectl get perconaxtradbopsrequest -n demo +Every 2.0s: kubectl get perconaxtradbopsrequest -n demo +NAME TYPE STATUS AGE +pxops-scale-horizontal-up HorizontalScaling Successful 106s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. Now, we are going to verify the number of replicas this database has from the PerconaXtraDB object, and the number of pods the StatefulSet has, + +```bash +$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.replicas' +5 +$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.replicas' +5 +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the number of replicas, + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash +root@sample-pxc-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';" ++--------------------+-------+ +| Variable_name | Value | ++--------------------+-------+ +| wsrep_cluster_size | 5 | ++--------------------+-------+ +``` + +From all the above outputs we can see that the cluster now has `5` replicas. That means we have successfully scaled up the replicas of the PerconaXtraDB cluster. + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the cluster to meet the desired number of replicas after scaling. + +#### Create PerconaXtraDBOpsRequest + +In order to scale down the cluster of the database, we have to create a `PerconaXtraDBOpsRequest` CR with our desired replicas.
Below is the YAML of the `PerconaXtraDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-scale-horizontal-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: sample-pxc + horizontalScaling: + member: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `sample-pxc` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.member` specifies the desired replicas after scaling. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/scaling/horizontal-scaling/cluster/example/pxops-downscale.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-scale-horizontal-down created +``` + +#### Verify Cluster replicas scaled down successfully + +If everything goes well, `KubeDB` Enterprise operator will update the replicas of `PerconaXtraDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ watch kubectl get perconaxtradbopsrequest -n demo +Every 2.0s: kubectl get perconaxtradbopsrequest -n demo +NAME TYPE STATUS AGE +pxops-scale-horizontal-down HorizontalScaling Successful 2m32s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. Now, we are going to verify the number of replicas this database has from the PerconaXtraDB object, and the number of pods the StatefulSet has, + +```bash +$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.replicas' +3 +$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a perconaxtradb instance and run a perconaxtradb internal command to check the number of replicas, +```bash +$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- bash +root@sample-pxc-0:/ mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "show status like 'wsrep_cluster_size';" ++--------------------+-------+ +| Variable_name | Value | ++--------------------+-------+ +| wsrep_cluster_size | 3 | ++--------------------+-------+ +``` + +From all the above outputs we can see that the cluster now has `3` replicas. That means we have successfully scaled down the replicas of the PerconaXtraDB cluster.
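+ +While an ops request is progressing, you can also watch the pods directly to see cluster members being added or removed as the operator works through the StatefulSet: + +```bash +# Watch pod churn in the demo namespace during scaling. +$ kubectl get pods -n demo -w +```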
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +$ kubectl delete perconaxtradbopsrequest -n demo pxops-scale-horizontal-up pxops-scale-horizontal-down +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg new file mode 100644 index 0000000000..56ddd42aa1 Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg differ diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/index.md new file mode 100644 index 0000000000..69da41aa1c --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/index.md @@ -0,0 +1,65 @@ +--- +title: PerconaXtraDB Horizontal Scaling Overview +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-horizontal-overview + name: Overview + parent: guides-perconaxtradb-scaling-horizontal + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# PerconaXtraDB Horizontal Scaling + +This guide will give an overview on how KubeDB Ops Manager scales up or down `PerconaXtraDB Cluster`. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/) + +## How Horizontal Scaling Process Works + +The following diagram shows how KubeDB Ops Manager scales up or down `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version. + +
+<figure align="center"> +  <img alt="Horizontal scaling process of PerconaXtraDB" src="/docs/v2024.1.31/guides/percona-xtradb/scaling/horizontal-scaling/overview/images/horizontal-scaling.jpg"> +<figcaption align="center">Fig: Horizontal scaling process of PerconaXtraDB</figcaption> +</figure> + +The Horizontal scaling process consists of the following steps: + +1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `PerconaXtraDB` CR. + +3. When the operator finds a `PerconaXtraDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to scale the `PerconaXtraDB` database, the user creates a `PerconaXtraDBOpsRequest` CR with the desired information. + +5. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR. + +6. When it finds a `PerconaXtraDBOpsRequest` CR, it pauses the `PerconaXtraDB` object which is referred from the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the horizontal scaling process. + +7. Then the `KubeDB` Enterprise operator will scale the related StatefulSet Pods to reach the expected number of replicas defined in the `PerconaXtraDBOpsRequest` CR. + +8. After successfully scaling the replicas of the StatefulSet Pods, the `KubeDB` Enterprise operator updates the number of replicas in the `PerconaXtraDB` object to reflect the updated state. + +9. After the successful scaling of the `PerconaXtraDB` replicas, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on horizontal scaling of PerconaXtraDB database using `PerconaXtraDBOpsRequest` CRD. diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..110de930ad --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Vertical Scaling +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-vertical + name: Vertical Scaling + parent: guides-perconaxtradb-scaling + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/pxops-vscale.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/pxops-vscale.yaml new file mode 100644 index 0000000000..8d85ea3e5e --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/pxops-vscale.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-vscale + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: sample-pxc + verticalScaling: + perconaxtradb: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/sample-pxc.yaml new file mode 100644 index 0000000000..b932b93cb1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/sample-pxc.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind:
PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/index.md new file mode 100644 index 0000000000..a39e182bd6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/cluster/index.md @@ -0,0 +1,197 @@ +--- +title: Vertical Scaling PerconaXtraDB Cluster +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-vertical-cluster + name: Cluster + parent: guides-perconaxtradb-scaling-vertical + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale PerconaXtraDB Cluster + +This guide will show you how to use `KubeDB` Enterprise operator to update the resources of a PerconaXtraDB cluster database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) + - [Clustering](/docs/v2024.1.31/guides/percona-xtradb/clustering/galera-cluster) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest) + - [Vertical Scaling Overview](/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Apply Vertical Scaling on Cluster + +Here, we are going to deploy a `PerconaXtraDB` cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it. + +### Prepare PerconaXtraDB Cluster + +Now, we are going to deploy a `PerconaXtraDB` cluster database with version `8.0.26`. +> Vertical Scaling for `PerconaXtraDB Standalone` can be performed in the same way as `PerconaXtraDB Cluster`. Just remove the `spec.replicas` field from the below yaml to deploy a PerconaXtraDB Standalone. + +### Deploy PerconaXtraDB Cluster + +In this section, we are going to deploy a PerconaXtraDB cluster database. Then, in the next section we will update the resources of the database using `PerconaXtraDBOpsRequest` CRD.
Below is the YAML of the `PerconaXtraDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `PerconaXtraDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/sample-pxc.yaml +perconaxtradb.kubedb.com/sample-pxc created +``` + +Now, wait until `sample-pxc` has status `Ready`. i.e, + +```bash +$ kubectl get perconaxtradb -n demo +NAME VERSION STATUS AGE +sample-pxc 8.0.26 Ready 3m46s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "500m", + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` + +You can see the Pod has the default resources which are assigned by the KubeDB operator. + +We are now ready to apply the `PerconaXtraDBOpsRequest` CR to update the resources of this database. + +### Vertical Scaling + +Here, we are going to update the resources of the database to meet the desired resources after scaling. + +#### Create PerconaXtraDBOpsRequest + +In order to update the resources of the database, we have to create a `PerconaXtraDBOpsRequest` CR with our desired resources. Below is the YAML of the `PerconaXtraDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-vscale + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: sample-pxc + verticalScaling: + perconaxtradb: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing vertical scaling operation on `sample-pxc` database. +- `spec.type` specifies that we are performing `VerticalScaling` on our database. +- `spec.verticalScaling.perconaxtradb` specifies the desired resources after scaling. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/scaling/vertical-scaling/cluster/example/pxops-vscale.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-vscale created +``` + +#### Verify PerconaXtraDB Cluster resources updated successfully + +If everything goes well, `KubeDB` Enterprise operator will update the resources of `PerconaXtraDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ watch kubectl get perconaxtradbopsrequest -n demo +Every 2.0s: kubectl get perconaxtradbopsrequest -n demo +NAME TYPE STATUS AGE +pxops-vscale VerticalScaling Successful 3m56s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded.
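+ +As with the other ops requests in this guide, `kubectl describe` will show the step-by-step conditions if you want more detail (output omitted here): + +```bash +# Inspect the conditions the operator recorded while scaling vertically. +$ kubectl describe perconaxtradbopsrequest -n demo pxops-vscale +```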
Now, we are going to verify from one of the Pod yaml whether the resources of the database have been updated to meet the desired state. Let's check, + +```bash +$ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "600m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "600m", + "memory": "1288490188800m" + } +} +``` + +The above output verifies that we have successfully scaled up the resources of the PerconaXtraDB database (the memory value `1288490188800m` is simply Kubernetes' milli-byte rendering of `1.2Gi`). + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +$ kubectl delete perconaxtradbopsrequest -n demo pxops-vscale +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg new file mode 100644 index 0000000000..61a885492b Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg differ diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/index.md new file mode 100644 index 0000000000..f1d4c814fe --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/index.md @@ -0,0 +1,65 @@ +--- +title: PerconaXtraDB Vertical Scaling Overview +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-scaling-vertical-overview + name: Overview + parent: guides-perconaxtradb-scaling-vertical + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# PerconaXtraDB Vertical Scaling + +This guide will give an overview on how KubeDB Ops Manager vertically scales `PerconaXtraDB`. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest/) + +## How Vertical Scaling Process Works + +The following diagram shows how KubeDB Ops Manager updates the resources of `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version.
+<figure align="center"> +  <img alt="Vertical scaling process of PerconaXtraDB" src="/docs/v2024.1.31/guides/percona-xtradb/scaling/vertical-scaling/overview/images/vertical-scaling.jpg"> +<figcaption align="center">Fig: Vertical scaling process of PerconaXtraDB</figcaption> +</figure> + +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `PerconaXtraDB` CR. + +3. When the operator finds a `PerconaXtraDB` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `PerconaXtraDB` database, the user creates a `PerconaXtraDBOpsRequest` CR with the desired information. + +5. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR. + +6. When it finds a `PerconaXtraDBOpsRequest` CR, it halts the `PerconaXtraDB` object which is referred from the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the vertical scaling process. + +7. Then the `KubeDB` Enterprise operator will update the resources of the StatefulSet Pods to reach the desired state. + +8. After the successful update of the resources of the StatefulSet's replicas, the `KubeDB` Enterprise operator updates the `PerconaXtraDB` object to reflect the updated state. + +9. After the successful update of the `PerconaXtraDB` resources, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on updating resources of PerconaXtraDB database using `PerconaXtraDBOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/tls/_index.md new file mode 100644 index 0000000000..94af2f8630 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/tls/_index.md @@ -0,0 +1,22 @@ +--- +title: TLS/SSL Encryption +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-tls + name: TLS/SSL Encryption + parent: guides-perconaxtradb + weight: 110 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/issuer.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/issuer.yaml new file mode 100644 index 0000000000..9662bdd2db --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: px-issuer + namespace: demo +spec: + ca: + secretName: px-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/tls-cluster.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/tls-cluster.yaml new file mode 100644 index 0000000000..5e436284c5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/examples/tls-cluster.yaml @@ -0,0 +1,32 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + requireSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: px-issuer + certificates: + - alias:
server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/index.md new file mode 100644 index 0000000000..1d4465fefa --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/tls/configure/index.md @@ -0,0 +1,388 @@ +--- +title: TLS/SSL (Transport Encryption) +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-tls-configure + name: PerconaXtraDB TLS/SSL Configuration + parent: guides-perconaxtradb-tls + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Configure TLS/SSL in PerconaXtraDB + +`KubeDB` supports providing TLS/SSL encryption (via `requireSSL` mode) for `PerconaXtraDB`. This tutorial will show you how to use `KubeDB` to deploy a `PerconaXtraDB` database with TLS/SSL configuration. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.9.0 or later to your cluster to manage your SSL/TLS certificates. + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/guides/percona-xtradb/tls/configure/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/percona-xtradb/tls/configure/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +### Deploy PerconaXtraDB database with TLS/SSL configuration + +As a pre-requisite, at first, we are going to create an Issuer/ClusterIssuer. This Issuer/ClusterIssuer is used to create certificates. Then we are going to deploy a PerconaXtraDB cluster that will be configured with these certificates by the `KubeDB` operator. + +### Create Issuer/ClusterIssuer + +Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
We are going to create our desired issuer through the following steps:
+
+- Start off by generating our CA certificate using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=perconaxtradb/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+kubectl create secret tls px-ca \
+  --cert=ca.crt \
+  --key=ca.key \
+  --namespace=demo
+secret/px-ca created
+```
+
+Now, we are going to create an `Issuer` using the `px-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: px-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: px-ca
+```
+
+Let's create the `Issuer` CR we have shown above,
+
+```bash
+kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/tls/configure/examples/issuer.yaml
+issuer.cert-manager.io/px-issuer created
+```
+
+## Deploy PerconaXtraDB Cluster with TLS/SSL configuration
+
+Now, we are going to deploy a `PerconaXtraDB` cluster with TLS/SSL configuration. Below is the YAML of the `PerconaXtraDB` cluster that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  requireSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: px-issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
+  terminationPolicy: WipeOut
+```
+
+**Deploy PerconaXtraDB Cluster:**
+
+```bash
+kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/tls/configure/examples/tls-cluster.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+**Wait for the database to be ready:**
+
+Now, wait for the `PerconaXtraDB` object to reach the `Ready` state, and for the `StatefulSet` and its pods to be created and running,
+
+```bash
+$ kubectl get perconaxtradb -n demo sample-pxc
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    3m23s
+
+
+$ kubectl get pod -n demo | grep sample-pxc
+sample-pxc-0   2/2     Running   0          3m32s
+sample-pxc-1   2/2     Running   0          3m32s
+sample-pxc-2   2/2     Running   0          3m32s
+```
+
+**Verify tls-secrets created successfully:**
+
+If everything goes well, you will see that the tls-secrets have been created, containing the server, client, and exporter certificates. The server tls-secret will be used for the server configuration and the client tls-secret will be used for secure client connections.
+
+All tls-secrets are created by the `KubeDB` Ops Manager. The default tls-secret name is formed as _{perconaxtradb-object-name}-{cert-alias}-cert_.
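+
+Once they exist, you can also decode a certificate from its secret to confirm what was issued. The commands below are a quick sketch; they assume the default secret name `sample-pxc-server-cert` and an `openssl` binary on your workstation,
+
+```bash
+# Extract the server certificate from the secret and print its subject,
+# subjectAltName entries and validity period.
+$ kubectl get secret -n demo sample-pxc-server-cert \
+    -o jsonpath='{.data.tls\.crt}' | base64 -d | \
+    openssl x509 -noout -subject -ext subjectAltName -dates
+```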
+ +Let's check the tls-secrets have created, + +```bash +$ kubectl get secrets -n demo | grep sample-pxc +sample-pxc-auth kubernetes.io/basic-auth 2 4m18s +sample-pxc-client-cert kubernetes.io/tls 3 4m19s +sample-pxc-metrics-exporter-cert kubernetes.io/tls 3 4m18s +sample-pxc-monitor kubernetes.io/basic-auth 2 4m18s +sample-pxc-replication kubernetes.io/basic-auth 2 4m18s +sample-pxc-server-cert kubernetes.io/tls 3 4m18s +sample-pxc-token-84hrj kubernetes.io/service-account-token 3 4m19s + +``` + +**Verify PerconaXtraDB Cluster configured with TLS/SSL:** + +Now, we are going to connect to the database for verifying the `PerconaXtraDB` server has configured with TLS/SSL encryption. + +Let's exec into the first pod to verify TLS/SSL configuration, + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ ls /etc/mysql/certs/client +ca.crt tls.crt tls.key +bash-4.4$ ls /etc/mysql/certs/server +ca.crt tls.crt tls.key +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 78 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show variables like '%ssl%'; ++-------------------------------------+---------------------------------+ +| Variable_name | Value | ++-------------------------------------+---------------------------------+ +| admin_ssl_ca | | +| admin_ssl_capath | | +| admin_ssl_cert | | +| admin_ssl_cipher | | +| admin_ssl_crl | | +| admin_ssl_crlpath | | +| admin_ssl_key | | +| have_openssl | YES | +| have_ssl | YES | +| mysqlx_ssl_ca | | +| mysqlx_ssl_capath | | +| mysqlx_ssl_cert | | +| mysqlx_ssl_cipher | | +| mysqlx_ssl_crl | | +| mysqlx_ssl_crlpath | | +| mysqlx_ssl_key | | +| performance_schema_show_processlist | OFF | +| ssl_ca | /etc/mysql/certs/server/ca.crt | +| ssl_capath | /etc/mysql/certs/server | +| ssl_cert | /etc/mysql/certs/server/tls.crt | +| ssl_cipher | | +| ssl_crl | | +| ssl_crlpath | | +| ssl_fips_mode | OFF | +| ssl_key | /etc/mysql/certs/server/tls.key | ++-------------------------------------+---------------------------------+ +25 rows in set (0.00 sec) + +mysql> show variables like '%require_secure_transport%'; ++--------------------------+-------+ +| Variable_name | Value | ++--------------------------+-------+ +| require_secure_transport | ON | ++--------------------------+-------+ +1 row in set (0.00 sec) + +mysql> quit; +Bye + +``` + +Now let's check for the second database server, + +```bash +$ kubectl exec -it -n demo sample-pxc-1 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ ls /etc/mysql/certs/client +ca.crt tls.crt tls.key +bash-4.4$ ls /etc/mysql/certs/server +ca.crt tls.crt tls.key +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. 
+Your MySQL connection id is 186 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show variables like '%ssl%'; ++-------------------------------------+---------------------------------+ +| Variable_name | Value | ++-------------------------------------+---------------------------------+ +| admin_ssl_ca | | +| admin_ssl_capath | | +| admin_ssl_cert | | +| admin_ssl_cipher | | +| admin_ssl_crl | | +| admin_ssl_crlpath | | +| admin_ssl_key | | +| have_openssl | YES | +| have_ssl | YES | +| mysqlx_ssl_ca | | +| mysqlx_ssl_capath | | +| mysqlx_ssl_cert | | +| mysqlx_ssl_cipher | | +| mysqlx_ssl_crl | | +| mysqlx_ssl_crlpath | | +| mysqlx_ssl_key | | +| performance_schema_show_processlist | OFF | +| ssl_ca | /etc/mysql/certs/server/ca.crt | +| ssl_capath | /etc/mysql/certs/server | +| ssl_cert | /etc/mysql/certs/server/tls.crt | +| ssl_cipher | | +| ssl_crl | | +| ssl_crlpath | | +| ssl_fips_mode | OFF | +| ssl_key | /etc/mysql/certs/server/tls.key | ++-------------------------------------+---------------------------------+ +25 rows in set (0.00 sec) + +mysql> show variables like '%require_secure_transport%'; ++--------------------------+-------+ +| Variable_name | Value | ++--------------------------+-------+ +| require_secure_transport | ON | ++--------------------------+-------+ +1 row in set (0.00 sec) + +mysql> quit; +Bye +``` + +The above output shows that the `PerconaXtraDB` server is configured to TLS/SSL. You can also see that the `.crt` and `.key` files are stored in `/etc/mysql/certs/client/` and `/etc/mysql/certs/server/` directory for client and server respectively. + +**Verify secure connection for SSL required user:** + +Now, you can create an SSL required user that will be used to connect to the database with a secure connection. + +Let's connect to the database server with a secure connection, + +```bash +$ kubectl exec -it -n demo sample-pxc-0 -- bash +Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init) +bash-4.4$ mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 232 +Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3 + +Copyright (c) 2009-2021 Percona LLC and/or its affiliates +Copyright (c) 2000, 2021, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> CREATE USER 'new_user'@'localhost' IDENTIFIED BY '1234' REQUIRE SSL; +Query OK, 0 rows affected (0.01 sec) + +mysql> FLUSH PRIVILEGES; +Query OK, 0 rows affected (0.01 sec) + +mysql> exit +Bye +bash-4.4$ mysql -unew_user -p1234 +mysql: [Warning] Using a password on the command line interface can be insecure. 
+
+ERROR 1045 (28000): Access denied for user 'new_user'@'localhost' (using password: YES)
+bash-4.4$ mysql -unew_user -p1234 --ssl-ca=/etc/mysql/certs/server/ca.crt --ssl-cert=/etc/mysql/certs/server/tls.crt --ssl-key=/etc/mysql/certs/server/tls.key
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 242
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+You are enforcing ssl connection via unix socket. Please consider
+switching ssl off as it does not make connection via unix socket
+any more secure.
+mysql> exit
+Bye
+```
+
+From the above output, you can see that we can only access the database securely by using the client certificate; otherwise, the server returns "Access denied". Our client certificate is stored in the `/etc/mysql/certs/client/` directory.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo sample-pxc
+perconaxtradb.kubedb.com "sample-pxc" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/images/px-tls-ssl.png b/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/images/px-tls-ssl.png
new file mode 100644
index 0000000000..73695771c4
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/images/px-tls-ssl.png differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/index.md
new file mode 100644
index 0000000000..fffe7e5d2a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/tls/overview/index.md
@@ -0,0 +1,81 @@
+---
+title: PerconaXtraDB TLS/SSL Encryption Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-tls-overview
+    name: Overview
+    parent: guides-perconaxtradb-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `PerconaXtraDB`, `KubeDB` uses `cert-manager` to issue certificates. So, first make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following `cert-manager` CRs are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**PerconaXtraDB CRD Specification:**
+
+KubeDB uses the following CR fields to enable SSL/TLS encryption in `PerconaXtraDB`.
+
+- `spec:`
+  - `requireSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [perconaxtradb concept](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb/#spectls) guide.
+
+When `requireSSL` is set, the user must specify the `tls.issuerRef` field. `KubeDB` uses the `Issuer` or `ClusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets. These certificate secrets, including `ca.crt`, `tls.crt` and `tls.key`, are used to configure the `PerconaXtraDB` server, exporter, etc.
+
+## How TLS/SSL is configured in PerconaXtraDB
+
+The following figure shows how `KubeDB` Enterprise is used to configure TLS/SSL in PerconaXtraDB. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Deploy PerconaXtraDB with TLS/SSL" src="images/px-tls-ssl.png">
+<figcaption align="center">Fig: Deploy PerconaXtraDB with TLS/SSL</figcaption>
+</figure>
+ +Deploying PerconaXtraDB with TLS/SSL configuration process consists of the following steps: + +1. At first, a user creates an `Issuer/ClusterIssuer` cr. + +2. Then the user creates a `PerconaXtraDB` cr. + +3. `KubeDB` community operator watches for the `PerconaXtraDB` cr. + +4. When it finds one, it creates `Secret`, `Service`, etc. for the `PerconaXtraDB` database. + +5. `KubeDB` Ops Manager watches for `PerconaXtraDB`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a). + +6. When it finds all the resources(`PerconaXtraDB`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using `tls.issuerRef` and `tls.certificates` field specification from `PerconaXtraDB` cr. + +7. `cert-manager` watches for certificates. + +8. When it finds one, it creates certificate secrets `tls-secrets`(server, client, exporter secrets, etc.) that hold the actual self-signed certificate. + +9. `KubeDB` community operator watches for the Certificate secrets `tls-secrets`. + +10. When it finds all the tls-secret, it creates a `StatefulSet` so that PerconaXtraDB server is configured with TLS/SSL. + +In the next doc, we are going to show a step by step guide on how to configure a `PerconaXtraDB` database with TLS/SSL. diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/_index.md new file mode 100644 index 0000000000..5106942e9d --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-updating + name: UpdateVersion + parent: guides-perconaxtradb + weight: 45 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/pxops-update.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/pxops-update.yaml new file mode 100644 index 0000000000..3ed2c2297b --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/pxops-update.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: sample-pxc + updateVersion: + targetVersion: "8.0.28" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/sample-pxc.yaml new file mode 100644 index 0000000000..b932b93cb1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/examples/sample-pxc.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/index.md new file mode 100644 index 0000000000..4bf2854feb --- /dev/null 
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/cluster/index.md
@@ -0,0 +1,169 @@
+---
+title: Updating PerconaXtraDB Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-updating-cluster
+    name: Cluster
+    parent: guides-perconaxtradb-updating
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Update version of PerconaXtraDB Cluster
+
+This guide will show you how to use the `KubeDB` Enterprise operator to update the version of a `PerconaXtraDB` cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [Cluster](/docs/v2024.1.31/guides/percona-xtradb/clustering/overview)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+  - [Updating Overview](/docs/v2024.1.31/guides/percona-xtradb/update-version/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare PerconaXtraDB Cluster
+
+Now, we are going to deploy a `PerconaXtraDB` cluster database with version `8.0.26`.
+
+### Deploy PerconaXtraDB cluster
+
+In this section, we are going to deploy a PerconaXtraDB cluster. Then, in the next section we will update the version of the database using `PerconaXtraDBOpsRequest` CRD. Below is the YAML of the `PerconaXtraDB` CR that we are going to create,
+
+> If you want to update a `PerconaXtraDB` Standalone, just remove `spec.replicas` from the YAML below; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PerconaXtraDB
+metadata:
+  name: sample-pxc
+  namespace: demo
+spec:
+  version: "8.0.26"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+
+```
+
+Let's create the `PerconaXtraDB` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/update-version/cluster/examples/sample-pxc.yaml
+perconaxtradb.kubedb.com/sample-pxc created
+```
+
+Now, wait until `sample-pxc` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get perconaxtradb -n demo
+NAME         VERSION   STATUS   AGE
+sample-pxc   8.0.26    Ready    3m15s
+```
+
+We are now ready to apply the `PerconaXtraDBOpsRequest` CR to update this database.
+
+### Update PerconaXtraDB Version
+
+Here, we are going to update the `PerconaXtraDB` cluster from `8.0.26` to `8.0.28`.
+
+#### Create PerconaXtraDBOpsRequest:
+
+In order to update the database cluster, we have to create a `PerconaXtraDBOpsRequest` CR with the desired version that is supported by `KubeDB`.
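+
+As a quick sanity check (a sketch, assuming a default KubeDB installation), you can first list the `PerconaXtraDBVersion` objects to confirm that the target version is available,
+
+```bash
+# List the PerconaXtraDB versions known to this KubeDB installation.
+$ kubectl get perconaxtradbversions
+```
+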
Below is the YAML of the `PerconaXtraDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: pxops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: sample-pxc + updateVersion: + targetVersion: "8.0.28" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing operation on `sample-pxc` PerconaXtraDB database. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version of the database `8.0.28`. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/update-version/cluster/examples/pxops-update.yaml +perconaxtradbopsrequest.ops.kubedb.com/pxops-update created +``` + +#### Verify PerconaXtraDB version updated successfully + +If everything goes well, `KubeDB` Enterprise operator will update the image of `PerconaXtraDB` object and related `StatefulSets` and `Pods`. + +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ kubectl get perconaxtradbopsrequest -n demo +Every 2.0s: kubectl get perconaxtradbopsrequest -n demo +NAME TYPE STATUS AGE +pxops-update UpdateVersion Successful 84s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. + +Now, we are going to verify whether the `PerconaXtraDB` and the related `StatefulSets` and their `Pods` have the new version image. Let's check, + +```bash +$ kubectl get perconaxtradb -n demo sample-pxc -o=jsonpath='{.spec.version}{"\n"}' +8.0.28 + +$ kubectl get sts -n demo sample-pxc -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +percona/percona-xtradb-cluster:8.0.28 + +$ kubectl get pods -n demo sample-pxc-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +percona/percona-xtradb-cluster:8.0.28 +``` + +You can see from above, our `PerconaXtraDB` cluster database has been updated with the new version. So, the update process is successfully completed. 
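+
+As an optional final sanity check (a sketch; it reuses the root credential environment variables that are already available inside the pod), you can also ask the server itself for its version,
+
+```bash
+$ kubectl exec -it -n demo sample-pxc-0 -c perconaxtradb -- \
+    bash -c 'mysql -u${MYSQL_ROOT_USERNAME} -p${MYSQL_ROOT_PASSWORD} -e "SELECT VERSION();"'
+```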
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete perconaxtradb -n demo sample-pxc
+$ kubectl delete perconaxtradbopsrequest -n demo pxops-update
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/images/pxops-update.jpeg b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/images/pxops-update.jpeg
new file mode 100644
index 0000000000..763de89495
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/images/pxops-update.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/index.md
new file mode 100644
index 0000000000..5933ef9994
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/update-version/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Updating PerconaXtraDB Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-updating-overview
+    name: Overview
+    parent: guides-perconaxtradb-updating
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Updating PerconaXtraDB Version Overview
+
+This guide will give you an overview of how the KubeDB Ops Manager updates the version of a `PerconaXtraDB` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+
+## How the Update Process Works
+
+The following diagram shows how the KubeDB Ops Manager is used to update the version of `PerconaXtraDB`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Updating Process of PerconaXtraDB" src="images/pxops-update.jpeg">
+<figcaption align="center">Fig: Updating Process of PerconaXtraDB</figcaption>
+</figure>
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `PerconaXtraDB` CR.
+
+3. When the operator finds a `PerconaXtraDB` CR, it creates the required number of `StatefulSets` and the other necessary resources like secrets, services, etc.
+
+4. Then, in order to update the version of the `PerconaXtraDB` database, the user creates a `PerconaXtraDBOpsRequest` CR with the desired version.
+
+5. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR.
+
+6. When it finds a `PerconaXtraDBOpsRequest` CR, it halts the `PerconaXtraDB` object which is referred to by the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the updating process.
+
+7. By looking at the target version in the `PerconaXtraDBOpsRequest` CR, the `KubeDB` Enterprise operator updates the images of all the `StatefulSets`. After each image update, the operator performs some sanity checks, such as whether the replicas are healthy and back in sync, before moving on.
+
+8. After successfully updating the `StatefulSets` and their `Pods` images, the `KubeDB` Enterprise operator updates the image of the `PerconaXtraDB` object to reflect the updated state of the database.
+
+9. After successfully updating the `PerconaXtraDB` object, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on updating a PerconaXtraDB database using the update operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/_index.md b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/_index.md
new file mode 100644
index 0000000000..11f8c200b8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/_index.md
@@ -0,0 +1,22 @@
+---
+title: Volume Expansion
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-volume-expansion
+    name: Volume Expansion
+    parent: guides-perconaxtradb
+    weight: 44
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/images/volume-expansion.jpeg b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/images/volume-expansion.jpeg
new file mode 100644
index 0000000000..14bf4bf824
Binary files /dev/null and b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/images/volume-expansion.jpeg differ
diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/index.md
new file mode 100644
index 0000000000..29b11757fc
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview/index.md
@@ -0,0 +1,67 @@
+---
+title: PerconaXtraDB Volume Expansion Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-perconaxtradb-volume-expansion-overview
+    name: Overview
+    parent: guides-perconaxtradb-volume-expansion
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PerconaXtraDB Volume Expansion
+
+This guide will give an overview of how the KubeDB Ops Manager expands the volume of `PerconaXtraDB`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb)
+  - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest)
+
+## How the Volume Expansion Process Works
+
+The following diagram shows how the KubeDB Ops Manager expands the volumes of `PerconaXtraDB` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Volume Expansion process of PerconaXtraDB" src="images/volume-expansion.jpeg">
+<figcaption align="center">Fig: Volume Expansion process of PerconaXtraDB</figcaption>
+</figure>
+ +The Volume Expansion process consists of the following steps: + +1. At first, a user creates a `PerconaXtraDB` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `PerconaXtraDB` CR. + +3. When the operator finds a `PerconaXtraDB` CR, it creates required `StatefulSet` and related necessary stuff like secrets, services, etc. + +4. The statefulSet creates Persistent Volumes according to the Volume Claim Template provided in the statefulset configuration. This Persistent Volume will be expanded by the `KubeDB` Enterprise operator. + +5. Then, in order to expand the volume of the `PerconaXtraDB` database the user creates a `PerconaXtraDBOpsRequest` CR with desired information. + +6. `KubeDB` Enterprise operator watches the `PerconaXtraDBOpsRequest` CR. + +7. When it finds a `PerconaXtraDBOpsRequest` CR, it pauses the `PerconaXtraDB` object which is referred from the `PerconaXtraDBOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `PerconaXtraDB` object during the volume expansion process. + +8. Then the `KubeDB` Enterprise operator will expand the persistent volume to reach the expected size defined in the `PerconaXtraDBOpsRequest` CR. + +9. After the successfully expansion of the volume of the related StatefulSet Pods, the `KubeDB` Enterprise operator updates the new volume size in the `PerconaXtraDB` object to reflect the updated state. + +10. After the successful Volume Expansion of the `PerconaXtraDB`, the `KubeDB` Enterprise operator resumes the `PerconaXtraDB` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on Volume Expansion of various PerconaXtraDB database using `PerconaXtraDBOpsRequest` CRD. diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml new file mode 100644 index 0000000000..5a8df54afa --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-online-volume-expansion + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: sample-pxc + volumeExpansion: + mode: "Online" + perconaxtradb: 2Gi diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/sample-pxc.yaml b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/sample-pxc.yaml new file mode 100644 index 0000000000..cba8b8ed46 --- /dev/null +++ b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/example/sample-pxc.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/index.md b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/index.md new file mode 100644 index 0000000000..b729b91c85 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/volume-expansion/index.md @@ -0,0 +1,255 @@ +--- +title: PerconaXtraDB Volume Expansion +menu: + docs_v2024.1.31: + identifier: guides-perconaxtradb-volume-expansion-volume-expansion + name: PerconaXtraDB Volume Expansion + parent: guides-perconaxtradb-volume-expansion + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# PerconaXtraDB Volume Expansion + +This guide will show you how to use `KubeDB` Enterprise operator to expand the volume of a PerconaXtraDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [PerconaXtraDB](/docs/v2024.1.31/guides/percona-xtradb/concepts/perconaxtradb) + - [PerconaXtraDBOpsRequest](/docs/v2024.1.31/guides/percona-xtradb/concepts/opsrequest) + - [Volume Expansion Overview](/docs/v2024.1.31/guides/percona-xtradb/volume-expansion/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Expand Volume of PerconaXtraDB + +Here, we are going to deploy a `PerconaXtraDB` cluster using a supported version by `KubeDB` operator. Then we are going to apply `PerconaXtraDBOpsRequest` to expand its volume. The process of expanding PerconaXtraDB `standalone` is same as PerconaXtraDB cluster. + +### Prepare PerconaXtraDB Database + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 69s +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 37s + +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We will use this storage class. You can install topolvm from [here](https://github.com/topolvm/topolvm). + +Now, we are going to deploy a `PerconaXtraDB` database of 3 replicas with version `8.0.26`. + +### Deploy PerconaXtraDB + +In this section, we are going to deploy a PerconaXtraDB Cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `PerconaXtraDBOpsRequest` CRD. 
Below is the YAML of the `PerconaXtraDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: PerconaXtraDB +metadata: + name: sample-pxc + namespace: demo +spec: + version: "8.0.26" + replicas: 3 + storageType: Durable + storage: + storageClassName: "topolvm-provisioner" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut + +``` + +Let's create the `PerconaXtraDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/volume-expansion/volume-expansion/example/sample-pxc.yaml +perconaxtradb.kubedb.com/sample-pxc created +``` + +Now, wait until `sample-pxc` has status `Ready`. i.e, + +```bash +$ kubectl get perconaxtradb -n demo +NAME VERSION STATUS AGE +sample-pxc 8.0.26 Ready 5m4s +``` + +Let's check volume size from statefulset, and from the persistent volume, + +```bash +$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-331335d1-c8e0-4b73-9dab-dae57920e997 1Gi RWO Delete Bound demo/data-sample-pxc-0 topolvm-provisioner 63s +pvc-b90179f8-c40a-4273-ad77-74ca8470b782 1Gi RWO Delete Bound demo/data-sample-pxc-1 topolvm-provisioner 62s +pvc-f72411a4-80d5-4d32-b713-cb30ec662180 1Gi RWO Delete Bound demo/data-sample-pxc-2 topolvm-provisioner 62s +``` + +You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes are also 1GB. + +We are now ready to apply the `PerconaXtraDBOpsRequest` CR to expand the volume of this database. + +### Volume Expansion + +Here, we are going to expand the volume of the PerconaXtraDB cluster. + +#### Create PerconaXtraDBOpsRequest + +In order to expand the volume of the database, we have to create a `PerconaXtraDBOpsRequest` CR with our desired volume size. Below is the YAML of the `PerconaXtraDBOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PerconaXtraDBOpsRequest +metadata: + name: px-online-volume-expansion + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: sample-pxc + volumeExpansion: + mode: "Online" + perconaxtradb: 2Gi +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `sample-pxc` database. +- `spec.type` specifies that we are performing `VolumeExpansion` on our database. +- `spec.volumeExpansion.perconaxtradb` specifies the desired volume size. +- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`). Storageclass `topolvm-provisioner` supports `Online` volume expansion. + +> **Note:** If the Storageclass you are using doesn't support `Online` Volume Expansion, Try offline volume expansion by using `spec.volumeExpansion.mode:"Offline"`. + +Let's create the `PerconaXtraDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/percona-xtradb/volume-expansion/volume-expansion/example/online-volume-expansion.yaml +perconaxtradbopsrequest.ops.kubedb.com/px-online-volume-expansion created +``` + +#### Verify PerconaXtraDB volume expanded successfully + +If everything goes well, `KubeDB` Enterprise operator will update the volume size of `PerconaXtraDB` object and related `StatefulSets` and `Persistent Volumes`. 
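+
+While the request is being processed, you can optionally watch the PersistentVolumeClaims grow (a quick sketch; the claim names `data-sample-pxc-*` follow KubeDB's default naming),
+
+```bash
+# Watch the PVCs in the demo namespace until their capacity reaches 2Gi.
+$ kubectl get pvc -n demo -w
+```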
+ +Let's wait for `PerconaXtraDBOpsRequest` to be `Successful`. Run the following command to watch `PerconaXtraDBOpsRequest` CR, + +```bash +$ kubectl get perconaxtradbopsrequest -n demo +NAME TYPE STATUS AGE +px-online-volume-expansion VolumeExpansion Successful 96s +``` + +We can see from the above output that the `PerconaXtraDBOpsRequest` has succeeded. If we describe the `PerconaXtraDBOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe perconaxtradbopsrequest -n demo px-online-volume-expansion +Name: px-online-volume-expansion +Namespace: demo +Labels: +Annotations: API Version: ops.kubedb.com/v1alpha1 +Kind: PerconaXtraDBOpsRequest +Metadata: + UID: 09a119aa-4f2a-4cb4-b620-2aa3a514df11 +Spec: + Database Ref: + Name: sample-pxc + Type: VolumeExpansion + Volume Expansion: + PerconaXtraDB: 2Gi + Mode: Online +Status: + Conditions: + Last Transition Time: 2022-01-07T06:38:29Z + Message: Controller has started to Progress the PerconaXtraDBOpsRequest: demo/px-online-volume-expansion + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-01-07T06:39:49Z + Message: Online Volume Expansion performed successfully in PerconaXtraDB pod for PerconaXtraDBOpsRequest: demo/px-online-volume-expansion + Observed Generation: 1 + Reason: SuccessfullyVolumeExpanded + Status: True + Type: VolumeExpansion + Last Transition Time: 2022-01-07T06:39:49Z + Message: Controller has successfully expand the volume of PerconaXtraDB demo/px-online-volume-expansion + Observed Generation: 1 + Reason: OpsRequestProcessedSuccessfully + Status: True + Type: Successful + Observed Generation: 3 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m1s KubeDB Enterprise Operator Start processing for PerconaXtraDBOpsRequest: demo/px-online-volume-expansion + Normal Starting 2m1s KubeDB Enterprise Operator Pausing PerconaXtraDB databse: demo/sample-pxc + Normal Successful 2m1s KubeDB Enterprise Operator Successfully paused PerconaXtraDB database: demo/sample-pxc for PerconaXtraDBOpsRequest: px-online-volume-expansion + Normal Successful 41s KubeDB Enterprise Operator Online Volume Expansion performed successfully in PerconaXtraDB pod for PerconaXtraDBOpsRequest: demo/px-online-volume-expansion + Normal Starting 41s KubeDB Enterprise Operator Updating PerconaXtraDB storage + Normal Successful 41s KubeDB Enterprise Operator Successfully Updated PerconaXtraDB storage + Normal Starting 41s KubeDB Enterprise Operator Resuming PerconaXtraDB database: demo/sample-pxc + Normal Successful 41s KubeDB Enterprise Operator Successfully resumed PerconaXtraDB database: demo/sample-pxc + Normal Successful 41s KubeDB Enterprise Operator Controller has Successfully expand the volume of PerconaXtraDB: demo/sample-pxc + +``` + +Now, we are going to verify from the `Statefulset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state, Let's check, + +```bash +$ kubectl get sts -n demo sample-pxc -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"2Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-331335d1-c8e0-4b73-9dab-dae57920e997 2Gi RWO Delete Bound demo/data-sample-pxc-0 topolvm-provisioner 12m +pvc-b90179f8-c40a-4273-ad77-74ca8470b782 2Gi RWO Delete Bound demo/data-sample-pxc-1 
topolvm-provisioner 12m +pvc-f72411a4-80d5-4d32-b713-cb30ec662180 2Gi RWO Delete Bound demo/data-sample-pxc-2 topolvm-provisioner 12m +``` + +The above output verifies that we have successfully expanded the volume of the PerconaXtraDB database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete perconaxtradb -n demo sample-pxc +$ kubectl delete perconaxtradbopsrequest -n demo px-online-volume-expansion +``` diff --git a/content/docs/v2024.1.31/guides/pgbouncer/README.md b/content/docs/v2024.1.31/guides/pgbouncer/README.md new file mode 100644 index 0000000000..eb839f44b4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/README.md @@ -0,0 +1,53 @@ +--- +title: PgBouncer +menu: + docs_v2024.1.31: + identifier: pb-readme-pgbouncer + name: PgBouncer + parent: pb-pgbouncer-guides + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/pgbouncer/ +aliases: +- /docs/v2024.1.31/guides/pgbouncer/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Overview + +[PgBouncer](https://pgbouncer.github.io/) is an open-source, lightweight, single-binary connection-pooling middleware for PostgreSQL. PgBouncer maintains a pool of connections for each locally stored user-database pair. It is typically configured to hand out one of these connections to a new incoming client connection, and return it back in to the pool when the client disconnects. PgBouncer can manage one or more PostgreSQL databases on possibly different servers and serve clients over TCP and Unix domain sockets. For a more hands-on experience, see this brief [tutorial on how to create a PgBouncer](https://pgdash.io/blog/pgbouncer-connection-pool.html) for PostgreSQL database. + +KubeDB operator now comes bundled with PgBouncer crd to handle connection pooling. With connection pooling, clients connect to a proxy server which maintains a pool of direct connections to other real PostgreSQL servers. PgBouncer crd can handle multiple local or remote Postgres database connections across multiple users using PgBouncer's connection pooling mechanism. + +## PgBouncer Features + +| Features | Availability | +|------------------------------------| :----------: | +| Clustering | ✓ | +| Multiple PgBouncer Versions | ✓ | +| Customizable Pooling Configuration | ✓ | +| Custom docker images | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Using Prometheus operator | ✓ | + +## User Guide + +- [Quickstart PgBouncer](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart) with KubeDB Operator. +- Monitor your PgBouncer with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus). +- Monitor your PgBouncer with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator). +- Use [private Docker registry](/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry) to deploy PgBouncer with KubeDB. +- Detail concepts of [PgBouncer object](/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
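+
+For a first impression of the resource itself, here is a minimal `PgBouncer` manifest (a sketch only; the version shown and the referenced Postgres `AppBinding` name `quick-postgres` are illustrative assumptions, see the guides above for tested examples):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PgBouncer
+metadata:
+  name: pgbouncer-demo
+  namespace: demo
+spec:
+  version: "1.17.0"
+  replicas: 1
+  databases:
+  - alias: postgres
+    databaseName: postgres
+    databaseRef:
+      name: quick-postgres
+      namespace: demo
+```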
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/_index.md new file mode 100644 index 0000000000..96fded5e20 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/_index.md @@ -0,0 +1,22 @@ +--- +title: PgBouncer +menu: + docs_v2024.1.31: + identifier: pb-pgbouncer-guides + name: PgBouncer + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/pgbouncer/cli/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/cli/_index.md new file mode 100755 index 0000000000..2626ab19e1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/cli/_index.md @@ -0,0 +1,22 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: pb-cli-pgbouncer + name: CLI + parent: pb-pgbouncer-guides + weight: 100 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/pgbouncer/cli/cli.md b/content/docs/v2024.1.31/guides/pgbouncer/cli/cli.md new file mode 100644 index 0000000000..c62b2ff2f9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/cli/cli.md @@ -0,0 +1,353 @@ +--- +title: CLI | KubeDB +menu: + docs_v2024.1.31: + identifier: pb-cli-cli + name: Quickstart + parent: pb-cli-pgbouncer + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Manage KubeDB objects using CLIs + +## KubeDB CLI + +KubeDB comes with its own cli. It is called `kubedb` cli. `kubedb` can be used to manage any KubeDB object. `kubedb` cli also performs various validations to improve ux. To install KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README). + +### How to Create objects + +`kubectl create` creates a pgbouncer CRD object in `default` namespace by default. Following command will create a PgBouncer object as specified in `pgbouncer.yaml`. + +```bash +$ kubectl create -f pgbouncer-demo.yaml +pgbouncer "pgbouncer-demo" created +``` + +You can provide namespace as a flag `--namespace`. Provided namespace should match with namespace specified in input file. + +```bash +$ kubectl create -f pgbouncer-demo.yaml --namespace=kube-system +pgbouncer "pgbouncer-demo" created +``` + +`kubectl create` command also considers `stdin` as input. + +```bash +cat pgbouncer-demo.yaml | kubectl create -f - +``` + +### How to List Objects + +`kubectl get` command allows users to list or find any KubeDB object. To list all PgBouncer objects in `default` namespace, run the following command: + +```bash +$ kubectl get pgbouncer +NAME VERSION STATUS AGE +pgbouncer-demo 1.17.0 Running 13m +pgbouncer-dev 1.17.0 Running 11m +pgbouncer-prod 1.17.0 Running 11m +pgbouncer-qa 1.17.0 Running 10m +``` + +To get YAML of an object, use `--output=yaml` flag. 
+
+```yaml
+$ kubectl get pgbouncer pgbouncer-demo --output=yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PgBouncer
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"PgBouncer","metadata":{"annotations":{},"name":"pgbouncer-demo","namespace":"demo"},"spec":{"connectionPool":{"adminUsers":["admin","admin1"],"maxClientConnections":20,"reservePoolSize":5},"databases":[{"alias":"postgres","databaseName":"postgres","databaseRef":{"name":"quick-postgres"}},{"alias":"tmpdb","databaseName":"mydb","databaseRef":{"name":"quick-postgres"}}],"monitor":{"agent":"prometheus.io/builtin"},"replicas":1,"userListSecretRef":{"name":"db-user-pass"},"version":"1.17.0"}}
+  creationTimestamp: "2019-10-31T10:34:04Z"
+  finalizers:
+  - kubedb.com
+  generation: 1
+  name: pgbouncer-demo
+  namespace: demo
+  resourceVersion: "4733"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/pgbouncers/pgbouncer-demo
+  uid: 158b7c58-ecb2-4a77-bceb-081489b4921a
+spec:
+  connectionPool:
+    poolMode: session
+    port: 5432
+    reservePoolSize: 5
+  databases:
+  - alias: postgres
+    databaseName: postgres
+    databaseRef:
+      name: quick-postgres
+      namespace: demo
+  - alias: tmpdb
+    databaseName: mydb
+    databaseRef:
+      name: quick-postgres
+      namespace: demo
+  monitor:
+    agent: prometheus.io/builtin
+    prometheus:
+      exporter:
+        port: 56790
+        resources: {}
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources: {}
+  replicas: 1
+  version: 1.17.0
+status:
+  observedGeneration: 1$6208915667192219204
+  phase: Running
+```
+
+To get JSON of an object, use the `--output=json` flag.
+
+```bash
+kubectl get pgbouncer pgbouncer-demo --output=json
+```
+
+To list all KubeDB objects, use the following command:
+
+```bash
+$ kubectl get all -n demo -o wide
+NAME                   READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
+pod/pgbouncer-demo-0   2/2     Running   0          5m53s   10.244.1.3   kind-worker   <none>           <none>
+
+NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE     SELECTOR
+service/kubedb                 ClusterIP   None            <none>        <none>      5m54s   <none>
+service/pgbouncer-demo         ClusterIP   10.98.95.4      <none>        5432/TCP    5m54s   app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-demo
+service/pgbouncer-demo-stats   ClusterIP   10.107.214.97   <none>        56790/TCP   5m38s   app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-demo
+
+NAME                              READY   AGE     CONTAINERS           IMAGES
+statefulset.apps/pgbouncer-demo   1/1     5m53s   pgbouncer,exporter   kubedb/pgbouncer:1.17.0,kubedb/pgbouncer_exporter:v0.1.1
+
+NAME                                  VERSION   STATUS    AGE
+pgbouncer.kubedb.com/pgbouncer-demo   1.17.0    Running   5m54s
+
+```
+
+The `--output=wide` flag is used to print additional information.
+
+The list command supports short names for each object type. You can use it like `kubectl get <short-name>`. Below are the short names for KubeDB objects:
+
+- Postgres: `pg`
+- PgBouncer: `pb`
+- Snapshot: `snap`
+- DormantDatabase: `drmn`
+
+You can print labels with objects. The following command will list all PgBouncers with their corresponding labels.
+
+```bash
+$ kubectl get pb -n demo --show-labels
+NAME             DATABASE            STATUS      AGE   LABELS
+pgbouncer-demo   pb/pgbouncer-demo   Succeeded   11m   app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-demo
+pgbouncer-tmp    pb/postgres-demo    Succeeded   1h    app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-tmp
+```
+
+You can also filter the list using the `--selector` flag.
+
+```bash
+$ kubectl get pb --selector='app.kubernetes.io/name=pgbouncers.kubedb.com' --show-labels
+NAME             DATABASE            STATUS      AGE   LABELS
+pgbouncer-demo   pb/pgbouncer-demo   Succeeded   11m   app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-demo
+pgbouncer-dev    pb/postgres-demo    Succeeded   1h    app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-dev
+```
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -n demo -o name
+pod/pgbouncer-demo-0
+service/kubedb
+service/pgbouncer-demo
+service/pgbouncer-demo-stats
+statefulset.apps/pgbouncer-demo
+pgbouncer.kubedb.com/pgbouncer-demo
+```
+
+### How to Describe Objects
+
+The `kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe PgBouncer `pgbouncer-demo` with relevant information.
+
+```bash
+$ kubectl dba describe pb pgbouncer-demo
+Name:         pgbouncer-demo
+Namespace:    default
+API Version:  kubedb.com/v1alpha2
+Kind:         PgBouncer
+Metadata:
+  Creation Timestamp:  2019-09-09T09:27:48Z
+  Finalizers:
+    kubedb.com
+  Generation:        1
+  Resource Version:  303596
+  Self Link:         /apis/kubedb.com/v1alpha2/namespaces/demo/pgbouncers/pgbouncer-demo
+  UID:               f59c58da-ae21-403d-a4ce-affc8e10345c
+Spec:
+  Connection Pool:
+    Admin Users:
+      admin
+    Listen Address:     *
+    Listen Port:        5432
+    Max Client Conn:    20
+    Pool Mode:          session
+    Reserve Pool Size:  5
+  Databases:
+    Alias:                  postgres
+    App Binding Name:       postgres-demo
+    App Binding Namespace:  demo
+    Database Name:          postgres
+  Replicas:                 1
+  Service Template:
+    Metadata:
+    Spec:
+  User List:
+    Secret Name:       db-userlist
+    Secret Namespace:  demo
+  Version:             1.17.0
+Status:
+  Observed Generation:  1$6208915667192219204
+  Phase:                Running
+Events:
+  Type    Reason      Age   From                Message
+  ----    ------      ----  ----                -------
+  Normal  Successful  13m   PgBouncer operator  Successfully created Service
+  Normal  Successful  13m   PgBouncer operator  Successfully created PgBouncer configMap
+  Normal  Successful  13m   PgBouncer operator  Successfully created StatefulSet
+  Normal  Successful  13m   PgBouncer operator  Successfully created PgBouncer statefulset
+  Normal  Successful  13m   PgBouncer operator  Successfully patched StatefulSet
+  Normal  Successful  13m   PgBouncer operator  Successfully patched PgBouncer statefulset
+
+```
+
+The `kubectl dba describe` command provides the following basic information about a database.
+
+- StatefulSet
+- Storage (Persistent Volume)
+- Service
+- Secret (If available)
+- Topology (If available)
+- Snapshots (If any)
+- Monitoring system (If available)
+
+To hide events on a KubeDB object, use the flag `--show-events=false`.
+
+To describe all PgBouncer objects in the `default` namespace, use the following command
+
+```bash
+kubectl dba describe pb
+```
+
+To describe all PgBouncer objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+kubectl dba describe pb --all-namespaces
+```
+
+To describe all KubeDB objects from every namespace, use the following command:
+
+```bash
+kubectl dba describe all --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all Elasticsearch & PgBouncer objects with the specified labels from every namespace.
+
+```bash
+kubectl dba describe pb,es --all-namespaces --selector='group=dev'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+### How to Edit Objects
+
+The `kubectl edit` command allows users to directly edit any KubeDB object.
It will open the editor defined by the _KUBEDB_EDITOR_ or _EDITOR_ environment variables, or fall back to `nano`.

Let's edit an existing running PgBouncer object to set up [Monitoring](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator). The following command will open PgBouncer `pgbouncer-demo` in the editor.

```bash
$ kubectl edit pb pgbouncer-demo

# Add the following to the spec to configure monitoring:
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          release: prometheus
        interval: 10s

pgbouncer "pgbouncer-demo" edited
```

#### Edit restrictions

Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:

- _apiVersion_
- _kind_
- _metadata.name_
- _metadata.namespace_

### How to Delete Objects

The `kubectl delete` command will delete an object in the `default` namespace unless a namespace is provided. The following command will delete the PgBouncer `pgbouncer-dev` in the default namespace.

```bash
$ kubectl delete pgbouncer pgbouncer-dev
pgbouncer "pgbouncer-dev" deleted
```

You can also use YAML files to delete objects. The following command will delete a PgBouncer using the type and name specified in `pgbouncer.yaml`.

```bash
$ kubectl delete -f pgbouncer.yaml
pgbouncer "pgbouncer-dev" deleted
```

The `kubectl delete` command also takes input from `stdin`.

```bash
cat pgbouncer.yaml | kubectl delete -f -
```

To delete objects with matching labels, use the `--selector` flag. The following command will delete PgBouncers with the label `pgbouncer.app.kubernetes.io/instance=pgbouncer-demo`.

```bash
kubectl delete pgbouncer -l pgbouncer.app.kubernetes.io/instance=pgbouncer-demo
```

## Using Kubectl

You can use kubectl with KubeDB objects like any other CRDs. Below are some common examples of using kubectl with KubeDB objects.

```bash
# Create objects
$ kubectl create -f <pgbouncer-spec.yaml>

# List objects
$ kubectl get pgbouncer
$ kubectl get pgbouncer.kubedb.com

# Delete objects
$ kubectl delete pgbouncer <name>
```

## Next Steps

- Learn how to use KubeDB to run a PgBouncer [here](/docs/v2024.1.31/guides/pgbouncer/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/concepts/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/concepts/_index.md new file mode 100755 index 0000000000..5b8e192aab --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: PgBouncer Concepts +menu: + docs_v2024.1.31: + identifier: pb-concepts-pgbouncer + name: Concepts + parent: pb-pgbouncer-guides + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/pgbouncer/concepts/appbinding.md b/content/docs/v2024.1.31/guides/pgbouncer/concepts/appbinding.md new file mode 100644 index 0000000000..c26b192176 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/concepts/appbinding.md @@ -0,0 +1,162 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: pb-appbinding-concepts + name: AppBinding + parent: pb-concepts-pgbouncer + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 
An `AppBinding` object created by KubeDB for a PostgreSQL database is shown below:

```yaml
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: quick-postgres
  namespace: demo
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: quick-postgres
    app.kubernetes.io/managed-by: kubedb.com
    app.kubernetes.io/name: postgres
    app.kubernetes.io/version: "10.2-v2"
spec:
  type: kubedb.com/postgres
  secret:
    name: quick-postgres-auth
  clientConfig:
    service:
      name: quick-postgres
      path: /
      port: 5432
      query: sslmode=disable
      scheme: postgresql
  secretTransforms:
    - renameKey:
        from: POSTGRES_USER
        to: username
    - renameKey:
        from: POSTGRES_PASSWORD
        to: password
  version: "10.2"
```

Here, we are going to describe the sections of an `AppBinding` crd.

### AppBinding `Spec`

An `AppBinding` object has the following fields in the `spec` section:

#### spec.type

`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of the `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of a [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.

This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to a `postgres` resource under the `kubedb.com` group.

Here, the variables are parsed as follows:

| Variable              | Usage                                                                                                                              |
| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                            |
| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`).                           |
| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`). |

#### spec.secret

`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.

This secret must contain the following keys:

PostgreSQL :

| Key                 | Usage                                                |
| ------------------- | ---------------------------------------------------- |
| `POSTGRES_USER`     | Username of the target database.                     |
| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |

MySQL :

| Key        | Usage                                           |
| ---------- | ----------------------------------------------- |
| `username` | Username of the target database.                |
| `password` | Password for the user specified by `username`.  |

MongoDB :

| Key        | Usage                                           |
| ---------- | ----------------------------------------------- |
| `username` | Username of the target database.                |
| `password` | Password for the user specified by `username`.  |

Elasticsearch:

| Key              | Usage                   |
| ---------------- | ----------------------- |
| `ADMIN_USERNAME` | Admin username          |
| `ADMIN_PASSWORD` | Password for admin user |

#### spec.clientConfig

`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both.
You can configure the following fields in the `spec.clientConfig` section:

- **spec.clientConfig.url**

  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.

  > Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.

- **spec.clientConfig.service**

  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.

  - **name :** `name` indicates the name of the service that connects with the target database.
  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
  - **port :** `port` specifies the port where the target database is running.

- **spec.clientConfig.insecureSkipTLSVerify**

  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.

- **spec.clientConfig.caBundle**

  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.

## Next Steps

- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/concepts/catalog.md b/content/docs/v2024.1.31/guides/pgbouncer/concepts/catalog.md
new file mode 100644
index 0000000000..eeb76a7093
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/concepts/catalog.md
@@ -0,0 +1,87 @@
---
title: PgBouncerVersion CRD
menu:
  docs_v2024.1.31:
    identifier: pb-catalog-concepts
    name: PgBouncerVersion
    parent: pb-concepts-pgbouncer
    weight: 15
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# PgBouncerVersion

## What is PgBouncerVersion

`PgBouncerVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for [PgBouncer](https://pgbouncer.github.io/) servers deployed with KubeDB in a Kubernetes native way.

When you install KubeDB, a `PgBouncerVersion` custom resource will be created automatically for every supported PgBouncer release version. You have to specify the name of the `PgBouncerVersion` crd in the `spec.version` field of the [PgBouncer](/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer) crd. Then, KubeDB will use the docker images specified in the `PgBouncerVersion` crd to create your expected PgBouncer instance.
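For example, a minimal sketch of a PgBouncer object that references a `PgBouncerVersion` named `1.17.0` is shown below; the `quick-postgres` AppBinding in the `demo` namespace is assumed to exist:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: PgBouncer
metadata:
  name: pgbouncer-demo
  namespace: demo
spec:
  # must match the metadata.name of an installed PgBouncerVersion object
  version: "1.17.0"
  replicas: 1
  databases:
    - alias: postgres
      databaseName: postgres
      databaseRef:
        name: quick-postgres
        namespace: demo
```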
Using a separate crd for specifying the respective docker image names allows us to modify the images independently of the KubeDB operator. It also allows users to use a custom PgBouncer image for their server. For more details about how to use a custom image with PgBouncer in KubeDB, please visit [here](/docs/v2024.1.31/guides/pgbouncer/custom-versions/setup).

## PgBouncerVersion Specification

As with all other Kubernetes objects, a PgBouncerVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: PgBouncerVersion
metadata:
  name: "1.17.0"
  labels:
    app: kubedb
spec:
  deprecated: false
  version: "1.17.0"
  pgBouncer:
    image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer:1.17.0"
  exporter:
    image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer_exporter:v0.1.1"
```

### metadata.name

`metadata.name` is a required field that specifies the name of the `PgBouncerVersion` crd. You have to specify this name in the `spec.version` field of the [PgBouncer](/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer) crd.

We follow this convention for naming PgBouncerVersion crds:

- Name format: `{Original pgbouncer image version}-{modification tag}`

We plan to modify the original PgBouncer docker images to support additional features. Re-tagging the image with a v1, v2, etc. modification tag helps separate newer iterations from the older ones. An image with a higher modification tag will have more features than images with a lower modification tag. Hence, it is recommended to use the PgBouncerVersion crd with the highest modification tag to take advantage of the latest features.

### spec.version

`spec.version` is a required field that specifies the original version of PgBouncer that has been used to build the docker image specified in the `spec.pgBouncer.image` field.

### spec.deprecated

`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.

The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the server and other respective resources for this version.

### spec.pgBouncer.image

`spec.pgBouncer.image` is a required field that specifies the docker image that the KubeDB operator will use to create the StatefulSet for the expected PgBouncer server.

### spec.exporter.image

`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.

## Next Steps

- Learn about the PgBouncer crd [here](/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer).
- Deploy your first PgBouncer server with KubeDB by following the guide [here](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer.md b/content/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer.md
new file mode 100644
index 0000000000..e9a3e9ca73
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer.md
@@ -0,0 +1,237 @@
---
title: PgBouncer CRD
menu:
  docs_v2024.1.31:
    identifier: pb-pgbouncer-concepts
    name: PgBouncer
    parent: pb-concepts-pgbouncer
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# PgBouncer

## What is PgBouncer

`PgBouncer` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [PgBouncer](https://pgbouncer.github.io/) in a Kubernetes native way. You only need to describe the desired configuration in a `PgBouncer` object, and the KubeDB operator will create the Kubernetes resources in the desired state for you.

## PgBouncer Spec

Like any official Kubernetes resource, a `PgBouncer` object has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.

Below is an example PgBouncer object.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: PgBouncer
metadata:
  name: pgbouncer-server
  namespace: demo
spec:
  version: "1.18.0"
  replicas: 2
  databases:
    - alias: "postgres"
      databaseName: "postgres"
      databaseRef:
        name: "quick-postgres"
        namespace: demo
  connectionPool:
    maxClientConnections: 20
    reservePoolSize: 5
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          release: prometheus
        interval: 10s
```

### spec.version

`spec.version` is a required field that specifies the name of the [PgBouncerVersion](/docs/v2024.1.31/guides/pgbouncer/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `PgBouncerVersion` resources,

- `1.18.0`

### spec.replicas

`spec.replicas` specifies the total number of available pgbouncer server nodes for each crd. KubeDB uses a `PodDisruptionBudget` to ensure that the majority of the replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions).

### spec.databases

`spec.databases` specifies an array of postgres databases that pgbouncer should add to its connection pool. It contains three `required` fields and two `optional` fields for each database connection.

- `spec.databases.alias`: specifies an alias for the target database located in a postgres server specified by an appbinding.
- `spec.databases.databaseName`: specifies the name of the target database.
- `spec.databases.databaseRef`: specifies the name and namespace of the AppBinding that contains the path to a PostgreSQL server where the target database can be found.

### spec.connectionPool

`spec.connectionPool` is used to configure the pgbouncer connection pool. All the fields here are accompanied by default values and can be left unspecified if no customisation is required by the user.

- `spec.connectionPool.port`: specifies the port on which pgbouncer should listen to connect with clients. The default is 5432.

- `spec.connectionPool.poolMode`: specifies the value of pool_mode.
  Specifies when a server connection can be reused by other clients.

  - session

    The server is released back to the pool after the client disconnects. This is the default.

  - transaction

    The server is released back to the pool after the transaction finishes.

  - statement

    The server is released back to the pool after the query finishes. Long transactions spanning multiple statements are disallowed in this mode.

- `spec.connectionPool.maxClientConnections`: specifies the value of max_client_conn. When this is increased, the file descriptor limits should also be increased. Note that the actual number of file descriptors used is more than max_client_conn. The theoretical maximum used is:

  ```bash
  max_client_conn + (max pool_size * total databases * total users)
  ```

  if each user connects under its own username to the server. If a database user is specified in the connect string (all users connect under the same username), the theoretical maximum is:

  ```bash
  max_client_conn + (max pool_size * total databases)
  ```

  The theoretical maximum should never be reached, unless somebody deliberately crafts a special load for it. Still, it means you should set the number of file descriptors to a safely high number.

  Search for `ulimit` in your favorite shell man page. Note: `ulimit` does not apply in a Windows environment.

  Default: 100

- `spec.connectionPool.defaultPoolSize`: specifies the value of default_pool_size. Used to determine how many server connections to allow per user/database pair. Can be overridden in the per-database configuration.

  Default: 20

- `spec.connectionPool.minPoolSize`: specifies the value of min_pool_size. PgBouncer adds more server connections to the pool if it is below this number. This improves behavior when the usual load suddenly returns after a period of total inactivity.

  Default: 0 (disabled)

- `spec.connectionPool.reservePoolSize`: specifies the value of reserve_pool_size. Used to determine how many additional connections to allow to a pool. 0 disables.

  Default: 0 (disabled)

- `spec.connectionPool.reservePoolTimeout`: specifies the value of reserve_pool_timeout. If a client has not been serviced in this many seconds, pgbouncer enables the use of additional connections from the reserve pool. 0 disables.

  Default: 5.0

- `spec.connectionPool.maxDbConnections`: specifies the value of max_db_connections. PgBouncer does not allow more than this many connections per-database (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.

  Default: unlimited

- `spec.connectionPool.maxUserConnections`: specifies the value of max_user_connections. PgBouncer does not allow more than this many connections per-user (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.
  Default: unlimited

- `spec.connectionPool.statsPeriod`: sets how often the averages shown in various `SHOW` commands are updated and how often aggregated statistics are written to the log.

  Default: 60

- `spec.connectionPool.authType`: specifies how to authenticate users. PgBouncer supports several authentication methods including pam, md5, scram-sha-256, trust, or any. However, hba and cert are not supported.

- `spec.connectionPool.ignoreStartupParameters`: specifies comma-separated startup parameters that pgbouncer knows are handled by the admin and can safely ignore.

### spec.monitor

PgBouncer managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,

- [Monitor PgBouncer with builtin Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus)
- [Monitor PgBouncer with Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator)

### spec.podTemplate

KubeDB allows providing a template for pgbouncer pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the PgBouncer server.

KubeDB accepts the following fields to set in `spec.podTemplate`:

- metadata
  - annotations (pod's annotation)
- controller
  - annotations (statefulset's annotation)
- spec:
  - env
  - resources
  - initContainers
  - imagePullSecrets
  - affinity
  - tolerations
  - priorityClassName
  - priority
  - lifecycle

Usage of some fields in `spec.podTemplate` is described below,

#### spec.podTemplate.spec.env

`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the PgBouncer docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/r/kubedb/pgbouncer/).

Also, note that KubeDB does not allow updates to the environment variables as updating them does not have any effect once the server is created. If you try to update environment variables, the KubeDB operator will reject the request with the following error:

```ini
Error from server (BadRequest): error when applying patch:
...
for: "./pgbouncer.yaml": admission webhook "pgbouncer.validators.kubedb.com" denied the request: precondition failed for:
...
At least one of the following was changed:
    apiVersion
    kind
    name
    namespace
    spec.podTemplate.spec.nodeSelector
```

#### spec.podTemplate.spec.imagePullSecrets

`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling the docker image if you are using a private docker registry. For more details on how to use a private docker registry, please visit [here](/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry).

#### spec.podTemplate.spec.nodeSelector

`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).

#### spec.podTemplate.spec.resources

`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
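As an illustration, here is a sketch of a PgBouncer object that sets `spec.podTemplate.spec.resources`; the resource values below are hypothetical and should be tuned for your workload:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: PgBouncer
metadata:
  name: pgbouncer-server
  namespace: demo
spec:
  version: "1.17.0"
  replicas: 1
  databases:
    - alias: postgres
      databaseName: postgres
      databaseRef:
        name: quick-postgres
        namespace: demo
  podTemplate:
    spec:
      # hypothetical values; request modest resources for the pgbouncer pod
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          memory: 256Mi
```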
### spec.serviceTemplate

KubeDB creates a service for each PgBouncer instance. The service has the same name as the `pgbouncer.name` and points to the pgbouncer pods.

You can provide a template for this service using `spec.serviceTemplate`. This will allow you to set the type and other properties of the service. If `spec.serviceTemplate` is not provided, KubeDB will create a service of type `ClusterIP` with minimal settings.

KubeDB allows you to set the following fields in `spec.serviceTemplate`:

- metadata:
  - annotations
- spec:
  - type
  - ports
  - clusterIP
  - externalIPs
  - loadBalancerIP
  - loadBalancerSourceRanges
  - externalTrafficPolicy
  - healthCheckNodePort
  - sessionAffinityConfig

See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.

## Next Steps

- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/v2024.1.31/guides/postgres/README).
- Learn how to get started with PgBouncer [here](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/_index.md
new file mode 100644
index 0000000000..fdd2b7cc5f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/_index.md
@@ -0,0 +1,22 @@
---
title: PgBouncer Custom Versions
menu:
  docs_v2024.1.31:
    identifier: pb-custom-versions-pgbouncer
    name: Custom Versions
    parent: pb-pgbouncer-guides
    weight: 36
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/setup.md b/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/setup.md
new file mode 100644
index 0000000000..809ed47bbd
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/custom-versions/setup.md
@@ -0,0 +1,83 @@
---
title: Setup Custom PgBouncerVersions
menu:
  docs_v2024.1.31:
    identifier: pb-custom-versions-setup-pgbouncer
    name: Overview
    parent: pb-custom-versions-pgbouncer
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

## Setting up Custom PgBouncerVersions

PgBouncerVersions are KubeDB crds that define the docker images KubeDB will use when deploying a pgbouncer server. For more details about the PgBouncerVersion crd, please visit [here](/docs/v2024.1.31/guides/pgbouncer/concepts/catalog).

## Creating a Custom PgBouncer Image for KubeDB

If you want to create a custom image of pgbouncer with additional features, the best way is to build on top of the existing kubedb image.

```docker
FROM kubedb/pgbouncer:1.17.0

ENV SOME_VERSION_VAR 0.9.1

RUN set -ex \
    && apk add --no-cache --virtual .fetch-deps \
    ca-certificates \
    curl \
    bash
```

From there, we would define a PgBouncerVersion that contains this new image.
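Assuming the Dockerfile above, a build-and-push step along these lines would publish the image before it is referenced; the `myco` registry namespace used below is just an example:

```bash
# build the custom image from the Dockerfile above and push it
# to a registry you control ("myco" is a placeholder namespace)
docker build -t myco/pgbouncer:custom-1.17.0 .
docker push myco/pgbouncer:custom-1.17.0
```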
Let's say we tagged it as `myco/pgbouncer:custom-1.17.0`. You can also build the exporter image yourself using the [pgbouncer_exporter](https://github.com/kubedb/pgbouncer_exporter) repository.

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: PgBouncerVersion
metadata:
  name: "1.17.0"
spec:
  deprecated: false
  version: "1.17.0"
  pgBouncer:
    image: "myco/pgbouncer:custom-1.17.0"
  exporter:
    image: "myco/pgbouncer_exporter:v0.1.1"
```

Once we add this PgBouncerVersion, we can use it in a new PgBouncer like this:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: PgBouncer
metadata:
  name: pgbouncer-server
  namespace: demo
spec:
  version: "1.17.0"
  replicas: 1
  connectionPool:
    poolMode: session
    port: 5432
    reservePoolSize: 5
  databases:
    - alias: postgres
      databaseName: postgres
      databaseRef:
        name: quick-postgres
        namespace: demo
```
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/monitoring/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/_index.md
new file mode 100755
index 0000000000..5d30d9fede
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/_index.md
@@ -0,0 +1,22 @@
---
title: Monitoring PgBouncer
menu:
  docs_v2024.1.31:
    identifier: pb-monitoring-pgbouncer
    name: Monitoring
    parent: pb-pgbouncer-guides
    weight: 50
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/pgbouncer/monitoring/overview.md b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/overview.md
new file mode 100644
index 0000000000..5c72eaea30
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/overview.md
@@ -0,0 +1,117 @@
---
title: PgBouncer Monitoring Overview
description: PgBouncer Monitoring Overview
menu:
  docs_v2024.1.31:
    identifier: pb-monitoring-overview
    name: Overview
    parent: pb-monitoring-pgbouncer
    weight: 10
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Monitoring PgBouncer with KubeDB

KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the Database crd to enable monitoring.

## Overview

KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
*(Figure: Database Monitoring Flow)*
When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.

## Configure Monitoring

In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:

| Field                                               | Type       | Uses                                                                                                                                     |
| --------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `spec.monitor.agent`                                | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
| `spec.monitor.prometheus.exporter.port`             | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
| `spec.monitor.prometheus.exporter.args`             | `Optional` | Arguments to pass to the exporter sidecar.                                                                                              |
| `spec.monitor.prometheus.exporter.env`              | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
| `spec.monitor.prometheus.exporter.resources`        | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
| `spec.monitor.prometheus.exporter.securityContext`  | `Optional` | Security options the exporter should run with.                                                                                          |
| `spec.monitor.prometheus.serviceMonitor.labels`     | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
| `spec.monitor.prometheus.serviceMonitor.interval`   | `Optional` | Interval at which metrics should be scraped.                                                                                            |

## Sample Configuration

A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: sample-redis
  namespace: databases
spec:
  version: 6.0.20
  terminationPolicy: WipeOut
  configSecret: # configure Redis to use password for authentication
    name: redis-config
  storageType: Durable
  storage:
    storageClassName: default
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          release: prometheus
      exporter:
        args:
          - --redis.password=$(REDIS_PASSWORD)
        env:
          - name: REDIS_PASSWORD
            valueFrom:
              secretKeyRef:
                name: _name_of_secret_with_redis_password
                key: password # key with the password
        resources:
          requests:
            memory: 512Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 250m
        securityContext:
          runAsUser: 2000
          allowPrivilegeEscalation: false
```

Assume that the above Redis is configured to use basic authentication. So, the exporter also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.

Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
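As a quick sanity check, a lookup along the following lines should show the generated object; this is a sketch, and the namespace where the `ServiceMonitor` lands may differ in your setup:

```bash
# list ServiceMonitor objects carrying the label that the Prometheus crd selects on
kubectl get servicemonitor --all-namespaces -l release=prometheus
```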
+ +## Next Steps + +- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator). diff --git a/content/docs/v2024.1.31/guides/pgbouncer/monitoring/setup-grafana-dashboard.md b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/setup-grafana-dashboard.md new file mode 100644 index 0000000000..1dc7d2a541 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/setup-grafana-dashboard.md @@ -0,0 +1,128 @@ +--- +title: Monitor PgBouncer using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: pb-setup-grafana-dashboard-monitoring + name: Setup Grafana Dashboard + parent: pb-monitoring-pgbouncer + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Visualize PgBouncer Using Grafana Dashboard + +[Grafana](https://github.com/grafana/grafana) is an open source, feature rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus and InfluxDB. PgBouncer comes with a Grafana dashboard designed to monitor real-time updates of PgBouncer servers using Prometheus metrics. + +This tutorial will show you how to import our dashboard on Grafana to monitor PgBouncer deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/pgbouncer/monitoring/overview). 
- You need to have monitoring enabled using either [Builtin Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus) or [Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator).

- To keep everything isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.

  ```bash
  $ kubectl create ns monitoring
  namespace/monitoring created

  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: YAML files used in this tutorial are stored in [docs/examples/pgbouncer](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/pgbouncer) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Deploy Grafana

After you have made sure that you have a PgBouncer server running with monitoring enabled, you're ready to deploy your very own Grafana server. If you have not yet deployed a PgBouncer server with monitoring enabled, do so using [Builtin Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus) or [Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator).

However, if you already have a Grafana server running in your cluster, feel free to skip this part. Otherwise, create one using:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/monitoring/grafana.yaml
deployment.apps/grafana created
```

Let's get the name of the pod created by this deployment:

```bash
$ kubectl get pod -n monitoring -l "app=grafana"

NAME                       READY   STATUS    RESTARTS   AGE
grafana-7cbd6b6f87-w9dkh   1/1     Running   0          57s
```

## View Dashboard

Now, we have to expose the Grafana pod so that we can access it from a browser.

```bash
$ kubectl port-forward -n monitoring grafana-7cbd6b6f87-w9dkh 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```

Grafana should now be available on [localhost](http://localhost:3000/). Use the default credentials `(username: admin, password: admin)` to log in to the Grafana Dashboard.

## Add Data Source

First, we need to know the name of the service that exposes our Prometheus server pods. In this tutorial, we have used a service named `prometheus-operated` that exposes our Prometheus metrics on port 9090.

```bash
$ kubectl get service -n monitoring
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
prometheus-operated   ClusterIP   10.111.246.229   <none>        9090/TCP   38m
```

We will use this service to point Grafana to our desired data source.

From the Home Dashboard, go to [Configuration > Data Sources](http://localhost:3000/datasources), and select `Add data source`. Select `Prometheus` as the `data source type`.

In the following screen, add `http://prometheus-operated.monitoring.svc:9090` as the data source `URL`, give it the name `PGBOUNCER_PROMETHEUS`, and press the `Save and Test` button. You should get a message confirming that the `Data source is working`.
*(Figure: Data Target)*
+ +## Import Dashboard + +Now, go to [http://localhost:3000/dashboard/import](http://localhost:3000/dashboard/import) to import our PgBouncer Dashboard. Put `10945` as the grafana dashboard id. Select `PGBOUNCER_PROMETHEUS` as the data source, and press `import`. You will now be directed to your PgBouncer dashboard. + +
*(Figure: Data Target)*
## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run the following commands:

```bash
# cleanup grafana resources
kubectl delete -n monitoring deployment grafana

# delete namespace
kubectl delete ns monitoring
```

## Next Steps

- Monitor your PgBouncer with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus).
- Monitor your PgBouncer with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus.md
new file mode 100644
index 0000000000..5a097dee1d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus.md
@@ -0,0 +1,378 @@
---
title: Monitor PgBouncer using Builtin Prometheus Discovery
menu:
  docs_v2024.1.31:
    identifier: pb-using-builtin-prometheus-monitoring
    name: Builtin Prometheus
    parent: pb-monitoring-pgbouncer
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Monitoring PgBouncer with builtin Prometheus

This tutorial will show you how to monitor PgBouncer using the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).

- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/pgbouncer/monitoring/overview).

- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.

  ```bash
  $ kubectl create ns monitoring
  namespace/monitoring created

  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: YAML files used in this tutorial are stored in [docs/examples/pgbouncer](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/pgbouncer) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Deploy PgBouncer with Monitoring Enabled

At first, we will need a PgBouncer with monitoring enabled. This PgBouncer needs to be connected to PostgreSQL database(s). You can get a PgBouncer set up with active connection(s) to PostgreSQL by following the [quickstart](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart) guide. The PgBouncer object in that guide didn't come with monitoring.
So we are going to enable monitoring in it. Below is the PgBouncer object that contains built-in monitoring:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: PgBouncer
metadata:
  name: pgbouncer-server
  namespace: demo
spec:
  version: "1.17.0"
  replicas: 1
  databases:
    - alias: "postgres"
      databaseName: "postgres"
      databaseRef:
        name: "quick-postgres"
        namespace: demo
  connectionPool:
    maxClientConnections: 20
    reservePoolSize: 5
  monitor:
    agent: prometheus.io/builtin
```

Here,

- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.

Let's patch the existing PgBouncer with the crd we have shown above.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/monitoring/builtin-prom-pgbouncer.yaml
pgbouncer.kubedb.com/pgbouncer-server configured
```

PgBouncer should still be in the `Running` state.

```bash
$ kubectl get pb -n demo pgbouncer-server
NAME               VERSION   STATUS    AGE
pgbouncer-server   1.17.0    Running   13s
```

KubeDB will create a separate stats service with the name `{PgBouncer crd name}-stats` for monitoring purposes.

```bash
$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=pgbouncer-server"
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
pgbouncer-server         ClusterIP   10.108.152.208   <none>        5432/TCP    16m
pgbouncer-server-stats   ClusterIP   10.111.194.83    <none>        56790/TCP   16m
```

Here, the `pgbouncer-server-stats` service has been created for monitoring purposes. Let's describe the service.

```bash
$ kubectl describe svc -n demo pgbouncer-server-stats
Name:              pgbouncer-server-stats
Namespace:         demo
Labels:            app.kubernetes.io/name=pgbouncers.kubedb.com
                   app.kubernetes.io/instance=pgbouncer-server
                   kubedb.com/role=stats
Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
                   prometheus.io/path: /metrics
                   prometheus.io/port: 56790
                   prometheus.io/scrape: true
Selector:          app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-server
Type:              ClusterIP
IP:                10.110.56.149
Port:              prom-http  56790/TCP
TargetPort:        prom-http/TCP
Endpoints:         172.17.0.7:56790
Session Affinity:  None
Events:            <none>
```

You can see that the service contains the following annotations.

```bash
prometheus.io/path: /metrics
prometheus.io/port: 56790
prometheus.io/scrape: true
```

The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.

## Configure Prometheus Server

Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.

Let's configure a Prometheus scraping job to collect metrics from this service.

```yaml
- job_name: 'kubedb-databases'
  honor_labels: true
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
  # by default, the Prometheus server selects all Kubernetes services as possible targets.
  # relabel_config is used to filter only the desired endpoints
  relabel_configs:
  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
    separator: ;
    regex: true;(.*)
    action: keep
  # currently, KubeDB supported databases use only the "http" scheme to export metrics, so drop any service that uses the "https" scheme
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: drop
    regex: https
  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*-stats)
    action: keep
  # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels; keep only those services that have them
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: (.*)
    action: keep
  # read the metric path from the "prometheus.io/path: <path>" annotation
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  # add the service namespace as a label to the scraped metrics
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  # add the service name as a label to the scraped metrics
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  # add the stats service's labels to the scraped metrics
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
```

### Configure Existing Prometheus Server

If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.

> If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.

### Deploy New Prometheus Server

If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.

**Create ConfigMap:**

At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  labels:
    app: prometheus-demo
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
    - job_name: 'kubedb-databases'
      honor_labels: true
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      # by default, the Prometheus server selects all Kubernetes services as possible targets.
      # relabel_config is used to filter only the desired endpoints
      relabel_configs:
      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
        separator: ;
        regex: true;(.*)
        action: keep
      # currently, KubeDB supported databases use only the "http" scheme to export metrics, so drop any service that uses the "https" scheme
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: drop
        regex: https
      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
      - source_labels: [__meta_kubernetes_service_name]
        separator: ;
        regex: (.*-stats)
        action: keep
      # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels; keep only those services that have them
      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
        separator: ;
        regex: (.*)
        action: keep
      # read the metric path from the "prometheus.io/path: <path>" annotation
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      # add the service namespace as a label to the scraped metrics
      - source_labels: [__meta_kubernetes_namespace]
        separator: ;
        regex: (.*)
        target_label: namespace
        replacement: $1
        action: replace
      # add the service name as a label to the scraped metrics
      - source_labels: [__meta_kubernetes_service_name]
        separator: ;
        regex: (.*)
        target_label: service
        replacement: $1
        action: replace
      # add the stats service's labels to the scraped metrics
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
```

Let's create the above `ConfigMap`:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
configmap/prometheus-config created
```

**Create RBAC:**

If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions for Prometheus. Let's create the necessary RBAC resources for Prometheus:

```bash
$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
```

> YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).

**Deploy Prometheus:**

Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.

Let's deploy the Prometheus server.
```bash
$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
deployment.apps/prometheus created
```

**Prometheus Service:**

We will use a service for the Prometheus server. We can use this to look up metrics from within the cluster as well as from outside of the cluster.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/monitoring/builtin-prom-service.yaml
service/prometheus-operated created
```

### Verify Monitoring Metrics

The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.

At first, let's check if the Prometheus pod is in the `Running` state.

```bash
$ kubectl get pod -n monitoring -l=app=prometheus
NAME                          READY   STATUS    RESTARTS   AGE
prometheus-789c9695fc-7rjzf   1/1     Running   0          27s
```

Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-operated` service:

```bash
$ kubectl port-forward -n monitoring svc/prometheus-operated 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
```

Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090/targets](http://localhost:9090/targets) in your browser. You should see the endpoint of the `pgbouncer-server-stats` service as one of the targets.
+*Fig: Prometheus Target*
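+
+If you prefer the command line, you can also list the active scrape targets through the Prometheus HTTP API while the port-forward is running. This is just a quick sanity check and assumes `jq` is installed on your workstation; the `service` label printed below is the one added by the relabeling rules above:
+
+```bash
+# print the "service" label of every active target; the stats service should appear
+$ curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[].labels.service'
+pgbouncer-server-stats
+```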
+
+Check the labels, which confirm that the metrics are coming from `pgbouncer-server` through the stats service `pgbouncer-server-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands,
+
+```bash
+$ kubectl delete -n demo pb/pgbouncer-server
+
+$ kubectl delete -n monitoring deployment.apps/prometheus
+
+$ kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+$ kubectl delete -n monitoring serviceaccount/prometheus
+$ kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+$ kubectl delete ns demo
+$ kubectl delete ns monitoring
+```
+
+## Next Steps
+- Monitor your PgBouncer with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator).
+- Use [private Docker registry](/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry) to deploy PgBouncer with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..3b029e2eb3
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator.md
@@ -0,0 +1,297 @@
+---
+title: Monitor PgBouncer using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: pb-using-prometheus-operator-monitoring
+    name: Prometheus Operator
+    parent: pb-monitoring-pgbouncer
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PgBouncer using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor PgBouncer deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/pgbouncer/monitoring/overview).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+  ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/pgbouncer](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/pgbouncer) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels that a `Prometheus` crd uses to select `ServiceMonitor` objects. We are going to provide these labels in the `spec.monitor.prometheus.labels` field of the PgBouncer crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+As a prerequisite, we need to have the Prometheus operator running, and a Prometheus server created to monitor the PgBouncer exporter. In this tutorial we are going to use a Prometheus server named `prometheus` in the `monitoring` namespace. You can use the following to install the `Prometheus operator`.
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/coreos-operator/artifacts/operator.yaml
+```
+
+Now, get a Prometheus server up and running.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/monitoring/coreos-prom-server.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+prometheus.monitoring.coreos.com/prometheus created
+```
+
+Now, let's find out the available Prometheus servers in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE    NAME                                    AGE
+default      tufted-rodent-prometheus-o-prometheus   3h42m
+monitoring   prometheus                              18m
+```
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `monitoring` namespace.
+
+```yaml
+$ kubectl get prometheus -n monitoring prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"monitoring"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: "2019-09-19T09:32:12Z"
+  generation: 1
+  labels:
+    prometheus: prometheus
+  name: prometheus
+  namespace: monitoring
+  resourceVersion: "38348"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus
+  uid: f9285974-3349-40e8-815a-8f50c3a8a4f5
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, the `release: prometheus` label is used to select `ServiceMonitor` crds. So, we are going to use this label in the `spec.monitor.prometheus.labels` field of the PgBouncer crd.
+
+## Deploy PgBouncer with Monitoring Enabled
+
+We will need a PgBouncer with monitoring enabled. This PgBouncer needs to be connected to PostgreSQL database(s). You can get a PgBouncer set up with active connection(s) to PostgreSQL by following the [quickstart](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart) guide. The PgBouncer object in that guide didn't come with monitoring enabled, so we are going to enable monitoring in it. Below is the PgBouncer object that contains Prometheus operator based monitoring:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PgBouncer
+metadata:
+  name: pgbouncer-server
+  namespace: demo
+spec:
+  version: "1.17.0"
+  replicas: 1
+  databases:
+  - alias: "postgres"
+    databaseName: "postgres"
+    databaseRef:
+      name: "quick-postgres"
+      namespace: demo
+  connectionPool:
+    maxClientConnections: 20
+    reservePoolSize: 5
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database with a 10-second interval.
+
+Let's create the PgBouncer object that we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/monitoring/coreos-prom-pgbouncer.yaml
+pgbouncer.kubedb.com/pgbouncer-server configured
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get pb -n demo pgbouncer-server
+NAME               VERSION   STATUS    AGE
+pgbouncer-server   1.17.0    Running   10s
+```
+
+KubeDB will create a separate stats service with the name `{PgBouncer crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=pgbouncer-server"
+NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+pgbouncer-server         ClusterIP   10.104.83.201    <none>        5432/TCP    52s
+pgbouncer-server-stats   ClusterIP   10.101.214.117   <none>        56790/TCP   50s
+```
+
+Here, the `pgbouncer-server-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo pgbouncer-server-stats
+Name:              pgbouncer-server-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=pgbouncers.kubedb.com
+                   app.kubernetes.io/instance=pgbouncer-server
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-server
+Type:              ClusterIP
+IP:                10.101.214.117
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.7:56790
+Session Affinity:  None
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `monitoring` namespace that selects the endpoints of the `pgbouncer-server-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n monitoring
+NAME                           AGE
+kubedb-demo-pgbouncer-server   3m4s
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the PgBouncer crd.
+
+```yaml
+$ kubectl get servicemonitor -n monitoring kubedb-demo-pgbouncer-server -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2019-09-19T10:03:24Z"
+  generation: 1
+  labels:
+    release: prometheus
+    monitoring.appscode.com/service: pgbouncer-server-stats.demo
+  name: kubedb-demo-pgbouncer-server
+  namespace: monitoring
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    kind: Service
+    name: pgbouncer-server-stats
+    uid: 749bc2ed-e14c-4a9e-9688-9d319af2b902
+  resourceVersion: "41639"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubedb-demo-pgbouncer-server
+  uid: 4a68d942-a003-4b47-a8cb-f20e526e9748
+spec:
+  endpoints:
+  - honorLabels: true
+    interval: 5s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: pgbouncers.kubedb.com
+      app.kubernetes.io/instance: pgbouncer-server
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the PgBouncer crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `pgbouncer-server-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          35m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090/targets](http://localhost:9090/targets) in your browser. You should see the `prom-http` endpoint of the `pgbouncer-server-stats` service as one of the targets.
+
+*Fig: Prometheus Target*
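+
+You can also confirm that the exporter itself is serving metrics by port-forwarding the stats service and requesting the `/metrics` path that the `ServiceMonitor` scrapes. This is a quick sanity check; the exact metric names may vary between exporter versions:
+
+```bash
+# forward the prom-http port of the stats service in another terminal
+$ kubectl port-forward -n demo svc/pgbouncer-server-stats 56790
+Forwarding from 127.0.0.1:56790 -> 56790
+
+# fetch the first few exported metrics
+$ curl -s http://localhost:56790/metrics | head
+```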
+ +Check the `endpoint` and `service` labels which verify that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run the following commands + +```bash +# cleanup prometheus resources +kubectl delete -n monitoring prometheus prometheus +kubectl delete -n monitoring clusterrolebinding prometheus +kubectl delete -n monitoring clusterrole prometheus +kubectl delete -n monitoring serviceaccount prometheus +kubectl delete -n monitoring service prometheus-operated + +# delete namespace +kubectl delete ns monitoring +``` + +## Next Steps + +- Monitor your PgBouncer with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/pgbouncer/private-registry/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/private-registry/_index.md new file mode 100755 index 0000000000..1c4313de22 --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/private-registry/_index.md @@ -0,0 +1,22 @@ +--- +title: Run PgBouncer using Private Registry +menu: + docs_v2024.1.31: + identifier: pb-private-registry-pgbouncer + name: Private Registry + parent: pb-pgbouncer-guides + weight: 35 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry.md new file mode 100644 index 0000000000..c75c4c469d --- /dev/null +++ b/content/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry.md @@ -0,0 +1,171 @@ +--- +title: Run PgBouncer using Private Registry +menu: + docs_v2024.1.31: + identifier: pb-using-private-registry-private-registry + name: Quickstart + parent: pb-private-registry-pgbouncer + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using private Docker registry + +KubeDB supports using private Docker registry. This tutorial will show you how to run KubeDB managed PgBouncer using private Docker images. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/pgbouncer](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/pgbouncer) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Prepare Private Docker Registry
+
+- You will need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial, we will use a private repository on [Docker Hub](https://hub.docker.com/).
+
+- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/r/kubedb/) into your private registry. For PgBouncer, push the `SERVER_IMAGE` and `EXPORTER_IMAGE` of the following PgBouncerVersions, where `deprecated` is not true, to your private registry.
+
+  ```bash
+  $ kubectl get pgbouncerversions -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,SERVER_IMAGE:.spec.server.image,EXPORTER_IMAGE:.spec.exporter.image,DEPRECATED:.spec.deprecated
+  NAME     VERSION   SERVER_IMAGE              EXPORTER_IMAGE                     DEPRECATED
+  1.17.0   1.17.0    kubedb/pgbouncer:1.17.0   kubedb/pgbouncer_exporter:v0.1.1   false
+  ```
+
+  Docker hub repositories:
+
+  - [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+  - [kubedb/pgbouncer](https://hub.docker.com/r/kubedb/pgbouncer)
+  - [kubedb/pgbouncer_exporter](https://hub.docker.com/r/kubedb/pgbouncer_exporter)
+
+## Create ImagePullSecret
+
+ImagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo myregistrykey \
+    --docker-server=DOCKER_REGISTRY_SERVER \
+    --docker-username=DOCKER_USER \
+    --docker-email=DOCKER_EMAIL \
+    --docker-password=DOCKER_PASSWORD
+secret/myregistrykey created
+```
+
+If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
+
+> Note: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
+
+## Install KubeDB operator
+
+When installing the KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to appropriate values.
+Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in your cluster so that it points to the DOCKER_REGISTRY you wish to pull images from.
+
+## Create PgBouncerVersion CRD
+
+KubeDB uses the images specified in the PgBouncerVersion crd for the PgBouncer server and the Prometheus metrics exporter. You have to create a PgBouncerVersion crd specifying images from your private registry. Then, you have to point to this PgBouncerVersion crd in the `spec.version` field of the PgBouncer object. For more details about the PgBouncerVersion crd, please visit [here](/docs/v2024.1.31/guides/pgbouncer/concepts/catalog).
+
+Here is an example of a PgBouncerVersion crd. Replace `PRIVATE_REGISTRY` with your private registry address.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: PgBouncerVersion
+metadata:
+  name: "pvt-1.17.0"
+spec:
+  exporter:
+    image: PRIVATE_REGISTRY/pgbouncer_exporter:v0.1.1
+  pgBouncer:
+    image: PRIVATE_REGISTRY/pgbouncer:1.17.0
+  version: 1.17.0
+```
+
+Now, create the PgBouncerVersion crd,
+
+```bash
+$ kubectl apply -f pvt-pgbouncerversion.yaml
+pgbouncerversion.kubedb.com/pvt-1.17.0 created
+```
+
+## Deploy PgBouncer from Private Registry
+
+While deploying PgBouncer from a private repository, you have to add the `myregistrykey` secret in the PgBouncer `spec.podTemplate.spec.imagePullSecrets` and specify `pvt-1.17.0` in the `spec.version` field.
+
+Below is the PgBouncer object we will create in this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PgBouncer
+metadata:
+  name: pvt-reg-pgbouncer
+  namespace: demo
+spec:
+  version: "pvt-1.17.0"
+  databases:
+  - alias: "postgres"
+    databaseName: "postgres"
+    databaseRef:
+      name: "quick-postgres"
+      namespace: demo
+  connectionPool:
+    maxClientConnections: 20
+    reservePoolSize: 5
+  podTemplate:
+    spec:
+      imagePullSecrets:
+      - name: myregistrykey
+```
+
+Now run the command to create this PgBouncer server:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/private-registry/pvt-reg-pgbouncer.yaml
+pgbouncer.kubedb.com/pvt-reg-pgbouncer created
+```
+
+To check if the images were pulled successfully from the repository, see if the PgBouncer pod is in the `Running` state:
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=pvt-reg-pgbouncer"
+NAME                  READY   STATUS    RESTARTS   AGE
+pvt-reg-pgbouncer-0   1/1     Running   0          3m
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo pb/pvt-reg-pgbouncer
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- Monitor your PgBouncer with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus).
+- Monitor your PgBouncer with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/quickstart/_index.md b/content/docs/v2024.1.31/guides/pgbouncer/quickstart/_index.md
new file mode 100755
index 0000000000..992498f31f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: PgBouncer Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: pb-quickstart-pgbouncer
+    name: Quickstart
+    parent: pb-pgbouncer-guides
+    weight: 15
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart.md b/content/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart.md
new file mode 100644
index 0000000000..c5abe8e05c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart.md
@@ -0,0 +1,492 @@
+---
+title: PgBouncer Quickstart
+menu:
+  docs_v2024.1.31:
+    identifier: pb-quickstart-quickstart
+    name: Overview
+    parent: pb-quickstart-pgbouncer
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Running PgBouncer
+
+This tutorial will show you how to use KubeDB to run a PgBouncer.
+
+*Fig: lifecycle*
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/pgbouncer](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/pgbouncer) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed PgBouncer. If you just want to try out KubeDB, you can bypass some of the safety features following the tips [here](/docs/v2024.1.31/guides/pgbouncer/quickstart/quickstart#tips-for-testing).
+
+## Find Available PgBouncerVersion
+
+When you install KubeDB, it creates a `PgBouncerVersion` crd for each supported PgBouncer version. Let's check the available PgBouncerVersions,
+
+```bash
+$ kubectl get pgbouncerversions
+NAME     VERSION   PGBOUNCER_IMAGE                   DEPRECATED   AGE
+1.17.0   1.17.0    ghcr.io/kubedb/pgbouncer:1.17.0                22h
+1.18.0   1.18.0    ghcr.io/kubedb/pgbouncer:1.18.0                22h
+```
+
+Notice the `DEPRECATED` column. Here, `true` means that this PgBouncerVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated PgBouncerVersion.
+
+In this tutorial, we will use the `1.18.0` PgBouncerVersion crd to create a PgBouncer. To know more about what the `PgBouncerVersion` crd is, please visit [here](/docs/v2024.1.31/guides/pgbouncer/concepts/catalog). You can also see the supported PgBouncerVersions [here](/docs/v2024.1.31/guides/pgbouncer/README#supported-pgbouncerversion-crd).
+
+## Get PostgreSQL Server ready
+
+PgBouncer is a connection-pooling middleware for PostgreSQL. Therefore, you will need to have a PostgreSQL server up and running for PgBouncer to connect to.
+
+Luckily, PostgreSQL is readily available in KubeDB as a crd and can easily be deployed using the guide [here](/docs/v2024.1.31/guides/postgres/quickstart/quickstart).
+
+In this tutorial, we will use a Postgres named `quick-postgres` in the `demo` namespace.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/quickstart/quick-postgres.yaml
+postgres.kubedb.com/quick-postgres created
+```
+
+KubeDB creates all the necessary resources, including services, secrets, and appbindings, to get this server up and running. A default database `postgres` is created in `quick-postgres`. The database secret `quick-postgres-auth` holds the `postgres` user's username and password. Following is the YAML of this secret,
+
+```yaml
+$ kubectl get secrets -n demo quick-postgres-auth -o yaml
+apiVersion: v1
+data:
+  password: Um9YKkw4STs4Ujd2MzJ0aQ==
+  username: cG9zdGdyZXM=
+kind: Secret
+metadata:
+  creationTimestamp: "2023-10-10T11:03:47Z"
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: quick-postgres
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: postgreses.kubedb.com
+  name: quick-postgres-auth
+  namespace: demo
+  resourceVersion: "5527"
+  uid: 7f865964-58dd-40e7-aca6-d2a3010732c3
+type: kubernetes.io/basic-auth
+```
+
+For the purpose of this tutorial, we will need to extract the username and password from the database secret `quick-postgres-auth`.
+
+```bash
+$ kubectl get secrets -n demo quick-postgres-auth -o jsonpath='{.data.password}' | base64 -d
+RoX*L8I;8R7v32ti⏎
+
+$ kubectl get secrets -n demo quick-postgres-auth -o jsonpath='{.data.username}' | base64 -d
+postgres⏎
+```
+
+Now, to test the connection to this database using the credentials obtained above, we will expose the service port associated with `quick-postgres` to localhost.
+
+```bash
+$ kubectl port-forward -n demo svc/quick-postgres 5432
+Forwarding from 127.0.0.1:5432 -> 5432
+Forwarding from [::1]:5432 -> 5432
+```
+
+With that done, we should now be able to connect to the `postgres` database using the username `postgres` and the password `RoX*L8I;8R7v32ti`.
+
+```bash
+$ export PGPASSWORD='RoX*L8I;8R7v32ti'
+$ psql --host=localhost --port=5432 --username=postgres postgres
+psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1), server 13.2)
+Type "help" for help.
+
+postgres=#
+```
+
+After establishing the connection successfully, we will create a table in the `postgres` database and populate it with data.
+
+```bash
+postgres=# CREATE TABLE COMPANY( NAME TEXT NOT NULL, EMPLOYEE INT NOT NULL);
+CREATE TABLE
+postgres=# INSERT INTO COMPANY (name, employee) VALUES ('Apple',10);
+INSERT 0 1
+postgres=# INSERT INTO COMPANY (name, employee) VALUES ('Google',15);
+INSERT 0 1
+```
+
+After inserting the data, we need to verify that it has been stored successfully.
+
+```bash
+postgres=# SELECT * FROM company ORDER BY name;
+  name  | employee
+--------+----------
+ Apple  |       10
+ Google |       15
+(2 rows)
+postgres=# \q
+```
+
+If no error occurs, `quick-postgres` is ready to be used by PgBouncer.
+
+You can also use any other tool to deploy your PostgreSQL server and create a database `postgres` for the user `postgres`.
+
+Should you choose not to use KubeDB to deploy Postgres, create AppBinding(s) to point PgBouncer to your PostgreSQL server(s) where your target databases are located. Click [here](/docs/v2024.1.31/guides/pgbouncer/concepts/appbinding) for detailed instructions on how to manually create AppBindings for Postgres.
+
+## Create a PgBouncer Server
+
+KubeDB implements a PgBouncer crd to define the specifications of a PgBouncer.
+
+Below is the PgBouncer object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: PgBouncer
+metadata:
+  name: pgbouncer-server
+  namespace: demo
+spec:
+  version: "1.18.0"
+  replicas: 1
+  databases:
+  - alias: "postgres"
+    databaseName: "postgres"
+    databaseRef:
+      name: "quick-postgres"
+      namespace: demo
+  connectionPool:
+    port: 5432
+    maxClientConnections: 20
+    reservePoolSize: 5
+  terminationPolicy: Delete
+```
+
+Here,
+
+- `spec.version` is the name of the PgBouncerVersion crd where the docker images are specified. In this tutorial, a PgBouncer with base image version `1.18.0` is created.
+- `spec.replicas` specifies the number of replica PgBouncer server pods to be created for the PgBouncer object.
+- `spec.databases` specifies the databases that are going to be served via PgBouncer.
+- `spec.connectionPool` specifies the configurations for the connection pool.
+- `spec.terminationPolicy` specifies what policy to apply while deletion.
+
+### spec.databases
+
+Databases contain three `required` fields and two `optional` fields.
+
+- `spec.databases.alias`: specifies an alias for the target database located in a Postgres server specified by an appbinding.
+- `spec.databases.databaseName`: specifies the name of the target database.
+- `spec.databases.databaseRef`: specifies the name and namespace of the appBinding that contains the path to a PostgreSQL server where the target database can be found.
+- `spec.databases.username` (optional): specifies the user with whom this particular database should have an exclusive connection. By default, if this field is left empty, all users will be able to use the database.
+- `spec.databases.password` (optional): specifies the password to authenticate the user with whom this particular database should have an exclusive connection.
+
+### spec.connectionPool
+
+ConnectionPool is used to configure the PgBouncer connection pool. All the fields here have default values and can be left unspecified if no customization is required by the user.
+
+- `spec.connectionPool.port`: specifies the port on which pgbouncer should listen to connect with clients. The default is 5432.
+- `spec.connectionPool.authType`: specifies how to authenticate users.
+- `spec.connectionPool.poolMode`: specifies the value of pool_mode.
+- `spec.connectionPool.maxClientConnections`: specifies the value of max_client_conn.
+- `spec.connectionPool.defaultPoolSize`: specifies the value of default_pool_size.
+- `spec.connectionPool.minPoolSize`: specifies the value of min_pool_size.
+- `spec.connectionPool.reservePoolSize`: specifies the value of reserve_pool_size.
+- `spec.connectionPool.reservePoolTimeout`: specifies the value of reserve_pool_timeout.
+- `spec.connectionPool.maxDbConnections`: specifies the value of max_db_connections.
+- `spec.connectionPool.maxUserConnections`: specifies the value of max_user_connections.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `PgBouncer` crd, or to decide which resources KubeDB should keep or delete when you delete the `PgBouncer` crd. KubeDB provides the following three termination policies:
+
+- DoNotTerminate
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to provide safety from accidental deletion of the database. If the admission webhook is enabled, KubeDB prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the PgBouncer crd for different termination policies,
+
+| Behavior                  | DoNotTerminate | Delete | WipeOut |
+|---------------------------| :------------: | :----: | :-----: |
+| 1. Block Delete operation | ✓              | ✗      | ✗       |
+| 2. Delete StatefulSet     | ✗              | ✓      | ✓       |
+| 3. Delete Services        | ✗              | ✓      | ✓       |
+| 4. Delete PVCs            | ✗              | ✓      | ✓       |
+| 5. Delete Secrets         | ✗              | ✗      | ✓       |
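+
+For example, if the policy were set to `DoNotTerminate`, a delete request would be rejected by the validating webhook. A quick sketch of what that looks like (the exact webhook name and error message may differ between KubeDB versions):
+
+```bash
+$ kubectl delete pb -n demo pgbouncer-server
+Error from server (Forbidden): admission webhook "pgbouncer.validators.kubedb.com" denied the request: pgbouncer "demo/pgbouncer-server" can't be terminated. To delete, change spec.terminationPolicy
+```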
+
+Now that we've been introduced to the PgBouncer crd, let's create it,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/pgbouncer/quickstart/pgbouncer-server.yaml
+pgbouncer.kubedb.com/pgbouncer-server created
+```
+
+## Connect via PgBouncer
+
+To connect via PgBouncer, we have to expose its service to localhost.
+
+```bash
+$ kubectl port-forward -n demo svc/pgbouncer-server 5432
+Forwarding from 127.0.0.1:5432 -> 5432
+Forwarding from [::1]:5432 -> 5432
+```
+
+Now, let's connect to the `postgres` database via PgBouncer using psql.
+
+```bash
+$ env PGPASSWORD='RoX*L8I;8R7v32ti' psql --host=localhost --port=5432 --username=postgres postgres
+psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1), server 13.2)
+Type "help" for help.
+
+postgres=# \q
+```
+
+If everything goes well, we'll be connected to the `postgres` database and be able to execute commands. Let's confirm that the company data we inserted in the `postgres` database before is available via PgBouncer:
+
+```bash
+$ env PGPASSWORD='RoX*L8I;8R7v32ti' psql --host=localhost --port=5432 --username=postgres postgres --command='SELECT * FROM company ORDER BY name;'
+  name  | employee
+--------+----------
+ Apple  |       10
+ Google |       15
+(2 rows)
+```
+
+The KubeDB operator watches for PgBouncer objects using the Kubernetes API. When a PgBouncer object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching name. The KubeDB operator will also create a governing service for the StatefulSet with the name `kubedb`, if one is not already present.
+
+The KubeDB operator sets the `status.phase` to `Ready` once the connection-pooling mechanism is ready.
+
+```bash
+$ kubectl get pb -n demo pgbouncer-server -o wide
+NAME               VERSION   STATUS   AGE
+pgbouncer-server   1.18.0    Ready    2h
+```
+
+Let's describe the PgBouncer object `pgbouncer-server`,
+
+```bash
+$ kubectl dba describe pb -n demo pgbouncer-server
+Name:         pgbouncer-server
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  kubedb.com/v1alpha2
+Kind:         PgBouncer
+Metadata:
+  Creation Timestamp:  2023-10-11T06:28:02Z
+  Finalizers:
+    kubedb.com
+  Generation:  2
+  Managed Fields:
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:connectionPool:
+          .:
+          f:authType:
+          f:defaultPoolSize:
+          f:ignoreStartupParameters:
+          f:maxClientConnections:
+          f:maxDBConnections:
+          f:maxUserConnections:
+          f:minPoolSize:
+          f:poolMode:
+          f:port:
+          f:reservePoolSize:
+          f:reservePoolTimeoutSeconds:
+          f:statsPeriodSeconds:
+        f:databases:
+        f:healthChecker:
+          .:
+          f:failureThreshold:
+          f:periodSeconds:
+          f:timeoutSeconds:
+        f:replicas:
+        f:terminationPolicy:
+        f:version:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2023-10-11T06:28:02Z
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:finalizers:
+          .:
+          v:"kubedb.com":
+      f:spec:
+        f:authSecret:
+    Manager:      kubedb-provisioner
+    Operation:    Update
+    Time:         2023-10-11T06:28:02Z
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:observedGeneration:
+        f:phase:
+    Manager:         kubedb-provisioner
+    Operation:       Update
+    Subresource:     status
+    Time:            2023-10-11T08:43:35Z
+  Resource Version:  48101
+  UID:               b5974ff8-c9e8-4308-baf0-f07bb5af9403
+Spec:
+  Auth Secret:
+    Name:  pgbouncer-server-auth
+  Auto Ops:
+  Connection Pool:
+    Auth Type:          md5
+    Default Pool Size:  20
Ignore Startup Parameters: empty + Max Client Connections: 20 + Max DB Connections: 0 + Max User Connections: 0 + Min Pool Size: 0 + Pool Mode: session + Port: 5432 + Reserve Pool Size: 5 + Reserve Pool Timeout Seconds: 5 + Stats Period Seconds: 60 + Databases: + Alias: postgres + Database Name: postgres + Database Ref: + Name: quick-postgres + Namespace: demo + Health Checker: + Disable Write Check: true + Failure Threshold: 1 + Period Seconds: 10 + Timeout Seconds: 10 + Pod Template: + Controller: + Metadata: + Spec: + Container Security Context: + Privileged: false + Run As Group: 70 + Run As User: 70 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Security Context: + Fs Group: 70 + Run As Group: 70 + Run As User: 70 + Replicas: 1 + Ssl Mode: disable + Termination Policy: Delete + Version: 1.18.0 +Status: + Conditions: + Last Transition Time: 2023-10-11T06:28:02Z + Message: The KubeDB operator has started the provisioning of PgBouncer: demo/pgbouncer-server + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2023-10-11T08:43:35Z + Message: All replicas are ready and in Running state + Observed Generation: 2 + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2023-10-11T06:28:15Z + Message: The PgBouncer: demo/pgbouncer-server is accepting client requests. + Observed Generation: 2 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2023-10-11T06:28:15Z + Message: DB is ready because of server getting Online and Running state + Observed Generation: 2 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2023-10-11T06:28:18Z + Message: The PgBouncer: demo/pgbouncer-server is successfully provisioned. + Observed Generation: 2 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Observed Generation: 2 + Phase: Ready +Events: +``` + +KubeDB has created a service for the PgBouncer object. + +```bash +$ kubectl get service -n demo --selector=app.kubernetes.io/name=pgbouncers.kubedb.com,app.kubernetes.io/instance=pgbouncer-server +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +pgbouncer-server ClusterIP 10.96.36.35 5432/TCP 141m +pgbouncer-server-pods ClusterIP None 5432/TCP 141m +``` + +Here, Service *`pgbouncer-server`* targets random pods to carry out connection-pooling. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete -n demo pg/quick-postgres +postgres.kubedb.com "quick-postgres" deleted + +$ kubectl delete -n demo pb/pgbouncer-server +pgbouncer.kubedb.com "pgbouncer-server" deleted + +$ kubectl delete ns demo +namespace "demo" deleted +``` + +## Next Steps + +- Learn about [custom PgBouncerVersions](/docs/v2024.1.31/guides/pgbouncer/custom-versions/setup). +- Monitor your PgBouncer with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-builtin-prometheus). +- Monitor your PgBouncer with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/pgbouncer/monitoring/using-prometheus-operator). +- Detail concepts of [PgBouncer object](/docs/v2024.1.31/guides/pgbouncer/concepts/pgbouncer). +- Use [private Docker registry](/docs/v2024.1.31/guides/pgbouncer/private-registry/using-private-registry) to deploy PgBouncer with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/postgres/README.md b/content/docs/v2024.1.31/guides/postgres/README.md new file mode 100644 index 0000000000..2ba8539f9f --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/README.md @@ -0,0 +1,68 @@ +--- +title: Postgres +menu: + docs_v2024.1.31: + identifier: pg-readme-postgres + name: Postgres + parent: pg-postgres-guides + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/postgres/ +aliases: +- /docs/v2024.1.31/guides/postgres/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +## Supported PostgreSQL Features + +| Features | Availability | +| ---------------------------------- |:------------:| +| Clustering | ✓ | +| Warm Standby | ✓ | +| Hot Standby | ✓ | +| Synchronous Replication | ✓ | +| Streaming Replication | ✓ | +| Automatic Failover | ✓ | +| Continuous Archiving using `wal-g` | ✗ | +| Initialization from WAL archive | ✓ | +| Persistent Volume | ✓ | +| Instant Backup | ✓ | +| Scheduled Backup | ✓ | +| Initialization from Snapshot | ✓ | +| Initialization using Script | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Using Prometheus operator | ✓ | +| Custom Configuration | ✓ | +| Using Custom docker image | ✓ | + +## Life Cycle of a PostgreSQL Object + +
+*Fig: lifecycle*
+ +## User Guide + +- [Quickstart PostgreSQL](/docs/v2024.1.31/guides/postgres/quickstart/quickstart) with KubeDB Operator. +- How to [Backup & Restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash. +- Initialize [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- [PostgreSQL Clustering](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) supported by KubeDB Postgres. +- [Streaming Replication](/docs/v2024.1.31/guides/postgres/clustering/streaming_replication) for PostgreSQL clustering. +- Monitor your PostgreSQL database with KubeDB using [`out-of-the-box` builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Use [private Docker registry](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry) to deploy PostgreSQL with KubeDB. +- Detail concepts of [Postgres object](/docs/v2024.1.31/guides/postgres/concepts/postgres). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/postgres/_index.md b/content/docs/v2024.1.31/guides/postgres/_index.md new file mode 100644 index 0000000000..b34176b56d --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/_index.md @@ -0,0 +1,22 @@ +--- +title: PostgreSQL +menu: + docs_v2024.1.31: + identifier: pg-postgres-guides + name: PostgreSQL + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/backup/_index.md b/content/docs/v2024.1.31/guides/postgres/backup/_index.md new file mode 100755 index 0000000000..e10e6c8601 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/_index.md @@ -0,0 +1,22 @@ +--- +title: Backup & Restore PostgreSQL +menu: + docs_v2024.1.31: + identifier: guides-pg-backup + name: Backup & Restore + parent: pg-postgres-guides + weight: 40 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/backupblueprint.yaml new file mode 100644 index 0000000000..e27c396ce9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/backupblueprint.yaml @@ -0,0 +1,17 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: postgres-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true diff --git 
a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-1.yaml b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-1.yaml new file mode 100644 index 0000000000..b9554b62c1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-1.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: sample-postgres-1 + namespace: demo + annotations: + stash.appscode.com/backup-blueprint: postgres-backup-template +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-2.yaml b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-2.yaml new file mode 100644 index 0000000000..70613a0d62 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-2.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: sample-postgres-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: postgres-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-3.yaml b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-3.yaml new file mode 100644 index 0000000000..a5c7eccda2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/examples/sample-pg-3.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: sample-postgres-3 + namespace: demo-3 + annotations: + stash.appscode.com/backup-blueprint: postgres-backup-template + params.stash.appscode.com/args: --no-owner --clean +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-1.png b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-1.png new file mode 100644 index 0000000000..08ac7e0983 Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-1.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-2.png b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-2.png new file mode 100644 index 0000000000..2331644c8b Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-2.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-3.png b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-3.png new file mode 100644 index 0000000000..b27d7581e9 Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/images/sample-postgres-3.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/index.md new file mode 100644 
index 0000000000..1125d83477
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/auto-backup/index.md
@@ -0,0 +1,651 @@
+---
+title: PostgreSQL | Stash
+description: Stash auto-backup for PostgreSQL database
+menu:
+  docs_v2024.1.31:
+    identifier: guides-pg-backup-auto-backup
+    name: Auto-Backup
+    parent: guides-pg-backup
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup PostgreSQL using Stash Auto-Backup
+
+Stash can be configured to automatically back up any PostgreSQL database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that database owners can easily back up their databases with just a few annotations.
+
+In this tutorial, we are going to show how you can configure a backup blueprint for PostgreSQL databases in your cluster and back them up with a few annotations.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- If you are not familiar with how Stash backs up and restores PostgreSQL databases, please check the following guide [here](/docs/v2024.1.31/guides/postgres/backup/overview/).
+- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/).
+- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/).
+
+You should be familiar with the following `Stash` concepts:
+
+- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [Repository](https://stash.run/docs/latest/concepts/crds/repository/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+
+In this tutorial, we are going to show the backup of three different PostgreSQL databases in three different namespaces named `demo`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done it already.
+
+```bash
+❯ kubectl create ns demo
+namespace/demo created
+
+❯ kubectl create ns demo-2
+namespace/demo-2 created
+
+❯ kubectl create ns demo-3
+namespace/demo-3 created
+```
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the PostgreSQL addons using the following command.
+ +```bash +❯ kubectl get tasks.stash.appscode.com | grep postgres +postgres-backup-10.14 8d +postgres-backup-11.9 8d +postgres-backup-12.4 8d +postgres-backup-13.1 8d +postgres-backup-9.6.19 8d +postgres-restore-10.14 8d +postgres-restore-11.9 8d +postgres-restore-12.4 8d +postgres-restore-13.1 8d +postgres-restore-9.6.19 8d +``` + +## Prepare Backup Blueprint + +To backup a PostgreSQL database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`. + +The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to backup any PostgreSQL database of any namespace just by creating the storage `Secret` in that namespace and adding few annotations to your Postgres CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to backup the database. + +Below is the `BackupBlueprint` object that we are going to use in this tutorial, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: postgres-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true +``` + +Here, we are using a GCS bucket as our backend. We are providing `gcs-secret` at the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint. + +Notice the `prefix` field of `backend` section. We have used some variables in form of `${VARIABLE_NAME}`. Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance. + +Let's create the `BackupBlueprint` we have shown above, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/auto-backup/examples/backupblueprint.yaml +backupblueprint.stash.appscode.com/postgres-backup-template created +``` + +Now, we are ready to backup our PostgreSQL databases using few annotations. You can check available auto-backup annotations for a database from [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database). + +## Auto-backup with default configurations + +In this section, we are going to backup a PostgreSQL database of `demo` namespace. We are going to use the default configurations specified in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo` namespace with the access credentials to our GCS bucket. 
+
+```bash
+❯ echo -n 'changeit' > RESTIC_PASSWORD
+❯ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+❯ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a Postgres CRO in the `demo` namespace. Below is the YAML of the PostgreSQL object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: sample-postgres-1
+  namespace: demo
+  annotations:
+    stash.appscode.com/backup-blueprint: postgres-backup-template
+spec:
+  version: "11.22"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier through the `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`.
+
+Let's create the above Postgres CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/auto-backup/examples/sample-pg-1.yaml
+postgres.kubedb.com/sample-postgres-1 created
+```
+
+### Verify Auto-backup configured
+
+In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for the PostgreSQL database we have just deployed.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our PostgreSQL database or not.
+
+```bash
+❯ kubectl get repository -n demo
+NAME                    INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-postgres-1                                                                25s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo app-sample-postgres-1 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: app-sample-postgres-1
+  namespace: demo
+  ...
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: stash-backup/demo/postgres/sample-postgres-1
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our PostgreSQL in the `demo` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd using the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo
+NAME                    TASK                   SCHEDULE      PAUSED   PHASE   AGE
+app-sample-postgres-1   postgres-backup-11.9   */5 * * * *            Ready   97s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+
+```yaml
+❯ kubectl get backupconfiguration -n demo app-sample-postgres-1 -o yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: app-sample-postgres-1
+  namespace: demo
+  ...
+spec: + driver: Restic + repository: + name: app-sample-postgres-1 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres-1 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-23T09:38:19Z" + message: Repository demo/app-sample-postgres-1 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-23T09:38:19Z" + message: Backend Secret demo/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-23T09:38:19Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-postgres-1 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-23T09:38:19Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 + +``` + +Notice the `target` section. Stash has automatically added the respective AppBinding of our PostgreSQL database as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-sample-postgres-1-1614073215 BackupConfiguration app-sample-postgres-1 0s +app-sample-postgres-1-1614073215 BackupConfiguration app-sample-postgres-1 Running 3s +app-sample-postgres-1-1614073215 BackupConfiguration app-sample-postgres-1 Succeeded 47s +``` + +Once the backup has been completed successfully, you should see the backed-up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`. + +
+Fig: Backup data in GCS Bucket
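+
+If you prefer to verify from the cluster instead of the GCS console, you can also list the snapshots through Stash's `Snapshot` API. A minimal check; the snapshot name and timestamp below are illustrative, not actual output from this tutorial:
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                             ID         REPOSITORY              HOSTNAME   CREATED AT
+app-sample-postgres-1-4bc21d6f   4bc21d6f   app-sample-postgres-1   host-0     2021-02-23T09:45:27Z
+```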
+ +## Auto-backup with a custom schedule + +In this section, we are going to backup a PostgreSQL database of `demo-2` namespace. This time, we are going to overwrite the default schedule used in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the backend Secret `gcs-secret` in `demo-2` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-2 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Create Database + +Now, we are going to create a Postgres CRO in `demo-2` namespace. Below is the YAML of the PostgreSQL object that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: sample-postgres-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: postgres-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Notice the `annotations` section. This time, we have passed a schedule via `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above Postgres CRO, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/auto-backup/examples/sample-pg-2.yaml +postgres.kubedb.com/sample-postgres-2 created +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup has been configured properly or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our PostgreSQL or not. + +```bash +❯ kubectl get repository -n demo-2 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-postgres-2 13s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-2 app-sample-postgres-2 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-sample-postgres-2 + namespace: demo-2 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: stash-backup/demo-2/postgres/sample-postgres-2 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our PostgreSQL in `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-2 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-postgres-2 postgres-backup-11.9 */3 * * * * Ready 61s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-sample-postgres-2 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-postgres-2 + namespace: demo-2 + ... 
+spec: + driver: Restic + repository: + name: app-sample-postgres-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres-2 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-23T09:44:33Z" + message: Repository demo-2/app-sample-postgres-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-23T09:44:33Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-23T09:44:33Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-postgres-2 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-23T09:44:33Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new PostgreSQL as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-sample-postgres-2-1614073502 BackupConfiguration app-sample-postgres-2 0s +app-sample-postgres-2-1614073502 BackupConfiguration app-sample-postgres-2 Running 2s +app-sample-postgres-2-1614073502 BackupConfiguration app-sample-postgres-2 Succeeded 48s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed-up data there. + +
+Fig: Backup data in GCS Bucket
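+
+You can also confirm that the backup triggering CronJob has picked up the overridden schedule. A quick check, assuming the CronJob follows the `stash-backup-<BackupConfiguration name>` naming convention shown in the standalone backup guide (output illustrative):
+
+```bash
+❯ kubectl get cronjob -n demo-2
+NAME                                 SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-app-sample-postgres-2   */3 * * * *   False     0        2m              5m
+```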
+
+## Auto-backup with custom parameters
+
+In this section, we are going to backup a PostgreSQL database of `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in `demo-3` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ kubectl create secret generic -n demo-3 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a Postgres CRO in `demo-3` namespace. Below is the YAML of the PostgreSQL object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: sample-postgres-3
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: postgres-backup-template
+    params.stash.appscode.com/args: --no-owner --clean
+spec:
+  version: "11.22"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+Notice the `annotations` section. This time, we have passed an argument via `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation.
+
+Let's create the above Postgres CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/auto-backup/examples/sample-pg-3.yaml
+postgres.kubedb.com/sample-postgres-3 created
+```
+
+### Verify Auto-backup configured
+
+Now, let's verify whether the auto-backup resources have been created or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our PostgreSQL or not.
+
+```bash
+❯ kubectl get repository -n demo-3
+NAME                    INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-postgres-3                                                                17s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo-3 app-sample-postgres-3 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: app-sample-postgres-3
+  namespace: demo-3
+  ...
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: stash-backup/demo-3/postgres/sample-postgres-3
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this new database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our PostgreSQL in `demo-3` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo-3
+NAME                    TASK                   SCHEDULE      PAUSED   PHASE   AGE
+app-sample-postgres-3   postgres-backup-11.9   */5 * * * *            Ready   51s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+
+```yaml
+❯ kubectl get backupconfiguration -n demo-3 app-sample-postgres-3 -o yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: app-sample-postgres-3
+  namespace: demo-3
+  ...
+spec: + driver: Restic + repository: + name: app-sample-postgres-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres-3 + task: + params: + - name: args + value: --no-owner --clean + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-02-23T09:48:15Z" + message: Repository demo-3/app-sample-postgres-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-02-23T09:48:15Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-02-23T09:48:15Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-postgres-3 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-02-23T09:48:15Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new PostgreSQL as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE AGE +app-sample-postgres-3-1614073808 BackupConfiguration app-sample-postgres-3 0s +app-sample-postgres-3-1614073808 BackupConfiguration app-sample-postgres-3 Running 3s +app-sample-postgres-3-1614073808 BackupConfiguration app-sample-postgres-3 Succeeded 47s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed-up data there. + +
+Fig: Backup data in GCS Bucket
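+
+Note that the annotations shown in the previous sections can be combined on a single database object. For example, the following sketch (a hypothetical `sample-postgres-4` object, not part of this tutorial's example files) overrides the schedule and passes arguments at the same time:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: sample-postgres-4
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: postgres-backup-template
+    stash.appscode.com/schedule: "*/10 * * * *"
+    params.stash.appscode.com/args: --no-owner --clean
+spec:
+  version: "11.22"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```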
+
+## Cleanup
+
+To clean up the resources created by this tutorial, run the following commands,
+
+```bash
+❯ kubectl delete -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/auto-backup/examples/
+backupblueprint.stash.appscode.com "postgres-backup-template" deleted
+postgres.kubedb.com "sample-postgres-1" deleted
+postgres.kubedb.com "sample-postgres-2" deleted
+postgres.kubedb.com "sample-postgres-3" deleted
+
+❯ kubectl delete repository -n demo --all
+repository.stash.appscode.com "app-sample-postgres-1" deleted
+❯ kubectl delete repository -n demo-2 --all
+repository.stash.appscode.com "app-sample-postgres-2" deleted
+❯ kubectl delete repository -n demo-3 --all
+repository.stash.appscode.com "app-sample-postgres-3" deleted
+```
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/multi-retention-policy.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/multi-retention-policy.yaml
new file mode 100644
index 0000000000..be6d4389a1
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/multi-retention-policy.yaml
@@ -0,0 +1,22 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: sample-postgres-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/resource-limit.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/resource-limit.yaml
new file mode 100644
index 0000000000..6ebf96205d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/resource-limit.yaml
@@ -0,0 +1,24 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  rules:
+    - snapshots: [latest]
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-database-user.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-database-user.yaml
new file mode 100644
index 0000000000..567677085a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-database-user.yaml
@@ -0,0 +1,22 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    params:
+      - name: user
+        value: testuser
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-user.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-user.yaml
new file mode 100644
index
0000000000..2f54bdeb13 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/backup/specific-user.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-postgres-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/repository.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/repository.yaml new file mode 100644 index 0000000000..8a6aaab13b --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/customizing + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/passing-args.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/passing-args.yaml new file mode 100644 index 0000000000..4ce8ab431e --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/passing-args.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-postgres-restore + namespace: demo +spec: + task: + params: + - name: args + value: --dbname=testdb + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/resource-limit.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/resource-limit.yaml new file mode 100644 index 0000000000..988f054c57 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-postgres-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] + + diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-database-user.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-database-user.yaml new file mode 100644 index 0000000000..2dcf7be183 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-database-user.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-postgres-restore + namespace: demo +spec: + task: + params: + - name: user + value: testuser + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-postgres + rules: + - snapshots: [latest] \ No newline at end of file diff 
--git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-snapshot.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-snapshot.yaml
new file mode 100644
index 0000000000..ad3e1e88aa
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-snapshot.yaml
@@ -0,0 +1,15 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  rules:
+    - snapshots: [4bc21d6f]
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-user.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-user.yaml
new file mode 100644
index 0000000000..2b42af3a02
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/restore/specific-user.yaml
@@ -0,0 +1,21 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
+
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/sample-postgres.yaml b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/sample-postgres.yaml
new file mode 100644
index 0000000000..be410f1527
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/examples/sample-postgres.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: sample-postgres
+  namespace: demo
+spec:
+  version: "14.10"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/customization/index.md b/content/docs/v2024.1.31/guides/postgres/backup/customization/index.md
new file mode 100644
index 0000000000..edd72a365a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/customization/index.md
@@ -0,0 +1,343 @@
+---
+title: PostgreSQL Backup Customization | Stash
+description: Customizing PostgreSQL Backup and Restore process with Stash
+menu:
+  docs_v2024.1.31:
+    identifier: guides-pg-backup-customization
+    name: Customizing Backup & Restore Process
+    parent: guides-pg-backup
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Customizing Backup and Restore Process
+
+Stash provides rich customization support for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations.
+
+## Customizing Backup Process
+
+In this section, we are going to show you how to customize the backup process.
Here, we are going to show some examples of specifying the database user, providing arguments to the backup process, running the backup process as a specific user, limiting the resources of the backup job, etc.
+
+### Specifying database user
+
+If you want to specify the postgres database user, you can provide it through the `user` param under the `task.params` section.
+
+The below example shows how you can pass `testuser` to set the username as `testuser`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    params:
+      - name: user
+        value: testuser
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+> **WARNING**: Passing `user` is not applicable for Basic Authentication.
+
+### Passing arguments to the backup process
+
+Stash PostgreSQL addon uses [pg_dumpall](https://www.postgresql.org/docs/9.2/app-pg-dumpall.html) by default for backup. You can pass arguments to the `pg_dumpall` through the `args` param under the `task.params` section.
+
+The below example shows how you can pass `--clean` to include SQL commands to clean (drop) databases before recreating them.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  task:
+    params:
+      - name: args
+        value: --clean
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+> **WARNING**: If you pass a specific database name through the `args` param, make sure that the database exists before the backup job starts.
+
+### Running backup job as a specific user
+
+If your cluster requires running the backup job as a specific user, you can provide `securityContext` under the `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Specifying Memory/CPU limit/request for the backup job
+
+If you want to specify the Memory/CPU limit/request for your backup job, you can specify the `resources` field under the `runtimeSettings.container` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Using multiple retention policies
+
+You can also specify multiple retention policies for your backed up data.
For example, you may want to keep a few daily snapshots, a few weekly snapshots, a few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: sample-postgres-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
+```
+
+To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy).
+
+## Customizing Restore Process
+
+Stash uses [psql](https://www.postgresql.org/docs/9.2/app-psql.html) during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run the restore job as a specific user, etc.
+
+### Passing arguments to the restore process
+
+You can pass arguments to the restore process through the `args` params under the `task.params` section. This example will restore data from database `testdb` only.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  task:
+    params:
+      - name: args
+        value: --dbname=testdb
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  rules:
+    - snapshots: [latest]
+```
+
+### Specifying database user
+
+If you want to specify the postgres database user, you can provide it through the `user` param under the `task.params` section.
+
+The below example shows how you can pass `testuser` to set the username as `testuser`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  task:
+    params:
+      - name: args
+        value: --dbname=testdb
+      - name: user
+        value: testuser
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  rules:
+    - snapshots: [latest]
+```
+
+> **WARNING**: Passing `user` is not applicable for Basic Authentication.
+
+### Restore specific snapshot
+
+You can also restore a specific snapshot. At first, list the available snapshots as below,
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                ID         REPOSITORY   HOSTNAME   CREATED AT
+gcs-repo-4bc21d6f   4bc21d6f   gcs-repo     host-0     2021-02-12T14:54:27Z
+gcs-repo-f0ac7cbd   f0ac7cbd   gcs-repo     host-0     2021-02-12T14:56:26Z
+gcs-repo-9210ebb6   9210ebb6   gcs-repo     host-0     2021-02-12T14:58:27Z
+gcs-repo-0aff8890   0aff8890   gcs-repo     host-0     2021-02-12T15:00:28Z
+```
+
+>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot).
+
+The below example shows how you can pass a specific snapshot id through the `snapshots` field of `rules` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  rules:
+    - snapshots: [4bc21d6f]
+```
+
+>Please, do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore.
+
+### Running restore job as a specific user
+
+You can provide `securityContext` under `runtimeSettings.pod` section to run the restore job as a specific user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest]
+```
+
+### Specifying Memory/CPU limit/request for the restore job
+
+Similar to the backup process, you can also provide `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  rules:
+    - snapshots: [latest]
+```
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/overview/images/backup_overview.svg b/content/docs/v2024.1.31/guides/postgres/backup/overview/images/backup_overview.svg
new file mode 100644
index 0000000000..68437a3669
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/overview/images/backup_overview.svg
@@ -0,0 +1,997 @@
+[SVG image data: PostgreSQL backup overview diagram]
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/overview/images/restore_overview.svg b/content/docs/v2024.1.31/guides/postgres/backup/overview/images/restore_overview.svg
new file mode 100644
index 0000000000..b9adb0bba0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/overview/images/restore_overview.svg
@@ -0,0 +1,857 @@
+[SVG image data: PostgreSQL restore overview diagram]
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/overview/index.md b/content/docs/v2024.1.31/guides/postgres/backup/overview/index.md
new file mode 100644
index 0000000000..7ad4ba9a6b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/overview/index.md
@@ -0,0 +1,100 @@
+---
+title:
Backup & Restore PostgreSQL Using Stash +menu: + docs_v2024.1.31: + identifier: guides-pg-backup-overview + name: Overview + parent: guides-pg-backup + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}} + +# PostgreSQL Backup & Restore Overview + +KubeDB uses [Stash](https://stash.run) to backup and restore databases. Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely backup stateful applications to any cloud or on-prem storage backends (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC etc.). + +
+Fig: Backup KubeDB Databases Using Stash
+ +## How Backup Works + +The following diagram shows how Stash takes backup of a PostgreSQL database. Open the image in a new tab to see the enlarged version. + +
+Fig: PostgreSQL Backup Overview
+ +The backup process consists of the following steps: + +1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored. + +2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend. + +3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/postgres/concepts/appbinding) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to backup the database. + +4. Stash operator watches for `BackupConfiguration` crd. + +5. Once Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in `BackupConfiguration` object to trigger backup periodically. + +6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd. + +7. Stash operator also watches for `BackupSession` crd. + +8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to backup. + +9. Then, it creates the Job to backup the targeted database. + +10. The backup Job reads necessary information to connect with the database from the `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively. + +11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash pipes the output of dump command to uploading process. Hence, backup Job does not require a large volume to hold the entire dump output. + +12. Finally, when the backup is complete, the Job sends Prometheus metrics to the Pushgateway running inside Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure. + +## How Restore Process Works + +The following diagram shows how Stash restores backed up data into a PostgreSQL database. Open the image in a new tab to see the enlarged version. + +
+Fig: PostgreSQL Restore Process Overview
+
+The restore process consists of the following steps:
+
+1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target.
+
+2. Stash operator watches for `RestoreSession` object.
+
+3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore.
+
+4. Then, it creates the Job to restore the target.
+
+5. The Job reads necessary information to connect with the database from respective `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively.
+
+6. Then, the job downloads the backed up data from the backend and injects it into the desired database. Stash pipes the downloaded data to the respective database tool to inject into the database. Hence, restore job does not require a large volume to download entire backup data inside it.
+
+7. Finally, when the restore process is complete, the Job sends Prometheus metrics to the Pushgateway and updates the `RestoreSession` status to reflect restore completion.
+
+## Next Steps
+
+- Backup a standalone PostgreSQL database using Stash following the guide from [here](/docs/v2024.1.31/guides/postgres/backup/standalone/).
+- Configure a generic backup template for all the PostgreSQL databases of your cluster using Stash Auto-backup by following the guide from [here](/docs/v2024.1.31/guides/postgres/backup/auto-backup/).
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/appbinding.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/appbinding.yaml
new file mode 100644
index 0000000000..93a5c3bf47
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/appbinding.yaml
@@ -0,0 +1,22 @@
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-postgres
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: postgreses.kubedb.com
+  name: sample-postgres
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-postgres
+      path: /
+      port: 5432
+      query: sslmode=disable
+      scheme: postgresql
+  secret:
+    name: sample-postgres-auth
+  type: kubedb.com/postgres
+  version: "11.2"
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/backupconfiguration.yaml
new file mode 100644
index 0000000000..915a948e08
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/backupconfiguration.yaml
@@ -0,0 +1,18 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/postgres.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/postgres.yaml
new file mode 100644
index 0000000000..e53e59514c
--- /dev/null
+++
b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/postgres.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: sample-postgres + namespace: demo +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/repository.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/repository.yaml new file mode 100644 index 0000000000..025191f0be --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: demo/postgres/sample-postgres + storageSecretName: gcs-secret diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restored-postgres.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restored-postgres.yaml new file mode 100644 index 0000000000..ec7daff594 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restored-postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: restored-postgres + namespace: demo +spec: + version: "11.22" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + waitForInitialRestore: true + terminationPolicy: Delete diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restoresession.yaml b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restoresession.yaml new file mode 100644 index 0000000000..3eb93d60a6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/examples/restoresession.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-postgres-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: restored-postgres + rules: + - snapshots: [latest] diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/images/sample-postgres-backup.png b/content/docs/v2024.1.31/guides/postgres/backup/standalone/images/sample-postgres-backup.png new file mode 100644 index 0000000000..333d669827 Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/backup/standalone/images/sample-postgres-backup.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/backup/standalone/index.md b/content/docs/v2024.1.31/guides/postgres/backup/standalone/index.md new file mode 100644 index 0000000000..42d31379a2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/backup/standalone/index.md @@ -0,0 +1,608 @@ +--- +title: PostgreSQL | Stash +description: Backup and restore standalone PostgreSQL database using Stash +menu: + docs_v2024.1.31: + identifier: guides-pg-backup-standalone + name: Standalone PostgreSQL + parent: guides-pg-backup + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 
+---
+
+# Backup and Restore standalone PostgreSQL database using Stash
+
+Stash 0.9.0+ supports backup and restoration of PostgreSQL databases. This guide will show you how you can backup and restore your PostgreSQL database with Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores PostgreSQL databases, please check the guide [here](/docs/v2024.1.31/guides/postgres/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/postgres/concepts/appbinding)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it already.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Backup PostgreSQL
+
+This section will demonstrate how to backup a PostgreSQL database. Here, we are going to deploy a PostgreSQL database using KubeDB. Then, we are going to backup this database into a GCS bucket. Finally, we are going to restore the backed-up data into another PostgreSQL database.
+
+### Deploy Sample PostgreSQL Database
+
+Let's deploy a sample PostgreSQL database and insert some data into it.
+
+**Create Postgres CRD:**
+
+Below is the YAML of a sample Postgres crd that we are going to create for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: sample-postgres
+  namespace: demo
+spec:
+  version: "11.22"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+Create the above `Postgres` crd,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/standalone/examples/postgres.yaml
+postgres.kubedb.com/sample-postgres created
+```
+
+KubeDB will deploy a PostgreSQL database according to the above specification. It will also create the necessary secrets and services to access the database.
+
+Let's check if the database is ready to use,
+
+```bash
+❯ kubectl get pg -n demo sample-postgres
+NAME              VERSION   STATUS   AGE
+sample-postgres   11.11     Ready    50s
+```
+
+The database is `Ready`.
Verify that KubeDB has created a Secret and a Service for this database using the following commands,
+
+```bash
+❯ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-postgres
+NAME                   TYPE                       DATA   AGE
+sample-postgres-auth   kubernetes.io/basic-auth   2      2m42s
+
+
+❯ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-postgres
+NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+sample-postgres        ClusterIP   10.96.242.0   <none>        5432/TCP   3m9s
+sample-postgres-pods   ClusterIP   None          <none>        5432/TCP   3m9s
+```
+
+Here, we have to use the service `sample-postgres` and secret `sample-postgres-auth` to connect with the database. KubeDB creates an [AppBinding](/docs/v2024.1.31/guides/postgres/concepts/appbinding) crd that holds the necessary information to connect with the database.
+
+**Verify AppBinding:**
+
+Verify that the `AppBinding` has been created successfully using the following command,
+
+```bash
+❯ kubectl get appbindings -n demo
+NAME              TYPE                  VERSION   AGE
+sample-postgres   kubedb.com/postgres   11.11     3m54s
+```
+
+Let's check the YAML of the above `AppBinding`,
+
+```bash
+❯ kubectl get appbindings -n demo sample-postgres -o yaml
+```
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-postgres
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: postgreses.kubedb.com
+  name: sample-postgres
+  namespace: demo
+  ...
+spec:
+  clientConfig:
+    service:
+      name: sample-postgres
+      path: /
+      port: 5432
+      query: sslmode=disable
+      scheme: postgresql
+  secret:
+    name: sample-postgres-auth
+  parameters:
+    apiVersion: appcatalog.appscode.com/v1alpha1
+    kind: StashAddon
+    stash:
+      addon:
+        backupTask:
+          name: postgres-backup-11.9
+        restoreTask:
+          name: postgres-restore-11.9
+  type: kubedb.com/postgres
+  version: "11.22"
+```
+
+Stash uses the `AppBinding` crd to connect with the target database. It requires the following fields to be set in the AppBinding's `Spec` section.
+
+- `spec.clientConfig.service.name` specifies the name of the service that connects to the database.
+- `spec.secret` specifies the name of the secret that holds necessary credentials to access the database.
+- `spec.parameters.stash` specifies the Stash Addons that will be used to backup and restore this database.
+- `spec.type` specifies the type of the app that this AppBinding is pointing to. KubeDB generated AppBinding follows the following format: `<app group>/<app resource type>`.
+
+**Insert Sample Data:**
+
+Now, we are going to exec into the database pod and create some sample data. At first, find out the database pod using the following command,
+
+```bash
+❯ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-postgres"
+NAME                READY   STATUS    RESTARTS   AGE
+sample-postgres-0   1/1     Running   0          18m
+```
+
+Now, let's exec into the pod and create a table,
+
+```bash
+❯ kubectl exec -it -n demo sample-postgres-0 -- sh
+
+# login as "postgres" superuser.
+/ # psql -U postgres
+psql (11.11)
+Type "help" for help.
+
+# list available databases
+postgres=# \l
+                                 List of databases
+   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
+-----------+----------+----------+------------+------------+-----------------------
+ postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
+ template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+ template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+(3 rows)
+
+# create a database named "demo"
+postgres=# create database demo;
+CREATE DATABASE
+
+# verify that the "demo" database has been created
+postgres=# \l
+                                 List of databases
+   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
+-----------+----------+----------+------------+------------+-----------------------
+ demo      | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
+ postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
+ template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+ template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+(4 rows)
+
+# connect to the "demo" database
+postgres=# \c demo
+You are now connected to database "demo" as user "postgres".
+
+# create a sample table
+demo=# CREATE TABLE COMPANY( NAME TEXT NOT NULL, EMPLOYEE INT NOT NULL);
+CREATE TABLE
+
+# verify that the table has been created
+demo=# \d
+         List of relations
+ Schema |  Name   | Type  |  Owner
+--------+---------+-------+----------
+ public | company | table | postgres
+(1 row)
+
+# quit from the database
+demo=# \q
+
+# exit from the pod
+/ # exit
+```
+
+Now, we are ready to backup this sample database.
+
+### Prepare Backend
+
+We are going to store our backed-up data into a GCS bucket. At first, we need to create a secret with GCS credentials then we need to create a `Repository` crd. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` using this secret. Below is the YAML of the Repository crd we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: demo/postgres/sample-postgres
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/standalone/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to backup our database to our desired backend.
+
+### Backup
+
+We have to create a `BackupConfiguration` targeting the respective AppBinding object of our desired database. Stash will create a CronJob to periodically backup the database.
+
+**Create BackupConfiguration:**
+
+Below is the YAML for the `BackupConfiguration` crd to backup the `sample-postgres` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-postgres-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-postgres
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `spec.schedule` specifies that we want to backup the database at a 5-minute interval.
+- `spec.repository.name` specifies the name of the `Repository` crd that holds the backend information where the backed up data will be stored.
+- `spec.target.ref` refers to the `AppBinding` crd that was created for `sample-postgres` database.
+- `spec.retentionPolicy` specifies the policy to follow for cleaning old snapshots.
+
+Let's create the `BackupConfiguration` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/standalone/examples/backupconfiguration.yaml
+backupconfiguration.stash.appscode.com/sample-postgres-backup created
+```
+
+**Verify Backup Setup Successful:**
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```bash
+$ kubectl get backupconfiguration -n demo
+NAME                     TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-postgres-backup   postgres-backup-11.9   */5 * * * *            Ready   11s
+```
+
+**Verify CronJob:**
+
+Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` crd.
+
+Verify that the CronJob has been created using the following command,
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-postgres-backup   */5 * * * *   False     0        <none>          30s
+```
+
+**Wait for BackupSession:**
+
+The `sample-postgres-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` crd.
+
+Wait for a schedule to appear. Run the following command to watch `BackupSession` crd,
+
+```bash
+❯ kubectl get backupsession -n demo -w
+NAME                                INVOKER-TYPE          INVOKER-NAME             PHASE       AGE
+sample-postgres-backup-1613390711   BackupConfiguration   sample-postgres-backup   Running     15s
+sample-postgres-backup-1613390711   BackupConfiguration   sample-postgres-backup   Succeeded   78s
+```
+
+We can see above that the backup session has succeeded. Now, we are going to verify that the backed up data has been stored in the backend.
+
+**Verify Backup:**
+
+Once a backup is complete, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command,
+
+```bash
+❯ kubectl get repository -n demo gcs-repo
+NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo   true        1.770 KiB   1                2m                       4m16s
+```
+
+Now, if we navigate to the GCS bucket, we are going to see backed up data has been stored in `demo/postgres/sample-postgres` directory as specified by `spec.backend.gcs.prefix` field of Repository crd.
+
+Fig: Backup data in GCS Bucket
+
+
+> Note: Stash keeps all the backed-up data encrypted. So, data in the backend will not make any sense until it is decrypted.
+
+## Restore PostgreSQL
+
+Now, we are going to restore the database from the backup we have taken in the previous section. We are going to deploy a new database and initialize it from the backup.
+
+#### Stop Taking Backup of the Old Database:
+
+At first, let's stop taking any further backup of the old database so that no backup runs during the restore process. We are going to pause the `BackupConfiguration` crd that we had created to backup the `sample-postgres` database. Then, Stash will stop taking any further backup for this database.
+
+Let's pause the `sample-postgres-backup` BackupConfiguration,
+```bash
+❯ kubectl patch backupconfiguration -n demo sample-postgres-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-postgres-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+```bash
+❯ kubectl stash pause backup -n demo --backupconfig=sample-postgres-backup
+BackupConfiguration demo/sample-postgres-backup has been paused successfully.
+```
+
+Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused,
+
+```bash
+❯ kubectl get backupconfiguration -n demo sample-postgres-backup
+NAME                     TASK                   SCHEDULE      PAUSED   PHASE   AGE
+sample-postgres-backup   postgres-backup-11.9   */5 * * * *   true     Ready   5m55s
+```
+
+Notice the `PAUSED` column. Value `true` for this field means that the BackupConfiguration has been paused.
+
+#### Deploy Restored Database:
+
+Now, we are going to deploy the restored database the same way we deployed the original `sample-postgres` database.
+
+Below is the YAML for the `Postgres` crd we are going to deploy to initialize from backup,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: restored-postgres
+  namespace: demo
+spec:
+  version: "11.22"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  init:
+    waitForInitialRestore: true
+  terminationPolicy: Delete
+```
+
+Notice the `init` section. Here, we have specified `waitForInitialRestore: true` which tells KubeDB to wait for the first restore to complete before marking this database as ready to use.
+
+Let's create the above database,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/standalone/examples/restored-postgres.yaml
+postgres.kubedb.com/restored-postgres created
+```
+
+This time, the database will get stuck in the `Provisioning` state because we haven't restored the data yet.
+
+```bash
+❯ kubectl get postgres -n demo restored-postgres
+NAME                VERSION   STATUS         AGE
+restored-postgres   11.11     Provisioning   6m7s
+```
+
+You can check the log from the database pod to be sure whether the database is ready to accept connections or not.
+
+```bash
+❯ kubectl logs -n demo restored-postgres-0
+....
+2021-02-15 12:36:31.087 UTC [19] LOG:  listening on IPv4 address "0.0.0.0", port 5432
+2021-02-15 12:36:31.087 UTC [19] LOG:  listening on IPv6 address "::", port 5432
+2021-02-15 12:36:31.094 UTC [19] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
+2021-02-15 12:36:31.121 UTC [50] LOG:  database system was shut down at 2021-02-15 12:36:31 UTC
+2021-02-15 12:36:31.126 UTC [19] LOG:  database system is ready to accept connections
+```
+
+As you can see from the above log, the database is ready to accept connections. Now, we can start restoring this database.
+
+#### Create RestoreSession:
+
+Now, we need to create a `RestoreSession` object pointing to the AppBinding for this restored database.
+
+Check that an AppBinding has been created for the `restored-postgres` database using the following command,
+
+```bash
+❯ kubectl get appbindings -n demo restored-postgres
+NAME                TYPE                  VERSION   AGE
+restored-postgres   kubedb.com/postgres   11.11     6m45s
+```
+
+> If you are not using KubeDB to deploy the database, then create the AppBinding manually.
+
+Below is the YAML for the `RestoreSession` crd that we are going to create to restore the backed up data into the `restored-postgres` database.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-postgres-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: restored-postgres
+  rules:
+    - snapshots: [latest]
+```
+
+Here,
+
+- `spec.repository.name` specifies the `Repository` crd that holds the backend information where our backed up data has been stored.
+- `spec.target.ref` refers to the AppBinding crd for the `restored-postgres` database where the backed up data will be restored.
+- `spec.rules` specifies that we are restoring from the latest backup snapshot of the original database.
+
+Let's create the `RestoreSession` crd we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/backup/standalone/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-postgres-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a job to restore the database. We can watch the `RestoreSession` phase to check whether the restore process has succeeded or not.
+
+Run the following command to watch the `RestoreSession` phase,
+
+```bash
+❯ kubectl get restoresession -n demo -w
+NAME                      REPOSITORY   PHASE       AGE
+sample-postgres-restore   gcs-repo     Running     4s
+sample-postgres-restore   gcs-repo     Running     15s
+sample-postgres-restore   gcs-repo     Succeeded   15s
+sample-postgres-restore   gcs-repo     Succeeded   15s
+```
+
+So, we can see from the output of the above command that the restore process succeeded.
+
+#### Verify Restored Data:
+
+In this section, we are going to verify that the desired data has been restored successfully. We are going to connect to the database and check whether the table we had created in the original database has been restored or not.
+
+At first, check if the database has gone into `Ready` state using the following command,
+
+```bash
+❯ kubectl get pg -n demo restored-postgres
+NAME                VERSION   STATUS   AGE
+restored-postgres   11.11     Ready    11m
+```
+
+Now, exec into the database pod and verify the restored data.
+
+```bash
+❯ kubectl exec -it -n demo restored-postgres-0 -- /bin/sh
+# login as "postgres" superuser.
+/ # psql -U postgres
+psql (11.11)
+Type "help" for help.
+
+# verify that the "demo" database has been restored
+postgres=# \l
+                                 List of databases
+   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
+-----------+----------+----------+------------+------------+-----------------------
+ demo      | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
+ postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
+ template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+ template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
+           |          |          |            |            | postgres=CTc/postgres
+(4 rows)
+
+# connect to the "demo" database
+postgres=# \c demo
+You are now connected to database "demo" as user "postgres".
+
+# verify that the sample table has been restored
+demo=# \d
+          List of relations
+ Schema |  Name   | Type  |  Owner
+--------+---------+-------+----------
+ public | company | table | postgres
+(1 row)
+
+# disconnect from the database
+demo=# \q
+
+# exit from the pod
+/ # exit
+```
+
+So, from the above output, we can see that the `demo` database we had created in the original `sample-postgres` database has been restored in the `restored-postgres` database.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo backupconfiguration sample-postgres-backup
+kubectl delete -n demo restoresession sample-postgres-restore
+kubectl delete -n demo postgres sample-postgres restored-postgres
+kubectl delete -n demo repository gcs-repo
+```
diff --git a/content/docs/v2024.1.31/guides/postgres/cli/_index.md b/content/docs/v2024.1.31/guides/postgres/cli/_index.md
new file mode 100755
index 0000000000..3c37947d66
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/cli/_index.md
@@ -0,0 +1,22 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: pg-cli-postgres
+    name: CLI
+    parent: pg-postgres-guides
+    weight: 100
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/cli/cli.md b/content/docs/v2024.1.31/guides/postgres/cli/cli.md
new file mode 100644
index 0000000000..7d5c54a0bd
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/cli/cli.md
@@ -0,0 +1,314 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: pg-cli-cli
+    name: Quickstart
+    parent: pg-cli-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Manage KubeDB objects using CLIs
+
+## KubeDB CLI
+
+KubeDB comes with its own cli. It is called the `kubedb` cli. `kubedb` can be used to manage any KubeDB object. The `kubedb` cli also performs various validations to improve UX. To install the KubeDB cli on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README).
+
+### How to Create Objects
+
+`kubectl create` creates a database CRD object in the `default` namespace by default. The following command will create a Postgres object as specified in `postgres-demo.yaml`.
+
+```bash
+$ kubectl create -f postgres-demo.yaml
+postgres "postgres-demo" created
+```
+
+You can provide a namespace as a flag `--namespace`. The provided namespace should match the namespace specified in the input file.
+
+```bash
+$ kubectl create -f postgres-demo.yaml --namespace=kube-system
+postgres "postgres-demo" created
+```
+
+`kubectl create` command also considers `stdin` as input.
+
+```bash
+cat postgres-demo.yaml | kubectl create -f -
+```
+
+### How to List Objects
+
+`kubectl get` command allows users to list or find any KubeDB object. To list all Postgres objects in the `default` namespace, run the following command:
+
+```bash
+$ kubectl get postgres
+NAME            VERSION   STATUS    AGE
+postgres-demo   10.2-v5   Running   13m
+postgres-dev    10.2-v5   Running   11m
+postgres-prod   10.2-v5   Running   11m
+postgres-qa     10.2-v5   Running   10m
+```
+
+To get the YAML of an object, use the `--output=yaml` flag.
+
+```yaml
+$ kubectl get postgres postgres-demo --output=yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: postgres-demo
+  namespace: demo
+spec:
+  authSecret:
+    name: postgres-demo-auth
+  version: "13.13"
+status:
+  creationTime: 2017-12-12T05:46:16Z
+  phase: Running
+```
+
+To get the JSON of an object, use the `--output=json` flag.
+
+```bash
+kubectl get postgres postgres-demo --output=json
+```
+
+To list all KubeDB objects, use the following command:
+
+```bash
+$ kubectl get all -o wide
+
+NAME                    VERSION   STATUS    AGE
+es/elasticsearch-demo   2.3.1     Running   17m
+
+NAME               VERSION   STATUS    AGE
+pg/postgres-demo   9.6.7     Running   3h
+pg/postgres-dev    9.6.7     Running   3h
+pg/postgres-prod   9.6.7     Running   3h
+pg/postgres-qa     9.6.7     Running   3h
+
+NAME                                 DATABASE                BUCKET             STATUS      AGE
+snap/postgres-demo-20170605-073557   pg/postgres-demo        gs:bucket-name     Succeeded   9m
+snap/snapshot-20171212-114700        pg/postgres-demo        gs:bucket-name     Succeeded   1h
+snap/snapshot-xyz                    es/elasticsearch-demo   local:/directory   Succeeded   5m
+```
+
+Flag `--output=wide` is used to print additional information.
+
+The list command supports short names for each object type. You can use it like `kubectl get <short-name>`. Below are the short names for KubeDB objects:
+
+- Postgres: `pg`
+- Snapshot: `snap`
+- DormantDatabase: `drmn`
+
+You can print labels with objects. The following command will list all Snapshots with their corresponding labels.
+
+```bash
+$ kubectl get snap --show-labels
+NAME                            DATABASE                STATUS      AGE   LABELS
+postgres-demo-20170605-073557   pg/postgres-demo        Succeeded   11m   app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=postgres-demo
+snapshot-20171212-114700        pg/postgres-demo        Succeeded   1h    app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=postgres-demo
+snapshot-xyz                    es/elasticsearch-demo   Succeeded   6m    app.kubernetes.io/name=elasticsearches.kubedb.com,app.kubernetes.io/instance=elasticsearch-demo
+```
+
+You can also filter the list using the `--selector` flag.
+
+```bash
+$ kubectl get snap --selector='app.kubernetes.io/name=postgreses.kubedb.com' --show-labels
+NAME                            DATABASE           STATUS      AGE   LABELS
+postgres-demo-20171212-073557   pg/postgres-demo   Succeeded   14m   app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=postgres-demo
+snapshot-20171212-114700        pg/postgres-demo   Succeeded   2h    app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=postgres-demo
+```
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -o name
+postgres/postgres-demo
+postgres/postgres-dev
+postgres/postgres-prod
+postgres/postgres-qa
+snapshot/postgres-demo-20170605-073557
+snapshot/snapshot-20170505-114700
+snapshot/snapshot-xyz
+```
+
+### How to Describe Objects
+
+`kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe the PostgreSQL database `postgres-demo` with relevant information.
+
+```bash
+$ kubectl dba describe pg postgres-demo
+Name:           postgres-demo
+Namespace:      default
+StartTimestamp: Tue, 12 Dec 2017 11:46:16 +0600
+Status:         Running
+Volume:
+  StorageClass: standard
+  Capacity:     1Gi
+  Access Modes: RWO
+
+StatefulSet:
+  Name:               postgres-demo
+  Replicas:           1 current / 1 desired
+  CreationTimestamp:  Tue, 12 Dec 2017 11:46:21 +0600
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name: postgres-demo
+  Type: ClusterIP
+  IP:   10.111.209.148
+  Port: api 5432/TCP
+
+Service:
+  Name: postgres-demo-primary
+  Type: ClusterIP
+  IP:   10.102.192.231
+  Port: api 5432/TCP
+
+Database Secret:
+  Name: postgres-demo-auth
+  Type: Opaque
+  Data
+  ====
+  .admin: 35 bytes
+
+Topology:
+  Type     Pod               StartTime                   Phase
+  ----     ---               ---------                   -----
+  primary  postgres-demo-0   2017-12-12 11:46:22 +0600   Running
+
+No Snapshots.
+
+Events:
+  FirstSeen   LastSeen   From                Type     Reason               Message
+  ---------   --------   ----                ----     ------               -------
+  5s          5s         Postgres operator   Normal   SuccessfulCreate     Successfully created StatefulSet
+  5s          5s         Postgres operator   Normal   SuccessfulCreate     Successfully created Postgres
+  55s         55s        Postgres operator   Normal   SuccessfulValidate   Successfully validate Postgres
+  55s         55s        Postgres operator   Normal   Creating             Creating Kubernetes objects
+```
+
+`kubectl dba describe` command provides the following basic information about a database.
+
+- StatefulSet
+- Storage (Persistent Volume)
+- Service
+- Secret (If available)
+- Topology (If available)
+- Snapshots (If any)
+- Monitoring system (If available)
+
+To hide events on a KubeDB object, use the flag `--show-events=false`.
+
+To describe all Postgres objects in the `default` namespace, use the following command,
+
+```bash
+kubectl dba describe pg
+```
+
+To describe all Postgres objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+kubectl dba describe pg --all-namespaces
+```
+
+To describe all KubeDB objects from every namespace, use the following command:
+
+```bash
+kubectl dba describe all --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all Elasticsearch & Postgres objects with the specified labels from every namespace.
+
+```bash
+kubectl dba describe pg,es --all-namespaces --selector='group=dev'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+#### Edit restrictions
+
+Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:
+
+- _apiVersion_
+- _kind_
+- _metadata.name_
+- _metadata.namespace_
+
+If StatefulSets or Deployments exist for a database, the following fields can't be modified as well.
+
+- _spec.standby_
+- _spec.streaming_
+- _spec.archiver_
+- _spec.authSecret_
+- _spec.storageType_
+- _spec.storage_
+- _spec.podTemplate.spec.nodeSelector_
+- _spec.init_
+
+For DormantDatabase, _spec.origin_ can't be edited using `kubectl edit`.
+
+### How to Delete Objects
+
+`kubectl delete` command will delete an object in the `default` namespace by default unless a namespace is provided. The following command will delete a Postgres `postgres-dev` in the default namespace.
+
+```bash
+$ kubectl delete postgres postgres-dev
+postgres "postgres-dev" deleted
+```
+
+You can also use YAML files to delete objects. The following command will delete a Postgres using the type and name specified in `postgres.yaml`.
+
+```bash
+$ kubectl delete -f postgres.yaml
+postgres "postgres-dev" deleted
+```
+
+`kubectl delete` command also takes input from `stdin`.
+
+```bash
+cat postgres.yaml | kubectl delete -f -
+```
+
+To delete databases with matching labels, use the `--selector` flag. The following command will delete postgres objects with the label `postgres.app.kubernetes.io/instance=postgres-demo`.
+
+```bash
+kubectl delete postgres -l postgres.app.kubernetes.io/instance=postgres-demo
+```
+
+## Using Kubectl
+
+You can use Kubectl with KubeDB objects like any other CRDs. Below are some common examples of using Kubectl with KubeDB objects.
+
+```bash
+# Create objects
+$ kubectl create -f <file-name>
+
+# List objects
+$ kubectl get postgres
+$ kubectl get postgres.kubedb.com
+
+# Delete objects
+$ kubectl delete postgres <name>
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/v2024.1.31/guides/postgres/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/clustering/_index.md b/content/docs/v2024.1.31/guides/postgres/clustering/_index.md
new file mode 100755
index 0000000000..5a9308bcb1
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: PostgreSQL Clustering
+menu:
+  docs_v2024.1.31:
+    identifier: pg-clustering-postgres
+    name: Clustering
+    parent: pg-postgres-guides
+    weight: 25
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/clustering/ha_cluster.md b/content/docs/v2024.1.31/guides/postgres/clustering/ha_cluster.md
new file mode 100644
index 0000000000..050ddf3fff
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/clustering/ha_cluster.md
@@ -0,0 +1,113 @@
+---
+title: Setup HA Postgres Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: pg-ha-cluster-clustering
+    name: HA Setup
+    parent: pg-clustering-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Configuring Highly Available PostgreSQL Cluster
+
+In PostgreSQL, multiple servers can work together to provide high availability and load balancing. These servers run in either *master* or *standby* mode.
+
+In *master* mode, the server can modify data. In *standby* mode, the server continuously applies WAL received from the master server. The standby server can read WAL from a WAL archive (see restore_command) or directly from the master over a TCP connection (streaming replication).
+
+Standby servers can be either *warm standby* or *hot standby* servers.
+
+## Warm Standby
+
+A standby server that cannot be connected to until it is promoted to a *master* server is called a *warm standby* server.
+*Standby* servers are by default *warm standby* unless we make them *hot standby*.
+
+The following is an example of a `Postgres` object which creates a PostgreSQL cluster of three servers.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: warm-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Warm
+  storageType: Ephemeral
+```
+
+In this example:
+
+- This `Postgres` object creates three PostgreSQL servers, indicated by the **`replicas`** field.
+- One server will be *primary* and two others will be *warm standby* servers, as instructed by **`spec.standbyMode`**
+
+## Hot Standby
+
+A standby server that can accept connections and serves read-only queries is called a *hot standby* server.
+
+The following `Postgres` object will create a PostgreSQL cluster with *hot standby* servers.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: hot-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  storageType: Ephemeral
+```
+
+In this example:
+
+- This `Postgres` object creates three PostgreSQL servers, indicated by the **`replicas`** field.
+- One server will be *primary* and two others will be *hot standby* servers, as instructed by **`spec.standbyMode`**
+
+## High Availability
+
+Database servers can work together to allow a second server to take over quickly if the *primary* server fails. This is called high availability. When the *primary* server is unavailable, *standby* servers go through a leader election process to take control as the *primary* server. A PostgreSQL database with the high availability feature can have either *warm standby* or *hot standby* servers.
+
+To enable high availability, you need to create PostgreSQL with multiple servers. Set `spec.replicas` to more than one in Postgres.
+
+[//]: # (For more information on failover process, [read here])
+
+## Load Balancing
+
+The *master* server along with the *standby* server(s) can serve the same data. This is called load balancing. In our setup, we only support read-only *standby* servers.
+To enable load balancing, you need to set up a *hot standby* PostgreSQL cluster.
+
+Read about [hot standby](#hot-standby) and its setup in Postgres.
+
+## Replication
+
+There are many approaches available to scale PostgreSQL beyond running on a single server.
+
+KubeDB currently supports only the following one:
+
+- **Streaming Replication** provides *asynchronous* replication to one or more *standby* servers.
+
+These *standby* servers can also be *hot standby* servers. This is the fastest type of replication available as
+WAL data is sent immediately rather than waiting for a whole segment to be produced and shipped.
+
+KubeDB PostgreSQL supports [Streaming Replication](/docs/v2024.1.31/guides/postgres/clustering/streaming_replication).
+
+## Next Steps
+
+- Learn how to setup [Streaming Replication](/docs/v2024.1.31/guides/postgres/clustering/streaming_replication)
diff --git a/content/docs/v2024.1.31/guides/postgres/clustering/streaming_replication.md b/content/docs/v2024.1.31/guides/postgres/clustering/streaming_replication.md
new file mode 100644
index 0000000000..18fc3f6756
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/clustering/streaming_replication.md
@@ -0,0 +1,412 @@
+---
+title: Using Postgres Streaming Replication
+menu:
+  docs_v2024.1.31:
+    identifier: pg-streaming-replication-clustering
+    name: Streaming Replication
+    parent: pg-clustering-postgres
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Streaming Replication
+
+Streaming Replication provides *asynchronous* replication to one or more *standby* servers.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
+If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create PostgreSQL with Streaming replication
+
+The example below demonstrates KubeDB PostgreSQL with Streaming Replication.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: ha-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+In this example:
+
+- This `Postgres` object creates three PostgreSQL servers, indicated by the **`replicas`** field.
+- One server will be *primary* and two others will be *warm standby* servers, the default of **`spec.standbyMode`**
+
+### What is Streaming Replication
+
+Streaming Replication allows a *standby* server to stay more up-to-date by shipping and applying the [WAL XLOG](http://www.postgresql.org/docs/9.6/static/wal.html)
+records continuously. The *standby* connects to the *primary*, which streams WAL records to the *standby* as they're generated, without waiting for the WAL file to be filled.
+
+Streaming Replication is **asynchronous** by default. As a result, there is a small delay between committing a transaction in the *primary* and the changes becoming visible in the *standby*.
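+
+A simple way to see which role a server is currently playing is the standard PostgreSQL function `pg_is_in_recovery()`: it returns `f` on the *primary* and `t` on a *standby*. For example, from a `psql` session on the primary:
+
+```bash
+postgres=# select pg_is_in_recovery();
+ pg_is_in_recovery
+-------------------
+ f
+(1 row)
+```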
+
+### Streaming Replication setup
+
+The following parameters are set in `postgresql.conf` for both the *primary* and *standby* servers,
+
+```bash
+wal_level = replica
+max_wal_senders = 99
+wal_keep_segments = 32
+```
+
+Here,
+
+- _wal_keep_segments_ specifies the minimum number of past log file segments kept in the pg_xlog directory.
+
+And the following are set in `recovery.conf` for the *standby* servers,
+
+```bash
+standby_mode = on
+trigger_file = '/tmp/pg-failover-trigger'
+recovery_target_timeline = 'latest'
+primary_conninfo = 'application_name=$HOSTNAME host=$PRIMARY_HOST'
+```
+
+Here,
+
+- _trigger_file_ is created to trigger a *standby* to take over as the *primary* server.
+- *$PRIMARY_HOST* holds the Kubernetes Service name that targets the *primary* server.
+
+Now create this Postgres object with Streaming Replication support,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/clustering/ha-postgres.yaml
+postgres.kubedb.com/ha-postgres created
+```
+
+KubeDB operator creates three Pods as PostgreSQL servers.
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=ha-postgres" --show-labels
+NAME            READY   STATUS    RESTARTS   AGE   LABELS
+ha-postgres-0   1/1     Running   0          20s   controller-revision-hash=ha-postgres-6b7998ccfd,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=ha-postgres-0
+ha-postgres-1   1/1     Running   0          16s   controller-revision-hash=ha-postgres-6b7998ccfd,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=ha-postgres-1
+ha-postgres-2   1/1     Running   0          10s   controller-revision-hash=ha-postgres-6b7998ccfd,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=ha-postgres-2
+```
+
+Here,
+
+- Pod `ha-postgres-0` is serving as the *primary* server, indicated by the label `kubedb.com/role=primary`
+- Pods `ha-postgres-1` & `ha-postgres-2` both are serving as *standby* servers, indicated by the label `kubedb.com/role=replica`
+
+And two services for Postgres `ha-postgres` are created.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=ha-postgres"
+NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+ha-postgres            ClusterIP   10.102.19.49   <none>        5432/TCP   4m
+ha-postgres-replicas   ClusterIP   10.97.36.117   <none>        5432/TCP   4m
+```
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=ha-postgres" -o=custom-columns=NAME:.metadata.name,SELECTOR:.spec.selector
+NAME                   SELECTOR
+ha-postgres            map[app.kubernetes.io/name:postgreses.kubedb.com app.kubernetes.io/instance:ha-postgres kubedb.com/role:primary]
+ha-postgres-replicas   map[app.kubernetes.io/name:postgreses.kubedb.com app.kubernetes.io/instance:ha-postgres]
+```
+
+Here,
+
+- Service `ha-postgres` targets Pod `ha-postgres-0`, the *primary* server, by the selector `app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=primary`.
+- Service `ha-postgres-replicas` targets all Pods (*`ha-postgres-0`*, *`ha-postgres-1`* and *`ha-postgres-2`*) with the label `app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres`.
+
+> These *standby* servers are asynchronous *warm standby* servers. That means, you can only connect to the *primary* server.
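+
+The next step uses pgAdmin, but if you prefer a terminal client, a `kubectl port-forward` to the primary Service plus `psql` works as well. A minimal sketch, assuming `psql` is installed locally and using the credentials shown below:
+
+```bash
+# forward local port 5432 to the primary Service
+$ kubectl port-forward -n demo svc/ha-postgres 5432:5432
+
+# then, in another terminal
+$ psql -h localhost -U postgres
+```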
+
+Now connect to this *primary* server Pod `ha-postgres-0` using pgAdmin installed in the [quickstart](/docs/v2024.1.31/guides/postgres/quickstart/quickstart#before-you-begin) tutorial.
+
+**Connection information:**
+
+- Host name/address: you can use any of these
+  - Service: `ha-postgres.demo`
+  - Pod IP: (`$ kubectl get pods ha-postgres-0 -n demo -o yaml | grep podIP`)
+- Port: `5432`
+- Maintenance database: `postgres`
+- Username: Run the following command to get *username*,
+
+  ```bash
+  $ kubectl get secrets -n demo ha-postgres-auth -o jsonpath='{.data.\POSTGRES_USER}' | base64 -d
+  postgres
+  ```
+
+- Password: Run the following command to get *password*,
+
+  ```bash
+  $ kubectl get secrets -n demo ha-postgres-auth -o jsonpath='{.data.\POSTGRES_PASSWORD}' | base64 -d
+  MHRrOcuyddfh3YpU
+  ```
+
+You can check the `pg_stat_replication` information to know who is currently streaming from the *primary*.
+
+```bash
+postgres=# select * from pg_stat_replication;
+```
+
+ pid | usesysid | usename  | application_name | client_addr | client_port |         backend_start         |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
+-----|----------|----------|------------------|-------------|-------------|-------------------------------|-----------|---------------|----------------|----------------|-----------------|---------------|------------
+ 89  | 10       | postgres | ha-postgres-2    | 172.17.0.8  | 35306       | 2018-02-09 04:27:11.674828+00 | streaming | 0/5000060     | 0/5000060      | 0/5000060      | 0/5000060       | 0             | async
+ 90  | 10       | postgres | ha-postgres-1    | 172.17.0.7  | 42400       | 2018-02-09 04:27:13.716104+00 | streaming | 0/5000060     | 0/5000060      | 0/5000060      | 0/5000060       | 0             | async
+
+Here, both `ha-postgres-1` and `ha-postgres-2` are streaming asynchronously from the *primary* server.
+
+### Lease Duration
+
+Get the Postgres CRD at this point.
+
+```yaml
+$ kubectl get pg -n demo ha-postgres -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  creationTimestamp: "2019-02-07T12:14:05Z"
+  finalizers:
+    - kubedb.com
+  generation: 2
+  name: ha-postgres
+  namespace: demo
+  resourceVersion: "44966"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/demo/postgreses/ha-postgres
+  uid: dcf6d96a-2ad1-11e9-9d44-080027154f61
+spec:
+  authSecret:
+    name: ha-postgres-auth
+  leaderElection:
+    leaseDurationSeconds: 15
+    renewDeadlineSeconds: 10
+    retryPeriodSeconds: 2
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources: {}
+  replicas: 3
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    dataSource: null
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Halt
+  version: "10.2-v5"
+status:
+  observedGeneration: 2$4213139756412538772
+  phase: Running
+```
+
+There are three fields under the Postgres CRD's `spec.leaderElection`. These values define how fast the leader election can happen.
+
+- leaseDurationSeconds: This is the duration in seconds that non-leader candidates will wait to force acquire leadership. This is measured against the time of the last observed ack. Default 15 secs.
+- renewDeadlineSeconds: This is the duration in seconds that the acting master will retry refreshing leadership before giving up. Normally, LeaseDuration * 2 / 3. Default 10 secs.
+- retryPeriodSeconds: This is the duration in seconds the LeaderElector clients should wait between tries of actions. Normally, LeaseDuration / 3. Default 2 secs.
+
+If the cluster machines are powerful, users can reduce these values. But do not make them too small; in that case, Postgres will restart very often.
+
+### Automatic failover
+
+If the *primary* server fails, another *standby* server will take over and serve as *primary*.
+
+Delete Pod `ha-postgres-0` to see the failover behavior.
+
+```bash
+kubectl delete pod -n demo ha-postgres-0
+```
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=ha-postgres" --show-labels
+NAME            READY   STATUS    RESTARTS   AGE   LABELS
+ha-postgres-0   1/1     Running   0          10s   controller-revision-hash=ha-postgres-b8b4b5fc4,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=ha-postgres-0
+ha-postgres-1   1/1     Running   0          52m   controller-revision-hash=ha-postgres-b8b4b5fc4,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=ha-postgres-1
+ha-postgres-2   1/1     Running   0          51m   controller-revision-hash=ha-postgres-b8b4b5fc4,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=ha-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=ha-postgres-2
+```
+
+Here,
+
+- Pod `ha-postgres-1` is now serving as the *primary* server
+- Pod `ha-postgres-0` and `ha-postgres-2` both are serving as *standby* servers
+
+And the result from `pg_stat_replication`,
+
+```bash
+postgres=# select * from pg_stat_replication;
+```
+
+ pid | usesysid | usename  | application_name | client_addr | client_port |         backend_start         |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
+-----|----------|----------|------------------|-------------|-------------|-------------------------------|-----------|---------------|----------------|----------------|-----------------|---------------|------------
+ 57  | 10       | postgres | ha-postgres-0    | 172.17.0.6  | 52730       | 2018-02-09 04:33:06.051716+00 | streaming | 0/7000060     | 0/7000060      | 0/7000060      | 0/7000060       | 0             | async
+ 58  | 10       | postgres | ha-postgres-2    | 172.17.0.8  | 42824       | 2018-02-09 04:33:09.762168+00 | streaming | 0/7000060     | 0/7000060      | 0/7000060      | 0/7000060       | 0             | async
+
+You can see that `ha-postgres-0` and `ha-postgres-2` are now streaming asynchronously from `ha-postgres-1`, our new *primary* server.
+
+Fig: recovered-postgres
+
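+
+To watch the `kubedb.com/role` labels flip during a failover, you can print them as a separate column while watching the pods; `-L` is the standard `kubectl` flag for label columns:
+
+```bash
+# show the role of each server as its own column and watch for changes
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=ha-postgres -L kubedb.com/role --watch
+```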
+
+[//]: # (If you want to know how this failover process works, [read here])
+
+## Streaming Replication with `hot standby`
+
+Streaming Replication also works with one or more *hot standby* servers.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: hot-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+In this example:
+
+- This `Postgres` object creates three PostgreSQL servers, indicated by the **`replicas`** field.
+- One server will be *primary* and two others will be *hot standby* servers, as instructed by **`spec.standbyMode`**
+
+### `hot standby` setup
+
+The following parameter is set in `postgresql.conf` for the *standby* servers,
+
+```bash
+hot_standby = on
+```
+
+Here,
+
+- _hot_standby_ specifies that the *standby* server will act as a *hot standby*.
+
+Now create this Postgres object,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/clustering/hot-postgres.yaml
+postgres "hot-postgres" created
+```
+
+KubeDB operator creates three Pods as PostgreSQL servers.
+
+```bash
+$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=hot-postgres" --show-labels
+NAME             READY   STATUS    RESTARTS   AGE   LABELS
+hot-postgres-0   1/1     Running   0          1m    controller-revision-hash=hot-postgres-6c48cfb5bb,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=hot-postgres,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=hot-postgres-0
+hot-postgres-1   1/1     Running   0          1m    controller-revision-hash=hot-postgres-6c48cfb5bb,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=hot-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=hot-postgres-1
+hot-postgres-2   1/1     Running   0          48s   controller-revision-hash=hot-postgres-6c48cfb5bb,app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=hot-postgres,kubedb.com/role=replica,statefulset.kubernetes.io/pod-name=hot-postgres-2
+```
+
+Here,
+
+- Pod `hot-postgres-0` is serving as the *primary* server, indicated by the label `kubedb.com/role=primary`
+- Pods `hot-postgres-1` & `hot-postgres-2` both are serving as *standby* servers, indicated by the label `kubedb.com/role=replica`
+
+> These *standby* servers are asynchronous *hot standby* servers.
+
+That means, you can connect to both the *primary* and the *standby* servers. But these *hot standby* servers only accept read-only queries.
+
+Now connect to one of our *hot standby* servers, Pod `hot-postgres-2`, using pgAdmin installed in the [quickstart](/docs/v2024.1.31/guides/postgres/quickstart/quickstart#before-you-begin) tutorial.
+
+**Connection information:**
+
+- Host name/address: you can use any of these
+  - Service: `hot-postgres-replicas.demo`
+  - Pod IP: (`$ kubectl get pods hot-postgres-2 -n demo -o yaml | grep podIP`)
+- Port: `5432`
+- Maintenance database: `postgres`
+- Username: Run the following command to get *username*,
+
+  ```bash
+  $ kubectl get secrets -n demo hot-postgres-auth -o jsonpath='{.data.\POSTGRES_USER}' | base64 -d
+  postgres
+  ```
+
+- Password: Run the following command to get *password*,
+
+  ```bash
+  $ kubectl get secrets -n demo hot-postgres-auth -o jsonpath='{.data.\POSTGRES_PASSWORD}' | base64 -d
+  ZZgjjQMUdKJYy1W9
+  ```
+
+Try to create a database (a write operation),
+
+```bash
+postgres=# CREATE DATABASE standby;
+ERROR:  cannot execute CREATE DATABASE in a read-only transaction
+```
+
+The write operation failed. But the server can execute the following read-only query,
+
+```bash
+postgres=# select pg_last_xlog_receive_location();
+ pg_last_xlog_receive_location
+-------------------------------
+ 0/7000220
+```
+
+So, you can see that you can connect to a *hot standby* server, and it only accepts read-only queries.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo pg/ha-postgres pg/hot-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo pg/ha-postgres pg/hot-postgres
+
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus).
+- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/concepts/_index.md b/content/docs/v2024.1.31/guides/postgres/concepts/_index.md
new file mode 100755
index 0000000000..decb588f1c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: PostgreSQL Concepts
+menu:
+  docs_v2024.1.31:
+    identifier: pg-concepts-postgres
+    name: Concepts
+    parent: pg-postgres-guides
+    weight: 20
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/concepts/appbinding.md b/content/docs/v2024.1.31/guides/postgres/concepts/appbinding.md
new file mode 100644
index 0000000000..c340a09ff0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/concepts/appbinding.md
@@ -0,0 +1,162 @@
+---
+title: AppBinding CRD
+menu:
+  docs_v2024.1.31:
+    identifier: pg-appbinding-concepts
+    name: AppBinding
+    parent: pg-concepts-postgres
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for a PostgreSQL database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: quick-postgres
+  namespace: demo
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: quick-postgres
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: postgreses.kubedb.com
+    app.kubernetes.io/version: "10.2"
+spec:
+  type: kubedb.com/postgres
+  secret:
+    name: quick-postgres-auth
+  clientConfig:
+    service:
+      name: quick-postgres
+      path: /
+      port: 5432
+      query: sslmode=disable
+      scheme: postgresql
+  secretTransforms:
+    - renameKey:
+        from: POSTGRES_USER
+        to: username
+    - renameKey:
+        from: POSTGRES_PASSWORD
+        to: password
+  version: "10.2"
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the following format: `<app group>/<resource kind>`. The above AppBinding is pointing to a `postgres` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable              | Usage                                                                                                                               |
+| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                             |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`).                            |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`).  |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                           |
+| ---------- | ------------------------------------------------ |
+| `username` | Username of the target database.                |
+| `password` | Password for the user specified by `username`.  |
+
+MongoDB :
+
+| Key        | Usage                                           |
+| ---------- | ------------------------------------------------ |
+| `username` | Username of the target database.                |
+| `password` | Password for the user specified by `username`.  |
+
+Elasticsearch:
+
+| Key              | Usage                    |
+| ---------------- | ------------------------- |
+| `ADMIN_USERNAME` | Admin username            |
+| `ADMIN_PASSWORD` | Password for admin user   |
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+
+  > Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+  `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
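+
+For reference, here is a minimal sketch of a manually created `AppBinding` pointing to a database running outside the cluster via `spec.clientConfig.url`. The object name, secret name, and host below are hypothetical; only the fields documented above are used:
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: external-postgres        # hypothetical name
+  namespace: demo
+spec:
+  type: kubedb.com/postgres
+  secret:
+    name: external-postgres-auth # must contain POSTGRES_USER & POSTGRES_PASSWORD
+  clientConfig:
+    # assumed external host; no credentials, fragments, or query parameters allowed in the URL
+    url: "postgresql://postgres.example.com:5432/"
+```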
diff --git a/content/docs/v2024.1.31/guides/postgres/concepts/catalog.md b/content/docs/v2024.1.31/guides/postgres/concepts/catalog.md
new file mode 100644
index 0000000000..14a7660810
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/concepts/catalog.md
@@ -0,0 +1,113 @@
+---
+title: PostgresVersion CRD
+menu:
+  docs_v2024.1.31:
+    identifier: pg-catalog-concepts
+    name: PostgresVersion
+    parent: pg-concepts-postgres
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PostgresVersion
+
+## What is PostgresVersion
+
+`PostgresVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for a [PostgreSQL](https://www.postgresql.org/) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `PostgresVersion` custom resource will be created automatically for every supported PostgreSQL version. You have to specify the name of the `PostgresVersion` crd in the `spec.version` field of the [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres) crd. Then, KubeDB will use the docker images specified in the `PostgresVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This also allows users to use a custom image for the database. For more details about how to use a custom image with Postgres in KubeDB, please visit [here](/docs/v2024.1.31/guides/postgres/custom-versions/setup).
+
+## PostgresVersion Specification
+
+As with all other Kubernetes objects, a PostgresVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: PostgresVersion
+metadata:
+  name: "13.13"
+spec:
+  coordinator:
+    image: kubedb/pg-coordinator:v0.1.0
+  db:
+    image: postgres:13.2-alpine
+  distribution: PostgreSQL
+  exporter:
+    image: prometheuscommunity/postgres-exporter:v0.9.0
+  initContainer:
+    image: kubedb/postgres-init:0.1.0
+  podSecurityPolicies:
+    databasePolicyName: postgres-db
+  stash:
+    addon:
+      backupTask:
+        name: postgres-backup-13.1
+      restoreTask:
+        name: postgres-restore-13.1
+  version: "13.13"
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `PostgresVersion` crd. You have to specify this name in the `spec.version` field of the [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres) crd.
+
+We follow this convention for naming PostgresVersion crd:
+- Name format: `{Original PostgreSQL image version}-{modification tag}`
+
+We modify the original PostgreSQL docker image to support additional features like WAL archiving, clustering etc. and re-tag the image with a v1, v2 etc. modification tag. An image with a higher modification tag will have more features than the images with lower modification tags. Hence, it is recommended to use the PostgresVersion crd with the highest modification tag to take advantage of the latest features.
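+
+You can list the `PostgresVersion` objects available in your cluster with `kubectl`; the names in the output are what you put in the `spec.version` field of a `Postgres` crd (the exact list depends on the installed catalog):
+
+```bash
+# list all PostgresVersion objects installed by the catalog
+$ kubectl get postgresversions
+```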
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the PostgreSQL database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator. For example, we have modified the `kubedb/postgres:10.2` docker image to support custom configuration and re-tagged it as `kubedb/postgres:10.2-v2`. Now, KubeDB `0.9.0-rc.0` supports providing custom configuration which requires the `kubedb/postgres:10.2-v2` docker image. So, we have marked `kubedb/postgres:10.2` as deprecated in KubeDB `0.9.0-rc.0`.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will not create the database and other respective resources for this version.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected PostgreSQL database.
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
+
+### spec.tools.image
+
+`spec.tools.image` is a required field that specifies the image which will be used to take backups and initialize the database from a snapshot.
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you use a custom policy, its name also has to be passed to the KubeDB installer, as shown below:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Postgres crd [here](/docs/v2024.1.31/guides/postgres/concepts/postgres).
+- Deploy your first PostgreSQL database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/postgres/quickstart/quickstart).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/concepts/opsrequest.md b/content/docs/v2024.1.31/guides/postgres/concepts/opsrequest.md
new file mode 100644
index 0000000000..9ab36eefb9
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/concepts/opsrequest.md
@@ -0,0 +1,248 @@
+---
+title: PostgresOpsRequests CRD
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-concepts-opsrequest
+    name: PostgresOpsRequest
+    parent: pg-concepts-postgres
+    weight: 25
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# PostgresOpsRequest
+
+## What is PostgresOpsRequest
+
+`PostgresOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Postgres](https://www.postgresql.org/) administrative operations like database version updates, horizontal scaling, vertical scaling, etc. in a Kubernetes native way.
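+
+As a first taste of the shape of these objects, here is a minimal sketch that only restarts the database pods. The object name is hypothetical; `Restart` is one of the supported `spec.type` values described below:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PostgresOpsRequest
+metadata:
+  name: pg-restart      # hypothetical name
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: pg-group      # the Postgres object to operate on
+```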
+ +## PostgresOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `PostgresOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `PostgresOpsRequest` CRs for different administrative operations is given below, + +Sample `PostgresOpsRequest` for updating database: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-ops-update + namespace: demo +spec: + databaseRef: + name: pg-group + type: UpdateVersion + updateVersion: + targetVersion: 8.0.35 +status: + conditions: + - lastTransitionTime: "2020-06-11T09:59:05Z" + message: The controller has scaled/updated the Postgres successfully + observedGeneration: 3 + reason: OpsRequestSuccessful + status: "True" + type: Successful + observedGeneration: 3 + phase: Successful +``` + +Sample `PostgresOpsRequest` for horizontal scaling: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: myops + namespace: demo +spec: + databaseRef: + name: pg-group + type: HorizontalScaling + horizontalScaling: + replicas: 3 +status: + conditions: + - lastTransitionTime: "2020-06-11T09:59:05Z" + message: The controller has scaled/updated the Postgres successfully + observedGeneration: 3 + reason: OpsRequestSuccessful + status: "True" + type: Successful + observedGeneration: 3 + phase: Successful +``` + +Sample `PostgresOpsRequest` for vertical scaling: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: myops + namespace: demo +spec: + databaseRef: + name: pg-group + type: VerticalScaling + verticalScaling: + postgres: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" +status: + conditions: + - lastTransitionTime: "2020-06-11T09:59:05Z" + message: The controller has scaled/updated the Postgres successfully + observedGeneration: 3 + reason: OpsRequestSuccessful + status: "True" + type: Successful + observedGeneration: 3 + phase: Successful +``` + +Here, we are going to describe the various sections of a `PostgresOpsRequest` cr. + +### PostgresOpsRequest `Spec` + +A `PostgresOpsRequest` object has the following fields in the `spec` section. + +#### spec.databaseRef + +`spec.databaseRef` is a required field that point to the [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres) object where the administrative operations will be applied. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres) object. + +#### spec.type + +`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `PostgresOpsRequest`. + +- `Upgrade` / `UpdateVersion` +- `HorizontalScaling` +- `VerticalScaling` +- `volumeExpansion` +- `Restart` +- `Reconfigure` +- `ReconfigureTLS` + +>You can perform only one type of operation on a single `PostgresOpsRequest` CR. For example, if you want to update your database and scale up its replica then you have to create two separate `PostgresOpsRequest`. At first, you have to create a `PostgresOpsRequest` for updating. Once it is completed, then you can create another `PostgresOpsRequest` for scaling. You should not create two `PostgresOpsRequest` simultaneously. + +#### spec.updateVersion + +If you want to update your Postgres version, you have to specify the `spec.updateVersion` section that specifies the desired version information. 
#### spec.updateVersion

If you want to update your Postgres version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:

- `spec.updateVersion.targetVersion` refers to a [PostgresVersion](/docs/v2024.1.31/guides/postgres/concepts/catalog) CR that contains the Postgres version information to which you want to update.

>You can only update between Postgres versions. KubeDB does not support downgrades for Postgres.

#### spec.horizontalScaling

If you want to scale up or scale down your Postgres cluster, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-field:

- `spec.horizontalScaling.replicas` indicates the desired number of replicas for your Postgres cluster after scaling. For example, if your cluster currently has 4 replicas and you want to add 2 more, you have to specify 6 in the `spec.horizontalScaling.replicas` field. Similarly, if you want to remove one replica from the cluster, you have to specify 3 in the `spec.horizontalScaling.replicas` field.

#### spec.verticalScaling

`spec.verticalScaling` is a required field for a `VerticalScaling` operation. It specifies the information of `Postgres` resources like `cpu`, `memory`, etc. that will be scaled. This field consists of the following sub-fields:

- `spec.verticalScaling.postgres` indicates the `Postgres` server resources. It has the below structure:

```yaml
requests:
  memory: "200Mi"
  cpu: "0.1"
limits:
  memory: "300Mi"
  cpu: "0.2"
```

Here, when you specify the resource request for the `Postgres` container, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the `Postgres` container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

- `spec.verticalScaling.exporter` indicates the `exporter` container resources. It has the same structure as `spec.verticalScaling.postgres` and you can scale its resources the same way as for the `postgres` container.

>You can increase/decrease resources for both the `postgres` container and the `exporter` container in a single `PostgresOpsRequest` CR.

### PostgresOpsRequest `Status`

`.status` describes the current state and progress of the `PostgresOpsRequest` operation. It has the following fields:

#### status.phase

`status.phase` indicates the overall phase of the operation for this `PostgresOpsRequest`. It can have the following three values:

| Phase      | Meaning                                                                              |
| ---------- | ------------------------------------------------------------------------------------ |
| Successful | KubeDB has successfully performed the operation requested in the PostgresOpsRequest |
| Failed     | KubeDB has failed the operation requested in the PostgresOpsRequest                 |
| Denied     | KubeDB has denied the operation requested in the PostgresOpsRequest                 |

#### status.observedGeneration

`status.observedGeneration` shows the most recent generation observed by the `PostgresOpsRequest` controller.

#### status.conditions

`status.conditions` is an array that specifies the conditions of different steps of `PostgresOpsRequest` processing. Each condition entry has the following fields:

- `type` specifies the type of the condition.
PostgresOpsRequest has the following types of conditions:

| Type                | Meaning                                                                            |
|---------------------|------------------------------------------------------------------------------------|
| `Progressing`       | Specifies that the operation is now progressing                                   |
| `Successful`        | Specifies that the operation on the database has been successful                  |
| `HaltDatabase`      | Specifies that the database has been halted by the operator                       |
| `ResumeDatabase`    | Specifies that the database has been resumed by the operator                      |
| `Failure`           | Specifies that the operation on the database has failed                           |
| `Scaling`           | Specifies that a scaling operation on the database has started                    |
| `VerticalScaling`   | Specifies that vertical scaling has been performed successfully on the database   |
| `HorizontalScaling` | Specifies that horizontal scaling has been performed successfully on the database |
| `updating`          | Specifies that a database version updating operation has started                  |
| `UpdateVersion`     | Specifies that a version update on the database has been performed successfully   |

- The `status` field is a string, with possible values `"True"`, `"False"`, and `"Unknown"`.
  - `status` will be `"True"` if the current transition succeeded.
  - `status` will be `"False"` if the current transition failed.
  - `status` will be `"Unknown"` if the current transition was denied.
- The `message` field is a human-readable message indicating details about the condition.
- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition. It has the following possible values:

| Reason                                    | Meaning                                                                           |
|-------------------------------------------|------------------------------------------------------------------------------------|
| `OpsRequestProgressingStarted`            | Operator has started the OpsRequest processing                                   |
| `OpsRequestFailedToProgressing`           | Operator has failed to start the OpsRequest processing                           |
| `SuccessfullyHaltedDatabase`              | Database has been successfully halted by the operator                            |
| `FailedToHaltDatabase`                    | Operator has failed to halt the database                                          |
| `SuccessfullyResumedDatabase`             | Database has been successfully resumed to perform its usual operation            |
| `FailedToResumedDatabase`                 | Operator has failed to resume the database                                        |
| `DatabaseVersionUpdatingStarted`          | Operator has started updating the database version                               |
| `SuccessfullyUpdatedDatabaseVersion`      | Operator has successfully updated the database version                           |
| `FailedToUpdateDatabaseVersion`           | Operator has failed to update the database version                               |
| `HorizontalScalingStarted`                | Operator has started the horizontal scaling                                      |
| `SuccessfullyPerformedHorizontalScaling`  | Operator has successfully performed horizontal scaling                           |
| `FailedToPerformHorizontalScaling`        | Operator has failed to perform horizontal scaling                                |
| `VerticalScalingStarted`                  | Operator has started the vertical scaling                                        |
| `SuccessfullyPerformedVerticalScaling`    | Operator has successfully performed vertical scaling                             |
| `FailedToPerformVerticalScaling`          | Operator has failed to perform vertical scaling                                  |
| `OpsRequestProcessedSuccessfully`         | Operator has successfully completed the operation requested by the OpsRequest CR |

- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
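While an operation is in progress, the phase and conditions described above can be inspected directly with `kubectl`. A quick sketch, assuming an ops request named `myops` in the `demo` namespace:

```bash
# Overall phase of the operation (Successful / Failed / Denied)
kubectl get postgresopsrequest -n demo myops -o jsonpath='{.status.phase}'

# The full array of recorded conditions
kubectl get postgresopsrequest -n demo myops -o jsonpath='{.status.conditions}'

# Human-readable summary, including events
kubectl describe postgresopsrequest -n demo myops
```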
diff --git a/content/docs/v2024.1.31/guides/postgres/concepts/postgres.md b/content/docs/v2024.1.31/guides/postgres/concepts/postgres.md new file mode 100644 index 0000000000..0dd9b69b21 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/concepts/postgres.md @@ -0,0 +1,430 @@ +--- +title: Postgres CRD +menu: + docs_v2024.1.31: + identifier: pg-postgres-concepts + name: Postgres + parent: pg-concepts-postgres + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Postgres + +## What is Postgres + +`Postgres` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [PostgreSQL](https://www.postgresql.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a Postgres object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## Postgres Spec + +As with all other Kubernetes objects, a Postgres needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. + +Below is an example Postgres object. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: p1 + namespace: demo +spec: + version: "13.13" + replicas: 2 + standbyMode: Hot + streamingMode: Asynchronous + leaderElection: + leaseDurationSeconds: 15 + renewDeadlineSeconds: 10 + retryPeriodSeconds: 2 + authSecret: + name: p1-auth + storageType: "Durable" + storage: + storageClassName: standard + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + init: + script: + configMap: + name: pg-init-script + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + app: kubedb + interval: 10s + configSecret: + name: pg-custom-config + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + controller: + annotations: + passMe: ToStatefulSet + spec: + serviceAccountName: my-custom-sa + schedulerName: my-scheduler + nodeSelector: + disktype: ssd + imagePullSecrets: + - name: myregistrykey + env: + - name: POSTGRES_DB + value: pgdb + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + serviceTemplates: + - alias: primary + metadata: + annotations: + passMe: ToService + spec: + type: NodePort + ports: + - name: http + port: 5432 + - alias: standby + metadata: + annotations: + passMe: ToReplicaService + spec: + type: NodePort + ports: + - name: http + port: 5432 + terminationPolicy: "Halt" +``` + +### spec.version + +`spec.version` is a required field that specifies the name of the [PostgresVersion](/docs/v2024.1.31/guides/postgres/concepts/catalog) crd where the docker images are specified. 
Currently, when you install KubeDB, it creates the following `PostgresVersion` resources:

```bash
$ kubectl get pgversion
NAME       VERSION   DB_IMAGE                   DEPRECATED   AGE
10.2       10.2      kubedb/postgres:10.2       true         44m
10.2-v1    10.2      kubedb/postgres:10.2-v2    true         44m
10.2-v2    10.2      kubedb/postgres:10.2-v3                 44m
10.2-v3    10.2      kubedb/postgres:10.2-v4                 44m
10.2-v4    10.2      kubedb/postgres:10.2-v5                 44m
10.2-v5    10.2      kubedb/postgres:10.2-v6                 44m
10.6       10.6      kubedb/postgres:10.6                    44m
10.6-v1    10.6      kubedb/postgres:10.6-v1                 44m
10.6-v2    10.6      kubedb/postgres:10.6-v2                 44m
10.6-v3    10.6      kubedb/postgres:10.6-v3                 44m
11.1       11.1      kubedb/postgres:11.1                    44m
11.1-v1    11.1      kubedb/postgres:11.1-v1                 44m
11.1-v2    11.1      kubedb/postgres:11.1-v2                 44m
11.1-v3    11.1      kubedb/postgres:11.1-v3                 44m
11.2       11.2      kubedb/postgres:11.2                    44m
11.2-v1    11.2      kubedb/postgres:11.2-v1                 44m
9.6        9.6       kubedb/postgres:9.6        true         44m
9.6-v1     9.6       kubedb/postgres:9.6-v2     true         44m
9.6-v2     9.6       kubedb/postgres:9.6-v3                  44m
9.6-v3     9.6       kubedb/postgres:9.6-v4                  44m
9.6-v4     9.6       kubedb/postgres:9.6-v5                  44m
9.6-v5     9.6       kubedb/postgres:9.6-v6                  44m
9.6.7      9.6.7     kubedb/postgres:9.6.7      true         44m
9.6.7-v1   9.6.7     kubedb/postgres:9.6.7-v2   true         44m
9.6.7-v2   9.6.7     kubedb/postgres:9.6.7-v3                44m
9.6.7-v3   9.6.7     kubedb/postgres:9.6.7-v4                44m
9.6.7-v4   9.6.7     kubedb/postgres:9.6.7-v5                44m
9.6.7-v5   9.6.7     kubedb/postgres:9.6.7-v6                44m
```

### spec.replicas

`spec.replicas` specifies the total number of primary and standby nodes in the Postgres database cluster configuration. One pod is selected as the primary and the others act as standby replicas. KubeDB uses `PodDisruptionBudget` to ensure that the majority of the replicas remain available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions).

To learn more about how to set up an HA PostgreSQL cluster in KubeDB, please visit [here](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster).

### spec.standbyMode

`spec.standbyMode` is an optional field that specifies the standby mode (_Warm / Hot_) to use for standby replicas. In **hot standby** mode, standby replicas can accept connections and run read-only queries. In **warm standby** mode, standby replicas can't accept connections and are only used for replication purposes.

### spec.streamingMode

`spec.streamingMode` is an optional field that specifies the streaming mode (_Synchronous / Asynchronous_) of the standby replicas. KubeDB currently supports only **Asynchronous** streaming mode.

### spec.leaderElection

There are three fields under the Postgres CRD's `spec.leaderElection`. These values define how fast leader election can happen.

- `leaseDurationSeconds`: This is the duration in seconds that non-leader candidates will wait to force acquire leadership. This is measured against the time of the last observed ack. Default 15 sec.
- `renewDeadlineSeconds`: This is the duration in seconds that the acting master will retry refreshing leadership before giving up. Normally, LeaseDuration \* 2 / 3. Default 10 sec.
- `retryPeriodSeconds`: This is the duration in seconds the LeaderElector clients should wait between tries of actions. Normally, LeaseDuration / 3. Default 2 sec.

If the cluster machines are powerful, users can reduce these values. However, do not set them too low; otherwise, Postgres will restart very often.
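For example, on a fast and reliable network one might shorten these timers while keeping the ratios noted above; a sketch with illustrative (not recommended) values:

```yaml
spec:
  leaderElection:
    leaseDurationSeconds: 9
    renewDeadlineSeconds: 6   # ~ leaseDurationSeconds * 2/3
    retryPeriodSeconds: 3     # ~ leaseDurationSeconds / 3
```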
### spec.authSecret

`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `postgres` database. If not set, the KubeDB operator creates a new Secret with the name `{postgres-name}-auth` that holds the _username_ and _password_ for the `postgres` database.

If you want to use an existing or custom secret, please specify it when creating the Postgres object using `spec.authSecret.name`. This Secret should contain the superuser _username_ under the `POSTGRES_USER` key and the superuser _password_ under the `POSTGRES_PASSWORD` key. Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version >= 0.13.0).

Example:

```bash
$ kubectl create secret generic p1-auth -n demo \
--from-literal=POSTGRES_USER=not@user \
--from-literal=POSTGRES_PASSWORD=not@secret
secret "p1-auth" created
```

```bash
$ kubectl get secret -n demo p1-auth -o yaml
apiVersion: v1
data:
  POSTGRES_PASSWORD: bm90QHNlY3JldA==
  POSTGRES_USER: bm90QHVzZXI=
kind: Secret
metadata:
  creationTimestamp: 2018-09-03T11:25:39Z
  name: p1-auth
  namespace: demo
  resourceVersion: "1677"
  selfLink: /api/v1/namespaces/demo/secrets/p1-auth
  uid: 15b3e8a1-af6c-11e8-996d-0800270d7bae
type: Opaque
```

### spec.storageType

`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Postgres database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field.

### spec.storage

If you don't set `spec.storageType` to `Ephemeral`, then the `spec.storage` field is required. This field specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run the database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.

- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.

To learn how to configure `spec.storage`, please visit the links below:

- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

### spec.init

`spec.init` is an optional section that can be used to initialize a newly created Postgres database. A PostgreSQL database can be initialized in the following two ways:

1. Initialize from Script
2. Initialize from Snapshot

#### Initialize via Script

To initialize a PostgreSQL database using a script (shell script, db migrator, etc.), set the `spec.init.script` section when creating a Postgres object. `script` must have the following information:

- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): Where your script is loaded from.
Below is an example showing how a script from a configMap can be used to initialize a PostgreSQL database.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Postgres
metadata:
  name: postgres-db
  namespace: demo
spec:
  version: "13.13"
  init:
    script:
      configMap:
        name: pg-init-script
```

In the above example, Postgres will execute the provided script once the database is running. For a more detailed tutorial on how to initialize from a script, please visit [here](/docs/v2024.1.31/guides/postgres/initialization/script_source).

### spec.monitor

PostgreSQL managed by KubeDB can be monitored with built-in Prometheus and Prometheus operator out-of-the-box. To learn more,

- [Monitor PostgreSQL with builtin Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus)
- [Monitor PostgreSQL with Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator)

### spec.configSecret

`spec.configSecret` is an optional field that allows users to provide custom configuration for PostgreSQL. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). You can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk`, etc. To learn more about how to use a custom configuration file, see [here](/docs/v2024.1.31/guides/postgres/configuration/using-config-file).

### spec.podTemplate

KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the Postgres database.

KubeDB accepts the following fields to set in `spec.podTemplate`:

- metadata
  - annotations (pod's annotation)
- controller
  - annotations (statefulset's annotation)
- spec:
  - serviceAccountName
  - env
  - resources
  - initContainers
  - imagePullSecrets
  - nodeSelector
  - affinity
  - schedulerName
  - tolerations
  - priorityClassName
  - priority
  - securityContext
  - livenessProbe
  - readinessProbe
  - lifecycle

Usage of some of the fields of `spec.podTemplate` is described below.

#### spec.podTemplate.spec.serviceAccountName

`serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.

If this field is left empty, the KubeDB operator will create a service account with a name matching the Postgres crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.

If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.

If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/postgres/custom-rbac/using-custom-rbac) to grant the necessary permissions in this scenario.
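For instance, to run the database pods under a service account you manage yourself (mirroring the `my-custom-sa` name used in the example at the top of this page), a sketch would be:

```yaml
spec:
  podTemplate:
    spec:
      serviceAccountName: my-custom-sa   # an existing ServiceAccount; you grant its RBAC permissions yourself
```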
#### spec.podTemplate.spec.env

`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Postgres docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/_/postgres/).

Note that the KubeDB operator does not allow the `POSTGRES_USER` and `POSTGRES_PASSWORD` environment variables to be set in `spec.podTemplate.spec.env`. If you want to set the superuser _username_ and _password_, please use `spec.authSecret` as described earlier.

If you try to set the `POSTGRES_USER` or `POSTGRES_PASSWORD` environment variable in the Postgres crd, the KubeDB operator will reject the request with the following error:

```ini
Error from server (Forbidden): error when creating "./postgres.yaml": admission webhook "postgres.validators.kubedb.com" denied the request: environment variable POSTGRES_PASSWORD is forbidden to use in Postgres spec
```

Also, note that KubeDB does not allow updating the environment variables, as updating them does not have any effect once the database is created. If you try to update the environment variables, the KubeDB operator will reject the request with the following error:

```ini
Error from server (BadRequest): error when applying patch:
...
for: "./postgres.yaml": admission webhook "postgres.validators.kubedb.com" denied the request: precondition failed for:
...
At least one of the following was changed:
    apiVersion
    kind
    name
    namespace
    spec.standby
    spec.streaming
    spec.authSecret
    spec.storageType
    spec.storage
    spec.podTemplate.spec.nodeSelector
    spec.init
```

#### spec.podTemplate.spec.imagePullSecrets

`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling the docker image if you are using a private docker registry. For more details on how to use a private docker registry, please visit [here](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry).

#### spec.podTemplate.spec.nodeSelector

`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).

#### spec.podTemplate.spec.resources

`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).

### spec.serviceTemplate

KubeDB creates two different services for each Postgres instance. One of them is a master service named `<database-name>` that points to the Postgres `Primary` pod/node. The other one is a replica service named `<database-name>-replicas` that points to the Postgres `replica` pods/nodes.

These `master` and `replica` services can be customized using [spec.serviceTemplate](#specservicetemplate) and [spec.replicaServiceTemplate](#specreplicaservicetemplate) respectively.

You can provide a template for the `master` service using `spec.serviceTemplate`. This will allow you to set the type and other properties of the service. If `spec.serviceTemplate` is not provided, KubeDB will create a `master` service of type `ClusterIP` with minimal settings.
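For example, a sketch that exposes the primary service through a LoadBalancer, following the alias-based `serviceTemplates` syntax from the example at the top of this page:

```yaml
spec:
  serviceTemplates:
  - alias: primary
    metadata:
      annotations:
        passMe: ToService
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 5432
```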
KubeDB allows the following fields to be set in `spec.serviceTemplate`:

- metadata:
  - annotations
- spec:
  - type
  - ports
  - clusterIP
  - externalIPs
  - loadBalancerIP
  - loadBalancerSourceRanges
  - externalTrafficPolicy
  - healthCheckNodePort
  - sessionAffinityConfig

See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.

### spec.replicaServiceTemplate

You can provide a template for the `replica` service using `spec.replicaServiceTemplate`. If `spec.replicaServiceTemplate` is not provided, KubeDB will create a `replica` service of type `ClusterIP` with minimal settings.

The fields of `spec.replicaServiceTemplate` are similar to those of `spec.serviceTemplate`, that is:

- metadata:
  - annotations
- spec:
  - type
  - ports
  - clusterIP
  - externalIPs
  - loadBalancerIP
  - loadBalancerSourceRanges
  - externalTrafficPolicy
  - healthCheckNodePort

See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.

### spec.terminationPolicy

`terminationPolicy` gives you the flexibility to `nullify` (reject) the delete operation on the `Postgres` crd, or to control which resources KubeDB should keep or delete when you delete the `Postgres` crd. KubeDB provides the following four termination policies:

- DoNotTerminate
- Halt
- Delete (`Default`)
- WipeOut

When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to provide safety from accidental deletion of the database. If the admission webhook is enabled, KubeDB prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.

The following table shows what KubeDB does when you delete the Postgres crd for different termination policies:

| Behavior                                  | DoNotTerminate | Halt     | Delete   | WipeOut  |
| ----------------------------------------- | :------------: | :------: | :------: | :------: |
| 1. Block Delete operation                 | ✓              | ✗        | ✗        | ✗        |
| 2. Create Dormant Database                | ✗              | ✓        | ✗        | ✗        |
| 3. Delete StatefulSet                     | ✗              | ✓        | ✓        | ✓        |
| 4. Delete Services                        | ✗              | ✓        | ✓        | ✓        |
| 5. Delete PVCs                            | ✗              | ✗        | ✓        | ✓        |
| 6. Delete Secrets                         | ✗              | ✗        | ✗        | ✓        |
| 7. Delete Snapshots                       | ✗              | ✗        | ✗        | ✓        |
| 8. Delete Snapshot data from bucket       | ✗              | ✗        | ✗        | ✓        |

If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.

## Next Steps

- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/v2024.1.31/guides/postgres/README).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/configuration/_index.md b/content/docs/v2024.1.31/guides/postgres/configuration/_index.md
new file mode 100755
index 0000000000..f7e0f0fa2b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/configuration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run PostgreSQL with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: pg-configuration
+    name: Custom Configuration
+    parent: pg-postgres-guides
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/configuration/using-config-file.md b/content/docs/v2024.1.31/guides/postgres/configuration/using-config-file.md
new file mode 100644
index 0000000000..e0fbf3b473
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/configuration/using-config-file.md
@@ -0,0 +1,222 @@
+---
+title: Run PostgreSQL with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: pg-using-config-file-configuration
+    name: Config File
+    parent: pg-configuration
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Using Custom Configuration File

KubeDB supports providing custom configuration for PostgreSQL. This tutorial will show you how to use KubeDB to run a PostgreSQL database with custom configuration.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create ns demo
namespace/demo created
```

> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Overview

PostgreSQL allows configuring the database via **Configuration File**, **SQL**, and **Shell**. The most common way is to edit the configuration file `postgresql.conf`. When the PostgreSQL docker image starts, it uses the configuration specified in the `postgresql.conf` file. This file can have an `include` directive, which allows including configuration from other files. One of these `include` directives is `include_if_exists`, which accepts a file reference. If the referenced file exists, configuration is included from that file. Otherwise, the default configuration is used. KubeDB takes advantage of this feature to allow users to provide their custom configuration. To know more about configuring PostgreSQL, see [here](https://www.postgresql.org/docs/current/static/runtime-config.html).

At first, you have to create a config file named `user.conf` with your desired configuration.
Then you have to put this file into a [volume](https://kubernetes.io/docs/concepts/storage/volumes/). You have to specify this volume in `spec.configSecret` section while creating Postgres crd. KubeDB will mount this volume into `/etc/config/` directory of the database pod which will be referenced by `include_if_exists` directive. + +In this tutorial, we will configure `max_connections` and `shared_buffers` via a custom config file. We will use Secret as volume source. + +## Custom Configuration + +At first, let's create `user.conf` file setting `max_connections` and `shared_buffers` parameters. + +```ini +$ cat user.conf +max_connections=300 +shared_buffers=256MB +``` + +> Note that config file name must be `user.conf` + +Now, create a Secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo pg-configuration --from-literal=user.conf="$(curl -fsSL https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/configuration/user.conf)" +secret/pg-configuration created +``` + +Verify the Secret has the configuration file. + +```yaml +$ kubectl get secret -n demo pg-configuration -o yaml +apiVersion: v1 +stringData: + user.conf: |- + max_connections=300 + shared_buffers=256MB +kind: Secret +metadata: + creationTimestamp: "2019-02-07T12:08:26Z" + name: pg-configuration + namespace: demo + resourceVersion: "44214" + selfLink: /api/v1/namespaces/demo/secrets/pg-configuration + uid: 131b321f-2ad1-11e9-9d44-080027154f61 +``` + +Now, create Postgres crd specifying `spec.configSecret` field. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/configuration/pg-configuration.yaml +postgres.kubedb.com/custom-postgres created +``` + +Below is the YAML for the Postgres crd we just created. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: custom-postgres + namespace: demo +spec: + version: "13.13" + configSecret: + name: pg-configuration + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Now, wait a few minutes. KubeDB operator will create necessary PVC, statefulset, services, secret etc. If everything goes well, we will see that a pod with the name `custom-postgres-0` has been created. + +Check that the statefulset's pod is running + +```bash +$ kubectl get pod -n demo custom-postgres-0 +NAME READY STATUS RESTARTS AGE +custom-postgres-0 1/1 Running 0 14m +``` + +Check the pod's log to see if the database is ready + +```bash +$ kubectl logs -f -n demo custom-postgres-0 +I0705 12:05:51.697190 1 logs.go:19] FLAG: --alsologtostderr="false" +I0705 12:05:51.717485 1 logs.go:19] FLAG: --enable-analytics="true" +I0705 12:05:51.717543 1 logs.go:19] FLAG: --help="false" +I0705 12:05:51.717558 1 logs.go:19] FLAG: --log_backtrace_at=":0" +I0705 12:05:51.717566 1 logs.go:19] FLAG: --log_dir="" +I0705 12:05:51.717573 1 logs.go:19] FLAG: --logtostderr="false" +I0705 12:05:51.717581 1 logs.go:19] FLAG: --stderrthreshold="0" +I0705 12:05:51.717589 1 logs.go:19] FLAG: --v="0" +I0705 12:05:51.717597 1 logs.go:19] FLAG: --vmodule="" +We want "custom-postgres-0" as our leader +I0705 12:05:52.753464 1 leaderelection.go:175] attempting to acquire leader lease demo/custom-postgres-leader-lock... 
+I0705 12:05:52.822093 1 leaderelection.go:184] successfully acquired lease demo/custom-postgres-leader-lock +Got leadership, now do your jobs +Running as Primary +sh: locale: not found + +WARNING: enabling "trust" authentication for local connections +You can change this by editing pg_hba.conf or using the option -A, or +--auth-local and --auth-host, the next time you run initdb. +ALTER ROLE +/scripts/primary/start.sh: ignoring /var/initdb/* + +LOG: database system was shut down at 2018-07-05 12:07:51 UTC +LOG: MultiXact member wraparound protections are now enabled +LOG: database system is ready to accept connections +LOG: autovacuum launcher started +``` + +Once we see `LOG: database system is ready to accept connections` in the log, the database is ready. + +Now, we will check if the database has started with the custom configuration we have provided. We will `exec` into the pod and use [SHOW](https://www.postgresql.org/docs/9.6/static/sql-show.html) query to check the run-time parameters. + +```bash + $ kubectl exec -it -n demo custom-postgres-0 sh + / # + ## login as user "postgres". no authentication required from inside the pod because it is using trust authentication local connection. +/ # psql -U postgres +psql (9.6.7) +Type "help" for help. + +## query for "max_connections" +postgres=# SHOW max_connections; + max_connections +----------------- + 300 +(1 row) + +## query for "shared_buffers" +postgres=# SHOW shared_buffers; + shared_buffers +---------------- + 256MB +(1 row) + +## log out from database +postgres=# \q +/ # + +``` + +You can also connect to this database from pgAdmin and use following SQL query to check these configuration. + +```sql +SELECT name,setting +FROM pg_settings +WHERE name='max_connections' OR name='shared_buffers'; +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo pg/custom-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/custom-postgres + +kubectl delete -n demo secret pg-configuration +kubectl delete ns demo +``` + +If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README). + +## Next Steps + +- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash. +- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- Want to setup PostgreSQL cluster? Check how to [configure Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) +- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/postgres/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/postgres/custom-rbac/_index.md new file mode 100755 index 0000000000..013d8d5cfc --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/custom-rbac/_index.md @@ -0,0 +1,22 @@ +--- +title: Run PostgreSQL with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: pg-custom-rbac + name: Custom RBAC + parent: pg-postgres-guides + weight: 31 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/custom-rbac/using-custom-rbac.md b/content/docs/v2024.1.31/guides/postgres/custom-rbac/using-custom-rbac.md new file mode 100644 index 0000000000..f590a481c2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/custom-rbac/using-custom-rbac.md @@ -0,0 +1,405 @@ +--- +title: Run PostgreSQL with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: pg-custom-rbac-quickstart + name: Custom RBAC + parent: pg-custom-rbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom RBAC resources + +KubeDB (version 0.13.0 and higher) supports finer user control over role based access permissions provided to a PostgreSQL instance. This tutorial will show you how to use KubeDB to run PostgreSQL instance with custom RBAC resources. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for PostgreSQL. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in Postgres CRD. If this field is left empty, the KubeDB operator will create a service account name matching Postgres crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + +If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. 
If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually.

This guide will show you how to create a custom `Service Account`, `Role`, and `RoleBinding` for a PostgreSQL instance named `quick-postgres` to provide the bare minimum access permissions.

## Custom RBAC for PostgreSQL

At first, let's create a `Service Account` in the `demo` namespace.

```bash
$ kubectl create serviceaccount -n demo my-custom-serviceaccount
serviceaccount/my-custom-serviceaccount created
```

It should create a service account.

```yaml
$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-05-30T04:23:39Z"
  name: my-custom-serviceaccount
  namespace: demo
  resourceVersion: "21657"
  selfLink: /api/v1/namespaces/demo/serviceaccounts/myserviceaccount
  uid: b2ec2b05-8292-11e9-8d10-080027a8b217
secrets:
- name: myserviceaccount-token-t8zxd
```

Now, we need to create a role that has the necessary access permissions for the PostgreSQL database named `quick-postgres`.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/custom-rbac/pg-custom-role.yaml
role.rbac.authorization.k8s.io/my-custom-role created
```

Below is the YAML for the Role we just created.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-custom-role
  namespace: demo
rules:
- apiGroups:
  - apps
  resourceNames:
  - quick-postgres
  resources:
  - statefulsets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - patch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resourceNames:
  - quick-postgres-leader-lock
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - policy
  resourceNames:
  - postgres-db
  resources:
  - podsecuritypolicies
  verbs:
  - use
```

Please note that the resourceNames `quick-postgres` and `quick-postgres-leader-lock` are unique to the `quick-postgres` PostgreSQL instance. Another database, `quick-postgres-2` for example, will require these resourceNames to be `quick-postgres-2` and `quick-postgres-2-leader-lock`.

Now create a `RoleBinding` to bind this `Role` with the already created service account.

```bash
$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created
```

It should bind `my-custom-role` and `my-custom-serviceaccount` successfully.

```yaml
$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2019-05-30T04:54:56Z"
  name: my-custom-rolebinding
  namespace: demo
  resourceVersion: "23944"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/rolebindings/my-custom-rolebinding
  uid: 123afc02-8297-11e9-8d10-080027a8b217
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-custom-role
subjects:
- kind: ServiceAccount
  name: my-custom-serviceaccount
  namespace: demo
```

Now, create a Postgres CRD setting the `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`.
+ +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/custom-rbac/pg-custom-db.yaml +postgres.kubedb.com/quick-postgres created +``` + +Below is the YAML for the Postgres crd we just created. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres + namespace: demo + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres +spec: + version: "13.13" + storageType: Durable + podTemplate: + spec: + serviceAccountName: my-custom-serviceaccount + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Mi + +``` + +Now, wait a few minutes. the KubeDB operator will create necessary PVC, statefulset, services, secret etc. If everything goes well, we should see that a pod with the name `quick-postgres-0` has been created. + +Check that the statefulset's pod is running + +```bash +$ kubectl get pod -n demo quick-postgres-0 +NAME READY STATUS RESTARTS AGE +quick-postgres-0 1/1 Running 0 14m +``` + +Check the pod's log to see if the database is ready + +```bash +$ kubectl logs -f -n demo quick-postgres-0 +I0705 12:05:51.697190 1 logs.go:19] FLAG: --alsologtostderr="false" +I0705 12:05:51.717485 1 logs.go:19] FLAG: --enable-analytics="true" +I0705 12:05:51.717543 1 logs.go:19] FLAG: --help="false" +I0705 12:05:51.717558 1 logs.go:19] FLAG: --log_backtrace_at=":0" +I0705 12:05:51.717566 1 logs.go:19] FLAG: --log_dir="" +I0705 12:05:51.717573 1 logs.go:19] FLAG: --logtostderr="false" +I0705 12:05:51.717581 1 logs.go:19] FLAG: --stderrthreshold="0" +I0705 12:05:51.717589 1 logs.go:19] FLAG: --v="0" +I0705 12:05:51.717597 1 logs.go:19] FLAG: --vmodule="" +We want "quick-postgres-0" as our leader +I0705 12:05:52.753464 1 leaderelection.go:175] attempting to acquire leader lease demo/quick-postgres-leader-lock... +I0705 12:05:52.822093 1 leaderelection.go:184] successfully acquired lease demo/quick-postgres-leader-lock +Got leadership, now do your jobs +Running as Primary +sh: locale: not found + +WARNING: enabling "trust" authentication for local connections +You can change this by editing pg_hba.conf or using the option -A, or +--auth-local and --auth-host, the next time you run initdb. +ALTER ROLE +/scripts/primary/start.sh: ignoring /var/initdb/* + +LOG: database system was shut down at 2018-07-05 12:07:51 UTC +LOG: MultiXact member wraparound protections are now enabled +LOG: database system is ready to accept connections +LOG: autovacuum launcher started +``` + +Once we see `LOG: database system is ready to accept connections` in the log, the database is ready. + +## Reusing Service Account + +An existing service account can be reused in another Postgres Database. However, users need to create a new Role specific to that Postgres and bind it to the existing service account so that all the necessary access permissions are available to run the new Postgres Database. + +For example, to reuse `my-custom-serviceaccount` in a new Database `minute-postgres`, create a role that has all the necessary access permissions for this PostgreSQl Database. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/custom-rbac/pg-custom-role-two.yaml +role.rbac.authorization.k8s.io/my-custom-role created +``` + +Below is the YAML for the Role we just created. 
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-custom-role-two
  namespace: demo
rules:
- apiGroups:
  - apps
  resourceNames:
  - minute-postgres
  resources:
  - statefulsets
  verbs:
  - get
- apiGroups:
  - ""
  resourceNames:
  - minute-postgres-leader-lock
  resources:
  - configmaps
  verbs:
  - get
  - update
```

Now create a `RoleBinding` to bind `my-custom-role-two` with the already created `my-custom-serviceaccount`.

```bash
$ kubectl create rolebinding my-custom-rolebinding-two --role=my-custom-role-two --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding-two created
```

Now, create the Postgres CRD `minute-postgres` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/custom-rbac/pg-custom-db-two.yaml
postgres.kubedb.com/minute-postgres created
```

Below is the YAML for the Postgres crd we just created.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Postgres
metadata:
  name: minute-postgres
  namespace: demo
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: minute-postgres
spec:
  version: "13.13"
  storageType: Durable
  podTemplate:
    spec:
      serviceAccountName: my-custom-serviceaccount
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Mi
```

Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret etc. If everything goes well, we should see that a pod with the name `minute-postgres-0` has been created.

Check that the statefulset's pod is running

```bash
$ kubectl get pod -n demo minute-postgres-0
NAME                READY     STATUS    RESTARTS   AGE
minute-postgres-0   1/1       Running   0          14m
```

Check the pod's log to see if the database is ready

```bash
$ kubectl logs -f -n demo minute-postgres-0
I0705 12:05:51.697190       1 logs.go:19] FLAG: --alsologtostderr="false"
I0705 12:05:51.717485       1 logs.go:19] FLAG: --enable-analytics="true"
I0705 12:05:51.717543       1 logs.go:19] FLAG: --help="false"
I0705 12:05:51.717558       1 logs.go:19] FLAG: --log_backtrace_at=":0"
I0705 12:05:51.717566       1 logs.go:19] FLAG: --log_dir=""
I0705 12:05:51.717573       1 logs.go:19] FLAG: --logtostderr="false"
I0705 12:05:51.717581       1 logs.go:19] FLAG: --stderrthreshold="0"
I0705 12:05:51.717589       1 logs.go:19] FLAG: --v="0"
I0705 12:05:51.717597       1 logs.go:19] FLAG: --vmodule=""
We want "minute-postgres-0" as our leader
I0705 12:05:52.753464       1 leaderelection.go:175] attempting to acquire leader lease demo/minute-postgres-leader-lock...
I0705 12:05:52.822093       1 leaderelection.go:184] successfully acquired lease demo/minute-postgres-leader-lock
Got leadership, now do your jobs
Running as Primary
sh: locale: not found

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
+ALTER ROLE +/scripts/primary/start.sh: ignoring /var/initdb/* + +LOG: database system was shut down at 2018-07-05 12:07:51 UTC +LOG: MultiXact member wraparound protections are now enabled +LOG: database system is ready to accept connections +LOG: autovacuum launcher started +``` + +`LOG: database system is ready to accept connections` in the log signifies that the database is running successfully. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo pg/quick-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/quick-postgres + +kubectl patch -n demo pg/minute-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/minute-postgres + +kubectl delete -n demo role my-custom-role +kubectl delete -n demo role my-custom-role-two + +kubectl delete -n demo rolebinding my-custom-rolebinding +kubectl delete -n demo rolebinding my-custom-rolebinding-two + +kubectl delete sa -n demo my-custom-serviceaccount + +kubectl delete ns demo +``` + +If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README). + +## Next Steps + +- Learn about [backup & restore](/docs/v2024.1.31/guides/postgres/backup/overview/) of PostgreSQL databases using Stash. +- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- Want to setup PostgreSQL cluster? Check how to [configure Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) +- Monitor your PostgreSQL instance with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL instance with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/postgres/custom-versions/_index.md b/content/docs/v2024.1.31/guides/postgres/custom-versions/_index.md new file mode 100644 index 0000000000..a8ef7ed462 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/custom-versions/_index.md @@ -0,0 +1,22 @@ +--- +title: PostgreSQL Custom Versions +menu: + docs_v2024.1.31: + identifier: pg-custom-versions-postgres + name: Custom Versions + parent: pg-postgres-guides + weight: 36 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/custom-versions/setup.md b/content/docs/v2024.1.31/guides/postgres/custom-versions/setup.md new file mode 100644 index 0000000000..4a86b5b2c0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/custom-versions/setup.md @@ -0,0 +1,117 @@ +--- +title: Setup Custom PostgresVersions +menu: + docs_v2024.1.31: + identifier: pg-custom-versions-setup-postgres + name: Overview + parent: pg-custom-versions-postgres + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? 
Please start [here](/docs/v2024.1.31/README). + +## Setting up Custom PostgresVersions + +PostgresVersions are KubeDB crds that define the docker images KubeDB will use when deploying a postgres database. For more details about PostgresVersion crd, please visit [here](/docs/v2024.1.31/guides/postgres/concepts/catalog). + +## Creating a Custom Postgres Database Image for KubeDB + +The best way to create a custom image is to build on top of the existing kubedb image. + +```docker +FROM kubedb/postgres:10.2-v3 + +ENV TIMESCALEDB_VERSION 0.9.1 + +RUN set -ex \ + && apk add --no-cache --virtual .fetch-deps \ + ca-certificates \ + openssl \ + tar \ + && mkdir -p /build/timescaledb \ + && wget -O /timescaledb.tar.gz https://github.com/timescale/timescaledb/archive/$TIMESCALEDB_VERSION.tar.gz \ + && tar -C /build/timescaledb --strip-components 1 -zxf /timescaledb.tar.gz \ + && rm -f /timescaledb.tar.gz \ + \ + && apk add --no-cache --virtual .build-deps \ + coreutils \ + dpkg-dev dpkg \ + gcc \ + libc-dev \ + make \ + cmake \ + util-linux-dev \ + \ + && cd /build/timescaledb \ + && ./bootstrap \ + && cd build && make install \ + && cd ~ \ + \ + && apk del .fetch-deps .build-deps \ + && rm -rf /build + +RUN sed -r -i "s/[#]*\s*(shared_preload_libraries)\s*=\s*'(.*)'/\1 = 'timescaledb,\2'/;s/,'/'/" /scripts/primary/postgresql.conf +``` + +From there, we would define a PostgresVersion that contains this new image. Let's say we tagged it as `myco/postgres:timescale-0.9.1` + +```yaml +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PostgresVersion +metadata: + name: timescaledb-2.1.0-pg13 +spec: + coordinator: + image: kubedb/pg-coordinator:v0.8.0 + db: + image: timescale/timescaledb:2.1.0-pg13-oss + distribution: TimescaleDB + exporter: + image: prometheuscommunity/postgres-exporter:v0.9.0 + initContainer: + image: kubedb/postgres-init:0.4.0 + podSecurityPolicies: + databasePolicyName: postgres-db + securityContext: + runAsAnyNonRoot: true + runAsUser: 70 + stash: + addon: + backupTask: + name: postgres-backup-13.1 + restoreTask: + name: postgres-restore-13.1 + version: "13.13" +``` + +Once we add this PostgresVersion we can use it in a new Postgres like: + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: timescale-postgres + namespace: demo +spec: + version: "timescaledb-2.1.0-pg13" # points to the name of our custom PostgresVersion + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` diff --git a/content/docs/v2024.1.31/guides/postgres/initialization/_index.md b/content/docs/v2024.1.31/guides/postgres/initialization/_index.md new file mode 100755 index 0000000000..9e9d161bae --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/initialization/_index.md @@ -0,0 +1,22 @@ +--- +title: PostgreSQL Initialization +menu: + docs_v2024.1.31: + identifier: pg-initialization-postgres + name: Initialization + parent: pg-postgres-guides + weight: 41 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/initialization/script_source.md b/content/docs/v2024.1.31/guides/postgres/initialization/script_source.md new file mode 100644 index 0000000000..f86d12a2c8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/initialization/script_source.md @@ -0,0 
+1,252 @@
+---
+title: Initialize Postgres using Script Source
+menu:
+  docs_v2024.1.31:
+    identifier: pg-script-source-initialization
+    name: Using Script
+    parent: pg-initialization-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Initialize PostgreSQL with Script

KubeDB supports PostgreSQL database initialization. This tutorial will show you how to use KubeDB to initialize a PostgreSQL database from a script.

## Before You Begin

At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.

```bash
$ kubectl create ns demo
namespace/demo created

$ kubectl get ns demo
NAME    STATUS    AGE
demo    Active    5s
```

> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Prepare Initialization Scripts

PostgreSQL supports initialization with `.sh`, `.sql` and `.sql.gz` files. In this tutorial, we will use the `data.sql` script from the [postgres-init-scripts](https://github.com/kubedb/postgres-init-scripts.git) git repository to create a TABLE `dashboard` in the `data` Schema.

We will use a ConfigMap as the script source. You can use any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes) as the script source.

At first, we will create a ConfigMap from the `data.sql` file. Then, we will provide this ConfigMap as the script source in `init.script` of the Postgres crd spec.

Let's create a ConfigMap with the initialization script:

```bash
$ kubectl create configmap -n demo pg-init-script \
--from-literal=data.sql="$(curl -fsSL https://raw.githubusercontent.com/kubedb/postgres-init-scripts/master/data.sql)"
configmap/pg-init-script created
```

## Create PostgreSQL with script source

The following YAML describes the Postgres object with `init.script`:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Postgres
metadata:
  name: script-postgres
  namespace: demo
spec:
  version: "13.13"
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  init:
    script:
      configMap:
        name: pg-init-script
```

Here,

- `init.script` specifies the scripts used to initialize the database when it is being created.

The VolumeSource provided in `init.script` will be mounted in the Pod and executed while creating PostgreSQL.

Now, let's create the Postgres crd whose YAML is shown above:

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/initialization/script-postgres.yaml
postgres.kubedb.com/script-postgres created
```

Now, wait until Postgres goes into the `Running` state.
Verify that the database is in the `Running` state using the following command,
+
+```bash
+$ kubectl get pg -n demo script-postgres
+NAME              VERSION   STATUS    AGE
+script-postgres   10.2-v5   Running   39s
+```
+
+You can use the `kubectl dba describe` command to view which resources have been created by KubeDB for this Postgres object.
+
+```bash
+$ kubectl dba describe pg -n demo script-postgres
+Name:               script-postgres
+Namespace:          demo
+CreationTimestamp:  Fri, 21 Sep 2018 15:53:27 +0600
+Labels:             <none>
+Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"Postgres","metadata":{"annotations":{},"name":"script-postgres","namespace":"demo"},"spec":{"init":{"script...
+Replicas:           1  total
+Status:             Running
+Init:
+  script:
+Volume:
+    Type:      ConfigMap (a volume populated by a ConfigMap)
+    Name:      pg-init-script
+    Optional:  false
+  StorageType:  Durable
+Volume:
+  StorageClass:  standard
+  Capacity:      1Gi
+  Access Modes:  RWO
+
+StatefulSet:
+  Name:               script-postgres
+  CreationTimestamp:  Fri, 21 Sep 2018 15:53:28 +0600
+  Labels:             app.kubernetes.io/name=postgreses.kubedb.com
+                      app.kubernetes.io/instance=script-postgres
+  Annotations:        <none>
+  Replicas:           824638467136 desired | 1 total
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:         script-postgres
+  Labels:       app.kubernetes.io/name=postgreses.kubedb.com
+                app.kubernetes.io/instance=script-postgres
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.108.14.12
+  Port:         api  5432/TCP
+  TargetPort:   api/TCP
+  Endpoints:    192.168.1.31:5432
+
+Service:
+  Name:         script-postgres-replicas
+  Labels:       app.kubernetes.io/name=postgreses.kubedb.com
+                app.kubernetes.io/instance=script-postgres
+  Annotations:  <none>
+  Type:         ClusterIP
+  IP:           10.110.102.203
+  Port:         api  5432/TCP
+  TargetPort:   api/TCP
+  Endpoints:    192.168.1.31:5432
+
+Database Secret:
+  Name:         script-postgres-auth
+  Labels:       app.kubernetes.io/name=postgreses.kubedb.com
+                app.kubernetes.io/instance=script-postgres
+  Annotations:  <none>
+
+Type:  Opaque
+
+Data
+====
+  POSTGRES_PASSWORD:  16 bytes
+  POSTGRES_USER:      8 bytes
+
+Topology:
+  Type     Pod                StartTime                      Phase
+  ----     ---                ---------                      -----
+  primary  script-postgres-0  2018-09-21 15:53:28 +0600 +06  Running
+
+No Snapshots.
+
+Events:
+  Type    Reason      Age   From               Message
+  ----    ------      ----  ----               -------
+  Normal  Successful  1m    Postgres operator  Successfully created Service
+  Normal  Successful  1m    Postgres operator  Successfully created Service
+  Normal  Successful  57s   Postgres operator  Successfully created StatefulSet
+  Normal  Successful  57s   Postgres operator  Successfully created Postgres
+  Normal  Successful  57s   Postgres operator  Successfully patched StatefulSet
+  Normal  Successful  57s   Postgres operator  Successfully patched Postgres
+  Normal  Successful  57s   Postgres operator  Successfully patched StatefulSet
+  Normal  Successful  57s   Postgres operator  Successfully patched Postgres
+```
+
+## Verify Initialization
+
+Now, let's connect to our Postgres `script-postgres` using the pgAdmin tool that we installed in the [quickstart](/docs/v2024.1.31/guides/postgres/quickstart/quickstart#before-you-begin) tutorial to verify that the database has been initialized successfully. If you prefer the command line, you can also run the check directly from inside the pod, as sketched below.
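+The following is one way to do that from the terminal (a sketch, assuming the pod name `script-postgres-0` and the default `postgres` superuser):
+
+```bash
+# run the verification query directly inside the database pod
+$ kubectl exec -it -n demo script-postgres-0 -- psql -U postgres -c "select tablename from pg_catalog.pg_tables where schemaname = 'data';"
+```
+
+Otherwise, use the following connection information with pgAdmin.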
+
+**Connection Information:**
+
+- Host name/address: you can use any of these
+  - Service: `script-postgres.demo`
+  - Pod IP: (`$ kubectl get pods script-postgres-0 -n demo -o yaml | grep podIP`)
+- Port: `5432`
+- Maintenance database: `postgres`
+
+- Username: Run the following command to get the *username*,
+
+  ```bash
+  $ kubectl get secrets -n demo script-postgres-auth -o jsonpath='{.data.\POSTGRES_USER}' | base64 -d
+  postgres
+  ```
+
+- Password: Run the following command to get the *password*,
+
+  ```bash
+  $ kubectl get secrets -n demo script-postgres-auth -o jsonpath='{.data.\POSTGRES_PASSWORD}' | base64 -d
+  NC1fEq0q5XqHazB8
+  ```
+
+In PostgreSQL, run the following query against `pg_catalog.pg_tables` to confirm initialization.
+
+```sql
+select * from pg_catalog.pg_tables where schemaname = 'data';
+```
+
+| schemaname | tablename | tableowner | hasindexes | hasrules | hastriggers | rowsecurity |
+| ---------- | --------- | ---------- | ---------- | -------- | ----------- | ----------- |
+| data       | dashboard | postgres   | true       | false    | false       | false       |
+
+We can see the table `dashboard` in the `data` schema, which was created through initialization.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo pg/script-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo pg/script-postgres
+
+$ kubectl delete -n demo configmap/pg-init-script
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash.
+- Want to set up a PostgreSQL cluster? Check how to [configure a Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/monitoring/_index.md b/content/docs/v2024.1.31/guides/postgres/monitoring/_index.md
new file mode 100755
index 0000000000..bef368e31c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/monitoring/_index.md
@@ -0,0 +1,22 @@
+---
+title: Monitoring PostgreSQL
+menu:
+  docs_v2024.1.31:
+    identifier: pg-monitoring-postgres
+    name: Monitoring
+    parent: pg-postgres-guides
+    weight: 40
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/monitoring/overview.md b/content/docs/v2024.1.31/guides/postgres/monitoring/overview.md
new file mode 100644
index 0000000000..0227743f96
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/monitoring/overview.md
@@ -0,0 +1,117 @@
+---
+title: PostgreSQL Monitoring Overview
+description: PostgreSQL Monitoring Overview
+menu:
+  docs_v2024.1.31:
+    identifier: pg-monitoring-overview
+    name: Overview
+    parent: pg-monitoring-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PostgreSQL with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure a database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+*Figure: Database Monitoring Flow*
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter container as a sidecar in the database pod. It also creates a dedicated stats service named `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                               | Type       | Uses                                                                                                                                     |
+| --------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                                | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`             | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`             | `Optional` | Arguments to pass to the exporter sidecar.                                                                                              |
+| `spec.monitor.prometheus.exporter.env`              | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
+| `spec.monitor.prometheus.exporter.resources`        | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
+| `spec.monitor.prometheus.exporter.securityContext`  | `Optional` | Security options the exporter should run with.                                                                                          |
+| `spec.monitor.prometheus.serviceMonitor.labels`     | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
+| `spec.monitor.prometheus.serviceMonitor.interval`   | `Optional` | Interval at which metrics should be scraped.                                                                                            |
+
+## Sample Configuration
+
+A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: databases
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  configSecret: # configure Redis to use password for authentication
+    name: redis-config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+          - --redis.password=$(REDIS_PASSWORD)
+        env:
+          - name: REDIS_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: _name_of_secret_with_redis_password
+                key: password # key with the password
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Assume that the above Redis server is configured to use password authentication. So, the exporter also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
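+A quick way to confirm that the `ServiceMonitor` was created with the expected label is a label-filtered list (a sketch, assuming the Prometheus operator's CRDs are installed; the exact `ServiceMonitor` name is generated by KubeDB):
+
+```bash
+# list ServiceMonitors carrying the label the Prometheus crd selects on
+$ kubectl get servicemonitor --all-namespaces -l release=prometheus
+```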
+
+## Next Steps
+
+- Learn how to monitor an Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator).
+- Learn how to monitor a PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Learn how to monitor a MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/).
+- Learn how to monitor a MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator).
+- Learn how to monitor a Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Learn how to monitor a Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator).
diff --git a/content/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus.md
new file mode 100644
index 0000000000..3b86591739
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus.md
@@ -0,0 +1,369 @@
+---
+title: Monitor PostgreSQL using Builtin Prometheus Discovery
+menu:
+  docs_v2024.1.31:
+    identifier: pg-using-builtin-prometheus-monitoring
+    name: Builtin Prometheus
+    parent: pg-monitoring-postgres
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PostgreSQL with builtin Prometheus
+
+This tutorial will show you how to monitor a PostgreSQL database using the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/postgres/monitoring/overview).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy PostgreSQL with Monitoring Enabled
+
+At first, let's deploy a PostgreSQL database with monitoring enabled. Below is the PostgreSQL object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: builtin-prom-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the PostgreSQL crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/monitoring/builtin-prom-postgres.yaml
+postgres.kubedb.com/builtin-prom-postgres created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get pg -n demo builtin-prom-postgres
+NAME                    VERSION   STATUS    AGE
+builtin-prom-postgres   10.2-v5   Running   1m
+```
+
+KubeDB will create a separate stats service named `{PostgreSQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-postgres"
+NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-postgres            ClusterIP   10.102.7.190     <none>        5432/TCP    87s
+builtin-prom-postgres-replicas   ClusterIP   10.100.103.146   <none>        5432/TCP    87s
+builtin-prom-postgres-stats      ClusterIP   10.102.128.153   <none>        56790/TCP   56s
+```
+
+Here, the `builtin-prom-postgres-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-postgres-stats
+Name:              builtin-prom-postgres-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=postgreses.kubedb.com
+                   app.kubernetes.io/instance=builtin-prom-postgres
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=builtin-prom-postgres
+Type:              ClusterIP
+IP:                10.102.128.153
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.14:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service.
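+Before wiring up Prometheus, you can optionally spot-check that the exporter is actually serving metrics (a sketch using port forwarding; the service name and port come from the previous step):
+
+```bash
+# forward the stats port of the service to your workstation
+$ kubectl port-forward -n demo svc/builtin-prom-postgres-stats 56790
+
+# in another terminal, fetch the raw metrics
+$ curl http://localhost:56790/metrics
+```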
We are going to configure a scraping job similar to the [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_configs is used to filter only the desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metrics path from the "prometheus.io/path" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add the service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add the service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add the stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_configs is used to filter only the desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metrics path from the "prometheus.io/path" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to grant the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server.
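+Before doing so, you can quickly confirm that the scrape configuration from the previous step is in place (a simple sanity check):
+
+```bash
+# the ConfigMap should exist in the monitoring namespace
+$ kubectl get configmap -n monitoring prometheus-config
+```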
We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in the `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-8568c86d86-95zhn   1/1     Running   0          77s
+```
+
+Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-8568c86d86-95zhn` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `builtin-prom-postgres-stats` service as one of the targets.
+
+*Figure: Prometheus Target*
+
+Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `Postgres` database `builtin-prom-postgres` through the stats service `builtin-prom-postgres-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+$ kubectl delete -n demo pg/builtin-prom-postgres
+
+$ kubectl delete -n monitoring deployment.apps/prometheus
+
+$ kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+$ kubectl delete -n monitoring serviceaccount/prometheus
+$ kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+$ kubectl delete ns demo
+$ kubectl delete ns monitoring
+```
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL databases using Stash.
+- Monitor your PostgreSQL database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Use [private Docker registry](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry) to deploy PostgreSQL with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..ab672d0573
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator.md
@@ -0,0 +1,285 @@
+---
+title: Monitor PostgreSQL using Prometheus Operator
+menu:
+  docs_v2024.1.31:
+    identifier: pg-using-prometheus-operator-monitoring
+    name: Prometheus Operator
+    parent: pg-monitoring-postgres
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Monitoring PostgreSQL Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes-native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor a PostgreSQL database deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/postgres/monitoring/overview).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Out Required Labels for ServiceMonitor
+
+We need to know the labels that a `Prometheus` crd uses to select `ServiceMonitor` objects. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the PostgreSQL crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE    NAME         AGE
+monitoring   prometheus   18m
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in the **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `monitoring` namespace.
+
+```yaml
+$ kubectl get prometheus -n monitoring prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"monitoring"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: 2019-01-03T13:41:51Z
+  generation: 1
+  labels:
+    prometheus: prometheus
+  name: prometheus
+  namespace: monitoring
+  resourceVersion: "44402"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus
+  uid: 5324ad98-0f5d-11e9-b230-080027f306f3
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, the `release: prometheus` label is used to select `ServiceMonitor` crds. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of the PostgreSQL crd.
+
+## Deploy PostgreSQL with Monitoring Enabled
+
+At first, let's deploy a PostgreSQL database with monitoring enabled. Below is the PostgreSQL object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: coreos-prom-postgres
+  namespace: demo
+spec:
+  version: "13.13"
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database at a 10-second interval.
+
+Let's create the PostgreSQL object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/monitoring/coreos-prom-postgres.yaml
+postgres.kubedb.com/coreos-prom-postgres created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get pg -n demo coreos-prom-postgres
+NAME                   VERSION   STATUS    AGE
+coreos-prom-postgres   10.2-v5   Running   38s
+```
+
+KubeDB will create a separate stats service named `{PostgreSQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-postgres"
+NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-postgres            ClusterIP   10.107.102.123   <none>        5432/TCP    58s
+coreos-prom-postgres-replicas   ClusterIP   10.109.11.171    <none>        5432/TCP    58s
+coreos-prom-postgres-stats      ClusterIP   10.110.218.172   <none>        56790/TCP   51s
+```
+
+Here, the `coreos-prom-postgres-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```bash
+$ kubectl describe svc -n demo coreos-prom-postgres-stats
+Name:              coreos-prom-postgres-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=postgreses.kubedb.com
+                   app.kubernetes.io/instance=coreos-prom-postgres
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=coreos-prom-postgres
+Type:              ClusterIP
+IP:                10.110.218.172
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.7:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `monitoring` namespace that selects the endpoints of the `coreos-prom-postgres-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n monitoring
+NAME                               AGE
+kubedb-demo-coreos-prom-postgres   1m
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the PostgreSQL crd.
+
+```yaml
+$ kubectl get servicemonitor -n monitoring kubedb-demo-coreos-prom-postgres -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: 2019-01-03T15:47:08Z
+  generation: 1
+  labels:
+    release: prometheus
+    monitoring.appscode.com/service: coreos-prom-postgres-stats.demo
+  name: kubedb-demo-coreos-prom-postgres
+  namespace: monitoring
+  resourceVersion: "53969"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubedb-demo-coreos-prom-postgres
+  uid: d3c419ad-0f6e-11e9-b230-080027f306f3
+spec:
+  endpoints:
+  - honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: postgreses.kubedb.com
+      app.kubernetes.io/instance: coreos-prom-postgres
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the PostgreSQL crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-postgres-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          63m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `prom-http` endpoint of the `coreos-prom-postgres-stats` service as one of the targets.
+
+*Figure: Prometheus Target*
+
+Check the `endpoint` and `service` labels marked by the red rectangle. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+# cleanup database
+kubectl delete -n demo pg/coreos-prom-postgres
+
+# cleanup prometheus resources
+kubectl delete -n monitoring prometheus prometheus
+kubectl delete -n monitoring clusterrolebinding prometheus
+kubectl delete -n monitoring clusterrole prometheus
+kubectl delete -n monitoring serviceaccount prometheus
+kubectl delete -n monitoring service prometheus-operated
+
+# cleanup prometheus operator resources
+kubectl delete -n monitoring deployment prometheus-operator
+kubectl delete -n demo serviceaccount prometheus-operator
+kubectl delete clusterrolebinding prometheus-operator
+kubectl delete clusterrole prometheus-operator
+
+# delete namespace
+kubectl delete ns monitoring
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/_index.md b/content/docs/v2024.1.31/guides/postgres/pitr/_index.md
new file mode 100644
index 0000000000..c9654aaee8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/pitr/_index.md
@@ -0,0 +1,22 @@
+---
+title: Continuous Archiving and Point-in-time Recovery
+menu:
+  docs_v2024.1.31:
+    identifier: pitr-postgres
+    name: Point-in-time Recovery
+    parent: pg-postgres-guides
+    weight: 42
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/archiver.md b/content/docs/v2024.1.31/guides/postgres/pitr/archiver.md
new file mode 100644
index 0000000000..212902919d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/pitr/archiver.md
@@ -0,0 +1,440 @@
+---
+title: Continuous Archiving and Point-in-time Recovery
+menu:
+  docs_v2024.1.31:
+    identifier: pitr-postgres-archiver
+    name: Overview
+    parent: pitr-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB PostgreSQL - Continuous Archiving and Point-in-time Recovery
+
+Here, we will show you how to use KubeDB to provision a PostgreSQL database with continuous archiving and point-in-time recovery.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the `KubeDB` operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+Install the `KubeStash` operator in your cluster following the steps [here](https://github.com/kubestash/installer/tree/master/charts/kubestash).
+
+Install `SideKick` in your cluster following the steps [here](https://github.com/kubeops/installer/tree/master/charts/sidekick).
+
+Install `External-snapshotter` in your cluster following the steps [here](https://github.com/kubernetes-csi/external-snapshotter/tree/release-5.0).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+> Note: The yaml files used in this tutorial are stored in [docs/guides/postgres/pitr/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/postgres/pitr/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Continuous Archiving
+
+Continuous archiving involves making regular copies (or "archives") of the PostgreSQL transaction log (WAL) files. To ensure continuous archiving to a remote location, we need to prepare a `BackupStorage`, a `RetentionPolicy`, and a `PostgresArchiver` for the KubeDB managed PostgreSQL database.
+
+### BackupStorage
+
+BackupStorage is a CR provided by KubeStash that can manage storage from various providers like GCS, S3, and more.
+
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: BackupStorage
+metadata:
+  name: linode-storage
+  namespace: demo
+spec:
+  storage:
+    provider: s3
+    s3:
+      bucket: mehedi-pg-wal-g
+      endpoint: https://ap-south-1.linodeobjects.com
+      region: ap-south-1
+      prefix: backup
+      secret: storage
+  usagePolicy:
+    allowedNamespaces:
+      from: All
+  default: true
+  deletionPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f backupstorage.yaml
+backupstorage.storage.kubestash.com/linode-storage created
+```
+
+### Secret for BackupStorage
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: storage
+  namespace: demo
+stringData:
+  AWS_ACCESS_KEY_ID: "*************26CX"
+  AWS_SECRET_ACCESS_KEY: "************jj3lp"
+  AWS_ENDPOINT: https://ap-south-1.linodeobjects.com
+```
+
+```bash
+$ kubectl apply -f storage-secret.yaml
+secret/storage created
+```
+
+### RetentionPolicy
+
+RetentionPolicy is a CR provided by KubeStash that allows you to set how long you'd like to retain the backup data.
+
+```yaml
+apiVersion: storage.kubestash.com/v1alpha1
+kind: RetentionPolicy
+metadata:
+  name: postgres-retention-policy
+  namespace: demo
+spec:
+  maxRetentionPeriod: "30d"
+  successfulSnapshots:
+    last: 100
+  failedSnapshots:
+    last: 2
+```
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/pitr/yamls/retentionPolicy.yaml
+retentionpolicy.storage.kubestash.com/postgres-retention-policy created
+```
+
+### PostgresArchiver
+
+PostgresArchiver is a CR provided by KubeDB for managing the archiving of PostgreSQL WAL files and performing volume-level backups.
+
+```yaml
+apiVersion: archiver.kubedb.com/v1alpha1
+kind: PostgresArchiver
+metadata:
+  name: postgresarchiver-sample
+  namespace: demo
+spec:
+  pause: false
+  databases:
+    namespaces:
+      from: Selector
+      selector:
+        matchLabels:
+          kubernetes.io/metadata.name: demo
+    selector:
+      matchLabels:
+        archiver: "true"
+  retentionPolicy:
+    name: postgres-retention-policy
+    namespace: demo
+  encryptionSecret:
+    name: "encrypt-secret"
+    namespace: "demo"
+  fullBackup:
+    driver: "VolumeSnapshotter"
+    task:
+      params:
+        volumeSnapshotClassName: "longhorn-snapshot-vsc"
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "/30 * * * *"
+    sessionHistoryLimit: 2
+  manifestBackup:
+    scheduler:
+      successfulJobsHistoryLimit: 1
+      failedJobsHistoryLimit: 1
+      schedule: "/30 * * * *"
+    sessionHistoryLimit: 2
+  backupStorage:
+    ref:
+      name: "linode-storage"
+      namespace: "demo"
+```
+
+### EncryptionSecret
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: encrypt-secret
+  namespace: demo
+stringData:
+  RESTIC_PASSWORD: "changeit"
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/pitr/yamls/postgresarchiver.yaml
+postgresarchiver.archiver.kubedb.com/postgresarchiver-sample created
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/pitr/yamls/encryptionSecret.yaml
+```
+
+## Ensure VolumeSnapshotClass
+
+```bash
+$ kubectl get volumesnapshotclasses
+NAME                    DRIVER               DELETIONPOLICY   AGE
+longhorn-snapshot-vsc   driver.longhorn.io   Delete           7d22h
+```
+
+If there isn't one, try using `longhorn` or any other [volumeSnapshotClass](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/), as shown below.
+```yaml
+kind: VolumeSnapshotClass
+apiVersion: snapshot.storage.k8s.io/v1
+metadata:
+  name: longhorn-snapshot-vsc
+driver: driver.longhorn.io
+deletionPolicy: Delete
+parameters:
+  type: snap
+```
+
+```bash
+$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
+
+$ kubectl apply -f volumesnapshotclass.yaml
+volumesnapshotclass.snapshot.storage.k8s.io/longhorn-snapshot-vsc unchanged
+```
+
+## Deploy PostgreSQL
+
+So far, we are ready with the setup for continuously archiving PostgreSQL. Now, we deploy a Postgres object referring to the PostgresArchiver object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: demo-pg
+  namespace: demo
+  labels:
+    archiver: "true"
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  storageType: Durable
+  storage:
+    storageClassName: "longhorn"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  archiver:
+    ref:
+      name: postgresarchiver-sample
+      namespace: demo
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl get pod -n demo
+NAME                                                 READY   STATUS      RESTARTS   AGE
+demo-pg-0                                            2/2     Running     0          8m52s
+demo-pg-1                                            2/2     Running     0          8m22s
+demo-pg-2                                            2/2     Running     0          7m57s
+demo-pg-backup-config-full-backup-1702388088-z4qbz   0/1     Completed   0          37s
+demo-pg-backup-config-manifest-1702388088-hpx6m      0/1     Completed   0          37s
+demo-pg-sidekick                                     1/1     Running     0          7m31s
+```
+
+`demo-pg-sidekick` is responsible for uploading WAL files.
+
+`demo-pg-backup-config-full-backup-1702388088-z4qbz` is the pod of the volume-level backup for the PostgreSQL database.
+
+`demo-pg-backup-config-manifest-1702388088-hpx6m` is the pod of the manifest backup related to the Postgres object.
+
+### Validate BackupConfiguration and VolumeSnapshots
+
+```bash
+$ kubectl get backupconfigurations -n demo
+NAME                    PHASE   PAUSED   AGE
+demo-pg-backup-config   Ready            2m43s
+
+$ kubectl get backupsession -n demo
+NAME                                           INVOKER-TYPE          INVOKER-NAME            PHASE       DURATION   AGE
+demo-pg-backup-config-full-backup-1702388088   BackupConfiguration   demo-pg-backup-config   Succeeded              74s
+demo-pg-backup-config-manifest-1702388088      BackupConfiguration   demo-pg-backup-config   Succeeded              74s
+
+$ kubectl get volumesnapshots -n demo
+NAME                 READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS           SNAPSHOTCONTENT                                    CREATIONTIME   AGE
+demo-pg-1702388096   true         data-demo-pg-1                           1Gi           longhorn-snapshot-vsc   snapcontent-735e97ad-1dfa-4b70-b416-33f7270d792c   2m5s           2m5s
+```
+
+## Insert Data and Switch WAL
+
+After each and every WAL switch, the WAL files will be uploaded to the backup storage.
+
+```bash
+$ kubectl exec -it -n demo demo-pg-0 -- bash
+
+bash-5.1$ psql
+
+postgres=# create database hi;
+CREATE DATABASE
+postgres=# \c hi
+hi=# create table tab_1 (a int);
+CREATE TABLE
+hi=# insert into tab_1 values(generate_series(1,100));
+INSERT 0 100
+hi=# select pg_switch_wal();
+ 0/504A0D8
+(1 row)
+
+hi=# insert into tab_1 values(generate_series(1,100));
+INSERT 0 100
+
+hi=# select now();
+ 2023-12-12 13:43:41.300216+00
+
+hi=# select pg_switch_wal();
+ 0/6013240
+
+hi=# select count(*) from tab_1 ;
+ 200
+```
+
+> At this point, we have 200 rows in our newly created table `tab_1` in the database `hi`.
+
+## Point-in-time Recovery
+
+Point-in-time recovery allows you to restore a PostgreSQL database to a specific point in time using the archived transaction logs. This is particularly useful in scenarios where you need to recover to a state just before a specific error or data corruption occurred.
+
+Let's say our DBA accidentally drops the table `tab_1`, and we want to restore it.
+
+```bash
+$ kubectl exec -it -n demo demo-pg-0 -- bash
+bash-5.1$ psql
+postgres=# \c hi
+
+hi=# drop table tab_1;
+DROP TABLE
+hi=# select count(*) from tab_1 ;
+ERROR:  relation "tab_1" does not exist
+LINE 1: select count(*) from tab_1 ;
+```
+
+We can't restore from a full backup, since no full backup was performed at this point. So, we can choose a specific time to which we want to restore. We can get that specific time from the WAL files archived in the backup storage; you can parse the WAL files using `pg_waldump` to find the exact point you want to restore to.
+
+For this demo, we will use the time we got earlier from `select now()`:
+
+```bash
+hi=# select now();
+ 2023-12-12 13:43:41.300216+00
+```
+
+### Restore PostgreSQL
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: restore-pg
+  namespace: demo
+spec:
+  init:
+    archiver:
+      encryptionSecret:
+        name: encrypt-secret
+        namespace: demo
+      fullDBRepository:
+        name: demo-pg-repository
+        namespace: demo
+      manifestRepository:
+        name: demo-pg-manifest
+        namespace: demo
+      recoveryTimestamp: "2023-12-12T13:43:41.300216Z"
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  storageType: Durable
+  storage:
+    storageClassName: "longhorn"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f restore.yaml
+postgres.kubedb.com/restore-pg created
+```
+
+**Check the restored PostgreSQL**
+
+```bash
+$ kubectl get pod -n demo
+NAME                              READY   STATUS      RESTARTS   AGE
+restore-pg-0                      2/2     Running     0          46s
+restore-pg-1                      2/2     Running     0          41s
+restore-pg-2                      2/2     Running     0          22s
+restore-pg-restorer-4d4dg         0/1     Completed   0          104s
+restore-pg-restoresession-2tsbv   0/1     Completed   0          115s
+```
+
+```bash
+$ kubectl get pg -n demo
+NAME         VERSION   STATUS   AGE
+demo-pg      13.6      Ready    44m
+restore-pg   13.6      Ready    2m36s
+```
+
+**Validate data on the restored PostgreSQL**
+
+```bash
+$ kubectl exec -it -n demo restore-pg-0 -- bash
+bash-5.1$ psql
+
+postgres=# \c hi
+
+hi=# select count(*) from tab_1 ;
+ 200
+```
+
+**So, we were able to successfully recover from the disaster.**
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete -n demo pg/demo-pg
+$ kubectl delete -n demo pg/restore-pg
+$ kubectl delete -n demo backupstorage linode-storage
+$ kubectl delete -n demo postgresarchiver postgresarchiver-sample
+$ kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash.
+- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source).
+- Learn about [custom PostgresVersions](/docs/v2024.1.31/guides/postgres/custom-versions/setup).
+- Want to set up a PostgreSQL cluster? Check how to [configure a Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster)
+- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus).
+- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Detail concepts of [Postgres object](/docs/v2024.1.31/guides/postgres/concepts/postgres).
+- Use [private Docker registry](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry) to deploy PostgreSQL with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/yamls/backupstorage.yaml b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/backupstorage.yaml new file mode 100644 index 0000000000..6747e4338a --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/backupstorage.yaml @@ -0,0 +1,19 @@ +apiVersion: storage.kubestash.com/v1alpha1 +kind: BackupStorage +metadata: + name: linode-storage + namespace: demo +spec: + storage: + provider: s3 + s3: + bucket: mehedi-pg-wal-g + endpoint: https://ap-south-1.linodeobjects.com + region: ap-south-1 + prefix: backup + secret: storage + usagePolicy: + allowedNamespaces: + from: All + default: true + deletionPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/yamls/encryptionSecret.yaml b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/encryptionSecret.yaml new file mode 100644 index 0000000000..4eb0c25bdb --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/encryptionSecret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: encrypt-secret + namespace: demo +stringData: + RESTIC_PASSWORD: "changeit" diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/yamls/postgresarchiver.yaml b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/postgresarchiver.yaml new file mode 100644 index 0000000000..119df31067 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/postgresarchiver.yaml @@ -0,0 +1,42 @@ +apiVersion: archiver.kubedb.com/v1alpha1 +kind: PostgresArchiver +metadata: + name: postgresarchiver-sample + namespace: demo +spec: + pause: false + databases: + namespaces: + from: Selector + selector: + matchLabels: + kubernetes.io/metadata.name: demo + selector: + matchLabels: + archiver: "true" + retentionPolicy: + name: postgres-retention-policy + namespace: demo + encryptionSecret: + name: "encrypt-secret" + namespace: "demo" + fullBackup: + driver: "VolumeSnapshotter" + task: + params: + volumeSnapshotClassName: "longhorn-snapshot-vsc" + scheduler: + successfulJobsHistoryLimit: 1 + failedJobsHistoryLimit: 1 + schedule: "/30 * * * *" + sessionHistoryLimit: 2 + manifestBackup: + scheduler: + successfulJobsHistoryLimit: 1 + failedJobsHistoryLimit: 1 + schedule: "/30 * * * *" + sessionHistoryLimit: 2 + backupStorage: + ref: + name: "linode-storage" + namespace: "demo" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/yamls/retentionPolicy.yaml b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/retentionPolicy.yaml new file mode 100644 index 0000000000..679d35e15f --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/retentionPolicy.yaml @@ -0,0 +1,11 @@ +apiVersion: storage.kubestash.com/v1alpha1 +kind: RetentionPolicy +metadata: + name: postgres-retention-policy + namespace: demo +spec: + maxRetentionPeriod: "30d" + successfulSnapshots: + last: 100 + failedSnapshots: + last: 2 diff --git a/content/docs/v2024.1.31/guides/postgres/pitr/yamls/voluemsnapshotclass.yaml b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/voluemsnapshotclass.yaml new file mode 100644 index 0000000000..1a67906612 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/pitr/yamls/voluemsnapshotclass.yaml @@ -0,0 +1,8 @@ +kind: VolumeSnapshotClass +apiVersion: snapshot.storage.k8s.io/v1 +metadata: + name: longhorn-snapshot-vsc +driver: driver.longhorn.io +deletionPolicy: Delete +parameters: + type: snap \ No newline at end of file diff --git 
a/content/docs/v2024.1.31/guides/postgres/private-registry/_index.md b/content/docs/v2024.1.31/guides/postgres/private-registry/_index.md
new file mode 100755
index 0000000000..7be7e7304c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/private-registry/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run PostgreSQL using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: pg-private-registry-postgres
+    name: Private Registry
+    parent: pg-postgres-guides
+    weight: 35
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry.md
new file mode 100644
index 0000000000..826d3828c2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry.md
@@ -0,0 +1,221 @@
+---
+title: Run PostgreSQL using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: pg-using-private-registry-private-registry
+    name: Quickstart
+    parent: pg-private-registry-postgres
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using private Docker registry
+
+KubeDB supports using a private Docker registry. This tutorial will show you how to run a KubeDB managed PostgreSQL database using private Docker images.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Prepare Private Docker Registry
+
+- You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial, we will use a private repository of [Docker Hub](https://hub.docker.com/).
+
+- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/r/kubedb/) into your private registry. For Postgres, push the `DB_IMAGE`, `TOOLS_IMAGE`, and `EXPORTER_IMAGE` of the following PostgresVersions, where `deprecated` is not true, to your private registry.
+ + ```bash + $ kubectl get postgresversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,DB_IMAGE:.spec.db.image,TOOLS_IMAGE:.spec.tools.image,EXPORTER_IMAGE:.spec.exporter.image,DEPRECATED:.spec.deprecated + NAME       VERSION   DB_IMAGE                   TOOLS_IMAGE                     EXPORTER_IMAGE                    DEPRECATED + 10.2       10.2      kubedb/postgres:10.2       kubedb/postgres-tools:10.2      kubedb/operator:0.8.0             true + 10.2-v1    10.2      kubedb/postgres:10.2-v2    kubedb/postgres-tools:10.2-v2   kubedb/postgres_exporter:v0.4.6   true + 10.2-v2    10.2      kubedb/postgres:10.2-v3    kubedb/postgres-tools:10.2-v3   kubedb/postgres_exporter:v0.4.7 + 10.2-v3    10.2      kubedb/postgres:10.2-v4    kubedb/postgres-tools:10.2-v3   kubedb/postgres_exporter:v0.4.7 + 10.2-v4    10.2      kubedb/postgres:10.2-v5    kubedb/postgres-tools:10.2-v3   kubedb/postgres_exporter:v0.4.7 + 10.2-v5    10.2      kubedb/postgres:10.2-v6    kubedb/postgres-tools:10.2-v3   kubedb/postgres_exporter:v0.4.7 + 10.6       10.6      kubedb/postgres:10.6       kubedb/postgres-tools:10.6      kubedb/postgres_exporter:v0.4.7 + 10.6-v1    10.6      kubedb/postgres:10.6-v1    kubedb/postgres-tools:10.6      kubedb/postgres_exporter:v0.4.7 + 10.6-v2    10.6      kubedb/postgres:10.6-v2    kubedb/postgres-tools:10.6      kubedb/postgres_exporter:v0.4.7 + 10.6-v3    10.6      kubedb/postgres:10.6-v3    kubedb/postgres-tools:10.6      kubedb/postgres_exporter:v0.4.7 + 11.1       11.1      kubedb/postgres:11.1       kubedb/postgres-tools:11.1      kubedb/postgres_exporter:v0.4.7 + 11.1-v1    11.1      kubedb/postgres:11.1-v1    kubedb/postgres-tools:11.1      kubedb/postgres_exporter:v0.4.7 + 11.1-v2    11.1      kubedb/postgres:11.1-v2    kubedb/postgres-tools:11.1      kubedb/postgres_exporter:v0.4.7 + 11.1-v3    11.1      kubedb/postgres:11.1-v3    kubedb/postgres-tools:11.1      kubedb/postgres_exporter:v0.4.7 + 11.2       11.2      kubedb/postgres:11.2       kubedb/postgres-tools:11.2      kubedb/postgres_exporter:v0.4.7 + 11.2-v1    11.2      kubedb/postgres:11.2-v1    kubedb/postgres-tools:11.2      kubedb/postgres_exporter:v0.4.7 + 9.6        9.6       kubedb/postgres:9.6        kubedb/postgres-tools:9.6       kubedb/operator:0.8.0             true + 9.6-v1     9.6       kubedb/postgres:9.6-v2     kubedb/postgres-tools:9.6-v2    kubedb/postgres_exporter:v0.4.6   true + 9.6-v2     9.6       kubedb/postgres:9.6-v3     kubedb/postgres-tools:9.6-v3    kubedb/postgres_exporter:v0.4.7 + 9.6-v3     9.6       kubedb/postgres:9.6-v4     kubedb/postgres-tools:9.6-v3    kubedb/postgres_exporter:v0.4.7 + 9.6-v4     9.6       kubedb/postgres:9.6-v5     kubedb/postgres-tools:9.6-v3    kubedb/postgres_exporter:v0.4.7 + 9.6-v5     9.6       kubedb/postgres:9.6-v6     kubedb/postgres-tools:9.6-v3    kubedb/postgres_exporter:v0.4.7 + 9.6.7      9.6.7     kubedb/postgres:9.6.7      kubedb/postgres-tools:9.6.7     kubedb/operator:0.8.0             true + 9.6.7-v1   9.6.7     kubedb/postgres:9.6.7-v2   kubedb/postgres-tools:9.6.7-v2  kubedb/postgres_exporter:v0.4.6   true + 9.6.7-v2   9.6.7     kubedb/postgres:9.6.7-v3   kubedb/postgres-tools:9.6.7-v3  kubedb/postgres_exporter:v0.4.7 + 9.6.7-v3   9.6.7     kubedb/postgres:9.6.7-v4   kubedb/postgres-tools:9.6.7-v3  kubedb/postgres_exporter:v0.4.7 + 9.6.7-v4   9.6.7     kubedb/postgres:9.6.7-v5   kubedb/postgres-tools:9.6.7-v3  kubedb/postgres_exporter:v0.4.7 + 9.6.7-v5   9.6.7     kubedb/postgres:9.6.7-v6   kubedb/postgres-tools:9.6.7-v3  kubedb/postgres_exporter:v0.4.7 + ``` + + Docker Hub repositories: + +- [kubedb/operator](https://hub.docker.com/r/kubedb/operator) +- [kubedb/postgres](https://hub.docker.com/r/kubedb/postgres) +- [kubedb/postgres-tools](https://hub.docker.com/r/kubedb/postgres-tools) +- [kubedb/postgres_exporter](https://hub.docker.com/r/kubedb/postgres_exporter) + +## Create ImagePullSecret + +ImagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry.
It lets you specify the URL of the Docker registry along with the credentials for logging into it. + +Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry: + +```bash +$ kubectl create secret docker-registry myregistrykey -n demo \ + --docker-server=DOCKER_REGISTRY_SERVER \ + --docker-username=DOCKER_USER \ + --docker-email=DOCKER_EMAIL \ + --docker-password=DOCKER_PASSWORD +secret/myregistrykey created +``` + +If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes. + +> Note: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427). + +## Install KubeDB operator + +When installing KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to appropriate values. +Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in your cluster so that it points to the DOCKER_REGISTRY you wish to pull images from. + +## Create PostgresVersion CRD + +KubeDB uses the images specified in the PostgresVersion crd for the database, backup, and exporting Prometheus metrics. You have to create a PostgresVersion crd specifying images from your private registry. Then, you have to reference this PostgresVersion crd in the `spec.version` field of the Postgres object. For more details about the PostgresVersion crd, please visit [here](/docs/v2024.1.31/guides/postgres/concepts/catalog). + +Here is an example of a PostgresVersion crd. Replace `PRIVATE_REGISTRY` with your private registry. + +```yaml +apiVersion: catalog.kubedb.com/v1alpha1 +kind: PostgresVersion +metadata: + name: "13.13" +spec: + coordinator: + image: PRIVATE_REGISTRY/pg-coordinator:v0.1.0 + db: + image: PRIVATE_REGISTRY/postgres:13.2-alpine + distribution: PostgreSQL + exporter: + image: PRIVATE_REGISTRY/postgres-exporter:v0.9.0 + initContainer: + image: PRIVATE_REGISTRY/postgres-init:0.1.0 + podSecurityPolicies: + databasePolicyName: postgres-db + stash: + addon: + backupTask: + name: postgres-backup-13.1 + restoreTask: + name: postgres-restore-13.1 + version: "13.13" +``` + +Now, create the PostgresVersion crd, + +```bash +$ kubectl apply -f pvt-postgresversion.yaml +postgresversion.kubedb.com/13.13 created +``` + +## Deploy PostgreSQL database from Private Registry + +While deploying PostgreSQL from a private repository, you have to add the `myregistrykey` secret to the Postgres `spec.podTemplate.spec.imagePullSecrets` field and specify the PostgresVersion name created above (`13.13`) in the `spec.version` field.
+ +Below is the Postgres object we will create in this tutorial. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pvt-reg-postgres + namespace: demo +spec: + version: "13.13" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + podTemplate: + spec: + imagePullSecrets: + - name: myregistrykey +``` + +Now run the command to create this Postgres object: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/private-registry/pvt-reg-postgres.yaml +postgres.kubedb.com/pvt-reg-postgres created +``` + +To check whether the images were pulled successfully from the repository, see if the PostgreSQL pod is in the Running state: + +```bash +$ kubectl get pods -n demo --selector="app.kubernetes.io/instance=pvt-reg-postgres" +NAME READY STATUS RESTARTS AGE +pvt-reg-postgres-0 1/1 Running 0 3m +``` + +## Snapshot + +You can specify `imagePullSecret` for Snapshot objects in the `spec.podTemplate.spec.imagePullSecrets` field of the Snapshot object. If you are using scheduled backup, you can also provide `imagePullSecret` in the `backupSchedule.podTemplate.spec.imagePullSecrets` field of the Postgres crd. KubeDB also reuses `imagePullSecret` for the Snapshot object from the `spec.podTemplate.spec.imagePullSecrets` field of the Postgres crd. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo pg/pvt-reg-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/pvt-reg-postgres + +kubectl delete ns demo +``` + +If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README). + +## Next Steps + +- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash. +- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- Want to setup PostgreSQL cluster? Check how to [configure Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) +- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/quickstart/_index.md b/content/docs/v2024.1.31/guides/postgres/quickstart/_index.md new file mode 100755 index 0000000000..4e006bcf78 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: PostgreSQL Quickstart +menu: + docs_v2024.1.31: + identifier: pg-quickstart-postgres + name: Quickstart + parent: pg-postgres-guides + weight: 15 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/quickstart/quickstart.md b/content/docs/v2024.1.31/guides/postgres/quickstart/quickstart.md new file mode 100644 index 0000000000..4f840216bc --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/quickstart/quickstart.md @@ -0,0 +1,539 @@ +--- +title: PostgreSQL Quickstart +menu: + docs_v2024.1.31: + identifier: pg-quickstart-quickstart + name: Overview + parent: pg-quickstart-postgres + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Running PostgreSQL + +This tutorial will show you how to use KubeDB to run a PostgreSQL database. + +

+  *(figure: lifecycle)*

+ +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +>We have designed this tutorial to demonstrate a production setup of KubeDB managed PostgreSQL. If you just want to try out KubeDB, you can bypass some of the safety features following the tips [here](/docs/v2024.1.31/guides/postgres/quickstart/quickstart#tips-for-testing). + +## Install pgAdmin + +This tutorial will also use pgAdmin to connect to and test the PostgreSQL database once it is running. + +Run the following command to install pgAdmin, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/quickstart/pgadmin.yaml +deployment.apps/pgadmin created +service/pgadmin created + +$ kubectl get pods -n demo --watch +NAME READY STATUS RESTARTS AGE +pgadmin-5b4b96779-lfpfh 0/1 ContainerCreating 0 1m +pgadmin-5b4b96779-lfpfh 1/1 Running 0 2m +^C⏎ +``` + +Now, you can open pgAdmin in your browser using the following address: `http://<node-ip>:<node-port>`. + +If you are using minikube then open pgAdmin in your browser by running `minikube service pgadmin -n demo`. Or you can get the URL of the Service `pgadmin` by running the following command + +```bash +$ minikube service pgadmin -n demo --url +http://192.168.99.100:31983 +``` + +To log into pgAdmin, use username __`admin`__ and password __`admin`__. + +## Find Available StorageClass + +We will have to provide a `StorageClass` in the Postgres crd specification. Check the available `StorageClass` in your cluster using the following command, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 10d +``` + +Here, we have the `standard` StorageClass in our cluster. + +## Find Available PostgresVersion + +When you have installed KubeDB, it has created `PostgresVersion` crds for all supported PostgreSQL versions.
Let's check the available PostgresVersions by, + +```bash +$ kubectl get postgresversion +NAME                       VERSION   DISTRIBUTION   DB_IMAGE                               DEPRECATED   AGE +10.16                      10.16     Official       postgres:10.16-alpine                               3d +10.16-debian               10.16     Official       postgres:10.16                                      3d +10.19                      10.19     Official       postgres:10.19-bullseye                             3d +10.19-bullseye             10.19     Official       postgres:10.19-bullseye                             3d +10.20                      10.20     Official       postgres:10.20-bullseye                             3d +10.20-bullseye             10.20     Official       postgres:10.20-bullseye                             3d +11.11                      11.11     Official       postgres:11.11-alpine                               3d +11.11-debian               11.11     Official       postgres:11.11                                      3d +11.14                      11.14     Official       postgres:11.14-alpine                               3d +11.14-bullseye             11.14     Official       postgres:11.14-bullseye                             3d +11.14-bullseye-postgis     11.14     PostGIS        postgis/postgis:11-3.1                              3d +11.15                      11.15     Official       postgres:11.15-alpine                               3d +11.15-bullseye             11.15     Official       postgres:11.15-bullseye                             3d +12.10                      12.10     Official       postgres:12.10-alpine                               3d +12.10-bullseye             12.10     Official       postgres:12.10-bullseye                             3d +12.6                       12.6      Official       postgres:12.6-alpine                                3d +12.6-debian                12.6      Official       postgres:12.6                                       3d +12.9                       12.9      Official       postgres:12.9-alpine                                3d +12.9-bullseye              12.9      Official       postgres:12.9-bullseye                              3d +12.9-bullseye-postgis      12.9      PostGIS        postgis/postgis:12-3.1                              3d +13.2                       13.2      Official       postgres:13.2-alpine                                3d +13.2-debian                13.2      Official       postgres:13.2                                       3d +13.5                       13.5      Official       postgres:13.5-alpine                                3d +13.5-bullseye              13.5      Official       postgres:13.5-bullseye                              3d +13.5-bullseye-postgis      13.5      PostGIS        postgis/postgis:13-3.1                              3d +13.6                       13.6      Official       postgres:13.6-alpine                                3d +13.6-bullseye              13.6      Official       postgres:13.6-bullseye                              3d +14.1                       14.1      Official       postgres:14.1-alpine                                3d +14.1-bullseye              14.1      Official       postgres:14.1-bullseye                              3d +14.1-bullseye-postgis      14.1      PostGIS        postgis/postgis:14-3.1                              3d +14.2                       14.2      Official       postgres:14.2-alpine                                3d +14.2-bullseye              14.2      Official       postgres:14.2-bullseye                              3d +9.6.21                     9.6.21    Official       postgres:9.6.21-alpine                              3d +9.6.21-debian              9.6.21    Official       postgres:9.6.21                                     3d +9.6.24                     9.6.24    Official       postgres:9.6.24-alpine                              3d +9.6.24-bullseye            9.6.24    Official       postgres:9.6.24-bullseye                            3d +timescaledb-2.1.0-pg11     11.11     TimescaleDB    timescale/timescaledb:2.1.0-pg11-oss                3d +timescaledb-2.1.0-pg12     12.6      TimescaleDB    timescale/timescaledb:2.1.0-pg12-oss                3d +timescaledb-2.1.0-pg13     13.2      TimescaleDB    timescale/timescaledb:2.1.0-pg13-oss                3d +timescaledb-2.5.0-pg14.1   14.1      TimescaleDB    timescale/timescaledb:2.5.0-pg14-oss                3d +``` + +Notice the `DEPRECATED` column. Here, `true` means that this PostgresVersion is deprecated for the current KubeDB version. KubeDB does not work with deprecated PostgresVersions. + +In this tutorial, we will use the `13.13` PostgresVersion crd to create a PostgreSQL database. To know more about what the `PostgresVersion` crd is and why there are variations like `13.2` and `13.2-debian`, please visit [here](/docs/v2024.1.31/guides/postgres/concepts/catalog). You can also see the supported PostgresVersions [here](/docs/v2024.1.31/guides/postgres/README#supported-postgresversion-crd). + +## Create a PostgreSQL database + +KubeDB implements a Postgres CRD to define the specification of a PostgreSQL database. + +Below is the Postgres object created in this tutorial. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres + namespace: demo +spec: + version: "13.13" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Here, + +- `spec.version` is the name of the PostgresVersion crd where the docker images are specified. In this tutorial, a PostgreSQL 13.13 database is created.
+- `spec.storageType` specifies the type of storage that will be used for the Postgres database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Postgres database using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes. +- `spec.storage` specifies the size and StorageClass of the PVC that will be dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required. +- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the Postgres crd. The termination policy `DoNotTerminate` prevents a user from deleting this object if the admission webhook is enabled. + +>Note: The `spec.storage` section is used to create a PVC for the database pod. It will create a PVC with the storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVCs do not get resized automatically. + +Let's create the Postgres crd, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/quickstart/quick-postgres.yaml +postgres.kubedb.com/quick-postgres created +``` + +KubeDB operator watches for Postgres objects using the Kubernetes API. When a Postgres object is created, the KubeDB operator will create a new StatefulSet and two ClusterIP Services with matching names. The KubeDB operator will also create a governing service for the StatefulSet with the name `kubedb`, if one is not already present. + +If you are using an RBAC enabled cluster, PostgreSQL specific RBAC permissions are required. For details, please visit [here](/docs/v2024.1.31/guides/postgres/quickstart/rbac). + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created.
+ +```bash +$ kubectl get pg -n demo quick-postgres -o wide +NAME VERSION STATUS AGE +quick-postgres 13.2 Creating 13s +``` + +Let's describe Postgres object `quick-postgres` + +```bash +$ kubectl describe -n demo postgres quick-postgres +Name: quick-postgres +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: Postgres +Metadata: + Creation Timestamp: 2022-05-30T09:15:36Z + Finalizers: + kubedb.com + Generation: 2 + Managed Fields: + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:allowedSchemas: + .: + f:namespaces: + .: + f:from: + f:storage: + .: + f:accessModes: + f:resources: + .: + f:requests: + .: + f:storage: + f:storageClassName: + f:storageType: + f:terminationPolicy: + f:version: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-05-30T09:15:36Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:finalizers: + .: + v:"kubedb.com": + f:spec: + f:authSecret: + .: + f:name: + Manager: pg-operator + Operation: Update + Time: 2022-05-30T09:15:37Z + API Version: kubedb.com/v1alpha2 + Fields Type: FieldsV1 + fieldsV1: + f:status: + f:conditions: + f:observedGeneration: + f:phase: + Manager: pg-operator + Operation: Update + Subresource: status + Time: 2022-05-30T09:16:26Z + Resource Version: 330717 + UID: aa9193d0-cd9b-4b63-8403-2b12ec1b04be +Spec: + Allowed Schemas: + Namespaces: + From: Same + Auth Secret: + Name: quick-postgres-auth + Client Auth Mode: md5 + Coordinator: + Resources: + Limits: + Memory: 256Mi + Requests: + Cpu: 200m + Memory: 256Mi + Leader Election: + Election Tick: 10 + Heartbeat Tick: 1 + Maximum Lag Before Failover: 67108864 + Period: 300ms + Pod Template: + Controller: + Metadata: + Spec: + Affinity: + Pod Anti Affinity: + Preferred During Scheduling Ignored During Execution: + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: quick-postgres + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: postgreses.kubedb.com + Namespaces: + demo + Topology Key: kubernetes.io/hostname + Weight: 100 + Pod Affinity Term: + Label Selector: + Match Labels: + app.kubernetes.io/instance: quick-postgres + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: postgreses.kubedb.com + Namespaces: + demo + Topology Key: failure-domain.beta.kubernetes.io/zone + Weight: 50 + Container Security Context: + Capabilities: + Add: + IPC_LOCK + SYS_RESOURCE + Privileged: false + Run As Group: 70 + Run As User: 70 + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Security Context: + Fs Group: 70 + Run As Group: 70 + Run As User: 70 + Service Account Name: quick-postgres + Replicas: 1 + Ssl Mode: disable + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 1Gi + Storage Class Name: standard + Storage Type: Durable + Termination Policy: DoNotTerminate + Version: 13.2 +Status: + Conditions: + Last Transition Time: 2022-05-30T09:15:36Z + Message: The KubeDB operator has started the provisioning of Postgres: demo/quick-postgres + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2022-05-30T09:16:26Z + Message: All replicas are ready and in Running state + Observed Generation: 2 + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2022-05-30T09:16:26Z + Message: The PostgreSQL: 
demo/quick-postgres is accepting client requests. + Observed Generation: 2 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2022-05-30T09:16:26Z + Message: DB is ready because of server getting Online and Running state + Observed Generation: 2 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2022-05-30T09:16:26Z + Message: The PostgreSQL: demo/quick-postgres is successfully provisioned. + Observed Generation: 2 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Observed Generation: 2 + Phase: Ready +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 106s Postgres operator Successfully created governing service + Normal Successful 106s Postgres operator Successfully created Service + Normal Successful 105s Postgres operator Successfully created appbinding +``` + +KubeDB has created two services for the Postgres object. + +```bash +$ kubectl get service -n demo --selector=app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=quick-postgres +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +quick-postgres ClusterIP 10.96.52.28 5432/TCP,2379/TCP 3m19s +quick-postgres-pods ClusterIP None 5432/TCP,2380/TCP,2379/TCP 3m19s + + +``` + +Here, + +- Service *`quick-postgres`* targets only one Pod which is acting as *primary* server +- Service *`quick-postgres-pods`* targets all Pods created by StatefulSet + +KubeDB supports PostgreSQL clustering where Pod can be either *primary* or *standby*. To learn how to configure highly available PostgreSQL cluster, click [here](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster). + +Here, we have created a PostgreSQL database with single node, *primary* only. + +## Connect with PostgreSQL database + +KubeDB operator has created a new Secret called `quick-postgres-auth` for storing the *username* and *password* for `postgres` database. + +```yaml + $ kubectl get secret -n demo quick-postgres-auth -o yaml +apiVersion: v1 +data: + POSTGRES_PASSWORD: REQ4aTU2VUJJY3M2M1BWTw== + POSTGRES_USER: cG9zdGdyZXM= +kind: Secret +metadata: + creationTimestamp: 2018-09-03T11:25:39Z + labels: + app.kubernetes.io/name: postgreses.kubedb.com + app.kubernetes.io/instance: quick-postgres + name: quick-postgres-auth + namespace: demo + resourceVersion: "1677" + selfLink: /api/v1/namespaces/demo/secrets/quick-postgres-auth + uid: 15b3e8a1-af6c-11e8-996d-0800270d7bae +type: Opaque +``` + +This secret contains superuser name for `postgres` database as `POSTGRES_USER` key and +password as `POSTGRES_PASSWORD` key. By default, superuser name is `postgres` and password is randomly generated. + +If you want to use custom password, please create the secret manually and specify that when creating the Postgres object using `spec.authSecret.name`. For more details see [here](/docs/v2024.1.31/guides/postgres/concepts/postgres#specdatabasesecret). + +> Note: Auth Secret name format: `{postgres-name}-auth` + +Now, you can connect to this database from the pgAdmin dashboard using `quick-postgres.demo` service and *username* and *password* created in `quick-postgres-auth` secret. 
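+ +If you prefer a terminal over pgAdmin, you can also verify connectivity with `psql` from inside the database pod. This is a quick sanity check, not part of the pgAdmin walkthrough; it assumes the database container is named `postgres`, as in the `kubectl exec` examples elsewhere in these docs: + +```bash +# List the available databases from inside the primary pod. +# Local connections inside the pod authenticate as the postgres superuser. +$ kubectl exec -it -n demo quick-postgres-0 -c postgres -- psql -c "\l" +```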
+ +**Connection information:** + +- Host name/address: you can use any of these + - Service: `quick-postgres.demo` + - Pod IP: (`$ kubectl get pods quick-postgres-0 -n demo -o yaml | grep podIP`) +- Port: `5432` +- Maintenance database: `postgres` + +- Username: Run following command to get *username*, + + ```bash + $ kubectl get secrets -n demo quick-postgres-auth -o jsonpath='{.data.\POSTGRES_USER}' | base64 -d + postgres + ``` + +- Password: Run the following command to get *password*, + + ```bash + $ kubectl get secrets -n demo quick-postgres-auth -o jsonpath='{.data.\POSTGRES_PASSWORD}' | base64 -d + DD8i56UBIcs63PVO + ``` + +Now, go to pgAdmin dashboard and connect to the database using the connection information as shown below, + +

+ + *(figure: quick-postgres)* + +## Halt Database + +KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` termination policy. If the admission webhook is enabled, it prevents a user from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. + +To halt the database, we have to set `spec.terminationPolicy` to `Halt` by updating it, + +```bash +$ kubectl edit pg -n demo quick-postgres +spec: + terminationPolicy: Halt +``` + +Now, if you delete the Postgres object, the KubeDB operator will delete every resource created for this Postgres CR, but leaves the auth secrets and PVCs. + +Let's delete the Postgres object, + +```bash +$ kubectl delete pg -n demo quick-postgres +postgres.kubedb.com "quick-postgres" deleted +``` +Check resources: +```bash +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=quick-postgres' +NAME TYPE DATA AGE +secret/quick-postgres-auth kubernetes.io/basic-auth 2 27m + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +persistentvolumeclaim/data-quick-postgres-0 Bound pvc-b30e3255-a7ea-4f61-8637-f60e283236b2 1Gi RWO standard 27m +``` + +## Resume Postgres +Say, the Postgres CR was deleted with `spec.terminationPolicy` set to `Halt` and you want to re-create the Postgres using the existing auth secrets and the PVCs. + +You can do it by simply re-deploying the original Postgres object: +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/quickstart/quick-postgres.yaml +postgres.kubedb.com/quick-postgres created +``` +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo pg/quick-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/quick-postgres + +kubectl delete ns demo +``` + +## Tips for Testing + +If you are just testing some basic functionalities, you might want to avoid the additional hassles due to some safety features that are great for a production environment. You can follow these tips to avoid them. + +1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section. +2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume a database from a previous one. So, we preserve all your `PVCs`, `Secrets`, etc. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will delete everything created by KubeDB for a particular Postgres crd when you delete the crd. For more details about termination policy, please visit [here](/docs/v2024.1.31/guides/postgres/concepts/postgres#specterminationpolicy). + +## Next Steps + +- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash. +- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- Learn about [custom PostgresVersions](/docs/v2024.1.31/guides/postgres/custom-versions/setup). +- Want to setup PostgreSQL cluster?
Check how to [configure Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) +- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Detail concepts of [Postgres object](/docs/v2024.1.31/guides/postgres/concepts/postgres). +- Use [private Docker registry](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry) to deploy PostgreSQL with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/postgres/quickstart/rbac.md b/content/docs/v2024.1.31/guides/postgres/quickstart/rbac.md new file mode 100644 index 0000000000..cd234e3a3a --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/quickstart/rbac.md @@ -0,0 +1,252 @@ +--- +title: RBAC for PostgreSQL +menu: + docs_v2024.1.31: + identifier: pg-rbac-quickstart + name: RBAC + parent: pg-quickstart-postgres + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# RBAC Permissions for Postgres + +If RBAC is enabled in clusters, some PostgreSQL specific RBAC permissions are required. These permissions are required for Leader Election process of PostgreSQL clustering. + +Here is the list of additional permissions required by StatefulSet of Postgres: + +| Kubernetes Resource | Resource Names | Permission required | +|---------------------|-------------------|---------------------| +| statefulsets | `{postgres-name}` | get | +| pods | | list, patch | +| pods/exec | | create | +| Postgreses | | get | +| configmaps | `{postgres-name}` | get, update, create | + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Create a PostgreSQL database + +Below is the Postgres object created in this tutorial. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: quick-postgres + namespace: demo +spec: + version: "13.13" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Create above Postgres object with following command + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/quickstart/quick-postgres.yaml +postgres.kubedb.com/quick-postgres created +``` + +When this Postgres object is created, KubeDB operator creates Role, ServiceAccount and RoleBinding with the matching PostgreSQL name and uses that ServiceAccount name in the corresponding StatefulSet. + +Let's see what KubeDB operator has created for additional RBAC permission + +### Role + +KubeDB operator create a Role object `quick-postgres` in same namespace as Postgres object. + +```yaml +$ kubectl get role -n demo quick-postgres -o yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + creationTimestamp: "2022-05-31T05:20:19Z" + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: postgreses.kubedb.com + name: quick-postgres + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Postgres + name: quick-postgres + uid: c118d264-85b7-4140-bc3f-d459c58c0523 + resourceVersion: "367334" + uid: e72f25a5-5945-4687-9e8f-8af33c1a6b13 +rules: + - apiGroups: + - apps + resourceNames: + - quick-postgres + resources: + - statefulsets + verbs: + - get + - apiGroups: + - kubedb.com + resourceNames: + - quick-postgres + resources: + - postgreses + verbs: + - get + - apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - patch + - delete + - apiGroups: + - "" + resources: + - pods/exec + verbs: + - create + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - get + - update + - apiGroups: + - policy + resourceNames: + - postgres-db + resources: + - podsecuritypolicies + verbs: + - use + +``` + +### ServiceAccount + +KubeDB operator create a ServiceAccount object `quick-postgres` in same namespace as Postgres object. + +```yaml +$ kubectl get serviceaccount -n demo quick-postgres -o yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + creationTimestamp: "2022-05-31T05:20:19Z" + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: postgreses.kubedb.com + name: quick-postgres + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Postgres + name: quick-postgres + uid: c118d264-85b7-4140-bc3f-d459c58c0523 + resourceVersion: "367333" + uid: 1a1db587-d5a6-4cfc-aa82-dc960b7e1f28 + +``` + +This ServiceAccount is used in StatefulSet created for Postgres object. + +### RoleBinding + +KubeDB operator create a RoleBinding object `quick-postgres` in same namespace as Postgres object. 
+ +```yaml +$ kubectl get rolebinding -n demo quick-postgres -o yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + creationTimestamp: "2022-05-31T05:20:19Z" + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: quick-postgres + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: postgreses.kubedb.com + name: quick-postgres + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Postgres + name: quick-postgres + uid: c118d264-85b7-4140-bc3f-d459c58c0523 + resourceVersion: "367335" + uid: 1fc9f872-8adc-4940-b93d-18f70bec38d5 +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: quick-postgres +subjects: + - kind: ServiceAccount + name: quick-postgres + namespace: demo + +``` + +This object binds the Role `quick-postgres` with the ServiceAccount `quick-postgres`. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl patch -n demo pg/quick-postgres -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +kubectl delete -n demo pg/quick-postgres + +kubectl delete ns demo +``` diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/_index.md b/content/docs/v2024.1.31/guides/postgres/remote-replica/_index.md new file mode 100644 index 0000000000..f6e82e0f2c --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/_index.md @@ -0,0 +1,22 @@ +--- +title: PostgreSQL Remote Replica +menu: + docs_v2024.1.31: + identifier: pg-remote-replica + name: Remote Replica + parent: pg-postgres-guides + weight: 27 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/remotereplica.md b/content/docs/v2024.1.31/guides/postgres/remote-replica/remotereplica.md new file mode 100644 index 0000000000..225dfd8071 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/remotereplica.md @@ -0,0 +1,377 @@ +--- +title: PostgreSQL Remote Replica +menu: + docs_v2024.1.31: + identifier: pg-remote-replica-details + name: Overview + parent: pg-remote-replica + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB - PostgreSQL Remote Replica + +This tutorial will show you how to use KubeDB to provision a PostgreSQL Remote Replica from a KubeDB managed PostgreSQL instance. A remote replica can be used within the same cluster or across clusters. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` +> Note: The yaml files used in this tutorial are stored in [docs/guides/postgres/remote-replica/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Remote Replica + +A remote replica allows you to replicate data from a KubeDB managed PostgreSQL server to a read-only PostgreSQL server. The whole process uses PostgreSQL asynchronous replication to keep the replica up-to-date with the source server. Remote replicas are useful for scaling out read-intensive workloads, can be a workaround for your BI and analytical workloads, and can be geo-replicated. + +## Deploy PostgreSQL server + +The following is an example `PostgreSQL` object which creates a PostgreSQL cluster instance. Since we are planning to replicate across clusters, we will deploy a TLS secured instance. Let's start by creating a secret for accessing the database. + +### Create Issuer/ClusterIssuer + +Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer, + +- Start off by generating our ca-certificates using openssl, + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=postgres/O=kubedb" +``` + +- Create a secret using the certificate files we have just generated, + +```bash +kubectl create secret tls pg-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/pg-ca created +``` + +Now, we are going to create an `Issuer` using the `pg-ca` secret that holds the ca-certificate we have just created.
Below is the YAML of the `Issuer` cr that we are going to create, + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: pg-issuer + namespace: demo +spec: + ca: + secretName: pg-ca +``` + +Let's create the `Issuer` cr we have shown above, + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-issuer.yaml +issuer.cert-manager.io/pg-issuer created +``` + + +### Create Auth Secret + +```yaml +apiVersion: v1 +data: + password: cGFzcw== + username: cG9zdGdyZXM= +kind: Secret +metadata: + name: pg-singapore-auth + namespace: demo +type: kubernetes.io/basic-auth +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-singapore-auth.yaml +secret/pg-singapore-auth created +``` + +## Deploy PostgreSQL with TLS/SSL configuration +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg-singapore + namespace: demo +spec: + authSecret: + name: pg-singapore-auth + allowedSchemas: + namespaces: + from: Same + autoOps: {} + clientAuthMode: md5 + replicas: 3 + sslMode: verify-ca + standbyMode: Hot + streamingMode: Synchronous + tls: + issuerRef: + apiGroup: cert-manager.io + name: pg-issuer + kind: Issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: linode-block-storage + storageType: Durable + terminationPolicy: WipeOut + version: "15.5" +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-singapore.yaml +postgres.kubedb.com/pg-singapore created +``` +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. + +```bash +$ kubectl get pg -n demo +NAME VERSION STATUS AGE +pg-singapore 15.3 Ready 22h +``` + +# Exposing to outside world +For now, we will expose our PostgreSQL instance to the outside world with an ingress +```bash +$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx +$ helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \ + --namespace demo --create-namespace \ + --set tcp.5432="demo/pg-singapore:5432" +``` +Let's apply the ingress yaml that refers to the `pg-singapore` service + +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: pg-singapore + namespace: demo +spec: + ingressClassName: nginx + rules: + - host: pg-singapore.something.org + http: + paths: + - backend: + service: + name: pg-singapore + port: + number: 5432 + path: / + pathType: Prefix +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-ingress.yaml +ingress.networking.k8s.io/pg-singapore created +$ kubectl get ingress -n demo +NAME CLASS HOSTS ADDRESS PORTS AGE +pg-singapore nginx pg-singapore.something.org 172.104.37.147 80 22h +``` + +# Prepare for Remote Replica +We will use the [kubedb_plugin](/docs/v2024.1.31/setup/README) to generate the configuration for the remote replica.
It will create the appbinding and the necessary secrets to connect with the source server: +```bash +$ kubectl dba remote-config postgres -n demo pg-singapore -uremote -ppass -d 172.104.37.147 -y +home/mehedi/go/src/kubedb.dev/yamls/postgres/pg-singapore-remote-config.yaml +``` + +# Create Remote Replica +We have prepared another cluster in the London region for replicating across clusters. Follow the installation instructions [above](/docs/v2024.1.31/README). + +### Create sourceRef + +We will apply the config generated by the KubeDB plugin to create the source refs and secrets for it: +```bash +$ kubectl apply -f /home/mehedi/go/src/kubedb.dev/yamls/pg-singapore-remote-config.yaml +secret/pg-singapore-remote-replica-auth created +secret/pg-singapore-client-cert-remote created +appbinding.appcatalog.appscode.com/pg-singapore created +``` + +### Create remote replica auth +We will need to use the same auth secrets for the remote replicas as well, since operations like clone also replicate the auth secrets from the source server + +```yaml +apiVersion: v1 +data: + password: cGFzcw== + username: cG9zdGdyZXM= +kind: Secret +metadata: + name: pg-london-auth + namespace: demo +type: kubernetes.io/basic-auth +``` + +```bash +kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-london-auth.yaml +``` + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg-london + namespace: demo +spec: + remoteReplica: + sourceRef: + name: pg-singapore + namespace: demo + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + disableWriteCheck: true + authSecret: + name: pg-london-auth + clientAuthMode: md5 + standbyMode: Hot + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: linode-block-storage + storageType: Durable + terminationPolicy: WipeOut + version: "15.5" +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/remote-replica/yamls/pg-london.yaml +postgres.kubedb.com/pg-london created +``` + +Now KubeDB will provision a remote replica from the source Postgres instance. Let's check out the StatefulSet, PVC, PV, and services associated with it. +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created.
Run the following command to see the modified `PostgreSQL` object: +```bash +$ kubectl get pg -n demo +NAME VERSION STATUS AGE +pg-london 15.3 Ready 7m17s +``` + +## Validate Remote Replica + +At this point, we want to validate the replication. We can see that `pg-london-0` is connected as an asynchronous replica. + +### Validate from source + +```bash +$ kubectl exec -it -n demo pg-singapore-0 -c postgres -- psql -c "select * from pg_stat_replication"; + pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state | reply_time +--------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+-----------+-----------+-----------+------------+-----------------+-----------------+-----------------+---------------+------------+------------------------------- + 121 | 10 | postgres | pg-singapore-1 | 10.2.1.13 | | 37990 | 2023-10-12 06:53:50.402925+00 | | streaming | 0/89758A8 | 0/89758A8 | 0/89758A8 | 0/89758A8 | 00:00:00.000745 | 00:00:00.00484 | 00:00:00.004848 | 1 | quorum | 2023-10-13 05:43:53.817575+00 + 209 | 10 | postgres | pg-singapore-2 | 10.2.0.11 | | 51270 | 2023-10-12 06:54:15.759067+00 | | streaming | 0/89758A8 | 0/89758A8 | 0/89758A8 | 0/89758A8 | 00:00:00.000581 | 00:00:00.009797 | 00:00:00.009955 | 1 | quorum | 2023-10-13 05:43:53.823562+00 + 205338 | 16394 | remote | pg-london-0 | 10.2.1.10 | | 34850 | 2023-10-12 20:15:07.751715+00 | | streaming | 0/89758A8 | 0/89758A8 | 0/89758A8 | 0/89758A8 | 00:00:00.158877 | 00:00:00.163418 | 00:00:00.163425 | 0 | async | 2023-10-13 05:43:53.900061+00 +(3 rows) +``` + +### Validate from remote replica + +```bash +$ kubectl exec -it -n demo pg-london-0 -c postgres -- psql -c "select * from pg_stat_wal_receiver"; + pid | status | receive_start_lsn | receive_start_tli | written_lsn | flushed_lsn | received_tli | last_msg_send_time | last_msg_receipt_time | latest_end_lsn | latest_end_time | slot_name | sender_host | sender_port | conninfo +------+-----------+-------------------+-------------------+-------------+-------------+--------------+-------------------------------+-------------------------------+----------------+-------------------------------+-----------+----------------+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 4813 | streaming | 0/8000000 | 1 | 0/8DC01E0 | 0/8DC01E0 | 1 | 2023-10-13 05:54:33.812544+00 | 2023-10-13 05:54:33.893159+00 | 0/8DC01E0 | 2023-10-13 05:54:33.812544+pplication_name=walreceiver sslmode=verify-full sslcompression=0 sslcert=/tls/certs/remote/client.crt sslkey=/tls/certs/remote/client.key sslrootcert=/tls/certs/remote/ca.crt sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any +(1 row) +``` + +## Validate data replication + +Let's create a database and insert some data: + +```bash +$ kubectl exec -it -n demo pg-singapore-0 -c postgres -- psql -c "create database hi"; +CREATE DATABASE + +$ kubectl exec -it -n demo pg-singapore-0 -c postgres -- psql -c "create table tab_1 ( a int); insert into tab_1 values(generate_series(1,5))"; +CREATE TABLE +INSERT 0 5 +``` + +### Validate data on primary + +```bash +$ kubectl exec -it -n demo pg-singapore-0 -c postgres -- psql -c "select * from tab_1"; + a +--- + 1 + 2 + 3 + 4 + 5 +(5 rows) +``` + +### Validate data on remote replica + +```bash +$ kubectl exec -it -n demo pg-london-0 -c postgres -- psql -c "select * from tab_1"; + a +--- + 1 + 2 + 3 + 4 + 5 +(5 rows) +``` + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo pg/pg-singapore +kubectl delete -n demo pg/pg-london +kubectl delete secret -n demo pg-singapore-auth +kubectl delete secret -n demo pg-london-auth +kubectl delete ingress -n demo pg-singapore +kubectl delete ns demo +``` + +## Next Steps + +- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash. +- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source). +- Learn about [custom PostgresVersions](/docs/v2024.1.31/guides/postgres/custom-versions/setup). +- Want to setup PostgreSQL cluster? Check how to [configure Highly Available PostgreSQL Cluster](/docs/v2024.1.31/guides/postgres/clustering/ha_cluster) +- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus). +- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Detail concepts of [Postgres object](/docs/v2024.1.31/guides/postgres/concepts/postgres). +- Use [private Docker registry](/docs/v2024.1.31/guides/postgres/private-registry/using-private-registry) to deploy PostgreSQL with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-ingres.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-ingres.yaml new file mode 100644 index 0000000000..9b3268df5a --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-ingres.yaml @@ -0,0 +1,18 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: pg-singapore + namespace: demo +spec: + ingressClassName: nginx + rules: + - host: pg-singapore.something.org + http: + paths: + - backend: + service: + name: pg-singapore + port: + number: 5432 + path: / + pathType: Prefix \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-issuer.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-issuer.yaml new file mode 100644 index 0000000000..ebd1ea09d2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: pg-issuer + namespace: demo +spec: + ca: + secretName: pg-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london-auth.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london-auth.yaml new file mode 100644 index 0000000000..03eab99c7d --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london-auth.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: cGFzcw== + username: cG9zdGdyZXM= +kind: Secret +metadata: + name: pg-london-auth + namespace: demo +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london.yaml new file mode 100644 index 0000000000..9bdc1c3ed8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-london.yaml @@ -0,0 +1,30 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg-london + namespace: demo +spec: + remoteReplica: + sourceRef: + name: pg-singapore + namespace: demo + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + disableWriteCheck: true + authSecret: + name: pg-london-auth + clientAuthMode: md5 + standbyMode: Hot + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: linode-block-storage + storageType: Durable + terminationPolicy: WipeOut + version: "15.5" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore-auth.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore-auth.yaml new file mode 100644 index 0000000000..5126729239 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore-auth.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: cGFzcw== + username: cG9zdGdyZXM= +kind: Secret +metadata: + name: pg-singapore-auth + namespace: demo +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore.yaml b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore.yaml new file mode 100644 index 0000000000..ae57b08ffe --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/remote-replica/yamls/pg-singapore.yaml @@ -0,0 +1,41 @@ +apiVersion: kubedb.com/v1alpha2 +kind: 
Postgres +metadata: + name: pg-singapore + namespace: demo +spec: + authSecret: + name: pg-singapore-auth + allowedSchemas: + namespaces: + from: Same + autoOps: {} + clientAuthMode: md5 + replicas: 3 + sslMode: verify-ca + standbyMode: Hot + streamingMode: Synchronous + tls: + issuerRef: + apiGroup: cert-manager.io + name: pg-issuer + kind: Issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: linode-block-storage + storageType: Durable + terminationPolicy: WipeOut + version: "15.5" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/_index.md b/content/docs/v2024.1.31/guides/postgres/scaling/_index.md new file mode 100644 index 0000000000..f2bec41fbd --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Scaling Postgres +menu: + docs_v2024.1.31: + identifier: guides-postgres-scaling + name: Scaling Postgres + parent: pg-postgres-guides + weight: 43 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/_index.md new file mode 100644 index 0000000000..2c4ee99b0c --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Horizontal Scaling +menu: + docs_v2024.1.31: + identifier: guides-postgres-scaling-horizontal + name: Horizontal Scaling + parent: guides-postgres-scaling + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/images/pg-horizontal-scaling.png b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/images/pg-horizontal-scaling.png new file mode 100644 index 0000000000..6557a332ed Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/images/pg-horizontal-scaling.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/index.md b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/index.md new file mode 100644 index 0000000000..f0722e409c --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/index.md @@ -0,0 +1,67 @@ +--- +title: Postgres Horizontal Scaling Overview +menu: + docs_v2024.1.31: + identifier: guides-postgres-scaling-horizontal-overview + name: Overview + parent: guides-postgres-scaling-horizontal + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). 
+
+# Horizontal Scaling Overview
+
+This guide will give you an overview of how `KubeDB` Ops Manager scales up/down the number of members of a `Postgres` instance.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how `KubeDB` Ops Manager scales up the number of members of a `Postgres` cluster. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Horizontal scaling Flow" src="images/pg-horizontal-scaling.png">
+  <figcaption align="center">Fig: Horizontal scaling process of Postgres</figcaption>
+</figure>
+
+The horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `Postgres` cr.
+
+2. `KubeDB` community operator watches for the `Postgres` cr.
+
+3. When it finds one, it creates a `StatefulSet` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to scale the cluster, the user creates a `PostgresOpsRequest` cr with the desired number of members after scaling.
+
+5. `KubeDB` Ops Manager watches for `PostgresOpsRequest`.
+
+6. When it finds one, it halts the `Postgres` object so that the `KubeDB` community operator doesn't perform any operation on the `Postgres` during the scaling process.
+
+7. Then `KubeDB` Ops Manager will add nodes in case of scale-up or remove nodes in case of scale-down.
+
+8. Then the `KubeDB` Ops Manager will scale the StatefulSet replicas to reach the expected number of replicas for the cluster.
+
+9. After successful scaling of the StatefulSet's replicas, the `KubeDB` Ops Manager updates the `spec.replicas` field of the `Postgres` object to reflect the updated cluster state.
+
+10. After successful scaling of the `Postgres` replicas, the `KubeDB` Ops Manager resumes the `Postgres` object so that the `KubeDB` community operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on scaling a Postgres cluster using horizontal scaling.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/index.md b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/index.md
new file mode 100644
index 0000000000..40fcad69ed
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/index.md
@@ -0,0 +1,488 @@
+---
+title: Horizontal Scaling Postgres Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-scaling-horizontal-scale-horizontally
+    name: Scale Horizontally
+    parent: guides-postgres-scaling-horizontal
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Horizontal Scale Postgres Cluster
+
+This guide will show you how to use `KubeDB` Ops Manager to increase/decrease the number of members of a `Postgres` cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls) directory of [kubedb/doc](https://github.com/kubedb/docs) repository. + +### Apply Horizontal Scaling on Postgres Cluster + +Here, we are going to deploy a `Postgres` Cluster using a supported version by `KubeDB` operator. Then we are going to apply horizontal scaling on it. + +#### Prepare Cluster + +At first, we are going to deploy a Cluster server with 3 members. Then, we are going to add two additional members through horizontal scaling. Finally, we will remove 1 member from the cluster again via horizontal scaling. + +**Find supported Postgres Version:** + +When you have installed `KubeDB`, it has created `PostgresVersion` CR for all supported `Postgres` versions. Let's check the supported Postgres versions, + +```bash +$ kubectl get postgresversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +10.16 10.16 Official postgres:10.16-alpine 63s +10.16-debian 10.16 Official postgres:10.16 63s +10.19 10.19 Official postgres:10.19-bullseye 63s +10.19-bullseye 10.19 Official postgres:10.19-bullseye 63s +11.11 11.11 Official postgres:11.11-alpine 63s +11.11-debian 11.11 Official postgres:11.11 63s +11.14 11.14 Official postgres:11.14-alpine 63s +11.14-bullseye 11.14 Official postgres:11.14-bullseye 63s +11.14-bullseye-postgis 11.14 PostGIS postgis/postgis:11-3.1 63s +12.6 12.6 Official postgres:12.6-alpine 63s +12.6-debian 12.6 Official postgres:12.6 63s +12.9 12.9 Official postgres:12.9-alpine 63s +12.9-bullseye 12.9 Official postgres:12.9-bullseye 63s +12.9-bullseye-postgis 12.9 PostGIS postgis/postgis:12-3.1 63s +13.2 13.2 Official postgres:13.2-alpine 63s +13.2-debian 13.2 Official postgres:13.2 63s +13.5 13.5 Official postgres:13.5-alpine 63s +13.5-bullseye 13.5 Official postgres:13.5-bullseye 63s +13.5-bullseye-postgis 13.5 PostGIS postgis/postgis:13-3.1 63s +14.1 14.1 Official postgres:14.1-alpine 63s +14.1-bullseye 14.1 Official postgres:14.1-bullseye 63s +14.1-bullseye-postgis 14.1 PostGIS postgis/postgis:14-3.1 63s +9.6.21 9.6.21 Official postgres:9.6.21-alpine 63s +9.6.21-debian 9.6.21 Official postgres:9.6.21 63s +9.6.24 9.6.24 Official postgres:9.6.24-alpine 63s +9.6.24-bullseye 9.6.24 Official postgres:9.6.24-bullseye 63s +timescaledb-2.1.0-pg11 11.11 TimescaleDB timescale/timescaledb:2.1.0-pg11-oss 63s +timescaledb-2.1.0-pg12 12.6 TimescaleDB timescale/timescaledb:2.1.0-pg12-oss 63s +timescaledb-2.1.0-pg13 13.2 TimescaleDB timescale/timescaledb:2.1.0-pg13-oss 63s +timescaledb-2.5.0-pg14.1 14.1 TimescaleDB timescale/timescaledb:2.5.0-pg14-oss 63s +``` + +The version above that does not show `DEPRECATED` `true` is supported by `KubeDB` for `Postgres`. You can use any non-deprecated version. Here, we are going to create a Postgres Cluster using `Postgres` `13.2`. + +**Deploy Postgres Cluster:** + +In this section, we are going to deploy a Postgres Cluster with 3 members. Then, in the next section we will scale-up the cluster using horizontal scaling. 
Below is the YAML of the `Postgres` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `Postgres` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/scaling/horizontal-scaling/scale-horizontally/postgres.yaml +postgres.kubedb.com/pg created +``` + +**Wait for the cluster to be ready:** + +`KubeDB` operator watches for `Postgres` objects using Kubernetes API. When a `Postgres` object is created, `KubeDB` operator will create a new StatefulSet, Services, and Secrets, etc. A secret called `pg-auth` (format: {postgres-object-name}-auth) will be created storing the password for postgres superuser. +Now, watch `Postgres` is going to `Running` state and also watch `StatefulSet` and its pod is created and going to `Running` state, + +```bash +$ watch -n 3 kubectl get postgres -n demo pg +Every 3.0s: kubectl get postgres -n demo pg emon-r7: Thu Dec 2 15:31:16 2021 + +NAME VERSION STATUS AGE +pg 13.2 Ready 4h40m + + +$ watch -n 3 kubectl get sts -n demo pg +Every 3.0s: kubectl get sts -n demo pg emon-r7: Thu Dec 2 15:31:38 2021 + +NAME READY AGE +pg 3/3 4h41m + + + +$ watch -n 3 kubectl get pods -n demo +Every 3.0s: kubectl get pod -n demo emon-r7: Thu Dec 2 15:33:24 2021 + +NAME READY STATUS RESTARTS AGE +pg-0 2/2 Running 0 4h25m +pg-1 2/2 Running 0 4h26m +pg-2 2/2 Running 0 4h26m + +``` + +Let's verify that the StatefulSet's pods have joined into cluster, + +```bash +$ kubectl get secrets -n demo pg-auth -o jsonpath='{.data.\username}' | base64 -d +postgres + +$ kubectl get secrets -n demo pg-auth -o jsonpath='{.data.\password}' | base64 -d +b3b5838EhjwsiuFU + +``` + +So, we can see that our cluster has 3 members. Now, we are ready to apply the horizontal scale to this Postgres cluster. + +#### Scale Up + +Here, we are going to add 2 replicas in our Cluster using horizontal scaling. + +**Create PostgresOpsRequest:** + +To scale up your cluster, you have to create a `PostgresOpsRequest` cr with your desired number of replicas after scaling. Below is the YAML of the `PostgresOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-horizontal + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: pg + horizontalScaling: + replicas: 5 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing operation on `pg` `Postgres` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.replicas` specifies the expected number of replicas after the scaling. + +Let's create the `PostgresOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-up.yaml +postgresopsrequest.ops.kubedb.com/pg-scale-up created +``` + +**Verify Scale-Up Succeeded:** + +If everything goes well, `KubeDB` Ops Manager will scale up the StatefulSet's `Pod`. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `Postgres` object. 
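+
+As a quick, illustrative sanity check once the ops request completes, you can also read the replica count straight from the objects themselves. The commands below are a sketch using standard `kubectl` only; the `5` shown assumes the scale-up to 5 replicas has already finished:
+
+```bash
+# Illustrative check: the Postgres object and its StatefulSet should both report 5 replicas
+$ kubectl get postgres -n demo pg -o jsonpath='{.spec.replicas}'
+5
+$ kubectl get sts -n demo pg -o jsonpath='{.spec.replicas}'
+5
+```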
+ +First, we will wait for `PostgresOpsRequest` to be successful. Run the following command to watch `PostgresOpsRequest` cr, + +```bash +$ watch kubectl get postgresopsrequest -n demo pg-scale-up +Every 2.0s: kubectl get postgresopsrequest -n demo pg-scale-up emon-r7: Thu Dec 2 17:57:36 2021 + +NAME TYPE STATUS AGE +pg-scale-up HorizontalScaling Successful 8m23s + +``` + +You can see from the above output that the `PostgresOpsRequest` has succeeded. If we describe the `PostgresOpsRequest`, we will see that the `Postgres` cluster is scaled up. + +```bash +kubectl describe postgresopsrequest -n demo pg-scale-up +Name: pg-scale-up +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: PostgresOpsRequest +Metadata: + Creation Timestamp: 2021-12-02T11:49:13Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:replicas: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-12-02T11:49:13Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-12-02T11:49:13Z + Resource Version: 49610 + UID: cc62fe84-5c13-4c77-b130-f748c0beff27 +Spec: + Database Ref: + Name: pg + Horizontal Scaling: + Replicas: 5 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-12-02T11:49:13Z + Message: Postgres ops request is horizontally scaling database + Observed Generation: 1 + Reason: Progressing + Status: True + Type: Progressing + Last Transition Time: 2021-12-02T11:50:38Z + Message: Successfully Horizontally Scaled Up + Observed Generation: 1 + Reason: ScalingUp + Status: True + Type: ScalingUp + Last Transition Time: 2021-12-02T11:50:38Z + Message: Successfully Horizontally Scaled Postgres + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 10m KubeDB Enterprise Operator Pausing Postgres demo/pg + Normal PauseDatabase 10m KubeDB Enterprise Operator Successfully paused Postgres demo/pg + Normal ScalingUp 9m17s KubeDB Enterprise Operator Successfully Horizontally Scaled Up + Normal ResumeDatabase 9m17s KubeDB Enterprise Operator Resuming PostgreSQL demo/pg + Normal ResumeDatabase 9m17s KubeDB Enterprise Operator Successfully resumed PostgreSQL demo/pg + Normal Successful 9m17s KubeDB Enterprise Operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify whether the number of members has increased to meet up the desired state. So let's check the new pods logs to see if they have joined in the cluster as new replica. + +```bash +$ kubectl logs -n demo pg-4 -c postgres -f +waiting for the role to be decided ... +running the initial script ... +Running as Replica +Attempting pg_isready on primary +Attempting query on primary +take base basebackup... 
+2021-12-02 11:50:11.062 GMT [17] LOG: skipping missing configuration file "/etc/config/user.conf" +2021-12-02 11:50:11.062 GMT [17] LOG: skipping missing configuration file "/etc/config/user.conf" +2021-12-02 11:50:11.075 UTC [17] LOG: starting PostgreSQL 13.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.2.1_pre1) 10.2.1 20201203, 64-bit +2021-12-02 11:50:11.075 UTC [17] LOG: listening on IPv4 address "0.0.0.0", port 5432 +2021-12-02 11:50:11.075 UTC [17] LOG: listening on IPv6 address "::", port 5432 +2021-12-02 11:50:11.081 UTC [17] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" +2021-12-02 11:50:11.088 UTC [30] LOG: database system was interrupted; last known up at 2021-12-02 11:50:10 UTC +2021-12-02 11:50:11.148 UTC [30] LOG: entering standby mode +2021-12-02 11:50:11.154 UTC [30] LOG: redo starts at 0/8000028 +2021-12-02 11:50:11.157 UTC [30] LOG: consistent recovery state reached at 0/8000100 +2021-12-02 11:50:11.157 UTC [17] LOG: database system is ready to accept read only connections +2021-12-02 11:50:11.162 UTC [35] LOG: started streaming WAL from primary at 0/9000000 on timeline 2 + +``` + +You can see above that this pod is streaming wal from primary as replica. It verifies that we have successfully scaled up. + +#### Scale Down + +Here, we are going to remove 1 replica from our cluster using horizontal scaling. + +**Create PostgresOpsRequest:** + +To scale down your cluster, you have to create a `PostgresOpsRequest` cr with your desired number of members after scaling. Below is the YAML of the `PostgresOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: pg + horizontalScaling: + replicas: 4 +``` + +Let's create the `PostgresOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-down.yaml +postgresopsrequest.ops.kubedb.com/pg-scale-down created +``` + +**Verify Scale-down Succeeded:** + +If everything goes well, `KubeDB` Ops Manager will scale down the StatefulSet's `Pod`. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `Postgres` object. + +Now, we will wait for `PostgresOpsRequest` to be successful. Run the following command to watch `PostgresOpsRequest` cr, + +```bash +$ watch kubectl get postgresopsrequest -n demo pg-scale-down +Every 2.0s: kubectl get postgresopsrequest -n demo pg-scale-down emon-r7: Thu Dec 2 18:15:37 2021 + +NAME TYPE STATUS AGE +pg-scale-down HorizontalScaling Successful 115s + + +``` + +You can see from the above output that the `PostgresOpsRequest` has succeeded. If we describe the `PostgresOpsRequest`, we shall see that the `Postgres` cluster is scaled down. 
+ +```bash +$ kubectl describe postgresopsrequest -n demo pg-scale-down +Name: pg-scale-down +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: PostgresOpsRequest +Metadata: + Creation Timestamp: 2021-12-02T12:13:42Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:replicas: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-12-02T12:13:42Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-12-02T12:13:42Z + Resource Version: 52120 + UID: c69ea56e-e21c-4b1e-8a80-76f1b74ef2ba +Spec: + Database Ref: + Name: pg + Horizontal Scaling: + Replicas: 4 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-12-02T12:13:42Z + Message: Postgres ops request is horizontally scaling database + Observed Generation: 1 + Reason: Progressing + Status: True + Type: Progressing + Last Transition Time: 2021-12-02T12:14:42Z + Message: Successfully Horizontally Scaled Down + Observed Generation: 1 + Reason: ScalingDown + Status: True + Type: ScalingDown + Last Transition Time: 2021-12-02T12:14:42Z + Message: Successfully Horizontally Scaled Postgres + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m31s KubeDB Enterprise Operator Pausing Postgres demo/pg + Normal PauseDatabase 2m31s KubeDB Enterprise Operator Successfully paused Postgres demo/pg + Normal ScalingDown 91s KubeDB Enterprise Operator Successfully Horizontally Scaled Down + Normal ResumeDatabase 91s KubeDB Enterprise Operator Resuming PostgreSQL demo/pg + Normal ResumeDatabase 91s KubeDB Enterprise Operator Successfully resumed PostgreSQL demo/pg + Normal Successful 91s KubeDB Enterprise Operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify whether the number of members has decreased to meet up the desired state, Let's check, the postgres status if it's ready then the scale-down is successful. + +```bash +$ kubectl get postgres -n demo pg +Every 3.0s: kubectl get postgres -n demo pg emon-r7: Thu Dec 2 18:16:39 2021 + +NAME VERSION STATUS AGE +pg 13.2 Ready 7h26m + +``` + +You can see above that our `Postgres` cluster now has a total of 4 members. It verifies that we have successfully scaled down. 
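+
+If you also want to confirm this at the pod level, a label query works too. This is only a sketch; it assumes the standard KubeDB labels used elsewhere in this guide:
+
+```bash
+# Illustrative: after scaling down to 4 replicas, only pods pg-0 through pg-3 should remain
+$ kubectl get pods -n demo -l app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=pg
+```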
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete pg -n demo pg +kubectl delete postgresopsrequest -n demo pg-scale-up +kubectl delete postgresopsrequest -n demo pg-scale-down +``` diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-down.yaml b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-down.yaml new file mode 100644 index 0000000000..25a488efc0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-down.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-down + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: pg + horizontalScaling: + replicas: 4 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-up.yaml b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-up.yaml new file mode 100644 index 0000000000..f618ce2545 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/pg-scale-up.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-up + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: pg + horizontalScaling: + replicas: 5 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/postgres.yaml b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/postgres.yaml new file mode 100644 index 0000000000..7ca7025ecc --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/horizontal-scaling/scale-horizontally/yamls/postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..2968d38936 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Vertical Scaling +menu: + docs_v2024.1.31: + identifier: guides-postgres-scaling-vertical + name: Vertical Scaling + parent: guides-postgres-scaling + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/images/pg-vertical-scaling.png b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/images/pg-vertical-scaling.png new file mode 100644 index 0000000000..9ff1eaee99 Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/images/pg-vertical-scaling.png differ diff --git 
a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/index.md b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/index.md
new file mode 100644
index 0000000000..96e47ba80d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Postgres Vertical Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-scaling-vertical-overview
+    name: Overview
+    parent: guides-postgres-scaling-vertical
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Vertical Scaling Postgres
+
+This guide will give you an overview of how KubeDB Ops Manager updates the resources (for example memory, CPU, etc.) of the `Postgres` database server.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how the `KubeDB` Ops Manager updates the resources of the `Postgres` database server. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Vertical scaling Flow" src="images/pg-vertical-scaling.png">
+  <figcaption align="center">Fig: Vertical scaling process of Postgres</figcaption>
+</figure>
+
+The vertical scaling process consists of the following steps:
+
+1. At first, a user creates a `Postgres` cr.
+
+2. `KubeDB` community operator watches for the `Postgres` cr.
+
+3. When it finds one, it creates a `StatefulSet` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `Postgres` database, the user creates a `PostgresOpsRequest` cr.
+
+5. `KubeDB` Ops Manager watches for `PostgresOpsRequest`.
+
+6. When it finds one, it halts the `Postgres` object so that the `KubeDB` Provisioner operator doesn't perform any operation on the `Postgres` during the scaling process.
+
+7. Then the `KubeDB` Ops Manager operator will update the resources of the StatefulSet replicas to reach the desired state.
+
+8. After successfully updating the resources of the StatefulSet's replicas, the `KubeDB` Ops Manager updates the `Postgres` object resources to reflect the updated state.
+
+9. After successfully updating the `Postgres` resources, the `KubeDB` Ops Manager resumes the `Postgres` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on updating the resources of a Postgres database using a vertical scaling operation.
diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/index.md b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/index.md
new file mode 100644
index 0000000000..ad0ee9232c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/index.md
@@ -0,0 +1,364 @@
+---
+title: Vertical Scaling Postgres
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-scaling-vertical-scale-vertically
+    name: Scale Vertically
+    parent: guides-postgres-scaling-vertical
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Vertical Scale Postgres Instance
+
+This guide will show you how to use `KubeDB-Ops-Manager` to update the resources of a Postgres instance.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB-Provisioner` and `KubeDB-Ops-Manager` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls) directory of [kubedb/doc](https://github.com/kubedb/docs) repository. + +### Apply Vertical Scaling on Postgres Instance + +Here, we are going to deploy a `Postgres` instance using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it. + +**Find supported Postgres Version:** + +When you have installed `KubeDB`, it has created `PostgresVersion` CR for all supported `Postgres` versions. Let's check the supported Postgres versions, + +```bash +$ kubectl get postgresversion +NAME VERSION DISTRIBUTION DB_IMAGE DEPRECATED AGE +10.16 10.16 Official postgres:10.16-alpine 63s +10.16-debian 10.16 Official postgres:10.16 63s +10.19 10.19 Official postgres:10.19-bullseye 63s +10.19-bullseye 10.19 Official postgres:10.19-bullseye 63s +11.11 11.11 Official postgres:11.11-alpine 63s +11.11-debian 11.11 Official postgres:11.11 63s +11.14 11.14 Official postgres:11.14-alpine 63s +11.14-bullseye 11.14 Official postgres:11.14-bullseye 63s +11.14-bullseye-postgis 11.14 PostGIS postgis/postgis:11-3.1 63s +12.6 12.6 Official postgres:12.6-alpine 63s +12.6-debian 12.6 Official postgres:12.6 63s +12.9 12.9 Official postgres:12.9-alpine 63s +12.9-bullseye 12.9 Official postgres:12.9-bullseye 63s +12.9-bullseye-postgis 12.9 PostGIS postgis/postgis:12-3.1 63s +13.2 13.2 Official postgres:13.2-alpine 63s +13.2-debian 13.2 Official postgres:13.2 63s +13.5 13.5 Official postgres:13.5-alpine 63s +13.5-bullseye 13.5 Official postgres:13.5-bullseye 63s +13.5-bullseye-postgis 13.5 PostGIS postgis/postgis:13-3.1 63s +14.1 14.1 Official postgres:14.1-alpine 63s +14.1-bullseye 14.1 Official postgres:14.1-bullseye 63s +14.1-bullseye-postgis 14.1 PostGIS postgis/postgis:14-3.1 63s +9.6.21 9.6.21 Official postgres:9.6.21-alpine 63s +9.6.21-debian 9.6.21 Official postgres:9.6.21 63s +9.6.24 9.6.24 Official postgres:9.6.24-alpine 63s +9.6.24-bullseye 9.6.24 Official postgres:9.6.24-bullseye 63s +timescaledb-2.1.0-pg11 11.11 TimescaleDB timescale/timescaledb:2.1.0-pg11-oss 63s +timescaledb-2.1.0-pg12 12.6 TimescaleDB timescale/timescaledb:2.1.0-pg12-oss 63s +timescaledb-2.1.0-pg13 13.2 TimescaleDB timescale/timescaledb:2.1.0-pg13-oss 63s +timescaledb-2.5.0-pg14.1 14.1 TimescaleDB timescale/timescaledb:2.5.0-pg14-oss 63s +``` + +The version above that does not show `DEPRECATED` `true` is supported by `KubeDB` for `Postgres`. You can use any non-deprecated version. Here, we are going to create a postgres using non-deprecated `Postgres` version `13.2`. + +**Deploy Postgres:** + +In this section, we are going to deploy a Postgres instance. Then, in the next section, we will update the resources of the database server using vertical scaling. 
Below is the YAML of the `Postgres` cr that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Let's create the `Postgres` cr we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/postgres.yaml +postgres.kubedb.com/pg created +``` + +**Check postgres Ready to Scale:** + +`KubeDB-Provisioner` watches for `Postgres` objects using Kubernetes API. When a `Postgres` object is created, `KubeDB-Provisioner` will create a new StatefulSet, Services, and Secrets, etc. +Now, watch `Postgres` is going to be in `Running` state and also watch `StatefulSet` and its pod is created and going to be in `Running` state, + +```bash +$ watch -n 3 kubectl get postgres -n demo pg +Every 3.0s: kubectl get postgres -n demo pg emon-r7: Thu Dec 2 10:53:54 2021 + +NAME VERSION STATUS AGE +pg 13.2 Ready 3m16s + +$ watch -n 3 kubectl get sts -n demo pg +Every 3.0s: kubectl get sts -n demo pg emon-r7: Thu Dec 2 10:54:31 2021 + +NAME READY AGE +pg 3/3 3m54s + +$ watch -n 3 kubectl get pod -n demo +Every 3.0s: kubectl get pod -n demo emon-r7: Thu Dec 2 10:55:29 2021 + +NAME READY STATUS RESTARTS AGE +pg-0 2/2 Running 0 4m51s +pg-1 2/2 Running 0 3m50s +pg-2 2/2 Running 0 3m46s + +``` + +Let's check the `pg-0` Pod's postgres container's resources, As there are two containers, And Postgres container is the first container So it's index will be 0. + +```bash +$ kubectl get pod -n demo pg-0 -o json | jq '.spec.containers[0].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} + +``` + +Now, We are ready to apply a vertical scale on this postgres database. + +#### Vertical Scaling + +Here, we are going to update the resources of the postgres to meet up with the desired resources after scaling. + +**Create PostgresOpsRequest:** + +In order to update the resources of your database, you have to create a `PostgresOpsRequest` cr with your desired resources after scaling. Below is the YAML of the `PostgresOpsRequest` cr that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: pg + verticalScaling: + postgres: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing operation on `pg` `Postgres` database. +- `spec.type` specifies that we are performing `VerticalScaling` on our database. +- `spec.VerticalScaling.postgres` specifies the expected postgres container resources after scaling. + +Let's create the `PostgresOpsRequest` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/pg-vertical-scaling.yaml +postgresopsrequest.ops.kubedb.com/pg-scale-vertical created +``` + +**Verify Postgres resources updated successfully:** + +If everything goes well, `KubeDB-Ops-Manager` will update the resources of the StatefulSet's `Pod` containers. 
After a successful scaling process is done, the `KubeDB-Ops-Manager` updates the resources of the `Postgres` object. + +First, we will wait for `PostgresOpsRequest` to be successful. Run the following command to watch `PostgresOpsRequest` cr, + +```bash +$ watch kubectl get postgresopsrequest -n demo pg-scale-vertical + +Every 2.0s: kubectl get postgresopsrequest -n demo pg-scale-ve... emon-r7: Thu Dec 2 11:09:49 2021 + +NAME TYPE STATUS AGE +pg-scale-vertical VerticalScaling Successful 3m42s + +``` + +We can see from the above output that the `PostgresOpsRequest` has succeeded. If we describe the `PostgresOpsRequest`, we will see that the postgres resources are updated. + +```bash +$ kubectl describe postgresopsrequest -n demo pg-scale-vertical +Name: pg-scale-vertical +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: PostgresOpsRequest +Metadata: + Creation Timestamp: 2021-12-02T05:06:07Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:verticalScaling: + .: + f:postgres: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-12-02T05:06:07Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-12-02T05:06:07Z + Resource Version: 8452 + UID: 92d1e69f-c99a-4d0b-b8bf-e904e1336083 +Spec: + Database Ref: + Name: pg + Type: VerticalScaling + Vertical Scaling: + Postgres: + Limits: + Cpu: 0.7 + Memory: 1200Mi + Requests: + Cpu: 0.7 + Memory: 1200Mi +Status: + Conditions: + Last Transition Time: 2021-12-02T05:06:07Z + Message: Postgres ops request is vertically scaling database + Observed Generation: 1 + Reason: Progressing + Status: True + Type: Progressing + Last Transition Time: 2021-12-02T05:06:07Z + Message: Successfully updated statefulsets resources + Observed Generation: 1 + Reason: UpdateStatefulSetResources + Status: True + Type: UpdateStatefulSetResources + Last Transition Time: 2021-12-02T05:08:02Z + Message: SuccessfullyPerformedVerticalScaling + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2021-12-02T05:08:02Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m17s KubeDB Enterprise Operator Pausing Postgres demo/pg + Normal PauseDatabase 4m17s KubeDB Enterprise Operator Successfully paused Postgres demo/pg + Normal VerticalScaling 2m22s KubeDB Enterprise Operator SuccessfullyPerformedVerticalScaling + Normal ResumeDatabase 2m22s KubeDB Enterprise Operator Resuming PostgreSQL demo/pg + Normal ResumeDatabase 2m22s KubeDB Enterprise Operator Successfully resumed PostgreSQL demo/pg + Normal Successful 2m22s KubeDB Enterprise Operator Successfully Vertically Scaled Database + +``` + +Now, we are going to verify whether the resources of the postgres instance has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo pg-0 -o json | jq '.spec.containers[0].resources' +{ + "limits": { + 
"cpu": "700m", + "memory": "1200Mi" + }, + "requests": { + "cpu": "700m", + "memory": "1200Mi" + } +} + +``` + +The above output verifies that we have successfully scaled up the resources of the Postgres. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete postgres -n demo pg +kubectl delete postgresopsrequest -n demo pg-scale-vertical +``` diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/pg-vertical-scaling.yaml b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/pg-vertical-scaling.yaml new file mode 100644 index 0000000000..3a8c33d923 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/pg-vertical-scaling.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-scale-vertical + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: pg + verticalScaling: + postgres: + resources: + requests: + memory: "1200Mi" + cpu: "0.7" + limits: + memory: "1200Mi" + cpu: "0.7" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/postgres.yaml b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/postgres.yaml new file mode 100644 index 0000000000..7ca7025ecc --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/scaling/vertical-scaling/scale-vertically/yamls/postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/synchronous/index.md b/content/docs/v2024.1.31/guides/postgres/synchronous/index.md new file mode 100644 index 0000000000..2c4a7886e5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/synchronous/index.md @@ -0,0 +1,150 @@ +--- +title: Synchronous Replication +menu: + docs_v2024.1.31: + identifier: guides-postgres-synchronous + name: Synchronous Replication Postgres + parent: pg-postgres-guides + weight: 42 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Run as Synchronous Replication Cluster + +KubeDB supports Synchronous Replication for PostgreSQL Cluster. This tutorial will show you how to use KubeDB to run PostgreSQL database with Replication Mode as Synchronous. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+
+## Configure Synchronous Replication Cluster
+
+To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/postgres](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/postgres) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+Now, create a Postgres crd with `spec.streamingMode` set to `Synchronous`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/postgres/synchronous/postgres.yaml
+postgres.kubedb.com/demo-pg created
+```
+
+Below is the YAML for the Postgres crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: demo-pg
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  streamingMode: Synchronous
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+By default, KubeDB creates a synchronous replication setup where one replica Postgres server out of all the replicas is in `sync` with the current `primary`,
+and the others are `potential` candidates to become synchronous with the primary if the current `synchronous` replica fails.
+
+Let's check this in the Postgres cluster we have deployed. Exec into the current primary, in our case the Pod `demo-pg-0`.
+
+```bash
+$ kubectl exec -it -n demo demo-pg-0 -c postgres -- bash
+bash-5.1$ psql
+psql (14.2)
+Type "help" for help.
+
+postgres=# select application_name, client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn, sync_state from pg_stat_replication;
+ application_name | client_addr |   state   | sent_lsn  | write_lsn | flush_lsn | replay_lsn | sync_state
+------------------+-------------+-----------+-----------+-----------+-----------+------------+------------
+ demo-pg-1        | 10.244.0.22 | streaming | 0/5000060 | 0/5000060 | 0/5000060 | 0/5000060  | sync
+ demo-pg-2        | 10.244.0.24 | streaming | 0/5000060 | 0/5000060 | 0/5000060 | 0/5000060  | potential
+
+```
+
+However, users can also configure a synchronous replication cluster where all the replicas are in `sync` with the current primary.
+To do so, users need to provide a custom configuration that sets `synchronous_standby_names`.
+
+For example, if there are 3 nodes in a Postgres cluster where 1 node is the primary and the other 2 act as replicas,
+we can make both replicas synchronous with the current primary by providing `synchronous_standby_names = 'FIRST 2 (*)'` in the custom configuration.
+That's all; you can then see that all the replicas are configured as synchronous replicas.
+
+```bash
+$ kubectl exec -it -n demo demo-pg-0 -c postgres -- bash
+bash-5.1$ psql
+psql (14.2)
+Type "help" for help.
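+
+postgres=# -- Illustrative extra check (not part of the original transcript): confirm the
+postgres=# -- custom 'FIRST 2 (*)' setting is active before inspecting pg_stat_replication.
+postgres=# SHOW synchronous_standby_names;
+ synchronous_standby_names
+----------------------------
+ FIRST 2 (*)
+(1 row)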
+
+postgres=# select application_name, client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn, sync_state from pg_stat_replication;
+ application_name | client_addr |   state   | sent_lsn  | write_lsn | flush_lsn | replay_lsn | sync_state
+------------------+-------------+-----------+-----------+-----------+-----------+------------+------------
+ demo-pg-1        | 10.244.0.22 | streaming | 0/5000060 | 0/5000060 | 0/5000060 | 0/5000060  | sync
+ demo-pg-2        | 10.244.0.24 | streaming | 0/5000060 | 0/5000060 | 0/5000060 | 0/5000060  | sync
+
+```
+
+To learn how to set custom configuration for Postgres, please check [here](/docs/v2024.1.31/guides/postgres/configuration/using-config-file).
+
+### synchronous_commit
+
+`remote_write:` By default, `KubeDB Postgres` uses `remote_write` for `synchronous_commit`. This is the weakest option for replication
+in terms of data preservation, as it only guarantees that the transaction was replicated over the network and saved into the
+standby's WAL (write-ahead log) without an `fsync`. `KubeDB` uses it to ensure minimum latency.
+
+`remote_apply:` The transaction, upon completion, will be both persisted to durable storage and visible
+to users on the standby server(s). Note that this causes much larger commit delays than the other options.
+
+`on:` A quite safe option when dealing with synchronous replication; in this context it might be better referred to as `remote_flush`.
+Commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of
+the transaction and flushed it to disk, although the effects of the transaction will not be immediately visible to users
+on the standby server(s).
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo pg/demo-pg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo pg/demo-pg
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/v2024.1.31/guides/postgres/backup/overview/) PostgreSQL database using Stash.
+- Learn about initializing [PostgreSQL with Script](/docs/v2024.1.31/guides/postgres/initialization/script_source).
+- Monitor your PostgreSQL database with KubeDB using [built-in Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus).
+- Monitor your PostgreSQL database with KubeDB using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/postgres/tls/_index.md b/content/docs/v2024.1.31/guides/postgres/tls/_index.md new file mode 100644 index 0000000000..4e3d6ac3d6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/tls/_index.md @@ -0,0 +1,22 @@ +--- +title: TLS/SSL Encryption +menu: + docs_v2024.1.31: + identifier: guides-postgres-tls + name: TLS/SSL Encryption + parent: pg-postgres-guides + weight: 45 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/tls/configure/index.md b/content/docs/v2024.1.31/guides/postgres/tls/configure/index.md new file mode 100644 index 0000000000..c15ede7d5d --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/tls/configure/index.md @@ -0,0 +1,278 @@ +--- +title: TLS/SSL (Transport Encryption) +menu: + docs_v2024.1.31: + identifier: guides-postgres-tls-configure + name: Postgres TLS/SSL Configuration + parent: guides-postgres-tls + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Configure TLS/SSL in Postgres + +`KubeDB` provides support for TLS/SSL encryption with SSLMode (`allow`, `prefer`, `require`, `verify-ca`, `verify-full`) for `Postgres`. This tutorial will show you how to use `KubeDB` to deploy a `Postgres` database with TLS/SSL configuration. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.4.0 or later to your cluster to manage your SSL/TLS certificates. + +- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/guides/postgres/tls/configure/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/postgres/tls/configure/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +### Deploy Postgres database with TLS/SSL configuration + +As pre-requisite, at first, we are going to create an Issuer/ClusterIssuer. This Issuer/ClusterIssuer is used to create certificates. Then we are going to deploy a Postgres with TLS/SSL configuration. + +### Create Issuer/ClusterIssuer + +Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. 
By following the below steps, we are going to create our desired issuer, + +- Start off by generating our ca-certificates using openssl, + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=postgres/O=kubedb" +``` + +- create a secret using the certificate files we have just generated, + +```bash +$ kubectl create secret tls postgres-ca --cert=ca.crt --key=ca.key --namespace=demo +secret/postgres-ca created +``` + +Now, we are going to create an `Issuer` using the `postgres-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create, + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: postgres-ca-issuer + namespace: demo +spec: + ca: + secretName: postgres-ca +``` + +Let’s create the `Issuer` cr we have shown above, + +```bash +kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/tls/configure/yamls/issuer.yaml +issuer.cert-manager.io/postgres-ca-issuer created +``` + +### Deploy Postgres cluster with TLS/SSL configuration + +Here, our issuer `postgres-ca-issuer` is ready to deploy a `Postgres` Cluster with TLS/SSL configuration. Below is the YAML for Postgres Cluster that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: demo-pg + namespace: demo +spec: + version: "13.13" + replicas: 3 + standbyMode: Hot + sslMode: verify-full + storageType: Durable + tls: + issuerRef: + apiGroup: cert-manager.io + name: postgres-ca-issuer + kind: Issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +Here, + +- `spec.sslMode` specifies the SSL/TLS client connection to the server is required. + +- `spec.tls.issuerRef` refers to the `postgres-ca-issuer` issuer. + +- `spec.tls.certificates` gives you a lot of options to configure so that the certificate will be renewed and kept up to date. +You can found more details from [here](/docs/v2024.1.31/guides/postgres/concepts/postgres#tls) + +**Deploy Postgres Cluster:** + +Let’s create the `Postgres` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/tls/configure/yamls/tls-postgres.yaml +postgres.kubedb.com/pg created +``` + +**Wait for the database to be ready:** + +Now, watch `Postgres` is going to `Running` state and also watch `StatefulSet` and its pod is created and going to `Running` state, + +```bash +$ watch kubectl get postgres -n demo pg + +Every 2.0s: kubectl get postgres --all-namespaces ac-emon: Fri Dec 3 15:14:11 2021 + +NAMESPACE NAME VERSION STATUS AGE +demo pg 13.2 Ready 62s + + +$ watch -n 3 kubectl get sts -n demo pg +Every 2.0s: kubectl get sts -n demo pg ac-emon: Fri Dec 3 15:15:41 2021 + +NAME READY AGE +pg 3/3 2m30s + +$ watch -n 3 kubectl get pod -n demo -l app.kubernetes.io/name=postgreses.kubedb.com,app.kubernetes.io/instance=pg +Every 3.0s: kubectl get pod -n demo -l app.kubernetes.io/name=postg... 
ac-emon: Fri Dec 3 15:17:10 2021 + +NAME READY STATUS RESTARTS AGE +pg-0 2/2 Running 0 3m59s +pg-1 2/2 Running 0 3m54s +pg-2 2/2 Running 0 3m49s +``` + +**Verify tls-secrets created successfully:** + +If everything goes well, you can see that our tls-secrets will be created which contains server, client, exporter certificate. Server tls-secret will be used for server configuration and client tls-secret will be used for a secure connection. + +All tls-secret are created by `KubeDB` Ops Manager. Default tls-secret name formed as _{postgres-object-name}-{cert-alias}-cert_. + +Let's check if the tls-secrets have been created properly, + +```bash +$ kubectl get secrets -n demo | grep pg +pg-auth kubernetes.io/basic-auth 2 4m41s +pg-client-cert kubernetes.io/tls 3 4m40s +pg-metrics-exporter-cert kubernetes.io/tls 3 4m40s +pg-server-cert kubernetes.io/tls 3 4m41s +pg-token-xvk9p kubernetes.io/service-account-token 3 4m41s +``` + +**Verify Postgres Cluster configured with TLS/SSL:** + +Now, we are going to connect to the database to verify that `Postgres` server has configured with TLS/SSL encryption. + +Let's exec into the pod to verify TLS/SSL configuration, + +```bash +$ kubectl exec -it -n demo pg-0 -- bash +bash-5.1$ ls /tls/certs +client exporter server + +bash-5.1$ ls /tls/certs/server +ca.crt server.crt server.key + +bash-5.1$ psql +psql (13.2) +Type "help" for help. + +postgres=# SELECT * FROM pg_stat_ssl; + pid | ssl | version | cipher | bits | compression | client_dn | client_serial | issuer_dn +------+-----+---------+------------------------+------+-------------+-----------+---------------+----------- + 129 | t | TLSv1.3 | TLS_AES_256_GCM_SHA384 | 256 | f | | | + 130 | t | TLSv1.3 | TLS_AES_256_GCM_SHA384 | 256 | f | | | + 2175 | f | | | | | | | +(3 rows) + +postgres=# exit + +bash-5.1$ cat /var/pv/data/postgresql.conf | grep ssl +ssl =on +ssl_cert_file ='/tls/certs/server/server.crt' +ssl_key_file ='/tls/certs/server/server.key' +ssl_ca_file ='/tls/certs/server/ca.crt' +primary_conninfo = 'application_name=pg-0 host=pg user=postgres password=0WpDlAbHsrNs-7hp sslmode=verify-full sslrootcert=/tls/certs/client/ca.crt' +#ssl = off +#ssl_ca_file = '' +#ssl_cert_file = 'server.crt' +#ssl_crl_file = '' +#ssl_key_file = 'server.key' +#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers +#ssl_prefer_server_ciphers = on +#ssl_ecdh_curve = 'prime256v1' +#ssl_min_protocol_version = 'TLSv1.2' +#ssl_max_protocol_version = '' +#ssl_dh_params_file = '' +#ssl_passphrase_command = '' +#ssl_passphrase_command_supports_reload = off + +``` + +The above output shows that the `Postgres` server is configured with TLS/SSL configuration and in `/var/pv/data/postgresql.conf ` you can see that `ssl= on`. You can also see that the `.crt` and `.key` files are stored in the `/tls/certs/` directory for client and server. + +**Verify secure connection for SSL required user:** + +Now, you can create an SSL required user that will be used to connect to the database with a secure connection. + +Let's connect to the database server with a secure connection, + +```bash +# creating SSL required user +$ kubectl exec -it -n demo pg-0 -- bash + +bash-5.1$ psql -d "user=postgres password=$POSTGRES_PASSWORD host=pg port=5432 connect_timeout=15 dbname=postgres sslmode=verify-full sslrootcert=/tls/certs/client/ca.crt" +psql (13.2) +SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) +Type "help" for help. 
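+
+postgres=# -- Illustrative extra check (not part of the original transcript): the current
+postgres=# -- backend should report an SSL-encrypted session in pg_stat_ssl.
+postgres=# SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid();
+ ssl | version |         cipher
+-----+---------+------------------------
+ t   | TLSv1.3 | TLS_AES_256_GCM_SHA384
+(1 row)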
+
+postgres=# exit
+
+bash-5.1$ psql -d "user=postgres password=$POSTGRES_PASSWORD host=pg port=5432 connect_timeout=15 dbname=postgres sslmode=verify-full"
+psql: error: root certificate file "/var/lib/postgresql/.postgresql/root.crt" does not exist
+Either provide the file or change sslmode to disable server certificate verification.
+```
+
+From the above output, you can see that we can access the database securely only when the ca certificate is provided; otherwise, the connection fails asking for ca verification. Our client certificates are stored in the `/tls/certs/client` directory.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete pg -n demo pg
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Postgres object](/docs/v2024.1.31/guides/postgres/concepts/postgres).
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/issuer.yaml b/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/issuer.yaml
new file mode 100644
index 0000000000..1cdfa86567
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: postgres-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: postgres-ca
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/tls-postgres.yaml b/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/tls-postgres.yaml
new file mode 100644
index 0000000000..5e387194aa
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/tls/configure/yamls/tls-postgres.yaml
@@ -0,0 +1,33 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: pg
+  namespace: demo
+spec:
+  version: "13.13"
+  replicas: 3
+  standbyMode: Hot
+  sslMode: verify-full
+  storageType: Durable
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      name: postgres-ca-issuer
+      kind: Issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/tls/overview/images/pg-tls-ssl.png b/content/docs/v2024.1.31/guides/postgres/tls/overview/images/pg-tls-ssl.png
new file mode 100644
index 0000000000..c513e8256a
Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/tls/overview/images/pg-tls-ssl.png differ
diff --git a/content/docs/v2024.1.31/guides/postgres/tls/overview/index.md b/content/docs/v2024.1.31/guides/postgres/tls/overview/index.md
new file mode 100644
index 0000000000..83f2ea951e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/tls/overview/index.md
@@ -0,0 +1,88 @@
+---
+title: Postgres TLS/SSL Encryption Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-tls-overview
+    name: Overview
+    parent: guides-postgres-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Postgres TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `Postgres`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following crs of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**Postgres CRD Specification:**
+
+KubeDB uses the following cr fields to enable SSL/TLS encryption in `Postgres`.
+
+- `spec:`
+  - `sslMode`
+  - `tls:`
+    - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [postgres concept](/docs/v2024.1.31/guides/postgres/concepts/postgres#),
+
+- `sslMode` supported values are [`disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full`]
+  - `disable:` ensures that the server does not use TLS/SSL.
+  - `allow:` you don't care about security, but you will pay the overhead of encryption if the server insists on it.
+  - `prefer:` you don't care about encryption, but you are willing to pay the overhead of encryption if the server supports it.
+  - `require:` you want your data to be encrypted, and you accept the overhead, trusting that the network will connect you to the server you want.
+  - `verify-ca:` you want your data to be encrypted, and you accept the overhead; you want to be sure that you connect to a server you trust.
+  - `verify-full:` you want your data to be encrypted, and you accept the overhead; you want to be sure that you connect to a server you trust, and that it's the one you specify.
+
+When `sslMode` is set to a value other than `disable`, the user must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt`, and `tls.key`, are used to configure the `Postgres` server, exporter, etc.
+
+## How TLS/SSL is configured in Postgres
+
+The following figure shows how `KubeDB` Enterprise is used to configure TLS/SSL in Postgres. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+ Postgres with TLS/SSL Flow +
Fig: Deploy Postgres with TLS/SSL
+
+ +Deploying Postgres with TLS/SSL configuration process consists of the following steps: + +1. At first, a user creates an `Issuer/ClusterIssuer` cr. + +2. Then the user creates a `Postgres` cr. + +3. `KubeDB` community operator watches for the `Postgres` cr. + +4. When it finds one, it creates `Secret`, `Service`, etc. for the `Postgres` database. + +5. `KubeDB` Ops Manager watches for `Postgres`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a). + +6. When it finds all the resources(`Postgres`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using `tls.issuerRef` and `tls.certificates` field specification from `Postgres` cr. + +7. `cert-manager` watches for certificates. + +8. When it finds one, it creates certificate secrets `cert-secrets`(server, client, exporter secrets, etc.) that hold the actual self-signed certificate. + +9. `KubeDB` community operator watches for the Certificate secrets `tls-secrets`. + +10. When it finds all the tls-secret, it creates a `StatefulSet` so that Postgres server is configured with TLS/SSL. + +In the next doc, we are going to show a step by step guide on how to configure a `Postgres` database with TLS/SSL. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/_index.md b/content/docs/v2024.1.31/guides/postgres/update-version/_index.md new file mode 100644 index 0000000000..2a2e436efc --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/update-version/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating Postgres +menu: + docs_v2024.1.31: + identifier: guides-postgres-updating + name: UpdateVersion Postgres + parent: pg-postgres-guides + weight: 42 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/overview/images/pg-updating.png b/content/docs/v2024.1.31/guides/postgres/update-version/overview/images/pg-updating.png new file mode 100644 index 0000000000..92755ed1f0 Binary files /dev/null and b/content/docs/v2024.1.31/guides/postgres/update-version/overview/images/pg-updating.png differ diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/overview/index.md b/content/docs/v2024.1.31/guides/postgres/update-version/overview/index.md new file mode 100644 index 0000000000..3b797fa235 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/update-version/overview/index.md @@ -0,0 +1,67 @@ +--- +title: Updating Postgres Overview +menu: + docs_v2024.1.31: + identifier: guides-postgres-updating-overview + name: Overview + parent: guides-postgres-updating + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Updating Postgres version + +This guide will give you an overview of how KubeDB ops manager updates the version of `Postgres` database. 
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+
+## How the Update Process Works
+
+The following diagram shows how the KubeDB ops manager updates the version of `Postgres`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+ Postgres update Flow +
Fig: updating Process of Postgres
+
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `Postgres` cr.
+
+2. `KubeDB-Provisioner` operator watches for the `Postgres` cr.
+
+3. When it finds one, it creates a `StatefulSet` and related necessary stuff like secret, service, etc.
+
+4. Then, in order to update the version of the `Postgres` database, the user creates a `PostgresOpsRequest` cr with the desired version.
+
+5. `KubeDB-ops-manager` operator watches for `PostgresOpsRequest`.
+
+6. When it finds one, it pauses the `Postgres` object so that the `KubeDB-Provisioner` operator doesn't perform any operation on the `Postgres` during the updating process.
+
+7. Looking at the target version in the `PostgresOpsRequest` cr, in case of a major update the `KubeDB-ops-manager` performs some pre-update steps, since the old binary and library files are needed to update from the current to the target Postgres version.
+
+8. Then, looking at the target version in the `PostgresOpsRequest` cr, the `KubeDB-ops-manager` operator updates the images of the `StatefulSet` to the target version.
+
+9. After the `StatefulSet` and its `Pod` images are successfully updated, the `KubeDB-ops-manager` updates the image of the `Postgres` object to reflect the updated cluster state.
+
+10. After the `Postgres` object is successfully updated, the `KubeDB` ops manager resumes the `Postgres` object so that the `KubeDB-provisioner` can resume its usual operations.
+
+In the next doc, we are going to show a step by step guide on updating a Postgres database using an update operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/index.md b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/index.md
new file mode 100644
index 0000000000..a050413f40
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/index.md
@@ -0,0 +1,482 @@
+---
+title: Updating Postgres version
+menu:
+  docs_v2024.1.31:
+    identifier: guides-postgres-updating-version
+    name: Update version
+    parent: guides-postgres-updating
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Update version of Postgres
+
+This guide will show you how to use the `KubeDB` ops-manager operator to update the version of a `Postgres` cr.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Postgres](/docs/v2024.1.31/guides/postgres/concepts/postgres)
+  - [PostgresOpsRequest](/docs/v2024.1.31/guides/postgres/concepts/opsrequest)
+  - [Updating Overview](/docs/v2024.1.31/guides/postgres/update-version/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/guides/postgres/update-version/versionupgrading/yamls](/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+### Apply Version updating on Postgres
+
+Here, we are going to deploy a `Postgres` instance using a version supported by the `KubeDB` provisioner. Then we are going to apply an update-ops-request on it.
+
+#### Prepare Postgres
+
+At first, we are going to deploy a Postgres using a supported `Postgres` version from which it is possible to update to another version. In the next two sections, we are going to find out the supported versions and the version update constraints.
+
+**Find supported PostgresVersion:**
+
+When you have installed `KubeDB`, it has created a `PostgresVersion` CR for all supported `Postgres` versions. Let's check the supported versions,
+
+```bash
+$ kubectl get postgresversion
+NAME                       VERSION   DISTRIBUTION   DB_IMAGE                               DEPRECATED   AGE
+10.16                      10.16     Official       postgres:10.16-alpine                               63s
+10.16-debian               10.16     Official       postgres:10.16                                      63s
+10.19                      10.19     Official       postgres:10.19-bullseye                             63s
+10.19-bullseye             10.19     Official       postgres:10.19-bullseye                             63s
+11.11                      11.11     Official       postgres:11.11-alpine                               63s
+11.11-debian               11.11     Official       postgres:11.11                                      63s
+11.14                      11.14     Official       postgres:11.14-alpine                               63s
+11.14-bullseye             11.14     Official       postgres:11.14-bullseye                             63s
+11.14-bullseye-postgis     11.14     PostGIS        postgis/postgis:11-3.1                              63s
+12.6                       12.6      Official       postgres:12.6-alpine                                63s
+12.6-debian                12.6      Official       postgres:12.6                                       63s
+12.9                       12.9      Official       postgres:12.9-alpine                                63s
+12.9-bullseye              12.9      Official       postgres:12.9-bullseye                              63s
+12.9-bullseye-postgis      12.9      PostGIS        postgis/postgis:12-3.1                              63s
+13.2                       13.2      Official       postgres:13.2-alpine                                63s
+13.2-debian                13.2      Official       postgres:13.2                                       63s
+13.5                       13.5      Official       postgres:13.5-alpine                                63s
+13.5-bullseye              13.5      Official       postgres:13.5-bullseye                              63s
+13.5-bullseye-postgis      13.5      PostGIS        postgis/postgis:13-3.1                              63s
+14.1                       14.1      Official       postgres:14.1-alpine                                63s
+14.1-bullseye              14.1      Official       postgres:14.1-bullseye                              63s
+14.1-bullseye-postgis      14.1      PostGIS        postgis/postgis:14-3.1                              63s
+9.6.21                     9.6.21    Official       postgres:9.6.21-alpine                              63s
+9.6.21-debian              9.6.21    Official       postgres:9.6.21                                     63s
+9.6.24                     9.6.24    Official       postgres:9.6.24-alpine                              63s
+9.6.24-bullseye            9.6.24    Official       postgres:9.6.24-bullseye                            63s
+timescaledb-2.1.0-pg11     11.11     TimescaleDB    timescale/timescaledb:2.1.0-pg11-oss                63s
+timescaledb-2.1.0-pg12     12.6      TimescaleDB    timescale/timescaledb:2.1.0-pg12-oss                63s
+timescaledb-2.1.0-pg13     13.2      TimescaleDB    timescale/timescaledb:2.1.0-pg13-oss                63s
+timescaledb-2.5.0-pg14.1   14.1      TimescaleDB    timescale/timescaledb:2.5.0-pg14-oss                63s
+```
+
+The versions above that do not show `DEPRECATED` as `true` are supported by `KubeDB` for `Postgres`. You can use any non-deprecated version. Now, we are going to select a non-deprecated version for the `Postgres` instance from which it will be possible to update to another version. In the next section, we are going to verify the version update constraints.
+
+**Check update Constraints:**
+
+When you are trying to update, make sure that the update from your current version to the target version is supported.
+
+| Current Version   | updateable Minor Versions | updateable Major Versions                                                             |
+| ----------------- | ------------------------- | ------------------------------------------------------------------------------------- |
+| `9.6.21`          | `9.6.24`                  | `10.16`, `11.11`, `12.6`, `13.2`                                                        |
+| `9.6.21-debian`   | `9.6.24-bullseye`         | `12.6-debian`, `13.2-debian`                                                            |
+| `9.6.24`          | -                         | `10.19`, `11.14`, `12.9`, `13.5`, `14.1`                                                |
+| `9.6.24-bullseye` | -                         | `10.19-bullseye`, `11.14-bullseye`, `12.9-bullseye`, `13.5-bullseye`, `14.1-bullseye`   |
+| `10.16`           | `10.19`                   | `11.11`, `12.6`, `13.2`                                                                 |
+| `10.16-debian`    | `10.19-bullseye`          | `11.11-debian`                                                                          |
+| `10.19`           | -                         | `11.14`, `12.9`, `13.5`, `14.1`                                                         |
+| `10.19-bullseye`  | -                         | `11.14-bullseye`, `12.9-bullseye`, `13.5-bullseye`, `14.1-bullseye`                     |
+| `11.11`           | `11.14`                   | `12.6`, `13.2`                                                                          |
+| `11.11-debian`    | `11.14-bullseye`          | -                                                                                       |
+| `11.14`           | -                         | `12.9`, `13.5`, `14.1`                                                                  |
+| `11.14-bullseye`  | -                         | `12.9-bullseye`, `13.5-bullseye`, `14.1-bullseye`                                       |
+| `12.6`            | `12.9`                    | `13.2`                                                                                  |
+| `12.6-debian`     | `12.9-bullseye`           | `13.2-debian`                                                                           |
+| `12.9`            | -                         | `13.5`, `14.1`                                                                          |
+| `12.9-bullseye`   | -                         | `13.5-bullseye`, `14.1-bullseye`                                                        |
+| `13.2`            | `13.5`                    | -                                                                                       |
+| `13.2-debian`     | `13.5-bullseye`           | -                                                                                       |
+| `13.5`            | -                         | `14.1`                                                                                  |
+| `13.5-bullseye`   | -                         | `14.1-bullseye`                                                                         |
+| `14.1`            | -                         | -                                                                                       |
+| `14.1-bullseye`   | -                         | -                                                                                       |
+
+For example, suppose you want to update from 9.6.21 to 14.1. From the table, you can see that you can't update from 9.6.21 to 14.1 directly. So what you need to do is first update from 9.6.21 to 9.6.24, and then update from 9.6.24 to 14.1.
+
+Let's get one of the `postgresversion` YAMLs:
+```bash
+$ kubectl get postgresversion 13.13 -o yaml | kubectl neat
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: PostgresVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb
+    meta.helm.sh/release-namespace: kubedb
+  labels:
+    app.kubernetes.io/instance: kubedb
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2021.11.24
+    helm.sh/chart: kubedb-catalog-v2021.11.24
+  name: "13.13"
+spec:
+  coordinator:
+    image: kubedb/pg-coordinator:v0.8.0
+  db:
+    image: postgres:13.2-alpine
+  distribution: Official
+  exporter:
+    image: prometheuscommunity/postgres-exporter:v0.9.0
+  initContainer:
+    image: kubedb/postgres-init:0.4.0
+  podSecurityPolicies:
+    databasePolicyName: postgres-db
+  securityContext:
+    runAsAnyNonRoot: false
+    runAsUser: 70
+  stash:
+    addon:
+      backupTask:
+        name: postgres-backup-13.1
+      restoreTask:
+        name: postgres-restore-13.1
+  version: "13.13"
+```
+
+**Deploy Postgres Instance:**
+
+In this section, we are going to deploy a Postgres instance. Then, in the next section, we will update the version of the database using a `PostgresOpsRequest`. Below is the YAML of the `Postgres` cr that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Postgres
+metadata:
+  name: pg
+  namespace: demo
+spec:
+  version: "11.22"
+  replicas: 3
+  standbyMode: Hot
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Postgres` cr we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/update-version/versionupgrading/yamls/postgres.yaml
+postgres.kubedb.com/pg created
+```
+
+**Wait for the database to be ready:**
+
+The `KubeDB` operator watches for `Postgres` objects using the Kubernetes API.
When a `Postgres` object is created, the `KubeDB` operator will create a new StatefulSet, Services, Secrets, etc. A secret called `pg-auth` (format: {postgres-object-name}-auth) will be created storing the password for the `postgres` superuser.
+Now, watch the `Postgres` object until it reaches the `Ready` state, and watch the `StatefulSet` and its pods until they are created and reach the `Running` state,
+
+```bash
+$ watch -n 3 kubectl get postgres -n demo
+Every 3.0s: kubectl get postgres -n demo
+
+NAME   VERSION   STATUS   AGE
+pg     11.11     Ready    3m17s
+
+$ watch -n 3 kubectl get sts -n demo pg
+Every 3.0s: kubectl get sts -n demo pg                    ac-emon: Tue Nov 30 11:38:12 2021
+
+NAME   READY   AGE
+pg     3/3     4m17s
+
+$ watch -n 3 kubectl get pod -n demo
+Every 3.0s: kubectl get pods -n demo
+
+Every 3.0s: kubectl get pods -n demo                      ac-emon: Tue Nov 30 11:39:03 2021
+
+NAME   READY   STATUS    RESTARTS   AGE
+pg-0   2/2     Running   0          4m55s
+pg-1   2/2     Running   0          3m15s
+pg-2   2/2     Running   0          3m11s
+
+```
+
+Let's verify the image version of the `Postgres`, the `StatefulSet`, and its `Pod`,
+
+```bash
+$ kubectl get pg -n demo pg -o=jsonpath='{.spec.version}{"\n"}'
+11.11
+
+$ kubectl get sts -n demo pg -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+postgres:11.11-alpine
+
+$ kubectl get pod -n demo pg-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+postgres:11.11-alpine
+```
+
+We are now ready to apply version updating on this `Postgres` instance.
+
+#### UpdateVersion
+
+Here, we are going to update the `Postgres` instance from `11.11` to `13.2`.
+
+**Create PostgresOpsRequest:**
+
+To update the instance, you have to create a `PostgresOpsRequest` cr with your desired version that is supported by `KubeDB`. Below is the YAML of the `PostgresOpsRequest` cr that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: PostgresOpsRequest
+metadata:
+  name: pg-update
+  namespace: demo
+spec:
+  type: UpdateVersion
+  updateVersion:
+    targetVersion: "13.13"
+  databaseRef:
+    name: pg
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `pg` Postgres database.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version after updating.
+
+Let's create the `PostgresOpsRequest` cr we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/postgres/update-version/versionupgrading/yamls/upgrade_version.yaml
+postgresopsrequest.ops.kubedb.com/pg-update created
+```
+
+**Verify Postgres version updated successfully:**
+
+If everything goes well, the `KubeDB` ops-manager operator will update the image of the `Postgres`, the `StatefulSet`, and its `Pod`.
+
+At first, we will wait for the `PostgresOpsRequest` to be successful. Run the following command to watch the `PostgresOpsRequest` cr,
+
+```bash
+$ watch -n 3 kubectl get PostgresOpsRequest -n demo pg-update
+Every 3.0s: kubectl get PostgresOpsRequest -n demo pg-update
+
+NAME        TYPE            STATUS       AGE
+pg-update   UpdateVersion   Successful   3m57s
+```
+
+We can see from the above output that the `PostgresOpsRequest` has succeeded. If we describe the `PostgresOpsRequest`, we shall see that the `Postgres`, the `StatefulSet`, and its `Pod` have been updated with a new image.
+ +```bash +$ kubectl describe PostgresOpsRequest -n demo pg-update +Name: pg-update +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: PostgresOpsRequest +Metadata: + Creation Timestamp: 2021-11-30T07:29:04Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:updateVersion: + .: + f:targetVersion: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-11-30T07:29:04Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-11-30T07:29:04Z + Resource Version: 638178 + UID: d18198d9-0d27-449d-9a1d-edf60c7bdf38 +Spec: + Database Ref: + Name: pg + Type: UpdateVersion + UpdateVersion: + Target Version: 13.2 +Status: + Conditions: + Last Transition Time: 2021-11-30T07:29:04Z + Message: Postgres ops request is update-version database version + Observed Generation: 1 + Reason: UpdateVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2021-11-30T07:29:04Z + Message: Successfully copied binaries for old postgres version + Observed Generation: 1 + Reason: CopiedOldBinaries + Status: True + Type: CopiedOldBinaries + Last Transition Time: 2021-11-30T07:29:04Z + Message: Successfully updated statefulsets update strategy type + Observed Generation: 1 + Reason: UpdateStatefulSets + Status: True + Type: UpdateStatefulSets + Last Transition Time: 2021-11-30T07:29:10Z + Message: Successfully Transferred Leadership to first node before pg-coordinator paused + Observed Generation: 1 + Reason: TransferLeaderShipToFirstNodeBeforeCoordinatorPaused + Status: True + Type: TransferLeaderShipToFirstNodeBeforeCoordinatorPaused + Last Transition Time: 2021-11-30T07:29:10Z + Message: Successfully Pause Pg-Coordinator + Observed Generation: 1 + Reason: PausePgCoordinator + Status: True + Type: PausePgCoordinator + Last Transition Time: 2021-11-30T07:29:50Z + Message: Successfully Updated primary Image + Observed Generation: 1 + Reason: UpdatePrimaryImage + Status: True + Type: UpdatePrimaryImage + Last Transition Time: 2021-11-30T07:29:52Z + Message: Successfully Initialized new data directory + Observed Generation: 1 + Reason: DataDirectoryInitialized + Status: True + Type: DataDirectoryInitialized + Last Transition Time: 2021-11-30T07:29:59Z + Message: Successfully updated new data directory + Observed Generation: 1 + Reason: PgUpdated + Status: True + Type: Pgupdated + Last Transition Time: 2021-11-30T07:29:59Z + Message: Successfully Rename new data directory + Observed Generation: 1 + Reason: ReplacedDataDirectory + Status: True + Type: ReplacedDataDirectory + Last Transition Time: 2021-11-30T07:30:24Z + Message: Successfully Transfer Primary Role to first node + Observed Generation: 1 + Reason: TransferPrimaryRoleToDefault + Status: True + Type: TransferPrimaryRoleToDefault + Last Transition Time: 2021-11-30T07:30:29Z + Message: Successfully running the primary + Observed Generation: 1 + Reason: ResumePgCoordinator + Status: True + Type: ResumePgCoordinator + Last Transition Time: 2021-11-30T07:32:24Z + Message: Successfully Updated replica Images + Observed Generation: 1 + Reason: UpdateStandbyPodImage + Status: True + Type: UpdateStandbyPodImage + Last Transition Time: 2021-11-30T07:32:24Z + 
Message:               Successfully Updated cluster Image
+    Observed Generation:   1
+    Reason:                UpdateStatefulSetImage
+    Status:                True
+    Type:                  UpdateStatefulSetImage
+    Last Transition Time:  2021-11-30T07:32:24Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                                                Age    From                        Message
+  ----    ------                                                ----   ----                        -------
+  Normal  PauseDatabase                                         3m50s  KubeDB Enterprise Operator  Pausing Postgres demo/pg
+  Normal  PauseDatabase                                         3m50s  KubeDB Enterprise Operator  Successfully paused Postgres demo/pg
+  Normal  Updating                                              3m50s  KubeDB Enterprise Operator  Updating StatefulSets
+  Normal  Updating                                              3m50s  KubeDB Enterprise Operator  Successfully Updated StatefulSets
+  Normal  TransferLeaderShipToFirstNodeBeforeCoordinatorPaused  3m44s  KubeDB Enterprise Operator  Successfully Transferred Leadership to first node before pg-coordinator paused
+  Normal  UpdatePrimaryImage                                    3m4s   KubeDB Enterprise Operator  Successfully Updated primary Image
+  Normal  TransferPrimaryRoleToDefault                          2m30s  KubeDB Enterprise Operator  Successfully Transfer Primary Role to first node
+  Normal  ResumePgCoordinator                                   2m25s  KubeDB Enterprise Operator  Successfully running the primary
+  Normal  UpdateStandbyPodImage                                 30s    KubeDB Enterprise Operator  Successfully Updated replica Images
+  Normal  ResumeDatabase                                        30s    KubeDB Enterprise Operator  Resuming PostgreSQL demo/pg
+  Normal  ResumeDatabase                                        30s    KubeDB Enterprise Operator  Successfully resumed PostgreSQL demo/pg
+  Normal  Successful                                            30s    KubeDB Enterprise Operator  Successfully Updated Database
+  Normal  Successful                                            30s    KubeDB Enterprise Operator  Successfully Updated Database
+```
+
+Now, we are going to verify whether the `Postgres`, the `StatefulSet`, and its `Pod` have been updated with the new image. Let's check,
+
+```bash
+$ kubectl get postgres -n demo pg -o=jsonpath='{.spec.version}{"\n"}'
+13.2
+
+$ kubectl get sts -n demo pg -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+postgres:13.2-alpine
+
+$ kubectl get pod -n demo pg-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+postgres:13.2-alpine
+```
+
+You can see above that our `Postgres` has been updated with the new version. This verifies that we have successfully updated our Postgres instance.
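+
+As an optional extra check, you can also ask the running server itself which version it reports. A minimal sketch, assuming the database container inside the `pg-0` pod is named `postgres` (adjust `-c` if your container is named differently):
+
+```bash
+# Run a one-off query inside the database container to print the server version.
+$ kubectl exec -it -n demo pg-0 -c postgres -- psql -c "SELECT version();"
+```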
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete postgres -n demo pg +kubectl delete PostgresOpsRequest -n demo pg-update +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/postgres.yaml b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/postgres.yaml new file mode 100644 index 0000000000..5803ea0663 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/postgres.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Postgres +metadata: + name: pg + namespace: demo +spec: + version: "11.22" + replicas: 3 + standbyMode: Hot + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/upgrade_version.yaml b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/upgrade_version.yaml new file mode 100644 index 0000000000..38146d1fb4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/postgres/update-version/versionupgrading/yamls/upgrade_version.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: PostgresOpsRequest +metadata: + name: pg-update + namespace: demo +spec: + type: UpdateVersion + updateVersion: + targetVersion: "13.13" + databaseRef: + name: pg diff --git a/content/docs/v2024.1.31/guides/proxysql/README.md b/content/docs/v2024.1.31/guides/proxysql/README.md new file mode 100644 index 0000000000..c46447dee5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/README.md @@ -0,0 +1,58 @@ +--- +title: ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-readme + name: ProxySQL + parent: guides-proxysql + weight: 5 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/proxysql/ +aliases: +- /docs/v2024.1.31/guides/proxysql/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). 
+ +## Supported ProxySQL Features + +| Features | Availability | +| ------------------------------------ | :----------: | +| Load balance MySQL Group Replication | ✓ | +| Load balance PerconaXtraDB Cluster | ✓ | +| Custom Configuration | ✓ | +| Declarative Configuration | ✓ | +| Version Update | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Using Prometheus operator | ✓ | +| ProxySQL server cluster | ✓ | +| ProxySQL server failure recovery | ✓ | +| TLS secured connection for backend | ✓ | +| TLS secured connection for frontend | ✓ | + +## User Guide + +- [Overview of KubeDB ProxySQL CRD](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/) +- [Configure KubeDB ProxySQL for MySQL Group Replication](/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/) +- [Deploy ProxySQL cluster with KubeDB](/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/) +- [Initialize KubeDB ProxySQL with declarative configuration](/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/) +- [Reconfigure KubeDB ProxySQL with ops-request](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/) +- [Deploy TLS/SSL secured KubeDB ProxySQL](/docs/v2024.1.31/guides/proxysql/tls/configure/) +- [Reconfigure TLS/SSL for KubeDB ProxySQL](/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/) +- [Detail concepts of ProxySQLVersion CRD](/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/) +- [Update KubeDB ProxySQL version with ops-request](/docs/v2024.1.31/guides/proxysql/update-version/cluster/) +- [Scale horizontally and vertically KubeDB ProxySQL with ops-request](/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/) +- [Learn auto-scaling for KubeDB ProxySQL](/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). 
diff --git a/content/docs/v2024.1.31/guides/proxysql/_index.md b/content/docs/v2024.1.31/guides/proxysql/_index.md new file mode 100644 index 0000000000..72a63fe0df --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/_index.md @@ -0,0 +1,22 @@ +--- +title: ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql + name: ProxySQL + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/_index.md b/content/docs/v2024.1.31/guides/proxysql/autoscaler/_index.md new file mode 100644 index 0000000000..22b600e0e9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/_index.md @@ -0,0 +1,22 @@ +--- +title: Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-proxysql-autoscaling + name: Autoscaling + parent: guides-proxysql + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/_index.md new file mode 100644 index 0000000000..eb3c2f9bbd --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/_index.md @@ -0,0 +1,22 @@ +--- +title: Compute Autoscaling +menu: + docs_v2024.1.31: + identifier: guides-proxysql-autoscaling-compute + name: Compute Autoscaling + parent: guides-proxysql-autoscaling + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/proxy-as-compute.yaml b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/proxy-as-compute.yaml new file mode 100644 index 0000000000..4216b37aa3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/proxy-as-compute.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: ProxySQLAutoscaler +metadata: + name: proxy-as-compute + namespace: demo +spec: + proxyRef: + name: proxy-server + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + proxysql: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 250m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + containerControlledValues: "RequestsAndLimits" + controlledResources: ["cpu", "memory"] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: 
GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-proxysql.yaml
new file mode 100644
index 0000000000..6ad122dd11
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/examples/sample-proxysql.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 200m
+          memory: 300Mi
+        requests:
+          cpu: 200m
+          memory: 300Mi
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/index.md
new file mode 100644
index 0000000000..1dea2b257e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/cluster/index.md
@@ -0,0 +1,578 @@
+---
+title: ProxySQL Cluster Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-autoscaling-compute-cluster
+    name: Demo
+    parent: guides-proxysql-autoscaling-compute
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Autoscaling the Compute Resource of a ProxySQL Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the compute resources, i.e. cpu and memory, of a ProxySQL cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install the `KubeDB` Community, Ops-Manager, and Autoscaler operators in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql)
+  - [ProxySQLAutoscaler](/docs/v2024.1.31/guides/proxysql/concepts/autoscaler)
+  - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+### Prepare MySQL backend
+
+We need a MySQL backend for the ProxySQL server, so we are creating one with the yaml below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/autoscaler/compute/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+Let's wait for the MySQL to be Ready.
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+mysql-server   5.7.44    Ready    3m51s
+```
+
+## Autoscaling of ProxySQL Cluster
+
+Here, we are going to deploy a `ProxySQL` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `ProxySQLAutoscaler` to set up autoscaling.
+
+### Deploy ProxySQL Cluster
+
+In this section, we are going to deploy a ProxySQL cluster with version `2.3.2-debian`. Then, in the next section, we will set up autoscaling for it using the `ProxySQLAutoscaler` CRD. Below is the YAML of the `ProxySQL` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 200m
+          memory: 300Mi
+        requests:
+          cpu: 200m
+          memory: 300Mi
+```
+
+Let's create the `ProxySQL` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/autoscaler/compute/cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait until `proxy-server` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    4m
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the ProxySQL resources,
+```bash
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the ProxySQL.
+
+We are now ready to apply the `ProxySQLAutoscaler` CRO to set up autoscaling for this ProxySQL server.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a ProxySQLAutoscaler Object.
+
+#### Create ProxySQLAutoscaler Object
+
+In order to set up compute resource autoscaling for this proxysql cluster, we have to create a `ProxySQLAutoscaler` CRO with our desired configuration.
Below is the YAML of the `ProxySQLAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: ProxySQLAutoscaler
+metadata:
+  name: proxy-as-compute
+  namespace: demo
+spec:
+  proxyRef:
+    name: proxy-server
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    proxysql:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 250m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      containerControlledValues: "RequestsAndLimits"
+      controlledResources: ["cpu", "memory"]
+```
+
+Here,
+
+- `spec.proxyRef.name` specifies that we are performing the compute resource scaling operation on the `proxy-server` proxysql server.
+- `spec.compute.proxysql.trigger` specifies that compute autoscaling is enabled for this proxysql server.
+- `spec.compute.proxysql.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pods to initiate a vertical scaling.
+- `spec.compute.proxysql.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+If the difference between the current & recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will ignore the update (see the sketch after this list).
+- `spec.compute.proxysql.minAllowed` specifies the minimum allowed resources for the proxysql server.
+- `spec.compute.proxysql.maxAllowed` specifies the maximum allowed resources for the proxysql server.
+- `spec.compute.proxysql.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.proxysql.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions.apply` has two supported values: `IfReady` & `Always`.
+Use `IfReady` if you want to process the opsReq only when the proxysql server is Ready. And use `Always` if you want to process the execution of the opsReq irrespective of the proxysql server state.
+- `spec.opsRequestOptions.timeout` specifies the maximum time for each step of the opsRequest (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
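+
+The `resourceDiffPercentage` rule is easy to reason about with concrete numbers. Below is a minimal sketch of the decision, with purely illustrative variable names and values (not operator code):
+
+```bash
+# Current CPU request is 200m; suppose the VPA recommendation is 250m.
+current_cpu_m=200
+recommended_cpu_m=250
+
+# Percentage difference relative to the current value.
+diff_pct=$(( (recommended_cpu_m - current_cpu_m) * 100 / current_cpu_m ))
+echo "diff: ${diff_pct}%"   # prints: diff: 25%
+
+# 25% >= the configured resourceDiffPercentage of 20, so an ops request is created;
+# a recommendation of, say, 220m (a 10% difference) would be ignored.
+```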
+ + +Let's create the `ProxySQLAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/autoscaler/compute/cluster/examples/proxy-as-compute.yaml +proxysqlautoscaler.autoscaling.kubedb.com/proxy-as-compute created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `proxysqlautoscaler` resource is created successfully, + +```bash +$ kubectl get proxysqlautoscaler -n demo +NAME AGE +proxy-as-compute 5m56s + +$ kubectl describe proxysqlautoscaler proxy-as-compute -n demo +Name: proxy-as-compute +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: ProxySQLAutoscaler +Metadata: + Creation Timestamp: 2022-09-16T11:26:58Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:proxysql: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + .: + f:name: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-09-16T11:26:58Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-09-16T11:27:07Z + Resource Version: 846645 + UID: 44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58 +Spec: + Compute: + Proxysql: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 250m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Proxy Ref: + Name: proxy-server + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Index: 46 + Weight: 555 + Reference Timestamp: 2022-09-16T00:00:00Z + Total Weight: 2.648440345821337 + First Sample Start: 2022-09-16T11:26:48Z + Last Sample Start: 2022-09-16T11:32:52Z + Last Update Time: 2022-09-16T11:33:02Z + Memory Histogram: + Bucket Weights: + Index: 1 + Weight: 10000 + Reference Timestamp: 2022-09-17T00:00:00Z + Total Weight: 1.391848625060675 + Ref: + Container Name: md-coordinator + Vpa Object Name: proxy-server + Total Samples Count: 19 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Index: 3 + Weight: 556 + Reference Timestamp: 2022-09-16T00:00:00Z + Total Weight: 2.648440345821337 + First Sample Start: 2022-09-16T11:26:48Z + Last Sample Start: 2022-09-16T11:32:52Z + Last Update Time: 2022-09-16T11:33:02Z + Memory Histogram: + Reference Timestamp: 2022-09-17T00:00:00Z + Ref: + Container Name: proxysql + Vpa Object Name: proxy-server + Total Samples Count: 19 + Version: v3 + Conditions: + Last Transition Time: 2022-09-16T11:27:07Z + Message: Successfully created proxySQLOpsRequest demo/prxops-proxy-server-6xc1kc + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-09-16T11:27:02Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: proxysql 
+ Lower Bound: + Cpu: 250m + Memory: 400Mi + Target: + Cpu: 250m + Memory: 400Mi + Uncapped Target: + Cpu: 25m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: proxy-server +Events: + +``` +So, the `proxysqlautoscaler` resource is created successfully. + +We can verify from the above output that `status.vpas` contains the `RecommendationProvided` condition to true. And in the same time, `status.vpas.recommendation.containerRecommendations` contain the actual generated recommendation. + +Our autoscaler operator continuously watches the recommendation generated and creates an `proxysqlopsrequest` based on the recommendations, if the database pod resources are needed to scaled up or down. + +Let's watch the `proxysqlopsrequest` in the demo namespace to see if any `proxysqlopsrequest` object is created. After some time you'll see that a `proxysqlopsrequest` will be created based on the recommendation. + +```bash +$ kubectl get proxysqlopsrequest -n demo +NAME TYPE STATUS AGE +prxops-proxy-server-6xc1kc VerticalScaling Progressing 7s +``` + +Let's wait for the ops request to become successful. + +```bash +$ kubectl get proxysqlopsrequest -n demo +NAME TYPE STATUS AGE +prxops-vpa-proxy-server-z43wc8 VerticalScaling Successful 3m32s +``` + +We can see from the above output that the `ProxySQLOpsRequest` has succeeded. If we describe the `ProxySQLOpsRequest` we will get an overview of the steps that were followed to scale the proxysql server. + +```bash +$ kubectl describe proxysqlopsrequest -n demo prxops-vpa-proxy-server-z43wc8 +Name: prxops-proxy-server-6xc1kc +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ProxySQLOpsRequest +Metadata: + Creation Timestamp: 2022-09-16T11:27:07Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58"}: + f:spec: + .: + f:apply: + f:databaseRef: + .: + f:name: + f:timeout: + f:type: + f:verticalScaling: + .: + f:proxysql: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-09-16T11:27:07Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-09-16T11:27:07Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: ProxySQLAutoscaler + Name: proxy-as-compute + UID: 44bd46c3-bbc5-4c4a-aff4-00c7f84c6f58 + Resource Version: 846324 + UID: c2b30107-c6d3-44bb-adf3-135edc5d615b +Spec: + Apply: IfReady + Database Ref: + Name: proxy-server + Timeout: 2m0s + Type: VerticalScaling + Vertical Scaling: + Proxysql: + Limits: + Cpu: 250m + Memory: 400Mi + Requests: + Cpu: 250m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-09-16T11:27:07Z + Message: Controller has started to Progress the ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc + Observed Generation: 1 + Reason: OpsRequestProgressingStarted + Status: True + Type: Progressing + Last Transition Time: 2022-09-16T11:30:42Z + Message: Successfully restarted ProxySQL pods for ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc + Observed Generation: 1 + Reason: SuccessfullyRestatedStatefulSet + Status: True + Type: RestartStatefulSet + Last Transition Time: 2022-09-16T11:30:47Z + 
Message:               Vertical scale successful for ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc
+    Observed Generation:   1
+    Reason:                SuccessfullyPerformedVerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2022-09-16T11:30:47Z
+    Message:               Controller has successfully scaled the ProxySQL demo/prxops-proxy-server-6xc1kc
+    Observed Generation:   1
+    Reason:                OpsRequestProcessedSuccessfully
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason      Age    From                        Message
+  ----    ------      ----   ----                        -------
+  Normal  Starting    8m48s  KubeDB Enterprise Operator  Start processing for ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc
+  Normal  Starting    8m48s  KubeDB Enterprise Operator  Pausing ProxySQL databse: demo/proxy-server
+  Normal  Successful  8m48s  KubeDB Enterprise Operator  Successfully paused ProxySQL database: demo/proxy-server for ProxySQLOpsRequest: prxops-proxy-server-6xc1kc
+  Normal  Starting    8m43s  KubeDB Enterprise Operator  Restarting Pod: demo/proxy-server-0
+  Normal  Starting    7m33s  KubeDB Enterprise Operator  Restarting Pod: demo/proxy-server-1
+  Normal  Starting    6m23s  KubeDB Enterprise Operator  Restarting Pod: demo/proxy-server-2
+  Normal  Successful  5m13s  KubeDB Enterprise Operator  Successfully restarted ProxySQL pods for ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Vertical scale successful for ProxySQLOpsRequest: demo/prxops-proxy-server-6xc1kc
+  Normal  Starting    5m8s   KubeDB Enterprise Operator  Resuming ProxySQL database: demo/proxy-server
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Successfully resumed ProxySQL database: demo/proxy-server
+  Normal  Successful  5m8s   KubeDB Enterprise Operator  Controller has Successfully scaled the ProxySQL database: demo/proxy-server
+```
+
+Now, we are going to verify from the Pod and the ProxySQL yaml whether the resources of the ProxySQL cluster have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "400Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the ProxySQL cluster.
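+
+If you want to see the recommendation that drove this scaling without reading the full describe output, you can also query the autoscaler status directly. A minimal sketch, with the JSON field path assumed from the describe output shown earlier:
+
+```bash
+# Print the current target recommendation for the first tracked container.
+$ kubectl get proxysqlautoscaler -n demo proxy-as-compute \
+    -o jsonpath='{.status.vpas[0].recommendation.containerRecommendations[0].target}{"\n"}'
+```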
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete proxysql -n demo proxy-server +kubectl delete proxysqlautoscaler -n demo proxy-as-compute +kubectl delete ns demo +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/images/proxy-as-compute.png b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/images/proxy-as-compute.png new file mode 100644 index 0000000000..ec96582209 Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/images/proxy-as-compute.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/index.md new file mode 100644 index 0000000000..567ac5f226 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/autoscaler/compute/overview/index.md @@ -0,0 +1,67 @@ +--- +title: ProxySQL Compute Autoscaling Overview +menu: + docs_v2024.1.31: + identifier: guides-proxysql-autoscaling-compute-overview + name: Overview + parent: guides-proxysql-autoscaling-compute + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQL Compute Resource Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database compute resources i.e. cpu and memory using `proxysqlautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + - [ProxySQLAutoscaler](/docs/v2024.1.31/guides/proxysql/concepts/autoscaler) + - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest) + +## How Compute Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `ProxySQL` database components. Open the image in a new tab to see the enlarged version. + +
+  Auto Scaling process of ProxySQL +
Fig: Auto Scaling process of ProxySQL
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, the user creates a `ProxySQL` Custom Resource Object (CRO).
+
+2. `KubeDB` Community operator watches the `ProxySQL` CRO.
+
+3. When the operator finds a `ProxySQL` CRO, it creates the required number of `StatefulSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the CPU & memory resources of the `ProxySQL` server, the user creates a `ProxySQLAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `ProxySQLAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator utilizes a modified version of the official Kubernetes [VPA-Recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg) for the different components of the database, as specified in the `proxysqlautoscaler` CRO.
+It generates recommendations based on resource usage and stores them in the `status` section of the autoscaler CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then the `KubeDB` Autoscaler operator creates a `ProxySQLOpsRequest` CRO to scale the database to match the recommendation provided by the VPA object.
+
+8. The `KubeDB Ops-Manager operator` watches the `ProxySQLOpsRequest` CRO.
+
+9. Lastly, the `KubeDB Ops-Manager operator` will scale the database component vertically as specified in the `ProxySQLOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling a ProxySQL server using the `ProxySQLAutoscaler` CRD.
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/_index.md b/content/docs/v2024.1.31/guides/proxysql/clustering/_index.md
new file mode 100644
index 0000000000..e34cb3348b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: ProxySQL Clustering
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-clustering
+    name: ProxySQL Clustering
+    parent: guides-proxysql
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/overview/images/proxy-cluster.png b/content/docs/v2024.1.31/guides/proxysql/clustering/overview/images/proxy-cluster.png
new file mode 100644
index 0000000000..b4eec08de9
Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/clustering/overview/images/proxy-cluster.png differ
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/clustering/overview/index.md
new file mode 100644
index 0000000000..02eea79411
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/clustering/overview/index.md
@@ -0,0 +1,48 @@
+---
+title: ProxySQL Cluster Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-clustering-overview
+    name: ProxySQL Cluster Overview
+    parent: guides-proxysql-clustering
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# ProxySQL Cluster
+
+Here we'll discuss some concepts about ProxySQL Cluster.
+
+## So What is Replication
+
+Replication means that multiple proxy servers handle the traffic together, while all necessary configuration stays identical and in sync across all the nodes. A configuration change on any server eventually propagates to every other node, so all of them behave the same. One can read or write through any server of the cluster. The following figure shows a cluster of four ProxySQL servers:
+
+![ProxySQL Cluster](/docs/v2024.1.31/guides/proxysql/clustering/overview/images/proxy-cluster.png)
+
+
+## ProxySQL Cluster Features
+
+- Virtually synchronous replication
+- Read and write through any cluster node
+- Cluster failover recovery
+- Better performance than a standalone proxy server
+- Load balancing at the ProxySQL end
+
+
+## Next Steps
+
+- [Deploy ProxySQL Cluster](/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/) using KubeDB.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/load.sh b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/load.sh
new file mode 100644
index 0000000000..bda45a516c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/load.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+
+COUNTER=0
+
+USER='test'
+PROXYSQL_NAME='proxy-server'
+NAMESPACE='demo'
+PASS='pass'
+
+VAR="x"
+
+while [ $COUNTER -lt 100 ]; do
+  let COUNTER=COUNTER+1
+  VAR=a$VAR
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e 'select 1;' > /dev/null 2>&1
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e "INSERT INTO test.testtb(name) VALUES ('$VAR');" > /dev/null 2>&1
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e "select * from test.testtb;" > /dev/null 2>&1
+  sleep 0.0001
+done
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-mysql.yaml
new file mode 100644
index 0000000000..c884c6f678
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-mysql.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-proxysql.yaml
new file mode 100644
index 0000000000..890e05bfc0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/examples/sample-proxysql.yaml
@@ -0,0 +1,11 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/index.md
new file mode 100644
index 0000000000..d1f27b720a
--- /dev/null
+++ 
b/content/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/index.md
@@ -0,0 +1,554 @@
+---
+title: ProxySQL Cluster Guide
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-clustering-cluster
+    name: ProxySQL Cluster Guide
+    parent: guides-proxysql-clustering
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# ProxySQL Cluster
+
+This guide will show you how to use the `KubeDB` Enterprise operator to set up a `ProxySQL` cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/)
+  - [ProxySQL Cluster](/docs/v2024.1.31/guides/proxysql/clustering/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+### Prepare MySQL backend
+
+We need a MySQL backend for the ProxySQL server, so we are creating one with the YAML below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/clustering/proxysql-cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+Let's wait for the MySQL to be Ready.
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+mysql-server   5.7.44    Ready    3m51s
+```
+
+Let's first create a user and a database in the backend MySQL server to test the proxy traffic.
+
+```bash
+$ kubectl exec -it -n demo mysql-server-0 -- bash
+Defaulted container "mysql" out of: mysql, mysql-coordinator, mysql-init (init)
+root@mysql-server-0:/# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 195
+Server version: 5.7.44-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> create user `test`@'%' identified by 'pass';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> create database test;
+Query OK, 1 row affected (0.01 sec)
+
+mysql> use test;
+Database changed
+
+mysql> show tables;
+Empty set (0.00 sec)
+
+mysql> create table testtb(name varchar(103), primary key(name));
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> grant all privileges on test.* to 'test'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+## Deploy ProxySQL Cluster
+
+The following is an example `ProxySQL` object which creates a ProxySQL cluster with three members.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  terminationPolicy: WipeOut
+```
+
+To deploy a ProxySQL cluster, all you need to do is set the `.spec.replicas` field to a value higher than 2.
+
+Let's apply the YAML.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/clustering/proxysql-cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Let's wait for the ProxySQL to be Ready.
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    4m
+```
+
+Let's see the pods.
+
+```bash
+$ kubectl get pods -n demo | grep proxy
+proxy-server-0   1/1     Running   3          4m
+proxy-server-1   1/1     Running   3          4m
+proxy-server-2   1/1     Running   3          4m
+```
+
+We can see that three nodes are up now.
+
+## Check proxysql_servers table
+
+Let's check the `proxysql_servers` table inside the ProxySQL pods.
+
+```bash
+#first node
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin > "
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > select * from runtime_proxysql_servers;
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+3 rows in set (0.000 sec)
+
+ProxySQLAdmin > exit
+Bye
+```
+```bash
+#second node
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin >"
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > select * from runtime_proxysql_servers;
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+3 rows in set (0.000 sec)
+
+ProxySQLAdmin >exit
+Bye
+```
+
+```bash
+#third node
+$ kubectl exec -it -n demo proxy-server-2 -- bash
+root@proxy-server-2:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin >"
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > select * from runtime_proxysql_servers;
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+3 rows in set (0.000 sec)
+
+ProxySQLAdmin >exit
+Bye
+```
+
+From the above output we can see that the `proxysql_servers` table has been successfully set up on all three nodes.
+
+## Create test user in ProxySQL server
+
+Let's create the test user inside the ProxySQL server.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin >"
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > insert into mysql_users(username,password,default_hostgroup) values('test','pass',2);
+Query OK, 1 row affected (0.001 sec)
+
+ProxySQLAdmin > LOAD MYSQL USERS TO RUNTIME;
+Query OK, 0 rows affected (0.000 sec)
+
+ProxySQLAdmin > SAVE MYSQL USERS TO DISK;
+Query OK, 0 rows affected (0.009 sec)
+
+```
+
+## Check load balance
+
+Now let's check the load balancing across the cluster.
+
+First we need a script to send load through the ProxySQL servers. We will use the test user and the test table we just created.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# apt update
+... ... ...
+root@proxy-server-1:/# apt install nano
+... ... ...
+root@proxy-server-1:/# nano load.sh
+# copy paste the load.sh file here
+GNU nano 5.4            load.sh
+#!/bin/bash
+
+COUNTER=0
+
+USER='test'
+PROXYSQL_NAME='proxy-server'
+NAMESPACE='demo'
+PASS='pass'
+
+VAR="x"
+
+while [ $COUNTER -lt 100 ]; do
+  let COUNTER=COUNTER+1
+  VAR=a$VAR
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e 'select 1;' > /dev/null 2>&1
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e "INSERT INTO test.testtb(name) VALUES ('$VAR');" > /dev/null 2>&1
+  mysql -u$USER -h$PROXYSQL_NAME.$NAMESPACE.svc -P6033 -p$PASS -e "select * from test.testtb;" > /dev/null 2>&1
+  sleep 0.0001
+done
+
+root@proxy-server-1:/# chmod +x load.sh
+
+root@proxy-server-1:/# ./load.sh
+```
+
+> You can find the load.sh file [here](https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/clustering/proxysql-cluster/examples/load.sh)
+
+```bash
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin >"
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > select hostname, Queries from stats_proxysql_servers_metrics;
++---------------------------------------+---------+
+| hostname                              | Queries |
++---------------------------------------+---------+
+| proxy-server-2.proxy-server-pods.demo | 122     |
+| proxy-server-1.proxy-server-pods.demo | 94      |
+| proxy-server-0.proxy-server-pods.demo | 101     |
++---------------------------------------+---------+
+3 rows in set (0.000 sec)
+
+ProxySQLAdmin > select hostgroup,srv_host,Queries from stats_mysql_connection_pool;
++-----------+-------------------------------+---------+
+| hostgroup | srv_host                      | Queries |
++-----------+-------------------------------+---------+
+| 2         | mysql-server.demo.svc         | 30      |
+| 3         | mysql-server-standby.demo.svc | 100     |
+| 3         | mysql-server.demo.svc         | 34      |
++-----------+-------------------------------+---------+
+```
+
+From the above output we can see that the load is properly distributed across the ProxySQL servers and the backend MySQL nodes.
+
+## Check cluster sync
+
+Let's check whether a configuration change on one node is automatically propagated to the others in our ProxySQL cluster.
+
+We will change the `admin-restapi_enabled` variable on one node and observe the change on the others.
+
+First check the current status.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | false |
++-----------------------+-------+
+root@proxy-server-0:/# exit
+exit
+
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | false |
++-----------------------+-------+
+root@proxy-server-1:/# exit
+exit
+
+$ kubectl exec -it -n demo proxy-server-2 -- bash
+root@proxy-server-2:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | false |
++-----------------------+-------+
+root@proxy-server-2:/# exit
+exit
+
+```
+
+Now set the value to `true` on server 0.
+
+```bash
+
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "set admin-restapi_enabled='true';"
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | true  |
++-----------------------+-------+
+root@proxy-server-0:/# exit
+exit
+
+$ kubectl exec -it -n demo proxy-server-1 -- bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | true  |
++-----------------------+-------+
+root@proxy-server-1:/# exit
+exit
+
+$ kubectl exec -it -n demo proxy-server-2 -- bash
+root@proxy-server-2:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "show variables like 'admin-restapi_enabled';"
++-----------------------+-------+
+| Variable_name         | Value |
++-----------------------+-------+
+| admin-restapi_enabled | true  |
++-----------------------+-------+
+root@proxy-server-2:/# exit
+exit
+
+```
+From the above output we can see that the cluster stays in sync: the configuration change was propagated to the other cluster nodes.
+
+## Cluster failover recovery
+If a pod of the ProxySQL cluster crashes, the StatefulSet created by the KubeDB operator brings up a replacement pod, which then automatically joins the cluster. We can test this feature by deleting a pod, waiting for it to be recreated, and watching it join the cluster again.
+
+Let's see the current status first.
+ +```bash +ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname; ++---------------------------------------+--------------------+---------------------+---------------------+ +| hostname | checksum | changed_at | updated_at | ++---------------------------------------+--------------------+---------------------+---------------------+ +| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:28 | +| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:28 | +| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 | ++---------------------------------------+--------------------+---------------------+---------------------+ +3 rows in set (0.000 sec) +``` + +Now let's delete the pod-2. + +```bash +$ kubectl delete pod -n demo proxy-server-2 +pod "proxy-server-2" deleted +``` + +Let's watch the cluster status now. +```bash +ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname; ++---------------------------------------+--------------------+---------------------+---------------------+ +| hostname | checksum | changed_at | updated_at | ++---------------------------------------+--------------------+---------------------+---------------------+ +| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:28 | +| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:28 | +| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 | ++---------------------------------------+--------------------+---------------------+---------------------+ +3 rows in set (0.000 sec) +ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname; ++---------------------------------------+--------------------+---------------------+---------------------+ +| hostname | checksum | changed_at | updated_at | ++---------------------------------------+--------------------+---------------------+---------------------+ +| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:28 | +| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:28 | +| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 | ++---------------------------------------+--------------------+---------------------+---------------------+ +3 rows in set (0.000 sec) + +... ... ... 
+
+ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
++---------------------------------------+--------------------+---------------------+---------------------+
+| hostname                              | checksum           | changed_at          | updated_at          |
++---------------------------------------+--------------------+---------------------+---------------------+
+| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:40 |
+| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:40 |
+| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 |
++---------------------------------------+--------------------+---------------------+---------------------+
+3 rows in set (0.000 sec)
+
+... ... ...
+
+ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
++---------------------------------------+--------------------+---------------------+---------------------+
+| hostname                              | checksum           | changed_at          | updated_at          |
++---------------------------------------+--------------------+---------------------+---------------------+
+| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:40 |
+| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:40 |
+| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 |
++---------------------------------------+--------------------+---------------------+---------------------+
+3 rows in set (0.000 sec)
+```
+
+From the above output we can see that the third server has fallen out of sync, as it is unavailable right now, while the other two remain in sync.
+
+Wait for the new pod to come up.
+
+```bash
+$ kubectl get pod -n demo proxy-server-2
+NAME             READY   STATUS    RESTARTS   AGE
+proxy-server-2   1/1     Running   0          94s
+```
+
+Now check the status again.
+
+```
+ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
++---------------------------------------+--------------------+---------------------+---------------------+
+| hostname                              | checksum           | changed_at          | updated_at          |
++---------------------------------------+--------------------+---------------------+---------------------+
+| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:34:50 |
+| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:34:50 |
+| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:34:28 |
++---------------------------------------+--------------------+---------------------+---------------------+
+3 rows in set (0.000 sec)
+
+... ... ...
+ +ProxySQLAdmin > SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname; ++---------------------------------------+--------------------+---------------------+---------------------+ +| hostname | checksum | changed_at | updated_at | ++---------------------------------------+--------------------+---------------------+---------------------+ +| proxy-server-0.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:09:49 | 2022-11-15 06:35:15 | +| proxy-server-1.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:22 | 2022-11-15 06:35:15 | +| proxy-server-2.proxy-server-pods.demo | 0x49728B20D3BC91AC | 2022-11-15 06:10:17 | 2022-11-15 06:35:15 | ++---------------------------------------+--------------------+---------------------+---------------------+ +3 rows in set (0.000 sec) +``` + +From the above output we can see that the new pod is now in sync with the two others. So the failover recovery is successful. + +## Cleaning up + +```bash +$ kubectl delete proxysql -n demo proxy-server +proxysql.kubedb.com "proxy-server" deleted +$ kubectl delete mysql -n demo mysql-server +mysql.kubedb.com "mysql-server" deleted +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/_index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/_index.md new file mode 100644 index 0000000000..737923196f --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/_index.md @@ -0,0 +1,22 @@ +--- +title: ProxySQL Concepts +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts + name: Concepts + parent: guides-proxysql + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/appbinding/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/appbinding/index.md new file mode 100644 index 0000000000..c620aab429 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/appbinding/index.md @@ -0,0 +1,152 @@ +--- +title: AppBinding CRD +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts-appbinding + name: AppBinding + parent: guides-proxysql-concepts + weight: 17 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. 
Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for a MariaDB database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-mariadb
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: mariadbs.kubedb.com
+  name: sample-mariadb
+  namespace: demo
+spec:
+  clientConfig:
+    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVVUg1V24wOSt6MnR6RU5ESnF4N1AxZFg5aWM4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNVEF5TURrdwpPVEkxTWpCYUZ3MHlNakF5TURrd09USTFNakJhTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM3ZDl5YUtMQ3UKYy9NclRBb0NkV1VORldwckdqbVdvUEVTRWNMR0pjT0JVSTZ5NXZ5QXVGMG1TakZvNzR3SEdSbWRmS2ExMWh0Ygo4TWZ2UFNwWXVGWFpUSi9GbnkwNnU2ekZMVm5xV2h3MUdiZ2ZCUE5XK0w1ZGkzZmVjanBEZmtLbTcrd0ZUVnNmClVzWGVVcUR0VHFpdlJHVUQ5aURUTzNTUmdheVI5U0J0RnRxcHRtV0YrODFqZGlSS2pRTVlCVGJ2MDRueW9UdHUKK0hJZlFjbE40Q1p3NzJPckpUdFdiYnNiWHVlMU5RZU9nQzJmSVhkZEF0WEkxd3lOT04zckxuTFF1SUIrakVLSQpkZTlPandKSkJhSFVzRVZEbllnYlJLSTdIcVdFdk5kL29OU2VZRXF2TXk3K1hwTFV0cDBaVXlxUDV2cC9PSXJ3CmlVMWVxZGNZMzJDcEFnTUJBQUdqVXpCUk1CMEdBMVVkRGdRV0JCUlNnNDVpazFlT3lCU1VKWHkvQllZVDVLck8KeWpBZkJnTlZIU01FR0RBV2dCUlNnNDVpazFlT3lCU1VKWHkvQllZVDVLck95akFQQmdOVkhSTUJBZjhFQlRBRApBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCNTlhNlFGQm1YMTh1b1dzQ3dGT1Y0R25GYnVBL2NoaVN6CkFwRVVhcjI1L2RNK2RpT0RVNkJuMXM3Wmpqem45WU9aVVJJL3UyRGRhdmNnQUNYR2FXWHJhSS84UUYxZDB3OGsKNXFlRmMxQWE5UEhFeEsxVm1xb21MV2xhMkdTOW1EbFpEOEtueDdjU3lpRmVqRXJYdWtld1B1VXA0dUUzTjAraApwQjQ0MDVRU2d4VVc3SmVhamFQdTNXOHgyRUFKMnViTkdMVEk5L0x4V1Z0YkxGcUFoSFphbGRjaXpOSHdTUGYzCkdMWEo3YTBWTW1JY0NuMWh5a0k2UkNrUTRLSE9tbDNOcXRRS2F5RnhUVHVpdzRiZ3k3czA1UnNzRlVUaWN1VmcKc3hnMjFVQUkvYW9WaXpQOVpESGE2TmV0YnpNczJmcmZBeHhBZk9pWDlzN1JuTmM0WHd4VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+    service:
+      name: sample-mariadb
+      port: 3306
+      scheme: mysql
+  secret:
+    name: sample-mariadb-auth
+  type: kubedb.com/mariadb
+  version: 10.5.23
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the following format: `<app group>/<app resource>`. The above AppBinding is pointing to a `mariadb` resource under `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable              | Usage                                                                                                                             |
+| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                             |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `mariadb`).                             |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/mariadb`).   |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MariaDB :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+MongoDB :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+ProxySQL:
+
+| Key              | Usage                   |
+| ---------------- | ----------------------- |
+| `ADMIN_USERNAME` | Admin username          |
+| `ADMIN_PASSWORD` | Password for admin user |
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure the following fields in the `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
+  Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+  If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+  - **name :** `name` indicates the name of the service that connects with the target database.
+  - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+  - **port :** `port` specifies the port where the target database is running.
+ +- **spec.clientConfig.insecureSkipTLSVerify** + + `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead. + +- **spec.clientConfig.caBundle** + + `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/autoscaler/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/autoscaler/index.md new file mode 100644 index 0000000000..be950aa090 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/autoscaler/index.md @@ -0,0 +1,87 @@ +--- +title: ProxySQLAutoscaler CRD +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts-autoscaler + name: ProxySQLAutoscaler + parent: guides-proxysql-concepts + weight: 26 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQLAutoscaler + +## What is ProxySQLAutoscaler + +`ProxySQLAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [ProxySQL](https://www.proxysql.com/) compute resources and storage of database components in a Kubernetes native way. + +## ProxySQLAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `ProxySQLAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `ProxySQLAutoscaler` CROs for autoscaling different components of database is given below: + +**Sample `ProxySQLAutoscaler` for ProxySQL:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: ProxySQLAutoscaler +metadata: + name: psops-autoscale + namespace: demo +spec: + proxyRef: + name: sample-proxysql + compute: + proxysql: + trigger: "On" + podLifeTimeThreshold: 5m + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] +``` + +Here, we are going to describe the various sections of a `ProxySQLAutoscaler` crd. + +A `ProxySQLAutoscaler` object has the following fields in the `spec` section. + +### spec.proxyRef + +`spec.proxyRef` is a required field that point to the [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) object for which the autoscaling will be performed. This field consists of the following sub-field: + +- **spec.proxyRef.name :** specifies the name of the [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) object. + +### spec.compute + +`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the proxysql components. This field consists of the following sub-field: + +- `spec.compute.proxysql` indicates the desired compute autoscaling configuration for a ProxySQL standalone or cluster. + +All of them has the following sub-fields: + +- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled. 
+- `minAllowed` specifies the minimum amount of resources that will be recommended; the default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum.
+- `controlledResources` specifies which types of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference, in percent, between the recommended value and the current value. If the difference is greater than this value, autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum lifetime of at least one of the pods before autoscaling is triggered.
+- `InMemoryScalingThreshold` specifies the percentage of memory that will be passed as inMemorySizeGB for the in-memory database engine, which is only available for the Percona variant of ProxySQL.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/images/configuration.png b/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/images/configuration.png
new file mode 100644
index 0000000000..3d66b03c16
Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/images/configuration.png differ
diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/index.md
new file mode 100644
index 0000000000..2e1ce4a762
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/index.md
@@ -0,0 +1,220 @@
+---
+title: ProxySQL Declarative Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-concepts-declarative-configuration
+    name: Declarative Configuration
+    parent: guides-proxysql-concepts
+    weight: 12
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# ProxySQL Declarative Configuration
+
+## What is ProxySQL Declarative Configuration
+
+To bootstrap a native ProxySQL server with a desired configuration, we need to pass a configuration file named `proxysql.cnf`. Through the `proxysql.cnf` file we can pass initial configuration for various tables and global variables in a specific format. You can find a sample `proxysql.cnf` and its grammar [here](https://github.com/sysown/proxysql/blob/v2.x/etc/proxysql.cnf).
+
+With KubeDB ProxySQL we have eased this process with declarative YAML. We have scoped the CRD in a specific way so that you can provide the desired configuration in YAML format.
+
+## How it works
+
+The user provides the configuration under the `.spec.initConfig` section of the ProxySQL YAML. The operator parses the YAML and creates a configuration file. A secret is then created, holding that configuration file inside. Every new pod is created with the configuration file from that secret.
+
+<figure align="center">
+ ProxySQL Declarative Configuration +
Fig: ProxySQL Configuration Secret Lifecycle
+
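+
+If you want to see what the operator actually rendered, you can decode the configuration secret. A minimal sketch, assuming a ProxySQL object named `proxy-server` in the `demo` namespace; the secret name and data key below are illustrative guesses, so list the secrets first to find the real ones:
+
+```bash
+# Find the secret that holds the generated configuration file.
+$ kubectl get secret -n demo | grep proxy-server
+
+# Decode the rendered proxysql.cnf from the secret (replace the
+# secret name and key with the ones found above).
+$ kubectl get secret -n demo proxy-server-configuration -o jsonpath='{.data.proxysql\.cnf}' | base64 -d
+```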
+
+At any time the user might need to change the configuration. To serve that purpose we have introduced the ProxySQLOpsRequest. When an ops request is created, the `KubeDB` Ops Manager updates the configuration secret and applies the changes to the ProxySQL cluster nodes. This is how the configuration secret remains the source of truth for the ProxySQL CRO while any changes are made in a declarative way.
+
+> A user can exec into any ProxySQL pod and change any configuration from the admin panel at any time, but that won't update the configuration secret. We recommend using an ops request to keep things declarative and to keep the `proxysql.cnf` file always updated.
+
+## API Description
+
+You can write the configuration in YAML format under the `spec.initConfig` section and the operator will do the rest. The `spec.initConfig` section is divided into four sections: `spec.initConfig.mysqlUsers`, `spec.initConfig.mysqlQueryRules`, `spec.initConfig.mysqlVariables`, and `spec.initConfig.adminVariables`. You can configure the `mysql_users` and `mysql_query_rules` tables as well as the global variables under the corresponding fields. This is the [API](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#ProxySQLConfiguration) documentation. We will discuss each of the fields in detail below.
+
+### initConfig.mysqlUsers
+
+This is an array field. Each array element should carry the info of a user that you want to be present in the `mysql_users` table inside the ProxySQL server.
+
+As per the official ProxySQL documentation, the `mysql_users` table looks something like [this](https://proxysql.com/documentation/main-runtime/#mysql_users). With KubeDB we have created [this](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#MySQLUser) API to configure the table, which basically means that you can configure everything through the YAML except the password. The password is automatically fetched from the backend server, so you don't need to mention it. This has been done keeping in mind the data sensitivity issue with YAML.
+
+```yaml
+spec:
+  ...
+  initConfig:
+    mysqlUsers:
+      - username: wolverine
+        active: 1
+        default_hostgroup: 2
+        default_schema: marvel
+      - username: superman
+        active: 1
+        default_hostgroup: 3
+  ...
+```
+
+### initConfig.mysqlQueryRules
+
+This is an array field. Each array element should carry the info of a query rule that you want to set up in the ProxySQL server. The operator will set up the `mysql_query_rules` table with all the query rules you mention.
+
+As per the official ProxySQL documentation, the `mysql_query_rules` table looks something like [this](https://proxysql.com/documentation/main-runtime/#mysql_query_rules). With KubeDB we are using an array of [runtime.RawExtension](https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#RawExtension) for `mysqlQueryRules`. You can configure every column of a rule through this. In simple terms, just use it as a key-value YAML section.
+
+```yaml
+spec:
+  ...
+  initConfig:
+    ...
+    mysqlQueryRules:
+      - rule_id: 1
+        active: 1
+        match_pattern: "^SELECT .* FOR UPDATE$"
+        destination_hostgroup: 2
+        apply: 1
+      - rule_id: 2
+        active: 1
+        match_pattern: "^SELECT"
+        destination_hostgroup: 3
+        apply: 1
+  ...
+```
+
+### initConfig.mysqlVariables
+
+This is a [runtime.RawExtension](https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#RawExtension) field.
You can pass all the [MySQL Variables](https://proxysql.com/Documentation/global-variables/mysql-variables/) you want to configure in a key-value format under this section. You can configure almost all the MySQL variables except `interfaces`, `monitor_username`, `monitor_password`, `ssl_p2s_cert`, `ssl_p2s_key`, `ssl_p2s_ca`. We have protected the `interfaces` variable because a lot of our operator logic depends on it.
+
+```yaml
+spec:
+  ...
+  initConfig:
+    ...
+    mysqlVariables:
+      default_schema: "information_schema"
+      stacksize: 1048576
+      commands_stats: "true"
+      sessions_sort: "true"
+      server_version: "8.0.35"
+      monitor_history: 60000
+      ping_timeout_server: 200
+      default_query_timeout: 36000000
+      connect_timeout_server: 10000
+      monitor_ping_interval: 200000
+      poll_timeout: 2000
+      max_connections: 2048
+      default_query_delay: 0
+      ping_interval_server_msec: 10000
+      have_compress: "true"
+      threads: 4
+      monitor_connect_interval: 200000
+    ...
+```
+### initConfig.adminVariables
+
+This is a [runtime.RawExtension](https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#RawExtension) field. You can pass all the [Admin Variables](https://proxysql.com/Documentation/global-variables/admin-variables/) you want to configure in a key-value format under this section. You can configure almost all the admin variables except `admin_credentials` and `mysql_interface`. The default `admin_credential` is always `admin:admin`. If you pass any credential through `spec.authSecret`, our operator will add that too; if you don't, the operator will create one and add it as the `cluster_admin`. As for the `mysql_interface`, we have protected it because our operator logic depends on it in some cases.
+
+```yaml
+spec:
+  ...
+  initConfig:
+    ...
+    adminVariables:
+      cluster_mysql_users_save_to_disk: "true"
+      cluster_mysql_servers_save_to_disk: "true"
+      cluster_proxysql_servers_diffs_before_sync: "3"
+      restapi_enabled: "true"
+      cluster_mysql_query_rules_diffs_before_sync: "3"
+      cluster_mysql_servers_diffs_before_sync: "3"
+      cluster_proxysql_servers_save_to_disk: "true"
+      restapi_port: "6070"
+      cluster_mysql_query_rules_save_to_disk: "true"
+      cluster_check_status_frequency: "100"
+      cluster_mysql_users_diffs_before_sync: "3"
+      refresh_interval: "2000"
+      cluster_check_interval_ms: "200"
+    ...
+``` + +### Complete YAML + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 3 + backend: + name: mysql-server + initConfig: + mysqlUsers: + - username: wolverine + active: 1 + default_hostgroup: 2 + default_schema: marvel + - username: superman + active: 1 + default_hostgroup: 3 + mysqlQueryRules: + - rule_id: 1 + active: 1 + match_pattern: "^SELECT .* FOR UPDATE$" + destination_hostgroup: 2 + apply: 1 + - rule_id: 2 + active: 1 + match_pattern: "^SELECT" + destination_hostgroup: 3 + apply: 1 + mysqlVariables: + stacksize: 1048576 + default_schema: "information_schema" + commands_stats: "true" + sessions_sort: "true" + server_version: "8.0.35" + monitor_history: 60000 + ping_timeout_server: 200 + default_query_timeout: 36000000 + connect_timeout_server: 10000 + monitor_ping_interval: 200000 + poll_timeout: 2000 + max_connections: 2048 + default_query_delay: 0 + ping_interval_server_msec: 10000 + have_compress: "true" + threads: 4 + monitor_connect_interval: 200000 + adminVariables: + cluster_mysql_users_save_to_disk: "true" + cluster_mysql_servers_save_to_disk: "true" + cluster_proxysql_servers_diffs_before_sync: "3" + restapi_enabled: "true" + cluster_mysql_query_rules_diffs_before_sync: "3" + cluster_mysql_servers_diffs_before_sync: "3" + cluster_proxysql_servers_save_to_disk: "true" + restapi_port: "6070" + cluster_mysql_query_rules_save_to_disk: "true" + cluster_check_status_frequency: "100" + cluster_mysql_users_diffs_before_sync: "3" + refresh_interval: "2000" + cluster_check_interval_ms: "200" + terminationPolicy: WipeOut + ``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/index.md new file mode 100644 index 0000000000..51bb00b5bc --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/index.md @@ -0,0 +1,282 @@ +--- +title: ProxySQLOpsRequest CRD +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts-proxysqlopsrequest + name: ProxySQLOpsRequest + parent: guides-proxysql-concepts + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQLOpsRequest + +## What is ProxySQLOpsRequest + +`ProxySQLOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [ProxySQL](https://www.proxysql.com/) administrative operations like database version updating, horizontal scaling, vertical scaling,reconfiguration etc. in a Kubernetes native way. + +## ProxySQLOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `ProxySQLOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. 
+ +Here, some sample `ProxySQLOpsRequest` CRs for different administrative operations is given below: + +**Sample ProxySQLOpsRequest for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: proxyops-update + namespace: demo +spec: + type: UpdateVersion + proxyRef: + name: proxy-server + updateVersion: + targetVersion: "2.4.4-debian" +``` + +**Sample ProxySQLOpsRequest Objects for Horizontal Scaling of proxysql cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: scale-up + namespace: demo +spec: + type: HorizontalScaling + proxyRef: + name: proxy-server + horizontalScaling: + member: 5 + +``` + +**Sample ProxySQLOpsRequest Objects for Vertical Scaling of the proxysql cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: proxyops-vscale + namespace: demo +spec: + type: VerticalScaling + proxyRef: + name: proxy-server + verticalScaling: + proxysql: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" +``` + +**Sample ProxySQLOpsRequest Objects for Reconfiguring ProxySQL cluster:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: reconfigure-vars + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + adminVariables: + refresh_interval: 2055 + cluster_check_interval_ms: 205 + mysqlVariables: + max_transaction_time: 1540000 + max_stmts_per_connection: 19 +``` + +**Sample ProxySQLOpsRequest Objects for Reconfiguring TLS of the ProxySQL:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: recon-tls-update + namespace: demo +spec: + type: ReconfigureTLS + proxyRef: + name: proxy-server + tls: + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + emailAddresses: + - "mikebaker@gmail.com" + certificates: + - alias: client + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + emailAddresses: + - "mikebaker@gmail.com" +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: recon-tls-rotate + namespace: demo +spec: + type: ReconfigureTLS + proxyRef: + name: proxy-server + tls: + rotateCertificates: true +``` + + +Here, we are going to describe the various sections of a `ProxySQLOpsRequest` crd. + +A `ProxySQLOpsRequest` object has the following fields in the `spec` section. + +### spec.proxyRef + +`spec.proxyRef` is a required field that point to the [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) object for which the administrative operations will be performed. This field consists of the following sub-field: + +- **spec.proxyRef.name :** specifies the name of the [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) object. + +### spec.type + +`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `ProxySQLOpsRequest`. + +- `UpdateVersion` +- `HorizontalScaling` +- `VerticalScaling` +- `Reconfigure` +- `ReconfigureTLS` +- `Restart` + +> You can perform only one type of operation on a single `ProxySQLOpsRequest` CR. For example, if you want to update your proxysql and scale up its replica then you have to create two separate `ProxySQLOpsRequest`. 
At first, you have to create a `ProxySQLOpsRequest` for updating. Once it is completed, you can create another `ProxySQLOpsRequest` for scaling. You should not create two `ProxySQLOpsRequest` CRs simultaneously.
+
+### spec.updateVersion
+
+If you want to update your ProxySQL version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [ProxySQLVersion](/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/) CR that contains the ProxySQL version information you want to update to.
+
+> You can only update between ProxySQL versions. KubeDB does not support downgrades for ProxySQL.
+
+### spec.horizontalScaling
+
+If you want to scale up or scale down your ProxySQL cluster or different components of it, you have to specify the `spec.horizontalScaling` section. `spec.horizontalScaling.member` indicates the desired number of nodes for the ProxySQL cluster after scaling. For example, if your cluster currently has 4 nodes, and you want to add 2 additional nodes, then you have to specify 6 in the `spec.horizontalScaling.member` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.member` field.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `ProxySQL` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.proxysql` indicates the desired resources for the ProxySQL standalone or cluster after scaling.
+- `spec.verticalScaling.exporter` indicates the desired resources for the `exporter` container.
+- `spec.verticalScaling.coordinator` indicates the desired resources for the `coordinator` container.
+
+All of them have the below structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.configuration
+
+If you want to reconfigure your running ProxySQL cluster with a new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `mysqlUsers` : To reconfigure the `mysql_users` table, you need to provide the desired user info under the `spec.configuration.mysqlUsers.users` section. Set the `.spec.configuration.mysqlUsers.reqType` to either `add`, `update` or `delete` based on the operation you want to perform.
+- `mysqlQueryRules` : To reconfigure the `mysql_query_rules` table, you need to provide the desired rule info under the `spec.configuration.mysqlQueryRules.rules` section. Set the `.spec.configuration.mysqlQueryRules.reqType` to either `add`, `update` or `delete` based on the operation you want to perform.
+- `mysqlVariables` : You can reconfigure mysql variables for the proxysql server using this field. You can reconfigure almost all the mysql variables except `mysql-interfaces`, `mysql-monitor_username`, `mysql-monitor_password`, `mysql-ssl_p2s_cert`, `mysql-ssl_p2s_key`, `mysql-ssl_p2s_ca`.
+- `adminVariables` : You can reconfigure admin variables for the proxysql server using this field. You can reconfigure almost all the admin variables except `admin-admin_credentials` and `admin-mysql_interface`.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this ProxySQL.
+- `spec.tls.remove` specifies that we want to remove TLS from this ProxySQL.
+
+### ProxySQLOpsRequest Status
+
+`.status` describes the current state and progress of a `ProxySQLOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `ProxySQLOpsRequest`. It can have the following three values:
+
+| Phase      | Meaning                                                                              |
+| ---------- | ------------------------------------------------------------------------------------ |
+| Successful | KubeDB has successfully performed the operation requested in the ProxySQLOpsRequest  |
+| Failed     | KubeDB has failed the operation requested in the ProxySQLOpsRequest                  |
+| Denied     | KubeDB has denied the operation requested in the ProxySQLOpsRequest                  |
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `ProxySQLOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `ProxySQLOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. ProxySQLOpsRequest has the following types of conditions:
+
+| Type               | Meaning                                                                 |
+| ------------------ | ----------------------------------------------------------------------- |
+| `Progressing`      | Specifies that the operation is now in the progressing state            |
+| `Successful`       | Specifies that the operation on the database was successful             |
+| `Failed`           | Specifies that the operation on the database failed                     |
+| `ScaleDownCluster` | Specifies that the scale-down operation of the cluster is in progress   |
+| `ScaleUpCluster`   | Specifies that the scale-up operation of the cluster is in progress     |
+| `Reconfigure`      | Specifies that the reconfiguration of the cluster nodes is in progress  |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+  - `status` will be `True` if the current transition succeeded.
+  - `status` will be `False` if the current transition failed.
+  - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
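+
+To see how the `spec.configuration` sub-fields described above fit together, here is a sketch of a `Reconfigure` request that updates an entry in the `mysql_users` table through `reqType` (the object name and user values are illustrative only):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: reconfigure-users
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      # reqType can be add, update or delete
+      reqType: update
+      users:
+        - username: wolverine
+          active: 0
+          default_hostgroup: 3
+```
+
+A `Restart` request, by contrast, should only need `spec.type: Restart` and the `spec.proxyRef` section.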
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/index.md new file mode 100644 index 0000000000..2655b3ba1c --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/index.md @@ -0,0 +1,91 @@ +--- +title: ProxySQLVersion CRD +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts-proxysqlversion + name: ProxySQLVersion + parent: guides-proxysql-concepts + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQLVersion + +## What is ProxySQLVersion + +`ProxySQLVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [ProxySQL](https://www.proxysql.com/) deployed with KubeDB in a Kubernetes native way. + +When you install KubeDB, a `ProxySQLVersion` custom resource will be created automatically for supported ProxySQL versions. You have to specify the name of `ProxySQLVersion` object in `.spec.version` field of [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/) object. Then, KubeDB will use the docker images specified in the `ProxySQLVersion` object to create your ProxySQL instance. + +Using a separate CRD for this purpose allows us to modify the images, and policies independent of KubeDB operator. This will also allow the users to use a custom image for the ProxySQL. + +## ProxySQLVersion Specification + +As with all other Kubernetes objects, a ProxySQLVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. + +```yaml +apiVersion: catalog.kubedb.com/v1alpha1 +kind: ProxySQLVersion +metadata: + name: "2.3.2-debian" + labels: + app: kubedb +spec: + version: "2.3.2-debian" + proxysql: + image: "${KUBEDB_CATALOG_REGISTRY}/proxysql:2.3.2-debian-v2" + exporter: + image: "${KUBEDB_CATALOG_REGISTRY}/proxysql-exporter:1.1.0" + podSecurityPolicies: + databasePolicyName: proxysql-db +``` + +### .metadata.name + +`.metadata.name` is a required field that specifies the name of the `ProxySQLVersion` object. You have to specify this name in `.spec.version` field of [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/) object. + +We follow this convention for naming ProxySQLVersion object: + +- Name format: `{Original ProxySQL image version}-{modification tag}` + +We modify the original ProxySQL docker image to support additional features. An image with a higher modification tag will have more features than the images with a lower modification tag. Hence, it is recommended to use ProxySQLVersion object with the highest modification tag to take advantage of the latest features. + +### .spec.version + +`.spec.version` is a required field that specifies the original version of ProxySQL that has been used to build the docker image specified in `.spec.proxysql.image` field. + +### .spec.deprecated + +`spec.deprecated` is an optional field that specifies whether the docker images specified here is supported by the current KubeDB operator. + +The default value of this field is `false`. 
If `.spec.deprecated` is set `true`, KubeDB operator will not deploy ProxySQL and other respective resources for this version. + +### .spec.proxysql.image + +`.spec.proxysql.image` is a required field that specifies the docker image which will be used to create Statefulset by KubeDB operator to deploy expected ProxySQL. + +### .spec.exporter.image + +`.spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics. + +### .spec.podSecurityPolicies.databasePolicyName + +`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the ProxySQL pod(s) running. + +## Next Steps + +- Learn how to use KubeDB ProxySQL to load balance MySQL Group Replication [here](/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/) \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql/index.md b/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql/index.md new file mode 100644 index 0000000000..3efc44138a --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/concepts/proxysql/index.md @@ -0,0 +1,370 @@ +--- +title: ProxySQL CRD +menu: + docs_v2024.1.31: + identifier: guides-proxysql-concepts-proxysql + name: ProxySQL + parent: guides-proxysql-concepts + weight: 5 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQL + +## What is ProxySQL + +`ProxySQL` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [ProxySQL](https://www.proxysql.com/) in a Kubernetes native way. You only need to describe the desired configurations in a ProxySQL object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## ProxySQL Spec + +Like any official Kubernetes resource, a `ProxySQL` object has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. Below is an example of the ProxySQL object. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: demo-proxysql
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 1
+  backend:
+    name: my-group
+  authSecret:
+    name: proxysql-cluster-auth
+    externallyManaged: true
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+  initConfig:
+    mysqlUsers:
+      - username: test
+        active: 1
+        default_hostgroup: 2
+    adminVariables:
+      restapi_enabled: true
+      restapi_port: 6070
+  configSecret:
+    name: my-custom-config
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: proxy-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  podTemplate:
+    metadata:
+      annotations:
+        passMe: ToProxySQLPod
+    controller:
+      annotations:
+        passMe: ToStatefulSet
+    spec:
+      serviceAccountName: my-service-account
+      schedulerName: my-scheduler
+      nodeSelector:
+        disktype: ssd
+      imagePullSecrets:
+        - name: myregistrykey
+      args:
+        - --reload
+      env:
+        - name: LOAD_BALANCE_MODE
+          value: GroupReplication
+      resources:
+        requests:
+          memory: "64Mi"
+          cpu: "250m"
+        limits:
+          memory: "128Mi"
+          cpu: "500m"
+  serviceTemplates:
+    - alias: primary
+      metadata:
+        annotations:
+          passMe: ToService
+      spec:
+        type: NodePort
+        ports:
+          - name: http
+            port: 6033
+  terminationPolicy: WipeOut
+  healthChecker:
+    failureThreshold: 3
+```
+
+### .spec.version
+
+`.spec.version` is a required field specifying the name of the [ProxySQLVersion](/docs/v2024.1.31/guides/proxysql/concepts/proxysql-version/) CRD where the docker images are specified. Currently, when you install KubeDB, it creates the following `ProxySQLVersion` resources:
+
+- `2.3.2-debian`
+- `2.3.2-centos`
+- `2.4.4-debian`
+- `2.4.4-centos`
+
+### .spec.backend
+
+`.spec.backend` specifies the information about the AppBinding of the backend MySQL/PerconaXtraDB/MariaDB. The AppBinding should contain basic information like the connection URL, server type, SSL configuration, etc. To know more about what an AppBinding is, you can refer to the [Appbinding](/docs/v2024.1.31/guides/proxysql/concepts/appbinding/) page in the concept section. See the api [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#:~:text=//%20Backend%20refers%20to%20the%20AppBinding%20of%20the%20backend%20MySQL/MariaDB/Percona%2DXtraDB%20server%0A%09Backend%20*core.LocalObjectReference%20%60json%3A%22backend%2Comitempty%22%60).
+
+### .spec.authSecret
+
+`.spec.authSecret` is an optional field that points to a secret used to hold credentials for the `proxysql cluster admin` user. If not set, the KubeDB operator creates a new Secret `{proxysql-object-name}-auth` for storing the password for the `proxysql cluster admin` user for each ProxySQL object. If you want to use an existing secret, please specify that when creating the ProxySQL object using `.spec.authSecret`. Set the `.spec.authSecret.externallyManaged` field to `true` in that case.
+
+This secret contains a `username` key and a `password` key which contain the username and password respectively for the `proxysql cluster admin` user. The password should always be alpha-numeric. If no Secret is found, KubeDB sets the value of the `username` key to `"cluster"`.
See the api [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#:~:text=//%20ProxySQL%20secret%20containing%20username%20and%20password%20for%20root%20user%20and%20proxysql%20user%0A%09//%20%2Boptional%0A%09AuthSecret%20*SecretReference%20%60json%3A%22authSecret%2Comitempty%22%60).
+
+> Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+Example:
+
+```bash
+$ kubectl create secret generic proxysql-cluster-auth -n demo \
+--from-literal=username=cluster \
+--from-literal=password=6q8u2jMOWOOZXk
+secret "proxysql-cluster-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dTJqTU9XT09aWGs=
+  username: Y2x1c3Rlcg==
+kind: Secret
+metadata:
+  ...
+  name: proxysql-cluster-auth
+  namespace: demo
+  ...
+type: Opaque
+```
+
+### .spec.monitor
+
+ProxySQL managed by KubeDB can be monitored with builtin Prometheus and Prometheus operator out-of-the-box. In the `.spec.monitor` section you can configure the necessary settings regarding monitoring. See the api [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#:~:text=//%20Monitor%20is%20used%20monitor%20proxysql%20instance%0A%09//%20%2Boptional%0A%09Monitor%20*mona.AgentSpec%20%60json%3A%22monitor%2Comitempty%22%60).
+
+### .spec.initConfig
+
+`spec.initConfig` is the field where we can set the ProxySQL bootstrap configuration. ProxySQL needs an initial configuration file, named `proxysql.cnf`, to bootstrap. In that file, all the necessary configuration related to the various ProxySQL tables and variables is written down in a specific format. KubeDB ProxySQL eases this initial configuration setup with declarative YAML: all you need to do is pass the configuration in the YAML in key-value format, and the operator will turn that into a properly formatted `proxysql.cnf` file. The `proxysql.cnf` file will be available in a secret with a `-configuration` suffix. When you change any configuration with a `ProxySQLOpsRequest`, the secret will be updated automatically with the new configuration.
+
+`.spec.initConfig` contains four subsections: `mysqlUsers`, `mysqlQueryRules`, `adminVariables`, `mysqlVariables`. The detailed description is given below. See the api [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/kubedb/v1alpha2#:~:text=//%20%2Boptional%0A%09//%20InitConfiguration%20contains%20information%20with%20which%20the%20proxysql%20will%20bootstrap%20(only%204%20tables%20are%20configurable)%0A%09InitConfiguration%20*ProxySQLConfiguration%20%60json%3A%22initConfig%2Comitempty%22%60).
+
+The `.spec.initConfig.mysqlUsers` section carries info for the `mysql_users` table. All the information provided through this field will eventually be used for setting up the `mysql_users` table inside the proxysql server. This section is an array field where each element of the array carries the necessary information for each individual user. Note that you don't need to fill in the password field for any user; the password will be automatically fetched by the KubeDB operator from the backend server.
+
+The `.spec.initConfig.mysqlQueryRules` section carries info for the `mysql_query_rules` table. This section is also an array field, and each element of the array should be a query rule in the format ProxySQL accepts.
+
+The `.spec.initConfig.mysqlVariables` section carries all the `mysql_variables` info that you want to set for the proxysql.
You need to mention the variables you want to set with their values in a key-value format under this section, and the KubeDB operator will bootstrap the proxysql with this.
+
+The `.spec.initConfig.adminVariables` section carries all the `admin_variables` info that you want to set for the proxysql. You need to mention the variables you want to set with their values in a key-value format under this section, and the KubeDB operator will bootstrap the proxysql with this.
+
+Check out this [link](/docs/v2024.1.31/guides/proxysql/concepts/declarative-configuration/) for a detailed overview of declarative configuration.
+
+### .spec.configSecret
+
+`.spec.configSecret` is another field to pass the bootstrap configuration for the proxysql. If you want to pass the configuration through a secret, you can just mention the secret name under this field. The secret should look something like the following:
+
+```bash
+$ kubectl view-secret -n demo my-config-secret -a
+AdminVariables.cnf=admin_variables=
+{
+    checksum_mysql_query_rules: true
+    refresh_interval: 2000
+    connect_timeout_server: 3000
+}
+MySQLQueryRules.cnf=mysql_query_rules=
+(
+    {
+        rule_id=1
+        active=1
+        match_pattern="^SELECT .* FOR UPDATE$"
+        destination_hostgroup=2
+        apply=1
+    },
+    {
+        rule_id=2
+        active=1
+        match_pattern="^SELECT"
+        destination_hostgroup=3
+        apply=1
+    }
+)
+
+MySQLUsers.cnf=mysql_users=
+(
+    {
+        username = "user2"
+        password = "pass2"
+        default_hostgroup = 2
+        active = 1
+    },
+    {
+        username = "user3"
+        password = "pass3"
+        default_hostgroup = 2
+        max_connections=1000
+        default_schema="test"
+        active = 1
+    },
+    { username = "user4" , password = "pass4" , default_hostgroup = 0 , active = 1 ,comment = "hello all"}
+)
+MySQLVariables.cnf=mysql_variables=
+{
+    max_connections=1024
+    default_schema="information_schema"
+}
+```
+
+The secret should contain no keys other than `AdminVariables.cnf`, `MySQLVariables.cnf`, `MySQLUsers.cnf`, and `MySQLQueryRules.cnf`. Each key name defines the content of its value. Note that the value provided with each key will be patched into the `proxysql.cnf` file exactly as it is, so be careful with the format when you bootstrap ProxySQL this way.
+
+### .spec.syncUsers
+
+`spec.syncUsers` is a boolean field. When set to true, the KubeDB operator fetches all the users from the backend and puts them into the `mysql_users` table. Any update regarding a user in the backend will also be reflected in the proxysql server. This field can be turned off by simply changing the value to false and applying the YAML. It is set to false by default.
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for the ProxySQL frontend connections. See the api [here](https://pkg.go.dev/kmodules.xyz/client-go/api/v1#TLSConfig).
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+  - `apiGroup` is the group name of the resource being referenced. The value for `Issuer` or `ClusterIssuer` is "cert-manager.io" (cert-manager v0.12.0 and later).
+  - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate.
It has the following fields:
+
+  - `alias` represents the identifier of the certificate. It has the following possible values:
+    - `server` is used for server certificate identification.
+    - `client` is used for client certificate identification.
+    - `metrics-exporter` is used for metrics exporter certificate identification.
+  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
+    If the user does not specify this field, the default secret name will be created in the following format: `--cert`.
+  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields,
+    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+    - `organizationalUnits` (optional) are the list of different organizational unit names to be used on the Certificate.
+    - `countries` (optional) are the list of country names to be used on the Certificate.
+    - `localities` (optional) are the list of locality names to be used on the Certificate.
+    - `provinces` (optional) are the list of province names to be used on the Certificate.
+    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
+    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
+    - `serialNumber` (optional) is a serial number to be used on the Certificate.
+      You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+
+  - `duration` (optional) is the period during which the certificate is valid.
+  - `renewBefore` (optional) specifies how long before expiry the certificate should be renewed.
+  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+  - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+  - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
+
+### .spec.podTemplate
+
+KubeDB allows providing a template for the ProxySQL pods through `.spec.podTemplate`. The KubeDB operator will pass the information provided in `.spec.podTemplate` to the StatefulSet created for ProxySQL. See the api [here](https://pkg.go.dev/kmodules.xyz/offshoot-api/api/v1#PodTemplateSpec).
+
+KubeDB accepts the following fields to set in `.spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+Usage of some fields of `.spec.podTemplate` is described below.
+
+#### .spec.podTemplate.spec.args
+
+`.spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments for the ProxySQL installation.
+
+#### .spec.podTemplate.spec.env
+
+`.spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the ProxySQL docker image.
+
+#### .spec.podTemplate.spec.imagePullSecrets
+
+`KubeDB` provides the flexibility of deploying ProxySQL from a private Docker registry.
`.spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker images if you are using a private docker registry. + +#### .spec.podTemplate.spec.nodeSelector + +`.spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) . + +#### .spec.podTemplate.spec.serviceAccountName + + `serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control. + + If this field is left empty, the KubeDB operator will create a service account name matching the ProxySQL object name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + + If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + + If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. + +#### .spec.podTemplate.spec.resources + +`.spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the ProxySQL Pod. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/). + +### .spec.serviceTemplate + +You can also provide a template for the services created by KubeDB operator for the ProxySQL through `.spec.serviceTemplate`. This will allow you to set the type and other properties of the services. + +KubeDB allows following fields to set in `.spec.serviceTemplate`: + +- metadata: +- annotations +- spec: +- type +- ports +- clusterIP +- externalIPs +- loadBalancerIP +- loadBalancerSourceRanges +- externalTrafficPolicy +- healthCheckNodePort +- sessionAffinityConfig + +See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail. + +## Next Steps + +- Learn how to use KubeDB ProxySQL to load balance MySQL Group Replication [here](/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/) \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/custom-rbac/index.md b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/index.md new file mode 100644 index 0000000000..de4db440cd --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/index.md @@ -0,0 +1,190 @@ +--- +title: Run ProxySQL with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: guides-proxysql-custom-rbac + name: Custom RBAC + parent: guides-proxysql + weight: 31 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). 
+
+# Using Custom RBAC resources
+
+KubeDB (version 0.13.0 and higher) supports finer user control over role-based access permissions provided to a ProxySQL instance. This tutorial will show you how to use KubeDB to run a ProxySQL instance with custom RBAC resources.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/proxysql/custom-rbac/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/proxysql/custom-rbac/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for ProxySQL. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in the ProxySQL crd. If this field is left empty, the KubeDB operator will create a service account name matching the ProxySQL crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+This guide will show you how to create a custom `ServiceAccount`, `Role`, and `RoleBinding` for a ProxySQL instance named `proxy-server` to provide the bare minimum access permissions.
+
+## Custom RBAC for ProxySQL
+
+At first, let's create a `ServiceAccount` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo prx-custom-sa
+serviceaccount/prx-custom-sa created
+```
+
+It should create a service account.
+
+```yaml
+$ kubectl get serviceaccount -n demo prx-custom-sa -oyaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2022-12-07T04:31:17Z"
+  name: prx-custom-sa
+  namespace: demo
+  resourceVersion: "494665"
+  uid: 4a8d9571-4bae-4af8-976e-061c5dd70a22
+secrets:
+  - name: prx-custom-sa-token-57whl
+```
+
+Now, we need to create a role that has the necessary access permissions for the ProxySQL instance named `proxy-server`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/custom-rbac/yamls/prx-custom-role.yaml
+role.rbac.authorization.k8s.io/prx-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+ +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: prx-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - proxy-server + resources: + - podsecuritypolicies + verbs: + - use +``` + +This permission is required for ProxySQL pods running on PSP enabled clusters. + +Now create a `RoleBinding` to bind this `Role` with the already created service account. + +```bash +$ kubectl create rolebinding prx-custom-rb --role=prx-custom-role --serviceaccount=demo:prx-custom-sa --namespace=demo +rolebinding.rbac.authorization.k8s.io/prx-custom-rb created + +``` + +It should bind `prx-custom-role` and `prx-custom-sa` successfully. + +```yaml +$ kubectl get rolebinding -n demo prx-custom-rb -o yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + creationTimestamp: "2022-12-07T04:35:58Z" + name: prx-custom-rb + namespace: demo + resourceVersion: "495245" + uid: d0286421-a0a2-46c8-b3aa-8e7cac9c5cf8 +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: prx-custom-role +subjects: + - kind: ServiceAccount + name: prx-custom-sa + namespace: demo + +``` + +Now, create a ProxySQL crd specifying `spec.podTemplate.spec.serviceAccountName` field to `prx-custom-sa`. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/custom-rbac/yamls/my-custom-db.yaml +proxysql.kubedb.com/proxy-server created +``` + +Below is the YAML for the ProxySQL crd we just created. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.4.4-debian" + replicas: 1 + backend: + name: xtradb-galera-appbinding + syncUsers: true + podTemplate: + spec: + serviceAccountName: prx-custom-sa + terminationPolicy: WipeOut + healthChecker: + failureThreshold: 3 + +``` + +Now, wait a few minutes. the KubeDB operator will create necessary PVC, StatefulSet, services, secret etc. If everything goes well, we should see that a pod with the name `proxy-server-0` has been created. + +Check that the statefulset's pod is running + +```bash +$ kubectl get pod -n demo proxy-server-0 +NAME READY STATUS RESTARTS AGE +proxy-server-0 1/1 Running 0 2m44s +``` + +Check the pod's log to see if the proxy server is ready + +```bash +$ kubectl logs -f -n demo proxy-server-0 +... +2022-12-07 04:42:04 [INFO] Cluster: detected a new checksum for mysql_users from peer proxy-server-0.proxy-server-pods.demo:6032, version 2, epoch 1670388124, checksum 0xE6BB9970689336DB . Not syncing yet ... +2022-12-07 04:42:04 [INFO] Cluster: checksum for mysql_users from peer proxy-server-0.proxy-server-pods.demo:6032 matches with local checksum 0xE6BB9970689336DB , we won't sync. + +``` + +Once we see the local checksum matched in the log, the proxysql server is ready. 
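+
+As a final sanity check, you can confirm that the pod was actually admitted with the custom service account we created earlier (expected output shown, assuming the setup above):
+
+```bash
+# print the service account name the pod is running under
+$ kubectl get pod -n demo proxy-server-0 -o jsonpath='{.spec.serviceAccountName}'
+prx-custom-sa
+```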
diff --git a/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/my-custom-db.yaml b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/my-custom-db.yaml new file mode 100644 index 0000000000..f919072def --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/my-custom-db.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.4.4-debian" + replicas: 1 + backend: + name: xtradb-galera-appbinding + syncUsers: true + podTemplate: + spec: + serviceAccountName: prx-custom-sa + terminationPolicy: WipeOut + healthChecker: + failureThreshold: 3 diff --git a/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/prx-custom-role.yaml b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/prx-custom-role.yaml new file mode 100644 index 0000000000..49b9dde7c9 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/custom-rbac/yamls/prx-custom-role.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: prx-custom-role + namespace: demo +rules: + - apiGroups: + - policy + resourceNames: + - proxy-server + resources: + - podsecuritypolicies + verbs: + - use \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/_index.md b/content/docs/v2024.1.31/guides/proxysql/monitoring/_index.md new file mode 100644 index 0000000000..f779b6472e --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Monitoring ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-monitoring + name: Monitoring + parent: guides-proxysql +weight: 120 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/mysql.yaml new file mode 100644 index 0000000000..d16fb2fa03 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-grp + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/prom-config.yaml b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/prom-config.yaml new file mode 100644 index 0000000000..fc3c0399bc --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/prom-config.yaml @@ -0,0 +1,68 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. 
+ # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/proxysql.yaml new file mode 100644 index 0000000000..7f8e97f3ac --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/examples/proxysql.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.4.4-debian" + replicas: 3 + backend: + name: mysql-grp + syncUsers: true + monitor: + agent: prometheus.io/operator + terminationPolicy: WipeOut + healthChecker: + failureThreshold: 3 diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/images/built-prom.png b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/images/built-prom.png new file mode 100644 index 0000000000..5c583b861b Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/images/built-prom.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/index.md b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/index.md new file mode 100644 index 0000000000..17556ee7b3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus/index.md @@ -0,0 +1,397 @@ +--- 
+title: Monitor ProxySQL using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: guides-proxysql-monitoring-builtinprometheus + name: Builtin Prometheus + parent: guides-proxysql-monitoring + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring ProxySQL with builtin Prometheus + +This tutorial will show you how to monitor ProxySQL database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/proxysql/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/guides/proxysql/monitoring/builtin-prometheus/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/builtin-prometheus/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy MySQL as ProxySQL Backend + +We need a mysql backend for the proxysql server. So we are creating one with the below yaml. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-grp + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/builtin-prometheus/example/mysql.yaml +mysql.kubedb.com/mysql-grp created +``` + +After applying the above yaml wait for the MySQL to be Ready. + +## Deploy ProxySQL with Monitoring Enabled + +At first, let's deploy an ProxySQL server with monitoring enabled. Below is the ProxySQL object that we are going to create. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.4.4-debian"
+  replicas: 3
+  backend:
+    name: mysql-grp
+  syncUsers: true
+  monitor:
+    agent: prometheus.io/builtin
+  terminationPolicy: WipeOut
+  healthChecker:
+    failureThreshold: 3
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the ProxySQL crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/builtin-prometheus/examples/proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait for the server to go into the `Ready` state.
+
+```bash
+$ kubectl get proxysql -n demo proxy-server
+NAME           VERSION        STATUS   AGE
+proxy-server   2.4.4-debian   Ready    76s
+```
+
+KubeDB will create a separate stats service with name `{ProxySQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=proxy-server"
+NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+proxy-server         ClusterIP   10.106.32.194   <none>        6033/TCP            2m3s
+proxy-server-pods    ClusterIP   None            <none>        6032/TCP,6033/TCP   2m3s
+proxy-server-stats   ClusterIP   10.109.106.92   <none>        6070/TCP            2m2s
+```
+
+Here, the `proxy-server-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo proxy-server-stats
+Name:              proxy-server-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/instance=proxy-server
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=proxysqls.kubedb.com
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 6070
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/instance=proxy-server,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=proxysqls.kubedb.com
+Type:              ClusterIP
+IP:                10.109.106.92
+Port:              metrics  6070/TCP
+TargetPort:        metrics/TCP
+Endpoints:         10.244.0.34:6070
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 6070
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to the [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +### Configure Existing Prometheus Server + +If you already have a Prometheus server running, you have to add above scraping job in the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect. + +>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart. + +### Deploy New Prometheus Server + +If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service. + +**Create ConfigMap:** + +At first, create a ConfigMap with the scraping configuration. Bellow, the YAML of ConfigMap that we are going to create in this tutorial. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. 
+ # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +Let's create above `ConfigMap`, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/builtin-prometheus/examples/prom-config.yaml +configmap/prometheus-config created +``` + +**Create RBAC:** + +If you are using an RBAC enabled cluster, you have to give necessary RBAC permissions for Prometheus. Let's create necessary RBAC stuffs for Prometheus, + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml +clusterrole.rbac.authorization.k8s.io/prometheus created +serviceaccount/prometheus created +clusterrolebinding.rbac.authorization.k8s.io/prometheus created +``` + +>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml). + +**Deploy Prometheus:** + +Now, we are ready to deploy Prometheus server. We are going to use following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy Prometheus server. + +Let's deploy the Prometheus server. 
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-5dff66b455-cz9td   1/1     Running   0          42s
+```
+
+Now, run the following command on a separate terminal to forward port 9090 of the `prometheus-5dff66b455-cz9td` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-5dff66b455-cz9td 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `proxy-server-stats` service as one of the targets.
+

+*Fig: Prometheus Target*
+

+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `ProxySQL` server `proxy-server` through stats service `proxy-server-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete proxysql -n demo proxy-server + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/images/database-monitoring-overview.svg b/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/images/database-monitoring-overview.svg new file mode 100644 index 0000000000..395eefb334 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/images/database-monitoring-overview.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/index.md new file mode 100644 index 0000000000..b9aed41816 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/overview/index.md @@ -0,0 +1,120 @@ +--- +title: ProxySQL Monitoring Overview +description: ProxySQL Monitoring Overview +menu: + docs_v2024.1.31: + identifier: guides-proxysql-monitoring-overview + name: Overview + parent: guides-proxysql-monitoring + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring ProxySQL with KubeDB + +KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring. + +## Overview + +KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. Following diagram shows the logical flow of database monitoring with KubeDB. + +

+*Fig: Database Monitoring Flow*
+

+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                  |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                    |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                           |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                     |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                             |
+
+## Sample Configuration
+
+A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: databases
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  configSecret: # configure Redis to use password for authentication
+    name: redis-config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+        - --redis.password=$(REDIS_PASSWORD)
+        env:
+        - name: REDIS_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: _name_of_secret_with_redis_password
+              key: password # key with the password
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Assume that the above Redis server is configured to use basic authentication. So, the exporter also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.
+
+Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the crd's own namespace (`databases` here) and this `ServiceMonitor` will have the `release: prometheus` label.
+
+> As native ProxySQL provides builtin monitoring utilities, KubeDB ProxySQL does not depend on an external exporter container, which means you don't need to configure the `spec.monitor.prometheus.exporter` field.
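+For ProxySQL itself, a minimal `spec.monitor` section therefore only names the agent and, when the Prometheus operator is used, the `ServiceMonitor` labels. Here is a sketch; the object name is a placeholder and the `release: prometheus` label must match your Prometheus `serviceMonitorSelector`:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  # ... version, replicas, backend, etc. ...
+  monitor:
+    agent: prometheus.io/operator # or prometheus.io/builtin
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus # selected by the Prometheus crd
+        interval: 10s
+```
+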
+ +## Next Steps + +- Learn how to monitor `ProxySQL` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/proxysql/monitoring/builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator). +- Learn how to monitor `ElasticSearch` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor `PostgreSQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor `MySQL` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor `MongoDB` database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor `Redis` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor `Memcached` server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator). 
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/mysql.yaml new file mode 100644 index 0000000000..d16fb2fa03 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-grp + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/proxysql.yaml new file mode 100644 index 0000000000..3264735ff3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/examples/proxysql.yaml @@ -0,0 +1,22 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.4.4-debian" + replicas: 3 + backend: + name: mysql-grp + syncUsers: true + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + release: prometheus + interval: 10s + terminationPolicy: WipeOut + healthChecker: + failureThreshold: 3 diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/images/prom-end.png b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/images/prom-end.png new file mode 100644 index 0000000000..9987dd9b54 Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/images/prom-end.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/index.md b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/index.md new file mode 100644 index 0000000000..47322c2c2c --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/monitoring/prometheus-operator/index.md @@ -0,0 +1,368 @@ +--- +title: Monitor ProxySQL using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: guides-proxysql-monitoring-prometheusoperator + name: Prometheus Operator + parent: guides-proxysql-monitoring + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring ProxySQL Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor ProxySQL server deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). 
+ +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/proxysql/monitoring/overview). + +- To keep database resources isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster: + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md). + +- If you already don't have a Prometheus server running, deploy one following tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server). + +> Note: YAML files used in this tutorial are stored in [/docs/guides/proxysql/monitoring/prometheus-operator/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/prometheus-operator/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find out required labels for ServiceMonitor + +We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in `spec.monitor.prometheus.labels` field of ProxySQL crd so that KubeDB creates `ServiceMonitor` object accordingly. + +At first, let's find out the available Prometheus server in our cluster. + +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME VERSION REPLICAS AGE +default prometheus 1 2m19s +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `default` namespace. + +```yaml +$ kubectl get prometheus -n default prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"default"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorNamespaceSelector":{"matchLabels":{"prometheus":"prometheus"}},"serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}} + creationTimestamp: "2020-08-25T04:02:07Z" + generation: 1 + labels: + prometheus: prometheus + ... + manager: kubectl + operation: Update + time: "2020-08-25T04:02:07Z" + name: prometheus + namespace: default + resourceVersion: "2087" + selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus + uid: 972a50cb-b751-418b-b2bc-e0ecc9232730 +spec: + replicas: 1 + resources: + requests: + memory: 400Mi + serviceAccountName: prometheus + serviceMonitorNamespaceSelector: + matchLabels: + prometheus: prometheus + serviceMonitorSelector: + matchLabels: + release: prometheus +``` + +- `spec.serviceMonitorSelector` field specifies which ServiceMonitors should be included. The Above label `release: prometheus` is used to select `ServiceMonitors` by its selector. So, we are going to use this label in `spec.monitor.prometheus.labels` field of ProxySQL crd. 
+- `spec.serviceMonitorNamespaceSelector` field specifies that Prometheus can select `ServiceMonitors` outside its own namespace using a namespace selector. The above label `prometheus: prometheus` is used to select the namespace where the `ServiceMonitor` is created.
+
+### Add Label to database namespace
+
+KubeDB creates a `ServiceMonitor` in the database namespace `demo`. We need to add a label to the `demo` namespace. Prometheus will select this namespace by using its `spec.serviceMonitorNamespaceSelector` field.
+
+Let's add the label `prometheus: prometheus` to the `demo` namespace,
+
+```bash
+$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}'
+namespace/demo patched
+```
+
+## Deploy MySQL as ProxySQL Backend
+
+We need a MySQL backend for the ProxySQL server. So we are creating one with the below yaml.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-grp
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/prometheus-operator/examples/mysql.yaml
+mysql.kubedb.com/mysql-grp created
+```
+
+After applying the above yaml, wait for the MySQL to be Ready.
+
+## Deploy ProxySQL with Monitoring Enabled
+
+At first, let's deploy a ProxySQL server with monitoring enabled. Below is the ProxySQL object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.4.4-debian"
+  replicas: 3
+  backend:
+    name: mysql-grp
+  syncUsers: true
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+  terminationPolicy: WipeOut
+  healthChecker:
+    failureThreshold: 3
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this server at a 10-second interval.
+
+Let's create the ProxySQL object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/monitoring/prometheus-operator/examples/proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait for the server to go into `Ready` state.
+
+```bash
+$ kubectl get proxysql -n demo proxy-server
+NAME           VERSION        STATUS   AGE
+proxy-server   2.4.4-debian   Ready    59s
+```
+
+KubeDB will create a separate stats service with the name `{ProxySQL crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=proxy-server"
+NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+proxy-server         ClusterIP   10.99.96.226    <none>        6033/TCP            107s
+proxy-server-pods    ClusterIP   None            <none>        6032/TCP,6033/TCP   107s
+proxy-server-stats   ClusterIP   10.101.190.67   <none>        6070/TCP            107s
+```
+
+Here, the `proxy-server-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+ +```yaml +$ kubectl describe svc -n demo proxy-server-stats +Name: proxy-server-stats +Namespace: demo +Labels: app.kubernetes.io/instance=proxy-server + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=proxysqls.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/operator +Selector: app.kubernetes.io/instance=proxy-server,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=proxysqls.kubedb.com +Type: ClusterIP +IP: 10.101.190.67 +Port: metrics 6070/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.31:6070 +Session Affinity: None +Events: +``` + +Notice the `Labels` and `Port` fields. `ServiceMonitor` will use these information to target its endpoints. + +KubeDB will also create a `ServiceMonitor` crd in `demo` namespace that select the endpoints of `proxy-server-stats` service. Verify that the `ServiceMonitor` crd has been created. + +```bash +$ kubectl get servicemonitor -n demo +NAME AGE +proxy-server-stats 4m8s +``` + +Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of ProxySQL crd. + +```yaml +$ kubectl get servicemonitor -n demo proxy-server-stats -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + creationTimestamp: "2021-03-19T10:09:03Z" + generation: 1 + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: proxy-server + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: proxysqls.kubedb.com + release: prometheus + managedFields: + - apiVersion: monitoring.coreos.com/v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: {} + f:app.kubernetes.io/component: {} + f:app.kubernetes.io/instance: {} + f:app.kubernetes.io/managed-by: {} + f:app.kubernetes.io/name: {} + f:release: {} + f:ownerReferences: {} + f:spec: + .: {} + f:endpoints: {} + f:namespaceSelector: + .: {} + f:matchNames: {} + f:selector: + .: {} + f:matchLabels: + .: {} + f:app.kubernetes.io/instance: {} + f:app.kubernetes.io/managed-by: {} + f:app.kubernetes.io/name: {} + f:kubedb.com/role: {} + manager: proxysql-operator + operation: Update + time: "2021-03-19T10:09:03Z" + name: proxy-server-stats + namespace: demo + ownerReferences: + - apiVersion: v1 + blockOwnerDeletion: true + controller: true + kind: Service + name: proxy-server-stats + uid: 08260a99-0984-4d90-bf68-34080ad0ee5b + resourceVersion: "241637" + selfLink: /apis/monitoring.coreos.com/v1/namespaces/demo/servicemonitors/proxy-server-stats + uid: 4f022d98-d2d8-490f-9548-f6367d03ae1f +spec: + endpoints: + - bearerTokenSecret: + key: "" + honorLabels: true + interval: 10s + path: /metrics + port: metrics + namespaceSelector: + matchNames: + - demo + selector: + matchLabels: + app.kubernetes.io/instance: proxy-server + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: proxysqls.kubedb.com + kubedb.com/role: stats +``` + +Notice that the `ServiceMonitor` has label `release: prometheus` that we had specified in ProxySQL crd. + +Also notice that the `ServiceMonitor` has selector which match the labels we have seen in the `proxy-server-stats` service. It also, target the `prom-http` port that we have seen in the stats service. + +## Verify Monitoring Metrics + +At first, let's find out the respective Prometheus pod for `prometheus` Prometheus server. 
+
+```bash
+$ kubectl get pod -n default -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          16m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n default prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `proxy-server-stats` service as one of the targets.
+
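+You can also confirm the scrape from the terminal with an instant query against the Prometheus HTTP API while the port-forward is still running. The `service` label is added by the operator's relabeling, and `up` should report `1` for the stats endpoint:
+
+```bash
+# Query the `up` series for the ProxySQL stats service;
+# a value of 1 means the last scrape succeeded.
+$ curl -s 'http://localhost:9090/api/v1/query?query=up{service="proxy-server-stats"}'
+```
+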

+*Fig: Prometheus Target*
+

+ +Check the `endpoint` and `service` labels. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +# cleanup database +kubectl delete proxysql -n demo proxy-server + +# cleanup Prometheus resources +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus.yaml + +kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml + +# cleanup Prometheus operator resources +kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml + +# delete namespace +kubectl delete ns demo +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/_index.md b/content/docs/v2024.1.31/guides/proxysql/quickstart/_index.md new file mode 100644 index 0000000000..cd1bfd8e8b --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: ProxySQL Quickstart +menu: + docs_v2024.1.31: + identifier: guides-proxysql-quickstart + name: Quickstart + parent: guides-proxysql + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/appbinding.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/appbinding.yaml new file mode 100644 index 0000000000..c04a4dd697 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/appbinding.yaml @@ -0,0 +1,20 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + name: mysql-backend-apb + namespace: demo +spec: + clientConfig: + caBundle: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVWUp4dVBqcW1EbjJPaVdkMGk5cUZ2MGdzdzQwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNakV3TVRBeApNalE1TlROYUZ3MHlNekV3TVRBeE1qUTVOVE5hTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURkRTRMaEFabEQKQTh2aWk5QTlaT1VLOEVzeWU2ZDBWWnpRZmJUeS90VlF0N05ybkhjS1NkVHFmS0lkczl1bXhSd1Y2ak5WU2RtUQp2M3NUa0xwYTZBbFRwYklQazZ5S2UxRGs2YUhhbFZDSVc4SExLNW43YklzTEV3aEkyb3F4WmIrd0pydXVGSi95Clk4a2tyazVXaFBJZzRqM3VjV0FhcllqTVpxNXRQbU9sOFJXanhzY2k3WjJsN0lIcWplSjYrKzRENXlkeGx6L3gKdVhLNmxVM2J2Y2craWhUVno1UENNS2R4OHZNL052dTNuQ21Ja2hzQUkrNGVWZE4xenRIWG51UTFXRFhlUEFFYwpRQnJGanFuWk5pYmRUeU4zYTgrdmVUM2hLK3Fhc0ZFSU5aOFY4ZFVQSVV5cHFYSmk0anZCSW9FU0RvV2V1Z3QzCklhMjh6OE5XNk9WbkFnTUJBQUdqVXpCUk1CMEdBMVVkRGdRV0JCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXcKbkRBZkJnTlZIU01FR0RBV2dCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXduREFQQmdOVkhSTUJBZjhFQlRBRApBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCU2JDUkx1TitGd2loQ3ZiamlmcjFldGViNUEzeVRWWjN2CkN6MEhMK2RYekJEbVI5OGxPdWgvSDV4bEVyT3U1emdjSXJLK3BtM0toQVl1NXhZOHBFSklwRXBCRExnSEVQNTQKMkc1NllydXcva1J2cWJ4VWlwUjIzTWJGZXpyUW9BeXNKbmZGRmZrYXNVWlBRSjg0dE05UStoTEpUcnp0VkczZgphcnVCd3h0SDA3bklZMTFRUnhZbERPQWx6ck9icWdvUUtwODFXVTBrTzdDRVd2Q1ZOenphb2dWV08rZUtvdUw5Ci9aQjVYQ1FVRlRScFlEQjB1aFk1NTAwR1kxYnRpRUVKaVdZVTg0UVFzampINVZlRkFtN21ldWRkak9pTEM3dUMKSmFISkRLa0txZWtDSkZ6YzF0QmpHQVZUNHZaTGcrUldhUmJHa01Qdm1WZVFIOUtSMVN2aQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: mysql-server + path: / + port: 3306 + scheme: mysql + url: tcp(mysql-server.demo.svc:3306)/ + secret: + name: mysql-server-auth + tlsSecret: + name: mysql-server-client-cert + type: mysql + version: 8.0.35 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql.yaml new file mode 100644 index 0000000000..ba34db6ec6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql.yaml @@ -0,0 +1,12 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 1 + syncUsers: true + backend: + name: mysql-server + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/ubuntu.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/ubuntu.yaml new file mode 100644 index 0000000000..f142777e69 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/examples/ubuntu.yaml @@ -0,0 +1,26 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + 
creationTimestamp: null + labels: + app: ubuntu + name: ubuntu + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: ubuntu + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: ubuntu + spec: + containers: + - image: ubuntu + imagePullPolicy: IfNotPresent + name: ubuntu + command: ["/bin/sleep", "3650d"] + resources: {} \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/index.md b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/index.md new file mode 100644 index 0000000000..15d99fe56d --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/mysqlgrp/index.md @@ -0,0 +1,373 @@ +--- +title: Load Balance To MySQL Group Replication With KubeDB Provisioned ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-quickstart-overview + name: KubeDB MySQL Group Replication Backend + parent: guides-proxysql-quickstart + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB ProxySQL Quickstart with KubeDB MySQL Group Replication + +This guide will show you how to use `KubeDB` Enterprise operator to set up a `ProxySQL` server for KubeDB managed MySQL. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Prepare MySQL Backend + +In this tutorial we are going to test set up a ProxySQL server with KubeDB operator for a MySQL Group Replication. We will use KubeDb to set up our MySQL servers here. +By applying the following yaml we are going to create our MySQL Group Replication + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-mysql.yaml +mysql.kubedb.com/mysql-server created +``` + +Let's wait for the MySQL to be Ready. + +```bash +$ kubectl get mysql -n demo +NAME VERSION STATUS AGE +mysql-server 5.7.44 Ready 3m51s +``` + +Let's first create a user in the backend mysql server and a database to test the proxy traffic . + +```bash +$ kubectl exec -it -n demo mysql-server-0 -- bash +Defaulted container "mysql" out of: mysql, mysql-coordinator, mysql-init (init) +root@mysql-server-0:/# mysql -uroot -p$MYSQL_ROOT_PASSWORD +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. 
+
+Your MySQL connection id is 195
+Server version: 5.7.44-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> create user `test`@'%' identified by 'pass';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> create database test;
+Query OK, 1 row affected (0.01 sec)
+
+mysql> use test;
+Database changed
+
+mysql> show tables;
+Empty set (0.00 sec)
+
+mysql> create table testtb(name varchar(103), primary key(name));
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> grant all privileges on test.* to 'test'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+Now we are ready to deploy and test our ProxySQL server.
+
+## Deploy ProxySQL Server
+
+With the following yaml we are going to create our desired ProxySQL server.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 1
+  syncUsers: true
+  backend:
+    name: mysql-server
+  terminationPolicy: WipeOut
+```
+
+This is the simplest version of a KubeDB ProxySQL server. In the `.spec.version` field we are saying that we want ProxySQL-2.3.2 with a Debian base image. In the `.spec.replicas` field we have written 1, so the operator will create a single-node ProxySQL. The `spec.syncUsers` field is set to true, which means all the users in the backend MySQL server will be fetched to the ProxySQL server.
+
+Now let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Let's wait for the ProxySQL to be Ready.
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    4m
+```
+
+Let's check the pod.
+
+```bash
+$ kubectl get pods -n demo | grep proxy
+proxy-server-0   1/1   Running   0   4m
+```
+
+### Check Associated Kubernetes Objects
+
+KubeDB operator will create some services and secrets for the ProxySQL object. Let's check.
+
+```bash
+$ kubectl get svc,secret -n demo | grep proxy
+service/proxy-server        ClusterIP   10.96.181.182   <none>   6033/TCP            4m
+service/proxy-server-pods   ClusterIP   None            <none>   6032/TCP,6033/TCP   4m
+secret/proxy-server-auth            kubernetes.io/basic-auth   2   4m
+secret/proxy-server-configuration   Opaque                     1   4m
+secret/proxy-server-monitor         kubernetes.io/basic-auth   2   4m
+```
+
+You can find the description of the associated objects here.
+
+### Check Internal Configuration
+
+Let's exec into the ProxySQL server pod and get into the admin panel.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt="ProxySQLAdmin > "
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 1204
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin >
+```
+
+Let's check the mysql_servers table first. We didn't set its contents from the yaml; the KubeDB operator did that for us.
+
+```bash
+ProxySQLAdmin > select * from mysql_servers;
++--------------+-------------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+| hostgroup_id | hostname                      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
++--------------+-------------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+| 2            | mysql-server.demo.svc         | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+| 3            | mysql-server-standby.demo.svc | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
++--------------+-------------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+2 rows in set (0.000 sec)
+```
+
+Here we can see that the primary service of our MySQL instance has been set to the writer (hg:2) hostgroup and the secondary service has been set to the reader (hg:3) hostgroup. KubeDB MySQL group replication usually creates two services: the primary one forwards queries to the writer node and the secondary one to the readers.
+
+Let's check the mysql_users table.
+
+```bash
+ProxySQLAdmin > select username from mysql_users;
++----------+
+| username |
++----------+
+| root     |
+| test     |
++----------+
+2 rows in set (0.000 sec)
+```
+
+So we are now ready to test our traffic proxy. In the next section we are going to run some demos.
+
+### Check Traffic Proxy
+
+To test the traffic routing through the ProxySQL server, let's first create a pod with an ubuntu base image in it. We will use the following yaml.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  creationTimestamp: null
+  labels:
+    app: ubuntu
+  name: ubuntu
+  namespace: demo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: ubuntu
+  strategy: {}
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: ubuntu
+    spec:
+      containers:
+      - image: ubuntu
+        imagePullPolicy: IfNotPresent
+        name: ubuntu
+        command: ["/bin/sleep", "3650d"]
+        resources: {}
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/mysqlgrp/examples/ubuntu.yaml
+deployment.apps/ubuntu created
+```
+
+Let's exec into the pod and install mysql-client.
+
+```bash
+$ kubectl exec -it -n demo ubuntu-867d4588d8-tl7hh -- bash
+root@ubuntu-867d4588d8-tl7hh:/# apt update
+... ... ..
+root@ubuntu-867d4588d8-tl7hh:/# apt install mysql-client -y
+Reading package lists... Done
+... .. ...
+root@ubuntu-867d4588d8-tl7hh:/#
+```
+
+Now let's try to connect with the ProxySQL server through the `proxy-server` service as the `test` user.
+
+```bash
+root@ubuntu-867d4588d8-tl7hh:/# mysql -utest -ppass -hproxy-server.demo.svc -P6033
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 1881
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql>
+```
+
+We are successfully connected as the `test` user.
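+Before running the test queries, it can be handy to see which backend node actually answers. `@@hostname` is resolved on whichever MySQL server ProxySQL routes the statement to, so selecting it over this connection (a quick sketch; the exact name returned will vary) should print the pod name of one of the group members:
+
+```sql
+-- Routed like any other read; the result is the hostname of the
+-- MySQL node that served this particular statement.
+SELECT @@hostname;
+```
+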
Let's run some read/write queries on this connection.
+
+```bash
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| test               |
++--------------------+
+2 rows in set (0.00 sec)
+
+mysql> use test;
+Reading table information for completion of table and column names
+You can turn off this feature to get a quicker startup with -A
+
+Database changed
+mysql> show tables;
++----------------+
+| Tables_in_test |
++----------------+
+| testtb         |
++----------------+
+1 row in set (0.00 sec)
+
+mysql> insert into testtb(name) values("Kim Torres");
+Query OK, 1 row affected (0.01 sec)
+
+mysql> insert into testtb(name) values("Tony SoFua");
+Query OK, 1 row affected (0.01 sec)
+
+mysql> select * from testtb;
++------------+
+| name       |
++------------+
+| Kim Torres |
+| Tony SoFua |
++------------+
+2 rows in set (0.00 sec)
+
+mysql>
+```
+
+We can see the queries are successfully executed through the ProxySQL server.
+
+Let's check how the queries were split inside the ProxySQL server by going back to the ProxySQLAdmin panel.
+
+```bash
+ProxySQLAdmin > select hostgroup,Queries from stats_mysql_connection_pool;
++-----------+---------+
+| hostgroup | Queries |
++-----------+---------+
+| 2         | 6       |
+| 3         | 0       |
+| 3         | 3       |
+| 4         | 0       |
++-----------+---------+
+4 rows in set (0.003 sec)
+```
+
+We can see that the read-write split is successfully executed in the ProxySQL server. So the ProxySQL server is ready to use.
+
+## Conclusion
+
+In this tutorial we have seen a very basic setup of KubeDB ProxySQL. KubeDB offers much more for ProxySQL. In these docs we have discussed lots of other features like `TLS Secured ProxySQL`, `Declarative Configuration`, `MariaDB and Percona-XtraDB Backend`, `Reconfigure` and much more. Check out the other docs to learn more.
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/appbinding.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/appbinding.yaml new file mode 100644 index 0000000000..f9b0f02ae3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/appbinding.yaml @@ -0,0 +1,20 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + name: xtradb-galera-appbinding + namespace: demo +spec: + clientConfig: + caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVWUp4dVBqcW1EbjJPaVdkMGk5cUZ2MGdzdzQwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNakV3TVRBeApNalE1TlROYUZ3MHlNekV3TVRBeE1qUTVOVE5hTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURkRTRMaEFabEQKQTh2aWk5QTlaT1VLOEVzeWU2ZDBWWnpRZmJUeS90VlF0N05ybkhjS1NkVHFmS0lkczl1bXhSd1Y2ak5WU2RtUQp2M3NUa0xwYTZBbFRwYklQazZ5S2UxRGs2YUhhbFZDSVc4SExLNW43YklzTEV3aEkyb3F4WmIrd0pydXVGSi95Clk4a2tyazVXaFBJZzRqM3VjV0FhcllqTVpxNXRQbU9sOFJXanhzY2k3WjJsN0lIcWplSjYrKzRENXlkeGx6L3gKdVhLNmxVM2J2Y2craWhUVno1UENNS2R4OHZNL052dTNuQ21Ja2hzQUkrNGVWZE4xenRIWG51UTFXRFhlUEFFYwpRQnJGanFuWk5pYmRUeU4zYTgrdmVUM2hLK3Fhc0ZFSU5aOFY4ZFVQSVV5cHFYSmk0anZCSW9FU0RvV2V1Z3QzCklhMjh6OE5XNk9WbkFnTUJBQUdqVXpCUk1CMEdBMVVkRGdRV0JCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXcKbkRBZkJnTlZIU01FR0RBV2dCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXduREFQQmdOVkhSTUJBZjhFQlRBRApBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCU2JDUkx1TitGd2loQ3ZiamlmcjFldGViNUEzeVRWWjN2CkN6MEhMK2RYekJEbVI5OGxPdWgvSDV4bEVyT3U1emdjSXJLK3BtM0toQVl1NXhZOHBFSklwRXBCRExnSEVQNTQKMkc1NllydXcva1J2cWJ4VWlwUjIzTWJGZXpyUW9BeXNKbmZGRmZrYXNVWlBRSjg0dE05UStoTEpUcnp0VkczZgphcnVCd3h0SDA3bklZMTFRUnhZbERPQWx6ck9icWdvUUtwODFXVTBrTzdDRVd2Q1ZOenphb2dWV08rZUtvdUw5Ci9aQjVYQ1FVRlRScFlEQjB1aFk1NTAwR1kxYnRpRUVKaVdZVTg0UVFzampINVZlRkFtN21ldWRkak9pTEM3dUMKSmFISkRLa0txZWtDSkZ6YzF0QmpHQVZUNHZaTGcrUldhUmJHa01Qdm1WZVFIOUtSMVN2aQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: xtradb-galera + path: / + port: 3306 + scheme: mysql + url: tcp(xtradb-galera.demo.svc:3306)/ + secret: + name: xtradb-galera-auth + type: perconaxtradb + tlsSecret: + name: xtradb-galera-client-cert + version: 8.0.26 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/sample-proxy.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/sample-proxy.yaml new file mode 100644 index 0000000000..c90e070a55 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/sample-proxy.yaml @@ -0,0 +1,12 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.4.4-debian" + replicas: 1 + syncUsers: true + backend: + name: xtradb-galera-appbinding + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/ubuntu.yaml b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/ubuntu.yaml new file mode 100644 index 0000000000..f142777e69 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/examples/ubuntu.yaml @@ -0,0 +1,26 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: ubuntu + name: ubuntu + namespace: demo +spec: + replicas: 1 + selector: + matchLabels: + app: ubuntu + strategy: {} + template: + 
metadata: + creationTimestamp: null + labels: + app: ubuntu + spec: + containers: + - image: ubuntu + imagePullPolicy: IfNotPresent + name: ubuntu + command: ["/bin/sleep", "3650d"] + resources: {} \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/index.md b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/index.md new file mode 100644 index 0000000000..bfb3a0c624 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/quickstart/xtradbext/index.md @@ -0,0 +1,504 @@ +--- +title: Proxy Load To Percona XtraDB Galera cluster With KubeDB Provisioned ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-quickstart-external + name: External XtraDB Galera Cluster Backend + parent: guides-proxysql-quickstart + weight: 25 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# KubeDB ProxySQL Quickstart with Percona XtraDB + +This guide will show you how to use `KubeDB` Enterprise operator to set up a `ProxySQL` server for externally managed Percona XtraDB cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +## Percona XtraDB Backend + +In this tutorial we are going to test set up a ProxySQL server with KubeDB operator for a Percona XtraDB cluster. We have a Percona XtraDB cluster running in our K8s cluster which is TLS secured. We need to prepare an appbinding for this cluster so that our operator can get enough information to set up a ProxySQL server for this specific Percona XtraDB cluster. + +**What we have** + +We have a 3 node cluster, a service for this cluster, the root-auth secret, a secret which contains the client certificates for TLS secured connections and some more other secrets and services. Let's see the resources first. + +```bash +~ $ kubectl get pods -n demo | grep xtradb +xtradb-galera-0 2/2 Running 0 31m +xtradb-galera-1 2/2 Running 0 31m +xtradb-galera-2 2/2 Running 0 31m + +~ $ kubectl get svc -n demo | grep xtradb +xtradb-galera ClusterIP 10.96.11.201 3306/TCP 31m +... ... ... + +~ $ kubectl get secret -n demo | grep +xtradb-galera-auth kubernetes.io/basic-auth 2 32m +xtradb-galera-client-cert kubernetes.io/tls 3 32m +... ... ... 
+ +~ $ kubectl view-secret -n demo xtradb-galera-auth -a +password=0cPVJdA*jfPs.C(L +username=root + +~ $ kubectl view-secret -n demo xtradb-galera-client-cert -a +ca.crt=-----BEGIN CERTIFICATE----- +MIIDIzCCAgugAwIBAgIUYJxuPjqmDn2OiWd0i9qFv0gsw40wDQYJKoZIhvcNAQEL +BQAwITEOMAwGA1UEAwwFbXlzcWwxDzANBgNVBAoMBmt1YmVkYjAeFw0yMjEwMTAx +MjQ5NTNaFw0yMzEwMTAxMjQ5NTNaMCExDjAMBgNVBAMMBW15c3FsMQ8wDQYDVQQK +DAZrdWJlZGIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDdE4LhAZlD +A8vii9A9ZOUK8Esye6d0VZzQfbTy/tVQt7NrnHcKSdTqfKIds9umxRwV6jNVSdmQ +v3sTkLpa6AlTpbIPk6yKe1Dk6aHalVCIW8HLK5n7bIsLEwhI2oqxZb+wJruuFJ/y +Y8kkrk5WhPIg4j3ucWAarYjMZq5tPmOl8RWjxsci7Z2l7IHqjeJ6++4D5ydxlz/x +uXK6lU3bvcg+ihTVz5PCMKdx8vM/Nvu3nCmIkhsAI+4eVdN1ztHXnuQ1WDXePAEc +QBrFjqnZNibdTyN3a8+veT3hK+qasFEINZ8V8dUPIUypqXJi4jvBIoESDoWeugt3 +Ia28z8NW6OVnAgMBAAGjUzBRMB0GA1UdDgQWBBQI7AN5FskOiqoZNT3Rsj3APCUw +nDAfBgNVHSMEGDAWgBQI7AN5FskOiqoZNT3Rsj3APCUwnDAPBgNVHRMBAf8EBTAD +AQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBSbCRLuN+FwihCvbjifr1eteb5A3yTVZ3v +Cz0HL+dXzBDmR98lOuh/H5xlErOu5zgcIrK+pm3KhAYu5xY8pEJIpEpBDLgHEP54 +2G56Yruw/kRvqbxUipR23MbFezrQoAysJnfFFfkasUZPQJ84tM9Q+hLJTrztVG3f +aruBwxtH07nIY11QRxYlDOAlzrObqgoQKp81WU0kO7CEWvCVNzzaogVWO+eKouL9 +/ZB5XCQUFTRpYDB0uhY5500GY1btiEEJiWYU84QQsjjH5VeFAm7meuddjOiLC7uC +JaHJDKkKqekCJFzc1tBjGAVT4vZLg+RWaRbGkMPvmVeQH9KR1Svi +-----END CERTIFICATE----- + +tls.crt=-----BEGIN CERTIFICATE----- +MIIDEDCCAfigAwIBAgIQAciDaLH+9Oh4QWxmu+fMFjANBgkqhkiG9w0BAQsFADAh +MQ4wDAYDVQQDDAVteXNxbDEPMA0GA1UECgwGa3ViZWRiMB4XDTIyMTIwMjA0MjM0 +NloXDTIzMDMwMjA0MjM0NlowDzENMAsGA1UEAxMEcm9vdDCCASIwDQYJKoZIhvcN +AQEBBQADggEPADCCAQoCggEBALZ+Fd5lbGg7tIoxNsaKmOsEZNnLiWo5u/lQ5eaI +JufmvxTpaZmqw68yIX4yLZd7iXmSPrydEy6uJYq4HPghyapV20eg7dpHfWkjpmpx +OgudXBHeETyD2P4fR8KQjgyn8qF5pwwq210M46Olq/AatJFAEW/4+7wAPLugLl6Y +V0vFhbAcDmLXAxfz6HyiafF1czPDsaqi4sOV0WC5hnD2NnAcxpR7LfGVPSLosz2x +hs/aEnBdW9+AWhyDjJjslGslyWC8vge6F7dvJrkJcROM0ndk/IEOnNz0KP7dae/T +4XDj8/D2nwbxg421N7BOfby65ZQFMbDLJ0vsM9QdYa6faDECAwEAAaNWMFQwDgYD +VR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAw +HwYDVR0jBBgwFoAUCOwDeRbJDoqqGTU90bI9wDwlMJwwDQYJKoZIhvcNAQELBQAD +ggEBAHj/QRv9seYBuA7NUTPQmxEom/1A9XW6IW330mHKCMs7C4c4bRPXSh1hj8bz +CUoI/v4ItNBzcGFJc2LJSZNHVRuNZddDOebxepYngm/2u5wQot8ER2a+TkNBZtSs +kQI9O10awelzbhLoV9is6X3LsTnxk5AOm/fiShfISAdxAbejBOchTjF1g5CrlvD7 +k4rOFJRXVDsQH0ken8JH9sKDcJwVM3Mjm+lO68Cv5kR7JOY1mrShvMVPCjEKC2kA +0xb+SNYBgjBsso8CkgJfqCiBi6S/zn/f83Qn60n7PJoHtZrDHhJSt07mTPuk3Ro6 +NoEwUfZKavW5HRTH64qUizJFSb0= +-----END CERTIFICATE----- + +tls.key=-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEAtn4V3mVsaDu0ijE2xoqY6wRk2cuJajm7+VDl5ogm5+a/FOlp +marDrzIhfjItl3uJeZI+vJ0TLq4lirgc+CHJqlXbR6Dt2kd9aSOmanE6C51cEd4R +PIPY/h9HwpCODKfyoXmnDCrbXQzjo6Wr8Bq0kUARb/j7vAA8u6AuXphXS8WFsBwO +YtcDF/PofKJp8XVzM8OxqqLiw5XRYLmGcPY2cBzGlHst8ZU9IuizPbGGz9oScF1b +34BaHIOMmOyUayXJYLy+B7oXt28muQlxE4zSd2T8gQ6c3PQo/t1p79PhcOPz8Paf +BvGDjbU3sE59vLrllAUxsMsnS+wz1B1hrp9oMQIDAQABAoIBACwS/40aybfS06Oc +hzIkPxJjmUfQlHuHPhLUqvGmaF8Rp4yRYuOuDly9qsEjtUckmus1mtlKxls7y+1Y +0gZLgr0Ux0ThZRCWu38tEQAcIHy1oIrgKyGGZl3ZiCdBak08Mqk1DFcv8pLijgfz +9zah/IIoCw4UABhDpmdaJFjMSikOPrIHOgRO6UmREkjjcN8T/qLAY34+13oM0zY2 +AyUyuD2hVxBYDu9dR8IN+PngALnUBDuAmnhPf9DwVyz2gkxRhcFEzIZbbSYrBL60 +LuclP07gmggWyvM2UirwovE/jyTrbqhlYk7S2uo+5zpPhpdzCTwuQQRstoK7tVM5 +Ty0OVtECgYEA6Fl3UYDPfqVggtCPkrqvvRGeT0DiKVuYDK0ju2AzeVewVMeYWGr9 +mCrc87twFWksty8SU1ilCe/RItGXKiTE+jk34s6/Wi3Pj3+grT6vNQYA0mbgAYUj +xBKAQFov0xAh6bLYHMwabYVtpYDvlMVqak1HDkUMqimrBN72XsldKT8CgYEAyRFz +9Oqu/PeuUBjMfPzbGRKzGI2ObZGV6WZBFbygXLGIQJrC4tzDZIsBirhCsd9znWGx 
+J9MZzpUc5mz91FRrg95OnM4acvuMpwv6XlXNJIZrM5nxOfGjqta11Fmgr4bSajBW +nuL3BHtoeinTvEcv3Sxa8Nmyy8/9o/G+4KIlIo8CgYEAok0UZu9Wgb3dq6MqFzGm +3qg28F9/W6pqjLhI1HN/oUxalO4TgffCiw+t5edRhPNB0/fikivCpS1K5kqHkF28 +5pkfa6RF0CVd7nwVbc7yrlQyMMbBxO4OrMDLq6gT7hg/yDIwefUspMJmdAybzk0U +Z4rxjos3LIoMt0tTx6RbGhsCgYEArE/MtAO7WwdX10SpWiPIEEC6Qzxs5vFxK8h5 +1osEUuvB/LukcI8I1E1cUOmAHreEeUeTbrG22Bdp4P9euGxwh14ouLDYcdmpvC7D +rbySRc78aAhxdlrjDDFdOlJlJofAI0ixsxCG6MxpyOe3kQ7gsgalGOs4Evp4P9uY +3SGX+XkCgYBPNmR7nodCjljuSSS5uvcU0j4W6VHUj+uwAbuZR9lBCdCdhwgG9Zg4 +oJQ2E75DXW2QieEIgBysXlIHf1LyvF9re6xIJIbl2p7m+/U0cPsGJhq+/CEyehJp +I30CEBNnaJM4N3pqrBvjWEcmuhvmiHc31vmf2aqnKY++SuAkfJpuAw== +-----END RSA PRIVATE KEY----- + +~ $ kubectl get services -n demo xtradb-galera -oyaml +apiVersion: v1 +kind: Service +metadata: + creationTimestamp: ... ... ... + labels: + ... ... ... + name: xtradb-galera + namespace: demo + ownerReferences: + - ... ... ... + resourceVersion: ... ... ... + uid: ... ... ... +spec: + clusterIP: ... ... ... + clusterIPs: + - ... ... ... + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - name: primary + port: 3306 + protocol: TCP + targetPort: db + selector: + ... ... ... + sessionAffinity: None + type: ClusterIP +status: + loadBalancer: {} + +``` + +We have shown the resources in detail. You should create similar thing with similar field names and key names. + +Now, based on these above information we are going to create our appbinding. The following is the appbinding yaml. + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + name: xtradb-galera-appbinding + namespace: demo +spec: + clientConfig: + caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lVWUp4dVBqcW1EbjJPaVdkMGk5cUZ2MGdzdzQwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURU9NQXdHQTFVRUF3d0ZiWGx6Y1d3eER6QU5CZ05WQkFvTUJtdDFZbVZrWWpBZUZ3MHlNakV3TVRBeApNalE1TlROYUZ3MHlNekV3TVRBeE1qUTVOVE5hTUNFeERqQU1CZ05WQkFNTUJXMTVjM0ZzTVE4d0RRWURWUVFLCkRBWnJkV0psWkdJd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURkRTRMaEFabEQKQTh2aWk5QTlaT1VLOEVzeWU2ZDBWWnpRZmJUeS90VlF0N05ybkhjS1NkVHFmS0lkczl1bXhSd1Y2ak5WU2RtUQp2M3NUa0xwYTZBbFRwYklQazZ5S2UxRGs2YUhhbFZDSVc4SExLNW43YklzTEV3aEkyb3F4WmIrd0pydXVGSi95Clk4a2tyazVXaFBJZzRqM3VjV0FhcllqTVpxNXRQbU9sOFJXanhzY2k3WjJsN0lIcWplSjYrKzRENXlkeGx6L3gKdVhLNmxVM2J2Y2craWhUVno1UENNS2R4OHZNL052dTNuQ21Ja2hzQUkrNGVWZE4xenRIWG51UTFXRFhlUEFFYwpRQnJGanFuWk5pYmRUeU4zYTgrdmVUM2hLK3Fhc0ZFSU5aOFY4ZFVQSVV5cHFYSmk0anZCSW9FU0RvV2V1Z3QzCklhMjh6OE5XNk9WbkFnTUJBQUdqVXpCUk1CMEdBMVVkRGdRV0JCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXcKbkRBZkJnTlZIU01FR0RBV2dCUUk3QU41RnNrT2lxb1pOVDNSc2ozQVBDVXduREFQQmdOVkhSTUJBZjhFQlRBRApBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCU2JDUkx1TitGd2loQ3ZiamlmcjFldGViNUEzeVRWWjN2CkN6MEhMK2RYekJEbVI5OGxPdWgvSDV4bEVyT3U1emdjSXJLK3BtM0toQVl1NXhZOHBFSklwRXBCRExnSEVQNTQKMkc1NllydXcva1J2cWJ4VWlwUjIzTWJGZXpyUW9BeXNKbmZGRmZrYXNVWlBRSjg0dE05UStoTEpUcnp0VkczZgphcnVCd3h0SDA3bklZMTFRUnhZbERPQWx6ck9icWdvUUtwODFXVTBrTzdDRVd2Q1ZOenphb2dWV08rZUtvdUw5Ci9aQjVYQ1FVRlRScFlEQjB1aFk1NTAwR1kxYnRpRUVKaVdZVTg0UVFzampINVZlRkFtN21ldWRkak9pTEM3dUMKSmFISkRLa0txZWtDSkZ6YzF0QmpHQVZUNHZaTGcrUldhUmJHa01Qdm1WZVFIOUtSMVN2aQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: xtradb-galera + path: / + port: 3306 + scheme: mysql + url: tcp(xtradb-galera.demo.svc:3306)/ + secret: + name: xtradb-galera-auth + type: perconaxtradb + tlsSecret: + name: xtradb-galera-client-cert + version: 8.0.26 +``` + +Now we will see how we have filled out the appbinding for each fields. 
+`spec.clientConfig.caBundle` : We took the `ca.crt` value from the `xtradb-galera-client-cert` secret, encoded it with base64, and placed it here.
+
+`spec.clientConfig.service.name` : The name of the service which we created to communicate with the xtradb cluster.
+
+`spec.clientConfig.service.port` : Set to the primary service port.
+
+`spec.clientConfig.service.scheme` : This will always be mysql, as Percona XtraDB is a MySQL fork.
+
+`spec.clientConfig.url` : Just follows the Kubernetes convention for addressing a service in a specific namespace on a specific port and path.
+
+`spec.secret.name` : This is the root secret name. You can replace it with some other user's credentials rather than root. In that case, make sure the user has proper privileges on the sys.* and mysql.user tables.
+
+`spec.type` : This is set to `perconaxtradb`, which is how the operator identifies the backend type.
+
+`spec.tlsSecret.name` : This is the reference to the secret which carries the client certs for TLS-secured connections. The secret should contain the `ca.crt`, `tls.crt` and `tls.key` keys and corresponding values.
+
+`spec.version` : The XtraDB version is mentioned here.
+
+This is enough information to set up a ProxySQL server/cluster for the Percona XtraDB cluster. Now we will apply this to our cluster and refer to the appbinding name in the ProxySQL yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/xtradbext/examples/appbinding.yaml
+appbinding.appcatalog.appscode.com/xtradb-galera-appbinding created
+```
+
+We are ready with our backend appbinding. But before we proceed to the ProxySQL server, let's first create a test user and database so that we can use them for testing.
+
+Let's first create a user in the backend xtradb server and a database to test the proxy traffic.
+
+```bash
+$ kubectl exec -it -n demo xtradb-galera-0 -- bash
+Defaulted container "perconaxtradb" out of: perconaxtradb, px-coordinator, px-init (init)
+bash-4.4$ mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 1602
+Server version: 8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3
+
+Copyright (c) 2009-2021 Percona LLC and/or its affiliates
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> create user `test`@'%' identified by 'pass';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> create database test;
+Query OK, 1 row affected (0.01 sec)
+
+mysql> use test;
+Database changed
+
+mysql> show tables;
+Empty set (0.00 sec)
+
+mysql> create table testtb(name varchar(103), primary key(name));
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> grant all privileges on test.* to 'test'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+We are now ready with our backend. In the next section we will set up our ProxySQL for this backend.
+
+## Deploy ProxySQL Server
+
+With the following yaml we are going to create our desired ProxySQL server.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.4.4-debian"
+  replicas: 1
+  syncUsers: true
+  backend:
+    name: xtradb-galera-appbinding
+  terminationPolicy: WipeOut
+```
+
+This is the simplest version of a KubeDB ProxySQL server. The `.spec.version` field says that we want ProxySQL 2.4.4 on a Debian base image. The `.spec.replicas` field is set to 1, so the operator will create a single-node ProxySQL. The `.spec.syncUsers` field is set to true, which means all the users in the backend MySQL server will be fetched into the ProxySQL server.
+
+Now let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/xtradbext/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Let's wait for the ProxySQL to be Ready.
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.4.4-debian   Ready    4m
+```
+
+Let's check the pod.
+
+```bash
+$ kubectl get pods -n demo | grep proxy
+proxy-server-0   1/1   Running   0   4m
+```
+
+### Check Associated Kubernetes Objects
+
+The KubeDB operator will create some services and secrets for the ProxySQL object. Let's check them.
+
+```bash
+$ kubectl get svc,secret -n demo | grep proxy
+service/proxy-server        ClusterIP   10.96.181.182   6033/TCP            4m
+service/proxy-server-pods   ClusterIP   None            6032/TCP,6033/TCP   4m
+secret/proxy-server-auth            kubernetes.io/basic-auth   2   4m
+secret/proxy-server-configuration   Opaque                     1   4m
+secret/proxy-server-monitor         kubernetes.io/basic-auth   2   4m
+```
+
+You can find the description of the associated objects here.
+
+### Check Internal Configuration
+
+Let's exec into the ProxySQL server pod and get into the admin panel.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt="ProxySQLAdmin > "
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 1204
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin >
+```
+
+Let's check the `mysql_servers` table first. We didn't set it in the yaml; the KubeDB operator configured it for us.
+
+```bash
+ProxySQLAdmin > select * from mysql_servers;
++--------------+------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+| hostgroup_id | hostname               | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
++--------------+------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+| 2            | xtradb-galera.demo.svc | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 1       | 0              |         |
++--------------+------------------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
+
+1 row in set (0.000 sec)
+```
+
+Let's check the `mysql_users` table.
+
+```bash
+ProxySQLAdmin > select username from mysql_users;
++----------+
+| username |
++----------+
+| root     |
+| monitor  |
+| test     |
++----------+
+3 rows in set (0.000 sec)
+```
+
+So we are now ready to test our traffic proxy.
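+
+If you just want a quick sanity check from your workstation before the full demo, you can port-forward the `proxy-server` service and connect with a locally installed mysql client — a small sketch, assuming the service and test user created above:
+
+```bash
+# Forward the proxy's MySQL port to localhost (leave this running).
+kubectl port-forward -n demo svc/proxy-server 6033:6033 &
+
+# Connect through the tunnel as the test user and run a trivial query.
+mysql -utest -ppass -h127.0.0.1 -P6033 -e "select 1;"
+```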
In the next section we are going to walk through some demos.
+
+### Check Traffic Proxy
+
+To test the traffic routing through the ProxySQL server, let's first create a pod from an Ubuntu base image. We will use the following yaml.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  creationTimestamp: null
+  labels:
+    app: ubuntu
+  name: ubuntu
+  namespace: demo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: ubuntu
+  strategy: {}
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: ubuntu
+    spec:
+      containers:
+        - image: ubuntu
+          imagePullPolicy: IfNotPresent
+          name: ubuntu
+          command: ["/bin/sleep", "3650d"]
+          resources: {}
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/quickstart/xtradbext/examples/ubuntu.yaml
+deployment.apps/ubuntu created
+```
+
+Let's exec into the pod and install mysql-client.
+
+```bash
+$ kubectl exec -it -n demo ubuntu-867d4588d8-tl7hh -- bash
+root@ubuntu-867d4588d8-tl7hh:/# apt update
+... ... ..
+root@ubuntu-867d4588d8-tl7hh:/# apt install mysql-client -y
+Reading package lists... Done
+... .. ...
+root@ubuntu-867d4588d8-tl7hh:/#
+```
+
+Now let's try to connect with the ProxySQL server through the `proxy-server` service as the `test` user.
+
+```bash
+root@ubuntu-867d4588d8-tl7hh:/# mysql -utest -ppass -hproxy-server.demo.svc -P6033
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 1881
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql>
+```
+
+We are successfully connected as the `test` user. Let's run some read/write queries on this connection.
+
+```bash
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| test               |
++--------------------+
+2 rows in set (0.00 sec)
+
+mysql> use test;
+Reading table information for completion of table and column names
+You can turn off this feature to get a quicker startup with -A
+
+Database changed
+mysql> show tables;
++----------------+
+| Tables_in_test |
++----------------+
+| testtb         |
++----------------+
+1 row in set (0.00 sec)
+
+mysql> insert into testtb(name) values("Kim Torres");
+Query OK, 1 row affected (0.01 sec)
+
+mysql> insert into testtb(name) values("Tony SoFua");
+Query OK, 1 row affected (0.01 sec)
+
+mysql> select * from testtb;
++------------+
+| name       |
++------------+
+| Kim Torres |
+| Tony SoFua |
++------------+
+2 rows in set (0.00 sec)
+
+mysql>
+```
+
+We can see that the read/write queries are successfully executed through the ProxySQL server. So the ProxySQL server is ready to use.
+
+## Conclusion
+
+In this tutorial we have seen a very basic setup of KubeDB ProxySQL. KubeDB provides much more for ProxySQL. In this documentation we discuss lots of other features like `TLS Secured ProxySQL`, `Declarative Configuration`, `MariaDB and Percona-XtraDB Backend`, `Reconfigure` and much more. Check out the other docs to learn more.
\ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/_index.md new file mode 100644 index 0000000000..23c952518f --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure ProxySQL TLS/SSL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-reconfigure-tls + name: Reconfigure TLS/SSL + parent: guides-proxysql + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/issuer.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/issuer.yaml new file mode 100644 index 0000000000..404669b707 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: proxy-issuer + namespace: demo +spec: + ca: + secretName: proxy-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-activate-ssl.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-activate-ssl.yaml new file mode 100644 index 0000000000..e6fba496bb --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-activate-ssl.yaml @@ -0,0 +1,15 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: activate-ssl + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlUsers: + users: + - username: test + use_ssl: 1 + reqType: update \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-add-tls.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-add-tls.yaml new file mode 100644 index 0000000000..ea4be64873 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-add-tls.yaml @@ -0,0 +1,25 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: recon-tls-add + namespace: demo +spec: + type: ReconfigureTLS + proxyRef: + name: proxy-server + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: proxy-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + emailAddresses: + - "spike@appscode.com" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-remove-tls.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-remove-tls.yaml new file mode 100644 index 0000000000..0bf0313150 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-remove-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: recon-tls-remove + namespace: demo +spec: + type: ReconfigureTLS + proxyRef: + name: proxy-server + tls: + remove: true \ No newline at end of file diff --git 
a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-rotate-tls.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-rotate-tls.yaml
new file mode 100644
index 0000000000..309efdf6c7
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-rotate-tls.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    rotateCertificates: true
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-update-tls.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-update-tls.yaml
new file mode 100644
index 0000000000..ae326aa4a2
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-update-tls.yaml
@@ -0,0 +1,31 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-update
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+        emailAddresses:
+          - "mikebaker@gmail.com"
+      - alias: client
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+        emailAddresses:
+          - "mikebaker@gmail.com"
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-mysql.yaml
new file mode 100644
index 0000000000..c884c6f678
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-mysql.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-proxysql.yaml
new file mode 100644
index 0000000000..b2795bf06b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/examples/sample-proxysql.yaml
@@ -0,0 +1,12 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/index.md
new file mode 100644
index 0000000000..33b2efc831
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/cluster/index.md
@@ -0,0 +1,827 @@
+---
+title: Reconfigure ProxySQL TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-reconfigure-tls-cluster
+    name: Reconfigure ProxySQL TLS/SSL Encryption
+    parent: guides-proxysql-reconfigure-tls
+    weight: 20
+menu_name: 
docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure ProxySQL TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring, i.e. adding, removing, updating, and rotating, the TLS/SSL certificates of an existing ProxySQL via a ProxySQLOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+`ReconfigureTLS` is a very useful ops-request for reconfiguring the TLS settings of a ProxySQL server without entering the admin panel. With this type of ops-request you can `add`, `remove` and `update` the TLS configuration of the ProxySQL server. You can `rotate` the certificates as well.
+
+Below are some examples of this ops-request.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes Cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.6.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, we use a separate namespace called `demo` throughout this tutorial.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+### Prepare MySQL Backend
+
+To test any ProxySQL functionality we need a MySQL backend. Below is the yaml for the KubeDB MySQL backend.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's apply the yaml,
+
+``` bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+Let's now wait for the mysql instance to be ready,
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+mysql-server   5.7.44    Ready    3m16s
+
+$ kubectl get pods -n demo
+NAME             READY   STATUS    RESTARTS   AGE
+mysql-server-0   2/2     Running   0          3m11s
+mysql-server-1   2/2     Running   0          113s
+mysql-server-2   2/2     Running   0          109s
+```
+
+We need a user to test the SSL functionality, so let's create one inside the MySQL servers,
+
+```bash
+~ $ kubectl exec -it -n demo mysql-server-0 -- bash
+Defaulted container "mysql" out of: mysql, mysql-coordinator, mysql-init (init)
+root@mysql-server-0:/# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 106
+Server version: 5.7.44-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> create user 'test'@'%' identified by 'pass';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> create database testdb;
+Query OK, 1 row affected (0.00 sec)
+
+mysql> grant all privileges on testdb.* to 'test'@'%';
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+## Deploy ProxySQL without TLS
+
+We are now all set with our backend. Now let's create a KubeDB ProxySQL server. Let's keep the `syncUsers` field true so that we don't need to create the users again.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+```
+
+``` bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+## Check Users and Current TLS Status
+
+Let's exec into the proxysql pod and see the current status.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 18
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]> select username, use_ssl from mysql_users;
++----------+---------+
+| username | use_ssl |
++----------+---------+
+| root     | 0       |
+| test     | 0       |
++----------+---------+
+2 rows in set (0.000 sec)
+
+MySQL [(none)]> show variables like '%have_ssl%';
++----------------+-------+
+| Variable_name  | Value |
++----------------+-------+
+| mysql-have_ssl | false |
++----------------+-------+
+1 row in set (0.001 sec)
+
+MySQL [(none)]> exit
+Bye
+```
+We can see that the users have been fetched. The `mysql-have_ssl` variable is set to false, and the `use_ssl` column is set to 0, which means no SSL CA or certificate is needed to connect.
+
+Let's check it with the following command.
+
+```bash
+root@proxy-server-0:/# mysql -utest -ppass -h127.0.0.1 -P6033
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 914
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]> \s
+--------------
+mysql  Ver 15.1 Distrib 10.5.15-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper
+
+Connection id:        914
+Current database:     information_schema
+Current user:         test@10.244.0.20
+SSL:                  Not in use
+Current pager:        stdout
+Using outfile:        ''
+Using delimiter:      ;
+Server:               MySQL
+Server version:       8.0.35 (ProxySQL)
+Protocol version:     10
+Connection:           127.0.0.1 via TCP/IP
+Server characterset:  latin1
+Db     characterset:  utf8
+Client characterset:  latin1
+Conn.  characterset:  latin1
+TCP port:             6033
+Uptime:               1 hour 27 min 36 sec
+
+Threads: 1  Questions: 3  Slow queries: 3
+--------------
+
+MySQL [(none)]> exit
+Bye
+```
+
+## Add TLS with ReconfigureTLS Ops-Request
+
+Now we want to add TLS to our ProxySQL server so that the frontend connections are TLS-secured.
+
+### Create Issuer
+
+First we need an issuer for this.
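+
+The Issuer is a cert-manager resource, so cert-manager must be running before we create it. If you are unsure whether it is installed, a quick check (assuming the default `cert-manager` installation namespace):
+
+```bash
+# All cert-manager pods should be Running before we create the Issuer.
+kubectl get pods -n cert-manager
+```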
We can create one with the following command. Make sure that you have openssl installed.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mysql/O=kubedb"
+
+Generating a RSA private key
+.......................................+++++
+...........................+++++
+writing new private key to './ca.key'
+
+```
+
+Let's create the ca-secret with the above created ca.crt and ca.key by using the following command,
+
+```bash
+$ kubectl create secret tls proxy-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/proxy-ca created
+```
+
+Now create the issuer with the following yaml,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: proxy-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: proxy-ca
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/issuer.yaml
+issuer.cert-manager.io/proxy-issuer created
+```
+
+### Apply ops-request to add TLS
+
+We are all set to go! Now let's create a ReconfigureTLS ops-request like the one below. As you can see, we have set the desired configuration under the `.spec.tls` section. You can check out the API documentation of this field [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/ops/v1alpha1#ProxySQLOpsRequestSpec).
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-add
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: proxy-issuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+        emailAddresses:
+          - "spike@appscode.com"
+```
+
+Let's apply it and wait for the ops-request to succeed.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-add-tls.yaml
+proxysqlopsrequest.ops.kubedb.com/recon-tls-add created
+
+$ kubectl get proxysqlopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+recon-tls-add   ReconfigureTLS   Successful   5m
+```
+
+### Check ops-request effects
+
+The following secrets should be created:
+
+```bash
+$ kubectl get secrets -n demo | grep cert
+proxy-server-server-cert   kubernetes.io/tls   3   4m53s
+proxy-server-client-cert   kubernetes.io/tls   3   4m53s
+```
+
+The `/var/lib/frontend/` directory should now carry the certificates and related files, as seen below.
+```bash
+root@proxy-server-0:/# ls /var/lib/frontend/
+client  server
+
+root@proxy-server-0:/# ls /var/lib/frontend/client
+ca.crt  tls.crt  tls.key
+
+root@proxy-server-0:/# ls /var/lib/frontend/server
+ca.crt  tls.crt  tls.key
+```
+
+The `mysql-have_ssl` variable should be true by this time.
+
+```bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 22
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]> show variables like '%have_ssl%';
++----------------+-------+
+| Variable_name  | Value |
++----------------+-------+
+| mysql-have_ssl | true  |
++----------------+-------+
+1 row in set (0.001 sec)
+```
+
+### Activate use_ssl field for the test user
+
+Now our ProxySQL server is ready to serve TLS-secured connections. Let's modify our test user to use SSL with an ops-request. You could also do this from the admin panel, but we prefer to do it the KubeDB way.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: activate-ssl
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      users:
+      - username: test
+        use_ssl: 1
+      reqType: update
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-activate-ssl.yaml
+proxysqlopsrequest.ops.kubedb.com/activate-ssl created
+```
+
+Let's check the effect from the admin panel.
+
+```bash
+MySQL [(none)]> select username,use_ssl from mysql_users;
++----------+---------+
+| username | use_ssl |
++----------+---------+
+| root     | 0       |
+| test     | 1       |
++----------+---------+
+2 rows in set (0.001 sec)
+```
+
+### Check TLS secured connections
+
+Now our user is configured to accept only TLS-secured connections. Let's try to connect without TLS.
+
+```bash
+root@proxy-server-0:/# mysql -utest -ppass -h127.0.0.1 -P6033
+ERROR 1045 (28000): ProxySQL Error: Access denied for user 'test' (using password: YES). SSL is required
+```
+
+We can see that the connection is refused. Now let's try with the TLS certificates.
+
+```bash
+root@proxy-server-0:/# mysql -utest -ppass -h127.0.0.1 -P6033 --ssl-ca=/var/lib/frontend/client/ca.crt --ssl-cert=/var/lib/frontend/client/tls.crt --ssl-key=/var/lib/frontend/client/tls.key
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 107
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [testdb]> \s
+--------------
+mysql  Ver 15.1 Distrib 10.5.15-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper
+
+Connection id:        107
+Current database:     testdb
+Current user:         test@10.244.0.23
+SSL:                  Cipher in use is TLS_AES_256_GCM_SHA384
+Current pager:        stdout
+Using outfile:        ''
+Using delimiter:      ;
+Server:               MySQL
+Server version:       8.0.35 (ProxySQL)
+Protocol version:     10
+Connection:           127.0.0.1 via TCP/IP
+Server characterset:  latin1
+Db     characterset:  latin1
+Client characterset:  latin1
+Conn.  characterset:  latin1
+TCP port:             6033
+Uptime:               8 min 3 sec
+
+Threads: 1  Questions: 7  Slow queries: 7
+--------------
+```
+
+We can see that the user successfully logged in with the TLS credentials. Also, in the `\s` output the SSL field shows a cipher name, which means the connection is TLS-secured.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificates of this ProxySQL. First let's check the expiration date of the current certificate.
+
+```bash
+root@proxy-server-0:/# openssl x509 -in /var/lib/frontend/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Feb 6 08:44:01 2023 GMT
+```
+
+Let's look into the server Certificate object.
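+
+Before describing it in detail, we can list the Certificate objects and their readiness at a glance — a quick check with standard kubectl, using the certificate names from the secrets above:
+
+```bash
+# Both certificates should report READY=True once cert-manager has issued them.
+kubectl get certificate -n demo proxy-server-server-cert proxy-server-client-cert
+```
+
+Now let's describe the server certificate in detail.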
+
+```bash
+~ $ kubectl describe certificate -n demo proxy-server-server-cert
+Name:         proxy-server-server-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=proxy-server
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=proxysqls.kubedb.com
+              proxysql.kubedb.com/load-balance=GroupReplication
+Annotations:  <none>
+API Version:  cert-manager.io/v1
+Kind:         Certificate
+Metadata:
+  Creation Timestamp:  2022-11-08T08:44:01Z
+  Generation:          1
+  ...
+  Owner References:
+    API Version:           kubedb.com/v1alpha2
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ProxySQL
+    Name:                  proxy-server
+    UID:                   b4fa48bc-b6cc-4ce7-beaf-c91987f4e0b5
+  Resource Version:        29102
+  UID:                     aa69c146-1581-4fce-a160-ad85b4296e4d
+Spec:
+  Common Name:  proxy-server
+  Dns Names:
+    *.proxy-server-pods.demo.svc
+    *.proxy-server-pods.demo.svc.cluster.local
+    *.proxy-server.demo.svc
+    localhost
+    proxy-server
+    proxy-server.demo.svc
+  Email Addresses:
+    spike@appscode.com
+  Ip Addresses:
+    127.0.0.1
+  Issuer Ref:
+    Group:      cert-manager.io
+    Kind:       Issuer
+    Name:       proxy-issuer
+  Secret Name:  proxy-server-server-cert
+  Subject:
+    Organizations:
+      kubedb:server
+  Usages:
+    digital signature
+    key encipherment
+    server auth
+    client auth
+Status:
+  Conditions:
+    Last Transition Time:  2022-11-08T08:44:01Z
+    Message:               Certificate is up to date and has not expired
+    Observed Generation:   1
+    Reason:                Ready
+    Status:                True
+    Type:                  Ready
+  Not After:               2023-02-06T08:44:01Z
+  Not Before:              2022-11-08T08:44:01Z
+  Renewal Time:            2023-01-07T08:44:01Z
+  Revision:                1
+Events:
+  Type    Reason     Age   From          Message
+  ----    ------     ----  ----          -------
+  Normal  Issuing    17m   cert-manager  Issuing certificate as Secret does not exist
+  Normal  Generated  17m   cert-manager  Stored new private key in temporary Secret resource "proxy-server-server-cert-ksk6g"
+  Normal  Requested  17m   cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-9mqjf"
+  Normal  Issuing    17m   cert-manager  The certificate has been successfully issued
+```
+
+### Apply ops-request to rotate certificate
+
+Now let's apply the following yaml to rotate the certificates of our ProxySQL server.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    rotateCertificates: true
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-rotate-tls.yaml
+proxysqlopsrequest.ops.kubedb.com/recon-tls-rotate created
+```
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo
+NAME               TYPE             STATUS       AGE
+recon-tls-add      ReconfigureTLS   Successful   15m
+recon-tls-rotate   ReconfigureTLS   Successful   5m
+```
+
+### Check ops-request effect
+
+Let's check whether the expiration time has been updated.
+
+```bash
+root@proxy-server-0:/# openssl x509 -in /var/lib/frontend/client/tls.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Feb 6 09:05:54 2023 GMT
+```
+
+The expiration time has been updated. Now let's check the Certificate object.
+
+```bash
+ $ kubectl describe certificate -n demo proxy-server-server-cert
+Name:         proxy-server-server-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=proxy-server
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=proxysqls.kubedb.com
+              proxysql.kubedb.com/load-balance=GroupReplication
+Annotations:  <none>
+API Version:  cert-manager.io/v1
+Kind:         Certificate
+Metadata:
+  Creation Timestamp:  2022-11-08T08:44:01Z
+  Generation:          1
+  ...
+  Owner References:
+    API Version:           kubedb.com/v1alpha2
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  ProxySQL
+    Name:                  proxy-server
+    UID:                   b4fa48bc-b6cc-4ce7-beaf-c91987f4e0b5
+  Resource Version:        32254
+  UID:                     aa69c146-1581-4fce-a160-ad85b4296e4d
+Spec:
+  Common Name:  proxy-server
+  Dns Names:
+    *.proxy-server-pods.demo.svc
+    *.proxy-server-pods.demo.svc.cluster.local
+    *.proxy-server.demo.svc
+    localhost
+    proxy-server
+    proxy-server.demo.svc
+  Email Addresses:
+    spike@appscode.com
+  Ip Addresses:
+    127.0.0.1
+  Issuer Ref:
+    Group:      cert-manager.io
+    Kind:       Issuer
+    Name:       proxy-issuer
+  Secret Name:  proxy-server-server-cert
+  Subject:
+    Organizations:
+      kubedb:server
+  Usages:
+    digital signature
+    key encipherment
+    server auth
+    client auth
+Status:
+  Conditions:
+    Last Transition Time:  2022-11-08T08:44:01Z
+    Message:               Certificate is up to date and has not expired
+    Observed Generation:   1
+    Reason:                Ready
+    Status:                True
+    Type:                  Ready
+  Not After:               2023-02-06T09:05:54Z
+  Not Before:              2022-11-08T09:05:54Z
+  Renewal Time:            2023-01-07T09:05:54Z
+  Revision:                6
+Events:
+  Type    Reason     Age                   From          Message
+  ----    ------     ----                  ----          -------
+  Normal  Issuing    23m                   cert-manager  Issuing certificate as Secret does not exist
+  Normal  Generated  23m                   cert-manager  Stored new private key in temporary Secret resource "proxy-server-server-cert-ksk6g"
+  Normal  Requested  23m                   cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-9mqjf"
+  Normal  Requested  4m22s                 cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-s7d6r"
+  Normal  Requested  4m22s                 cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-cd5sg"
+  Normal  Requested  4m17s                 cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-pbm8q"
+  Normal  Requested  2m9s                  cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-4qm6l"
+  Normal  Requested  2m2s                  cert-manager  Created new CertificateRequest resource "proxy-server-server-cert-l2xgk"
+  Normal  Reused     2m2s (x5 over 4m22s)  cert-manager  Reusing private key stored in existing Secret resource "proxy-server-server-cert"
+  Normal  Issuing    2m1s (x6 over 23m)    cert-manager  The certificate has been successfully issued
+```
+
+This has also been updated.
+
+So from the above observations we can say that the TLS certificate rotation succeeded.
+
+## Update TLS Configuration
+
+Now let's update the certificate information.
+
+Let's check the current info first.
+
+```bash
+root@proxy-server-0:/# openssl x509 -in /var/lib/proxysql/proxysql-cert.pem -inform PEM -subject -email -nameopt RFC2253 -noout
+subject=CN=proxy-server,O=kubedb:server
+spike@appscode.com
+```
+
+### Apply ops-request to update TLS
+
+We can see the information. Suppose we want to update the email address to mikebaker@gmail.com. Let's create an ops-request for that in the following manner.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-update
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+        emailAddresses:
+          - "mikebaker@gmail.com"
+      - alias: client
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+        emailAddresses:
+          - "mikebaker@gmail.com"
+```
+
+Let's apply it and wait for it to succeed.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-update-tls.yaml
+proxysqlopsrequest.ops.kubedb.com/recon-tls-update created
+
+$ kubectl get proxysqlopsrequest -n demo
+NAME               TYPE             STATUS       AGE
+recon-tls-update   ReconfigureTLS   Successful   5m
+recon-tls-add      ReconfigureTLS   Successful   15m
+recon-tls-rotate   ReconfigureTLS   Successful   10m
+```
+
+Let's check the info now.
+
+```bash
+root@proxy-server-1:/# openssl x509 -in /var/lib/frontend/server/tls.crt -inform PEM -subject -email -nameopt RFC2253 -noout
+subject=CN=proxy-server,O=kubedb:server
+mikebaker@gmail.com
+```
+
+We can see the email has been successfully updated. You can configure the other fields as well. To know more about the `.spec.tls` field, refer to the API documentation [here](https://pkg.go.dev/kubedb.dev/apimachinery@v0.29.1/apis/ops/v1alpha1#TLSSpec).
+
+## Remove TLS
+
+To remove TLS from a KubeDB ProxySQL instance, all you need to do is apply a yaml like the one below. Just replace the `.spec.proxyRef.name` field with your own ProxySQL instance name.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: recon-tls-remove
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  proxyRef:
+    name: proxy-server
+  tls:
+    remove: true
+```
+
+Let's apply it and check the effects.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure-tls/cluster/examples/proxyops-remove-tls.yaml
+proxysqlopsrequest.ops.kubedb.com/recon-tls-remove created
+
+$ kubectl get proxysqlopsrequest -n demo
+NAME               TYPE             STATUS       AGE
+recon-tls-remove   ReconfigureTLS   Successful   3m
+recon-tls-update   ReconfigureTLS   Successful   7m
+recon-tls-add      ReconfigureTLS   Successful   17m
+recon-tls-rotate   ReconfigureTLS   Successful   12m
+```
+
+### Check ops-request effect
+
+Let's check the effect.
+
+```bash
+root@proxy-server-1:/# mysql -uadmin -padmin -h127.0.0.1 -P6032
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 25
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]> show variables like '%have_ssl%';
++----------------+-------+
+| Variable_name  | Value |
++----------------+-------+
+| mysql-have_ssl | false |
++----------------+-------+
+1 row in set (0.001 sec)
+```
+
+The `mysql-have_ssl` variable has been set back to false by the ops-request, so no more TLS-secured frontend connections will be created.
+
+Let's update the user configuration to `use_ssl=0`; otherwise the user won't be able to connect.
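+
+We will do this from the admin panel below. Alternatively, a `Reconfigure` ops-request analogous to the earlier `activate-ssl` example should work as well — a sketch only, with a hypothetical name, not applied in this demo:
+
+```bash
+# Hypothetical ops-request that flips use_ssl back to 0 for the test user,
+# mirroring the structure of the activate-ssl example above.
+kubectl apply -f - <<EOF
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: deactivate-ssl
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      users:
+      - username: test
+        use_ssl: 0
+      reqType: update
+EOF
+```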
+
+```bash
+MySQL [(none)]> update mysql_users set use_ssl=0 where username='test';
+Query OK, 1 row affected (0.001 sec)
+
+MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
+Query OK, 0 rows affected (0.001 sec)
+
+MySQL [(none)]> ^DBye
+
+root@proxy-server-1:/# mysql -utest -ppass -h127.0.0.1 -P6033
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 267
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]>
+```
+
+We can see the user successfully connected without any TLS credentials.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete proxysql -n demo --all
+$ kubectl delete issuer -n demo --all
+$ kubectl delete proxysqlopsrequest -n demo --all
+$ kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/images/reconfigure-tls.png b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/images/reconfigure-tls.png
new file mode 100644
index 0000000000..2800cc3b37
Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/images/reconfigure-tls.png differ
diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/index.md
new file mode 100644
index 0000000000..b5774f651c
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure-tls/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: Reconfiguring TLS of ProxySQL
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-reconfigure-tls-overview
+    name: Overview
+    parent: guides-proxysql-reconfigure-tls
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfiguring TLS of ProxySQL
+
+This guide gives an overview of how KubeDB Ops Manager reconfigures the TLS configuration of a `ProxySQL`, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql)
+  - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest)
+
+## How Reconfiguring ProxySQL TLS Configuration Process Works
+
+The following diagram shows how KubeDB Ops Manager reconfigures TLS of a `ProxySQL`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Reconfiguring TLS process of ProxySQL +
Fig: Reconfiguring TLS process of ProxySQL
+
+ +The Reconfiguring ProxySQL TLS process consists of the following steps: + +1. At first, a user creates a `ProxySQL` Custom Resource Object (CRO). + +2. `KubeDB` Community operator watches the `ProxySQL` CRO. + +3. When the operator finds a `ProxySQL` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the TLS configuration of the `ProxySQL`, the user creates a `ProxySQLOpsRequest` CR with desired information. + +5. `KubeDB` Enterprise operator watches the `ProxySQLOpsRequest` CR. + +6. When it finds a `ProxySQLOpsRequest` CR, it pauses the `ProxySQL` object which is referred from the `ProxySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `ProxySQL` object during the reconfiguring TLS process. + +7. Then the `KubeDB` Enterprise operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml. + +8. Then the `KubeDB` Enterprise operator will restart all the Pods of the server so that they restart with the new TLS configuration defined in the `ProxySQLOpsRequest` CR. + +9. After the successful reconfiguring of the `ProxySQL` TLS, the `KubeDB` Enterprise operator resumes the `ProxySQL` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a ProxySQL using `ProxySQLOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/_index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure/_index.md new file mode 100644 index 0000000000..374cf669f5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure +menu: + docs_v2024.1.31: + identifier: guides-proxysql-reconfigure + name: Reconfigure + parent: guides-proxysql + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-rules.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-rules.yaml new file mode 100644 index 0000000000..bda6aa4fd4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-rules.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: add-rule + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlQueryRules: + rules: + - rule_id: 4 + active: 1 + match_digest: "^SELECT .* FOR DELETE$" + destination_hostgroup: 2 + apply: 1 + reqType: add \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-users.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-users.yaml new file mode 100644 index 0000000000..10a11a1e8e --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-add-users.yaml @@ -0,0 +1,19 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: add-user + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlUsers: + users: + - 
username: testA + active: 1 + default_hostgroup: 2 + - username: testB + active: 1 + default_hostgroup: 2 + reqType: add \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-recon-vars.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-recon-vars.yaml new file mode 100644 index 0000000000..e5d52f6245 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-recon-vars.yaml @@ -0,0 +1,16 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: reconfigure-vars + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + adminVariables: + refresh_interval: 2055 + cluster_check_interval_ms: 205 + mysqlVariables: + max_transaction_time: 1540000 + max_stmts_per_connection: 19 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-rules.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-rules.yaml new file mode 100644 index 0000000000..6f34f241a7 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-rules.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: delete-rule + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlQueryRules: + rules: + - rule_id: 4 + reqType: delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-users.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-users.yaml new file mode 100644 index 0000000000..20b42d3611 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-users.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: delete-user + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlUsers: + users: + - username: testA + reqType: delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-rules.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-rules.yaml new file mode 100644 index 0000000000..1b2ad39762 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-rules.yaml @@ -0,0 +1,15 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: update-rule + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlQueryRules: + rules: + - rule_id: 4 + active: 0 + reqType: update \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-users.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-users.yaml new file mode 100644 index 0000000000..11a0e4edbb --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/proxyops-update-users.yaml @@ -0,0 +1,19 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: update-user + namespace: demo +spec: + type: Reconfigure + proxyRef: + name: proxy-server + configuration: + mysqlUsers: + users: + - username: testA + 
active: 0 + default_hostgroup: 3 + - username: testB + active: 1 + default_hostgroup: 3 + reqType: update \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-proxysql.yaml new file mode 100644 index 0000000000..890e05bfc0 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/examples/sample-proxysql.yaml @@ -0,0 +1,11 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 3 + backend: + name: mysql-server + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/index.md new file mode 100644 index 0000000000..2554e7df4d --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/cluster/index.md @@ -0,0 +1,616 @@ +--- +title: Reconfigure ProxySQL Cluster +menu: + docs_v2024.1.31: + identifier: guides-proxysql-reconfigure-cluster + name: Demo + parent: guides-proxysql-reconfigure + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfigure ProxySQL Cluster Database + +This guide will show you how to use `KubeDB` Enterprise operator to reconfigure a `ProxySQL` Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + - [ProxySQL Cluster](/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster) + - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest) + - [Reconfigure Overview](/docs/v2024.1.31/guides/proxysql/reconfigure/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +### Prepare MySQL backend + +We need a mysql backend for the proxysql server. So we are creating one with the below yaml. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+Let's wait for the MySQL to be Ready.
+
+```bash
+$ kubectl get mysql -n demo
+NAME           VERSION   STATUS   AGE
+mysql-server   5.7.44    Ready    3m51s
+```
+
+### Prepare ProxySQL Cluster
+
+Let's create a KubeDB ProxySQL cluster with the following yaml.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Let's wait for the ProxySQL to be Ready.
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    98s
+```
+
+## Reconfigure MYSQL USERS
+
+With a `KubeDB` `ProxySQL` ops-request you can reconfigure the `mysql_users` table. You can `add` and `delete` users in the table, and you can also `update` the information of any user present in the table. To reconfigure the `mysql_users` table, you need to set the `.spec.type` to `Reconfigure`, provide the KubeDB ProxySQL instance name under the `spec.proxyRef` section, and provide the desired user info under the `spec.configuration.mysqlUsers.users` section. Set the `.spec.configuration.mysqlUsers.reqType` to either `add`, `update` or `delete` based on the operation you want to perform. Below are some samples for each request type.
+
+### Create user in mysql database
+
+Let's first create two users in the backend mysql server.
+
+```bash
+$ kubectl exec -it -n demo mysql-server-0 -- bash
+Defaulted container "mysql" out of: mysql, mysql-coordinator, mysql-init (init)
+root@mysql-server-0:/# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 195
+Server version: 5.7.44-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+mysql> create user `testA`@'%' identified by 'passA';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> create user `testB`@'%' identified by 'passB';
+Query OK, 0 rows affected (0.01 sec)
+
+mysql> create database test;
+Query OK, 1 row affected (0.01 sec)
+
+mysql> grant all privileges on test.* to 'testA'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> grant all privileges on test.* to 'testB'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> exit
+Bye
+```
+
+### Check current mysql_users table in ProxySQL
+
+Let's check the current `mysql_users` table in the proxysql server. Make sure that the `spec.syncUsers` field was not set to true when the proxysql was deployed; otherwise it will fetch all the users from the mysql backend and we won't be able to see the effects of the reconfigure-users ops-requests.
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt "ProxySQLAdmin > "
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MySQL connection id is 71
+Server version: 8.0.35 (ProxySQL Admin Module)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin > select * from mysql_users;
+Empty set (0.001 sec)
+```
+
+### Add Users
+
+Let's add the testA and testB users to the proxysql server with an ops-request. Make sure you have created the users in the mysql backend first. Since we don't provide the passwords in the yaml, the KubeDB operator fetches them from the backend server; if a user is not present in the backend server, the operator will not be able to fetch the password and the ops-request will fail.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: add-user
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      users:
+      - username: testA
+        active: 1
+        default_hostgroup: 2
+      - username: testB
+        active: 1
+        default_hostgroup: 2
+      reqType: add
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-add-users.yaml
+proxysqlopsrequest.ops.kubedb.com/add-user created
+```
+
+Let's wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo
+NAME       TYPE          STATUS       AGE
+add-user   Reconfigure   Successful   20s
+```
+
+Now let's check the `mysql_users` table in the proxysql server.
+
+```bash
+ProxySQLAdmin > select username,password,active,default_hostgroup from mysql_users;
++----------+-------------------------------------------+--------+-------------------+
+| username | password                                  | active | default_hostgroup |
++----------+-------------------------------------------+--------+-------------------+
+| testA    | *1BB8830D52D091A226FB7990D996CBC20F913475 | 1      | 2                 |
+| testB    | *AE9C3C2838160D2591B6B15FA281CE712ABE94F0 | 1      | 2                 |
++----------+-------------------------------------------+--------+-------------------+
+2 rows in set (0.001 sec)
+```
+
+We can see that the users have been successfully added to the `mysql_users` table.
+
+### Update Users
+
+We successfully added new users to the `mysql_users` table with a proxysqlopsrequest in the last section. Now we will see how to update user information with a proxysqlopsrequest.
+
+Suppose we want to update the `active` status and the `default_hostgroup` for the users "testA" and "testB". We can create an ops-request like the following. Since `username` is the primary key of the `mysql_users` table, we must always include the `username` in the user info. To update, just set the `.spec.configuration.mysqlUsers.reqType` to `update`.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: update-user
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      users:
+        - username: testA
+          active: 0
+          default_hostgroup: 3
+        - username: testB
+          active: 1
+          default_hostgroup: 3
+      reqType: update
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-update-users.yaml
+proxysqlopsrequest.ops.kubedb.com/update-user created
+```
+
+Now wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo
+NAME          TYPE          STATUS       AGE
+add-user      Reconfigure   Successful   2m36s
+update-user   Reconfigure   Successful   6s
+```
+
+Let's check the `mysql_users` table from the admin interface.
+
+```bash
+ProxySQLAdmin > select username,password,active,default_hostgroup from mysql_users;
++----------+-------------------------------------------+--------+-------------------+
+| username | password                                  | active | default_hostgroup |
++----------+-------------------------------------------+--------+-------------------+
+| testA    | *1BB8830D52D091A226FB7990D996CBC20F913475 | 0      | 3                 |
+| testB    | *AE9C3C2838160D2591B6B15FA281CE712ABE94F0 | 1      | 3                 |
++----------+-------------------------------------------+--------+-------------------+
+2 rows in set (0.000 sec)
+```
+
+From the above output we can see that the user information has been successfully updated.
+
+### Delete Users
+
+To delete users from the `mysql_users` table, all we need to do is provide the usernames in the `spec.configuration.mysqlUsers.users` array and set the `spec.configuration.mysqlUsers.reqType` to `delete`. Let's have a look at the following yaml.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: delete-user
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlUsers:
+      users:
+        - username: testA
+      reqType: delete
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-users.yaml
+proxysqlopsrequest.ops.kubedb.com/delete-user created
+```
+
+Let's wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo
+NAME          TYPE          STATUS       AGE
+add-user      Reconfigure   Successful   5m29s
+delete-user   Reconfigure   Successful   12s
+update-user   Reconfigure   Successful   2m59s
+```
+
+Now check the `mysql_users` table in the proxysql server.
+
+```bash
+ProxySQLAdmin > select username,password,active,default_hostgroup from mysql_users;
++----------+-------------------------------------------+--------+-------------------+
+| username | password                                  | active | default_hostgroup |
++----------+-------------------------------------------+--------+-------------------+
+| testB    | *AE9C3C2838160D2591B6B15FA281CE712ABE94F0 | 1      | 3                 |
++----------+-------------------------------------------+--------+-------------------+
+1 row in set (0.001 sec)
+```
+
+We can see that the user has been successfully deleted.
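+If an ops-request ever stays in a state other than `Successful`, its recorded conditions usually explain why. The command below is a generic sketch: `kubectl describe` works on any `ProxySQLOpsRequest` object, though the exact conditions and events shown will vary by operator version.
+
+```bash
+# Inspect the steps, conditions and events recorded for an ops-request
+$ kubectl describe proxysqlopsrequest -n demo delete-user
+```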
+
+## Reconfigure MySQL Query Rules
+
+With a `KubeDB` `ProxySQL` ops-request you can reconfigure the `mysql_query_rules` table. You can `add` and `delete` rules in the table, and you can also `update` the information of any rule that is present in the table. To reconfigure the `mysql_query_rules` table, you need to set the `.spec.type` to `Reconfigure`, provide the KubeDB ProxySQL instance name under the `spec.proxyRef` section, and provide the desired rule info under the `spec.configuration.mysqlQueryRules.rules` section. Set the `.spec.configuration.mysqlQueryRules.reqType` to either `add`, `update` or `delete` based on the operation you want to perform. Below are some samples for the corresponding request types.
+
+### Check current mysql_query_rules table in ProxySQL
+
+Let's check the current `mysql_query_rules` table in the proxysql server.
+We might see that some rules are already present. This happens when no rules are set in the `.spec.initConfig` section while deploying the proxysql. The operator adds some default query rules so that basic operations can be run through the proxysql server.
+
+```bash
+ProxySQLAdmin > select rule_id,active,match_digest,destination_hostgroup,apply from mysql_query_rules;
++---------+--------+----------------------+-----------------------+-------+
+| rule_id | active | match_digest         | destination_hostgroup | apply |
++---------+--------+----------------------+-----------------------+-------+
+| 1       | 1      | ^SELECT.*FOR UPDATE$ | 2                     | 1     |
+| 2       | 1      | ^SELECT              | 3                     | 1     |
+| 3       | 1      | .*                   | 2                     | 1     |
++---------+--------+----------------------+-----------------------+-------+
+3 rows in set (0.001 sec)
+```
+
+### Add Query Rules
+
+Let's add a query rule to the `mysql_query_rules` table with a proxysqlopsrequest. We should create a yaml like the following.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: add-rule
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlQueryRules:
+      rules:
+        - rule_id: 4
+          active: 1
+          match_digest: "^SELECT .* FOR DELETE$"
+          destination_hostgroup: 2
+          apply: 1
+      reqType: add
+```
+
+Let's apply the ops-request yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-add-rules.yaml
+proxysqlopsrequest.ops.kubedb.com/add-rule created
+```
+
+Wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo | grep rule
+add-rule      Reconfigure   Successful   59s
+```
+
+Now let's check the `mysql_query_rules` table in the proxysql server.
+
+```bash
+ProxySQLAdmin > select rule_id,active,match_digest,destination_hostgroup,apply from mysql_query_rules;
++---------+--------+------------------------+-----------------------+-------+
+| rule_id | active | match_digest           | destination_hostgroup | apply |
++---------+--------+------------------------+-----------------------+-------+
+| 1       | 1      | ^SELECT.*FOR UPDATE$   | 2                     | 1     |
+| 2       | 1      | ^SELECT                | 3                     | 1     |
+| 3       | 1      | .*                     | 2                     | 1     |
+| 4       | 1      | ^SELECT .* FOR DELETE$ | 2                     | 1     |
++---------+--------+------------------------+-----------------------+-------+
+4 rows in set (0.001 sec)
+```
+
+We can see that the rule has been successfully added to the `mysql_query_rules` table.
+
+### Update Query Rules
+
+We have successfully added a new rule to the `mysql_query_rules` table with a proxysqlopsrequest in the last section. Now we will see how to update a rule's information with a proxysqlopsrequest. Before that, we can check whether the new rule is actually matching traffic, as shown below.
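+ProxySQL keeps per-rule match counters in its stats schema, which makes it easy to confirm a rule is taking effect. This is a minimal, optional sketch run from the admin interface; the `hits` counter will simply stay `0` until some traffic matching the rule's `match_digest` has passed through the proxy.
+
+```bash
+ProxySQLAdmin > select rule_id, hits from stats_mysql_query_rules;
+```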
+
+Suppose we want to update the `active` status of rule 4. We can create an ops-request like the following. Since `rule_id` is the primary key of the `mysql_query_rules` table, we must always include the `rule_id` in the rule info. To update, just set the `.spec.configuration.mysqlQueryRules.reqType` to `update`.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: update-rule
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlQueryRules:
+      rules:
+        - rule_id: 4
+          active: 0
+      reqType: update
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-update-rules.yaml
+proxysqlopsrequest.ops.kubedb.com/update-rule created
+```
+
+Now wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo | grep rule
+add-rule      Reconfigure   Successful   3m10s
+update-rule   Reconfigure   Successful   71s
+```
+
+Let's check the `mysql_query_rules` table from the admin interface.
+
+```bash
+ProxySQLAdmin > select rule_id,active,match_digest,destination_hostgroup,apply from mysql_query_rules;
++---------+--------+------------------------+-----------------------+-------+
+| rule_id | active | match_digest           | destination_hostgroup | apply |
++---------+--------+------------------------+-----------------------+-------+
+| 1       | 1      | ^SELECT.*FOR UPDATE$   | 2                     | 1     |
+| 2       | 1      | ^SELECT                | 3                     | 1     |
+| 3       | 1      | .*                     | 2                     | 1     |
+| 4       | 0      | ^SELECT .* FOR DELETE$ | 2                     | 1     |
++---------+--------+------------------------+-----------------------+-------+
+4 rows in set (0.001 sec)
+```
+
+From the above output we can see that the rule's information has been successfully updated.
+
+### Delete Query Rules
+
+To delete rules from the `mysql_query_rules` table, all we need to do is provide the `rule_id` in the `spec.configuration.mysqlQueryRules.rules` array and set the `.spec.configuration.mysqlQueryRules.reqType` to `delete`. Let's have a look at the following yaml.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: delete-rule
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    mysqlQueryRules:
+      rules:
+        - rule_id: 4
+      reqType: delete
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-remove-rules.yaml
+proxysqlopsrequest.ops.kubedb.com/delete-rule created
+```
+
+Let's wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo | grep rule
+add-rule      Reconfigure   Successful   4m13s
+delete-rule   Reconfigure   Successful   12s
+update-rule   Reconfigure   Successful   2m14s
+```
+
+Now check the `mysql_query_rules` table in the proxysql server.
+
+```bash
+ProxySQLAdmin > select rule_id,active,match_digest,destination_hostgroup,apply from mysql_query_rules;
++---------+--------+----------------------+-----------------------+-------+
+| rule_id | active | match_digest         | destination_hostgroup | apply |
++---------+--------+----------------------+-----------------------+-------+
+| 1       | 1      | ^SELECT.*FOR UPDATE$ | 2                     | 1     |
+| 2       | 1      | ^SELECT              | 3                     | 1     |
+| 3       | 1      | .*                   | 2                     | 1     |
++---------+--------+----------------------+-----------------------+-------+
+3 rows in set (0.001 sec)
+```
+
+We can see that the rule has been successfully deleted.
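+As a final sanity check, we can confirm that the change has reached ProxySQL's runtime layer, since ProxySQL keeps its configured and runtime tables separately. A minimal sketch from the admin interface:
+
+```bash
+# The runtime_ tables reflect what ProxySQL is actually enforcing right now
+ProxySQLAdmin > select rule_id, active from runtime_mysql_query_rules;
+```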
+
+## Reconfigure Global Variables
+
+With a `KubeDB` `ProxySQL` ops-request you can reconfigure mysql variables and admin variables. You can reconfigure almost all the global variables except `mysql-interfaces`, `mysql-monitor_username`, `mysql-monitor_password`, `mysql-ssl_p2s_cert`, `mysql-ssl_p2s_key`, `mysql-ssl_p2s_ca`, `admin-admin_credentials` and `admin-mysql_interface`. To reconfigure any variable, you need to set the `.spec.type` to `Reconfigure`, provide the KubeDB ProxySQL instance name under the `spec.proxyRef` section, and provide the desired configuration under the `spec.configuration.adminVariables` and `spec.configuration.mysqlVariables` sections. Below is a sample for the corresponding request type.
+
+Suppose we want to update 4 global variables. Among these, 2 are admin variables: `cluster_check_interval_ms` and `refresh_interval`. The other 2 are mysql variables: `max_stmts_per_connection` and `max_transaction_time`.
+
+Let's see the current status from the proxysql server.
+
+```bash
+ProxySQLAdmin > show global variables;
++----------------------------------------------------------------------+--------------------------------------+
+| Variable_name                                                        | Value                                |
++----------------------------------------------------------------------+--------------------------------------+
+| ...                                                                  | ...                                  |
+| admin-cluster_check_interval_ms                                      | 200                                  |
+| ...                                                                  | ...                                  |
+| admin-refresh_interval                                               | 2000                                 |
+| ...                                                                  | ...                                  |
+| mysql-max_stmts_per_connection                                       | 20                                   |
+| ...                                                                  | ...                                  |
+| mysql-max_transaction_time                                           | 14400000                             |
+| ...                                                                  | ...                                  |
++----------------------------------------------------------------------+--------------------------------------+
+193 rows in set (0.001 sec)
+```
+
+To reconfigure these variables, all we need to do is create a yaml like the following. Just mention each variable name and its desired value in key-value style under the corresponding variable type, i.e. `mysqlVariables` and `adminVariables`.
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: reconfigure-vars
+  namespace: demo
+spec:
+  type: Reconfigure
+  proxyRef:
+    name: proxy-server
+  configuration:
+    adminVariables:
+      refresh_interval: 2055
+      cluster_check_interval_ms: 205
+    mysqlVariables:
+      max_transaction_time: 1540000
+      max_stmts_per_connection: 19
+```
+
+Let's apply the yaml.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/reconfigure/cluster/examples/proxyops-recon-vars.yaml
+proxysqlopsrequest.ops.kubedb.com/reconfigure-vars created
+```
+
+Wait for the ops-request to be Successful.
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo | grep reco
+reconfigure-vars   Reconfigure   Successful   30s
+```
+
+Now let's check the variables we wanted to reconfigure.
+
+```bash
+ProxySQLAdmin > show global variables;
++----------------------------------------------------------------------+--------------------------------------+
+| Variable_name                                                        | Value                                |
++----------------------------------------------------------------------+--------------------------------------+
+| ...                                                                  | ...                                  |
+| admin-cluster_check_interval_ms                                      | 205                                  |
+| ...                                                                  | ...                                  |
+| admin-refresh_interval                                               | 2055                                 |
+| ...                                                                  | ...                                  |
+| mysql-max_stmts_per_connection                                       | 19                                   |
+| ...                                                                  | ...                                  |
+| mysql-max_transaction_time                                           | 1540000.0                            |
+| ...                                                                  | ...                                  |
| ++----------------------------------------------------------------------+--------------------------------------+ +193 rows in set (0.001 sec) +``` + +From the above output we can see the variables has been successfuly updated with the desired value. + +### Clean-up +```bash +$ kubectl delete proxysql -n demo proxy-server +$ kubectl delete proxysqlopsrequest -n demo --all +$ kubectl delete mysql -n demo mysql-server +$ kubectl delete ns demo +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/images/reconfigure.png b/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/images/reconfigure.png new file mode 100644 index 0000000000..0f791095f2 Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/images/reconfigure.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/index.md new file mode 100644 index 0000000000..b2e363826b --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/reconfigure/overview/index.md @@ -0,0 +1,59 @@ +--- +title: Reconfiguring ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-reconfigure-overview + name: Overview + parent: guides-proxysql-reconfigure + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfiguring ProxySQL + +This guide will give an overview on how KubeDB Ops Manager reconfigures `ProxySQL`. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest) + +## How Reconfiguring ProxySQL Process Works + +The following diagram shows how KubeDB Ops Manager reconfigures `ProxySQL` database components. Open the image in a new tab to see the enlarged version. + +
+  Reconfiguring process of ProxySQL +
Fig: Reconfiguring process of ProxySQL
+
+ +The Reconfiguring ProxySQL process consists of the following steps: + +1. At first, a user creates a `ProxySQL` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `ProxySQL` CR. + +3. When the operator finds a `ProxySQL` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the `ProxySQL` standalone or cluster the user creates a `ProxySQLOpsRequest` CR with desired information. + +5. `KubeDB` Enterprise operator watches the `ProxySQLOpsRequest` CR. + +6. Then the `KubeDB` Enterprise operator will replace the existing configuration with the new configuration provided or merge the new configuration with the existing configuration according to the `ProxySQLOpsRequest` CR. + +In the next docs, we are going to show a step by step guide on reconfiguring ProxySQL database components using `ProxySQLOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/_index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/_index.md new file mode 100644 index 0000000000..a7620f3f1d --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Scaling ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-scaling + name: Scaling + parent: guides-proxysql + weight: 43 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/_index.md new file mode 100644 index 0000000000..c2fd915669 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Horizontal Scaling +menu: + docs_v2024.1.31: + identifier: guides-proxysql-scaling-horizontal + name: Horizontal Scaling + parent: guides-proxysql-scaling + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-downscale.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-downscale.yaml new file mode 100644 index 0000000000..b2f93ee69a --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-downscale.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: scale-down + namespace: demo +spec: + type: HorizontalScaling + proxyRef: + name: proxy-server + horizontalScaling: + member: 4 \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-upscale.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-upscale.yaml new file mode 100644 index 0000000000..88e890b667 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-upscale.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: 
ProxySQLOpsRequest +metadata: + name: scale-up + namespace: demo +spec: + type: HorizontalScaling + proxyRef: + name: proxy-server + horizontalScaling: + member: 5 diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-proxysql.yaml new file mode 100644 index 0000000000..b2795bf06b --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-proxysql.yaml @@ -0,0 +1,12 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 3 + backend: + name: mysql-server + syncUsers: true + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/index.md new file mode 100644 index 0000000000..102fac983e --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/cluster/index.md @@ -0,0 +1,309 @@ +--- +title: Horizontal Scaling ProxySQL +menu: + docs_v2024.1.31: + identifier: guides-proxysql-scaling-horizontal-cluster + name: Demo + parent: guides-proxysql-scaling-horizontal + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Horizontal Scale ProxySQL + +This guide will show you how to use `KubeDB` Enterprise operator to scale the cluster of a ProxySQL server. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/)
+  - [ProxySQL Cluster](/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster/)
+  - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+We also need a mysql backend for the proxysql server, so we are creating one with the yaml below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+After applying the above yaml, wait for the MySQL to be Ready.
+
+## Apply Horizontal Scaling on Cluster
+
+Here, we are going to deploy a `ProxySQL` cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Deploy ProxySQL Cluster
+
+In this section, we are going to deploy a ProxySQL cluster. Then, in the next section we will scale the proxy server using the `ProxySQLOpsRequest` CRD. Below is the YAML of the `ProxySQL` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+```
+
+Let's create the `ProxySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/horizontal-scaling/cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait until `proxy-server` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    2m36s
+```
+
+Let's check the number of replicas this cluster has from the ProxySQL object, and the number of pods the statefulset has,
+
+```bash
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.replicas'
+3
+$ kubectl get sts -n demo proxy-server -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both commands that the server has 3 replicas in the cluster.
+
+We can also verify the cluster membership with an internal proxysql command by exec-ing into a replica.
+
+Now let's connect to a proxysql instance and run a proxysql internal command to check the cluster status,
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "select * from runtime_proxysql_servers;"
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+```
+
+We can see from the above output that the cluster has 3 nodes.
+
+We are now ready to apply the `ProxySQLOpsRequest` CR to scale this server.
+
+## Scale Up Replicas
+
+Here, we are going to scale up the replicas of the cluster to meet the desired number of replicas after scaling.
+
+### Create ProxySQLOpsRequest
+
+In order to scale up the replicas of the cluster, we have to create a `ProxySQLOpsRequest` CR with our desired number of replicas. Below is the YAML of the `ProxySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: scale-up
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  proxyRef:
+    name: proxy-server
+  horizontalScaling:
+    member: 5
+```
+
+Here,
+
+- `spec.proxyRef.name` specifies that we are performing the horizontal scaling operation on the `proxy-server` instance.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our proxy server.
+- `spec.horizontalScaling.member` specifies the desired number of replicas after scaling.
+
+Let's create the `ProxySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-upscale.yaml
+proxysqlopsrequest.ops.kubedb.com/scale-up created
+```
+
+### Verify Cluster replicas scaled up successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the replicas of the `ProxySQL` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `ProxySQLOpsRequest` to be `Successful`. Run the following command to watch the `ProxySQLOpsRequest` CR,
+
+```bash
+$ watch kubectl get proxysqlopsrequest -n demo
+Every 2.0s: kubectl get proxysqlopsrequest -n demo
+NAME       TYPE                STATUS       AGE
+scale-up   HorizontalScaling   Successful   106s
+```
+
+We can see from the above output that the `ProxySQLOpsRequest` has succeeded.
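+If you prefer a non-interactive alternative to `watch` (for example in a CI script), newer versions of `kubectl` can block until the request finishes. This is an optional sketch; it assumes the ops-request exposes its state in `.status.phase` and that your `kubectl` (v1.23+) supports `--for=jsonpath`.
+
+```bash
+# Block until the ops-request reports the Successful phase, or time out
+$ kubectl wait --for=jsonpath='{.status.phase}'=Successful \
+    proxysqlopsrequest/scale-up -n demo --timeout=5m
+```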
+Now, we are going to verify the number of replicas this server has from the ProxySQL object, and the number of pods the statefulset has,
+
+```bash
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.replicas'
+5
+$ kubectl get sts -n demo proxy-server -o json | jq '.spec.replicas'
+5
+```
+
+Now let's connect to a proxysql instance and run a proxysql internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "select * from runtime_proxysql_servers;"
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-3.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-4.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+```
+
+From all the above outputs we can see that the cluster now has `5` replicas. That means we have successfully scaled up the ProxySQL cluster.
+
+## Scale Down Replicas
+
+Here, we are going to scale down the replicas of the cluster to meet the desired number of replicas after scaling.
+
+### Create ProxySQLOpsRequest
+
+In order to scale down the cluster, we have to create a `ProxySQLOpsRequest` CR with our desired number of replicas. Below is the YAML of the `ProxySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: scale-down
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  proxyRef:
+    name: proxy-server
+  horizontalScaling:
+    member: 4
+```
+
+Here,
+
+- `spec.proxyRef.name` specifies that we are performing the horizontal scaling operation on the `proxy-server` instance.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our proxy server.
+- `spec.horizontalScaling.member` specifies the desired number of replicas after scaling.
+
+Let's create the `ProxySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/horizontal-scaling/cluster/examples/proxyops-downscale.yaml
+proxysqlopsrequest.ops.kubedb.com/scale-down created
+```
+
+### Verify Cluster replicas scaled down successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the replicas of the `ProxySQL` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `ProxySQLOpsRequest` to be `Successful`. Run the following command to watch the `ProxySQLOpsRequest` CR,
+
+```bash
+$ watch kubectl get proxysqlopsrequest -n demo
+Every 2.0s: kubectl get proxysqlopsrequest -n demo
+NAME         TYPE                STATUS       AGE
+scale-down   HorizontalScaling   Successful   2m32s
+```
+
+We can see from the above output that the `ProxySQLOpsRequest` has succeeded.
+Now, we are going to verify the number of replicas this server has from the ProxySQL object, and the number of pods the statefulset has,
+
+```bash
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.replicas'
+4
+$ kubectl get sts -n demo proxy-server -o json | jq '.spec.replicas'
+4
+```
+
+Now let's connect to a proxysql instance and run a proxysql internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "select * from runtime_proxysql_servers;"
++---------------------------------------+------+--------+---------+
+| hostname                              | port | weight | comment |
++---------------------------------------+------+--------+---------+
+| proxy-server-2.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-1.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-0.proxy-server-pods.demo | 6032 | 1      |         |
+| proxy-server-3.proxy-server-pods.demo | 6032 | 1      |         |
++---------------------------------------+------+--------+---------+
+```
+
+From all the above outputs we can see that the cluster now has `4` replicas. That means we have successfully scaled down the ProxySQL cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete proxysql -n demo proxy-server
+$ kubectl delete proxysqlopsrequest -n demo scale-up scale-down
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/images/horizontal-scaling.png b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/images/horizontal-scaling.png
new file mode 100644
index 0000000000..eb91bdfa96
Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/images/horizontal-scaling.png differ
diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/index.md
new file mode 100644
index 0000000000..66d8b3c6ec
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/scaling/horizontal-scaling/overview/index.md
@@ -0,0 +1,65 @@
+---
+title: ProxySQL Horizontal Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-scaling-horizontal-overview
+    name: Overview
+    parent: guides-proxysql-scaling-horizontal
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# ProxySQL Horizontal Scaling
+
+This guide will give an overview of how KubeDB Ops Manager scales up or down a `ProxySQL Cluster`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/)
+  - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest/)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how KubeDB Ops Manager scales up or down `ProxySQL` components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Horizontal scaling process of ProxySQL +
Fig: Horizontal scaling process of ProxySQL
+
+ +The Horizontal scaling process consists of the following steps: + +1. At first, a user creates a `ProxySQL` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `ProxySQL` CR. + +3. When the operator finds a `ProxySQL` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to scale the `ProxySQL` the user creates a `ProxySQLOpsRequest` CR with desired information. + +5. `KubeDB` Enterprise operator watches the `ProxySQLOpsRequest` CR. + +6. When it finds a `ProxySQLOpsRequest` CR, it pauses the `ProxySQL` object which is referred from the `ProxySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `ProxySQL` object during the horizontal scaling process. + +7. Then the `KubeDB` Enterprise operator will scale the related StatefulSet Pods to reach the expected number of replicas defined in the `ProxySQLOpsRequest` CR. + +8. After the successfully scaling the replicas of the StatefulSet Pods, the `KubeDB` Enterprise operator updates the number of replicas in the `ProxySQL` object to reflect the updated state. + +9. After the successful scaling of the `ProxySQL` replicas, the `KubeDB` Enterprise operator resumes the `ProxySQL` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on horizontal scaling of ProxySQL database using `ProxySQLOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..e21d8a5624 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Vertical Scaling +menu: + docs_v2024.1.31: + identifier: guides-proxysql-scaling-vertical + name: Vertical Scaling + parent: guides-proxysql-scaling + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/proxyops-vscale.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/proxyops-vscale.yaml new file mode 100644 index 0000000000..d13a92a67a --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/proxyops-vscale.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: proxyops-vscale + namespace: demo +spec: + type: VerticalScaling + proxyRef: + name: proxy-server + verticalScaling: + proxysql: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 
3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-proxysql.yaml new file mode 100644 index 0000000000..44063d5eca --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-proxysql.yaml @@ -0,0 +1,21 @@ +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 3 + backend: + name: mysql-server + syncUsers: true + terminationPolicy: WipeOut + podTemplate: + spec: + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/index.md new file mode 100644 index 0000000000..33b7270f2d --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/cluster/index.md @@ -0,0 +1,229 @@ +--- +title: Vertical Scaling ProxySQL Cluster +menu: + docs_v2024.1.31: + identifier: guides-proxysql-scaling-vertical-cluster + name: Demo + parent: guides-proxysql-scaling-vertical + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale ProxySQL Cluster + +This guide will show you how to use `KubeDB` Enterprise operator to update the resources of a ProxySQL cluster . + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + - [Clustering](/docs/v2024.1.31/guides/proxysql/clustering/proxysql-cluster) + - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest) + - [Vertical Scaling Overview](/docs/v2024.1.31/guides/proxysql/scaling/vertical-scaling/overview) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +Also we need a mysql backend for the proxysql server. So we are creating one with the below yaml. 
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+After applying the above yaml, wait for the MySQL to be Ready.
+
+## Apply Vertical Scaling on Cluster
+
+Here, we are going to deploy a `ProxySQL` cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare ProxySQL Cluster
+
+Now, we are going to deploy a `ProxySQL` cluster with version `2.3.2-debian`.
+
+In this section, we are going to deploy a ProxySQL cluster. Then, in the next section we will update the resources of the servers using the `ProxySQLOpsRequest` CRD. Below is the YAML of the `ProxySQL` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+  podTemplate:
+    spec:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+```
+
+Let's create the `ProxySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/vertical-scaling/cluster/example/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait until `proxy-server` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    3m46s
+```
+
+Let's check the Pod containers' resources,
+
+```bash
+$ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see that the Pod has the resources we specified in the `podTemplate` section of the `ProxySQL` object.
+
+We are now ready to apply the `ProxySQLOpsRequest` CR to update the resources of this server.
+
+### Scale Vertically
+
+Here, we are going to update the resources of the server to meet the desired resources after scaling.
+
+#### Create ProxySQLOpsRequest
+
+In order to update the resources of the server, we have to create a `ProxySQLOpsRequest` CR with our desired resources. Below is the YAML of the `ProxySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: proxyops-vscale
+  namespace: demo
+spec:
+  type: VerticalScaling
+  proxyRef:
+    name: proxy-server
+  verticalScaling:
+    proxysql:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
+```
+
+Here,
+
+- `spec.proxyRef.name` specifies that we are performing the vertical scaling operation on the `proxy-server` instance.
+- `spec.type` specifies that we are performing `VerticalScaling` on our server.
+- `spec.verticalScaling.proxysql` specifies the desired resources after scaling.
+
+Let's create the `ProxySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/scaling/vertical-scaling/cluster/example/proxyops-vscale.yaml
+proxysqlopsrequest.ops.kubedb.com/proxyops-vscale created
+```
+
+#### Verify ProxySQL Cluster resources updated successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `ProxySQL` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `ProxySQLOpsRequest` to be `Successful`. Run the following command to watch the `ProxySQLOpsRequest` CR,
+
+```bash
+$ watch kubectl get proxysqlopsrequest -n demo
+Every 2.0s: kubectl get proxysqlopsrequest -n demo
+NAME              TYPE              STATUS       AGE
+proxyops-vscale   VerticalScaling   Successful   3m56s
+```
+
+We can see from the above output that the `ProxySQLOpsRequest` has succeeded. Now, we are going to verify from one of the Pod yamls whether the resources of the server have been updated to the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the ProxySQL instance. Note that `1288490188800m` is simply `1.2Gi` expressed in Kubernetes' milli-byte quantity notation.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete proxysql -n demo proxy-server
+$ kubectl delete proxysqlopsrequest -n demo proxyops-vscale
+```
+  Vertical scaling process of ProxySQL +
Fig: Vertical scaling process of ProxySQL
+
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `ProxySQL` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `ProxySQL` CR. + +3. When the operator finds a `ProxySQL` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the resources(for example `CPU`, `Memory` etc.) of the `ProxySQL` the user creates a `ProxySQLOpsRequest` CR with desired information. + +5. `KubeDB` Enterprise operator watches the `ProxySQLOpsRequest` CR. + +6. When it finds a `ProxySQLOpsRequest` CR, it halts the `ProxySQL` object which is referred from the `ProxySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `ProxySQL` object during the vertical scaling process. + +7. Then the `KubeDB` Enterprise operator will update resources of the StatefulSet Pods to reach desired state. + +8. After the successful update of the resources of the StatefulSet's replica, the `KubeDB` Enterprise operator updates the `ProxySQL` object to reflect the updated state. + +9. After the successful update of the `ProxySQL` resources, the `KubeDB` Enterprise operator resumes the `ProxySQL` object so that the `KubeDB` Community operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on updating resources of ProxySQL using `ProxySQLOpsRequest` CRD. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/_index.md b/content/docs/v2024.1.31/guides/proxysql/tls/_index.md new file mode 100644 index 0000000000..139107d2c8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/tls/_index.md @@ -0,0 +1,22 @@ +--- +title: TLS/SSL Encryption +menu: + docs_v2024.1.31: + identifier: guides-proxysql-tls + name: TLS/SSL Encryption + parent: guides-proxysql + weight: 110 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/issuer.yaml b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/issuer.yaml new file mode 100644 index 0000000000..404669b707 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: proxy-issuer + namespace: demo +spec: + ca: + secretName: proxy-ca \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-proxysql.yaml new 
file mode 100644
index 0000000000..754d21150b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/tls/configure/examples/sample-proxysql.yaml
@@ -0,0 +1,26 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: proxy-issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/configure/index.md b/content/docs/v2024.1.31/guides/proxysql/tls/configure/index.md
new file mode 100644
index 0000000000..f5c347c8c6
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/tls/configure/index.md
@@ -0,0 +1,361 @@
+---
+title: TLS/SSL (Transport Encryption)
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-tls-configure
+    name: ProxySQL TLS/SSL Configuration
+    parent: guides-proxysql-tls
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Configure TLS/SSL in ProxySQL Frontend Connections
+
+`KubeDB` supports providing TLS/SSL encryption for `ProxySQL`. This tutorial will show you how to use `KubeDB` to deploy a `ProxySQL` with TLS/SSL configuration.
+
+> While talking about TLS-secured connections in `ProxySQL`, we should note that there are two types of connections: client-to-proxy and proxy-to-backend. The first type is referred to as a frontend connection and the second as a backend connection. The backend connection will be TLS secured automatically if the necessary ca_bundle is provided with the `appbinding`. In this tutorial we are going to discuss how to secure the frontend connections with TLS using the KubeDB operator.
+
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/guides/proxysql/tls/configure/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/proxysql/tls/configure/examples) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+
+### Deploy KubeDB MySQL instance as the backend
+
+We need a mysql backend for the proxysql server. So we are creating one with the following yaml.
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut +``` + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/tls/configure/examples/sample-mysql.yaml +mysql.kubedb.com/mysql-server created +``` + +After applying the above yaml wait for the MySQL to be Ready. + +## Deploy ProxySQL with TLS/SSL configuration + +As pre-requisite, at first, we are going to create an Issuer/ClusterIssuer. This Issuer/ClusterIssuer is used to create certificates. Then we are going to deploy a `ProxySQL` cluster that will be configured with these certificates by `KubeDB` operator. + +### Create Issuer/ClusterIssuer + +Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. With the following steps, we are going to create our desired issuer, + +- Start off by generating our ca-certificates using openssl, + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=proxysql/O=kubedb" +Generating a RSA private key +...........................................................................+++++ +........................................................................................................+++++ +writing new private key to './ca.key' +``` + +- create a secret using the certificate files we have just generated, + +```bash +kubectl create secret tls proxy-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/proxy-ca created +``` + +Now, we are going to create an `Issuer` using the `proxy-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create, + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: proxy-issuer + namespace: demo +spec: + ca: + secretName: proxy-ca +``` + +Let’s create the `Issuer` cr we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/tls/configure/examples/issuer.yaml +issuer.cert-manager.io/proxy-issuer created +``` + +### Deploy ProxySQL Cluster with TLS/SSL configuration + +Here, our issuer `proxy-issuer` is ready to deploy a `ProxySQL` cluster with TLS/SSL configuration. Below is the YAML for ProxySQL Cluster that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: ProxySQL +metadata: + name: proxy-server + namespace: demo +spec: + version: "2.3.2-debian" + replicas: 3 + backend: + name: mysql-server + syncUsers: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: proxy-issuer + certificates: + - alias: server + subject: + organizations: + - kubedb:server + dnsNames: + - localhost + ipAddresses: + - "127.0.0.1" + terminationPolicy: WipeOut +``` + +Here, + +- `spec.tls.issuerRef` refers to the `proxy-issuer` issuer. + +- `spec.tls.certificates` gives you a lot of options to configure so that the certificate will be renewed and kept up to date. 
+You can find more details from [here](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/index.md/#spectls)
+
+**Deploy ProxySQL Cluster:**
+
+Let's create the `ProxySQL` cr we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/tls/configure/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+**Wait for the database to be ready:**
+
+Now, wait for the `ProxySQL` object to reach the `Ready` state, and for the `StatefulSet` and its pods to be created and reach the `Running` state,
+
+```bash
+$ kubectl get proxysql -n demo proxy-server
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    5m48s
+
+$ kubectl get sts -n demo proxy-server
+NAME           READY   AGE
+proxy-server   3/3     7m5s
+```
+
+**Verify tls-secrets created successfully:**
+
+If everything goes well, you will see that our tls-secrets have been created, which contain the server, client, and exporter certificates. The server tls-secret is used for server configuration and the client tls-secret is used for a secure connection.
+
+All tls-secrets are created by the `KubeDB` Ops Manager. The default tls-secret name is formed as _{proxysql-object-name}-{cert-alias}-cert_.
+
+Let's check that the tls-secrets have been created,
+
+```bash
+$ kubectl get secrets -n demo | grep proxy-server
+proxy-server-auth            kubernetes.io/basic-auth               2      7m54s
+proxy-server-configuration   Opaque                                 1      7m54s
+proxy-server-monitor         kubernetes.io/basic-auth               2      7m54s
+proxy-server-token-4w4mb     kubernetes.io/service-account-token    3      7m54s
+proxy-server-server-cert     kubernetes.io/tls                      3      7m53s
+proxy-server-client-cert     kubernetes.io/tls                      3      7m53s
+```
+
+**Verify ProxySQL Cluster configured with TLS/SSL:**
+
+Now, we are going to connect to the ProxySQL server to verify that it has been configured with TLS/SSL encryption.
+
+Let's exec into the pod to verify TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+
+root@proxy-server-0:/ ls /var/lib/frontend/client
+ca.crt  tls.crt  tls.key
+root@proxy-server-0:/ ls /var/lib/frontend/server
+ca.crt  tls.crt  tls.key
+
+root@proxy-server-0:/ mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt 'ProxySQLAdmin>'
+Welcome to the ProxySQL monitor.  Commands end with ; or \g.
+Your ProxySQL connection id is 64
+Server version: 2.3.2-debian-ProxySQL-1:2.3.2-debian+maria~focal proxysql.org binary distribution
+
+Copyright (c) 2000, 2018, Oracle, ProxySQL Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ProxySQLAdmin [(none)]> show variables like '%have_ssl%';
++---------------------+-------------------------+
+| Variable_name       | Value                   |
++---------------------+-------------------------+
+| mysql-have_ssl      | true                    |
++---------------------+-------------------------+
+1 row in set (0.002 sec)
+
+ProxySQLAdmin [(none)]> quit;
+Bye
+```
+
+The above output shows that the proxy server is configured with TLS/SSL. You can also see that the `.crt` and `.key` files are stored in the `/var/lib/frontend/client/` and `/var/lib/frontend/server/` directories for client and server respectively.
+
+**Verify secure connection for user:**
+
+Now, you can create a user that will be used to connect to the server with a secure connection.
+
+First, let's create the user in the backend MySQL server.
+
+```bash
+$ kubectl exec -it -n demo mysql-server-0 -- bash
+Defaulted container "mysql" out of: mysql, mysql-coordinator, mysql-init (init)
+root@mysql-server-0:/# mysql -uroot -p$MYSQL_ROOT_PASSWORD
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 26692
+Server version: 5.7.44-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2021, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> create user 'test'@'%' identified by 'pass';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> grant all privileges on test.* to 'test'@'%';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> flush privileges;
+Query OK, 0 rows affected (0.00 sec)
+```
+
+As we deployed the ProxySQL with `.spec.syncUsers` set to true, the user will automatically be fetched into the ProxySQL server.
+
+```bash
+ProxySQLAdmin [(none)]> select username,active,use_ssl from mysql_users;
++----------+--------+---------+
+| username | active | use_ssl |
++----------+--------+---------+
+| root     | 1      | 0       |
+| test     | 1      | 0       |
++----------+--------+---------+
+2 rows in set (0.001 sec)
+```
+
+We need to turn on `use_ssl` for TLS-secured connections.
+
+```bash
+ProxySQLAdmin [(none)]> update mysql_users set use_ssl=1 where username='test';
+Query OK, 1 row affected (0.000 sec)
+
+ProxySQLAdmin [(none)]> LOAD MYSQL USERS TO RUNTIME;
+Query OK, 0 rows affected (0.001 sec)
+
+ProxySQLAdmin [(none)]> SAVE MYSQL USERS TO DISK;
+Query OK, 0 rows affected (0.008 sec)
+```
+
+Let's connect to the ProxySQL server with a secure connection,
+
+```bash
+$ kubectl exec -it -n demo proxy-server-0 -- bash
+root@proxy-server-0:/ mysql -utest -ppass -h127.0.0.1 -P6033
+ERROR 1045 (28000): ProxySQL Error: Access denied for user 'test' (using password: YES). SSL is required
+
+root@proxy-server-0:/ mysql -utest -ppass -h127.0.0.1 -P6033 --ssl-ca=/var/lib/frontend/server/ca.crt --ssl-cert=/var/lib/frontend/server/tls.crt --ssl-key=/var/lib/frontend/server/tls.key
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MySQL connection id is 1573
+Server version: 8.0.35 (ProxySQL)
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MySQL [(none)]> \s
+--------------
+mysql  Ver 15.1 Distrib 10.5.15-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper
+
+Connection id:          1573
+Current database:       information_schema
+Current user:           test@10.244.0.26
+SSL:                    Cipher in use is TLS_AES_256_GCM_SHA384
+Current pager:          stdout
+Using outfile:          ''
+Using delimiter:        ;
+Server:                 MySQL
+Server version:         8.0.35 (ProxySQL)
+Protocol version:       10
+Connection:             127.0.0.1 via TCP/IP
+Server characterset:    latin1
+Db     characterset:    utf8
+Client characterset:    latin1
+Conn.  characterset:    latin1
+TCP port:               6033
+Uptime:                 2 hours 30 min 27 sec
+
+Threads: 1  Questions: 12  Slow queries: 12
+```
+
+In the above output we can see a cipher in use in the SSL field, which means the connection is TLS-secured.
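+
+Optionally, you can also inspect the certificate ProxySQL presents on the frontend port with `openssl s_client`. This is an illustrative check, assuming the `openssl` binary available in the pod supports the MySQL STARTTLS protocol (OpenSSL 1.1.1 or later):
+
+```bash
+# Optional check (illustrative, output abridged): fetch the frontend certificate
+# from port 6033 and verify it against the CA we generated earlier.
+root@proxy-server-0:/ openssl s_client -connect 127.0.0.1:6033 -starttls mysql -CAfile /var/lib/frontend/server/ca.crt < /dev/null 2>/dev/null | grep 'Verify return'
+Verify return code: 0 (ok)
+```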
+ +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete proxysql -n demo proxy-server +$ kubectl delete mysql -n demo mysql-server +$ kubectl delete issuer -n demo --all +$ kubectl delete ns demo +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/overview/images/proxy-tls-ssl.png b/content/docs/v2024.1.31/guides/proxysql/tls/overview/images/proxy-tls-ssl.png new file mode 100644 index 0000000000..aa00290ae0 Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/tls/overview/images/proxy-tls-ssl.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/tls/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/tls/overview/index.md new file mode 100644 index 0000000000..c0d8aaa279 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/tls/overview/index.md @@ -0,0 +1,80 @@ +--- +title: ProxySQL TLS/SSL Encryption Overview +menu: + docs_v2024.1.31: + identifier: guides-proxysql-tls-overview + name: Overview + parent: guides-proxysql-tls + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# ProxySQL TLS/SSL Encryption + +**Prerequisite :** To configure TLS/SSL in `ProxySQL`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster following steps [here](https://cert-manager.io/docs/installation/kubernetes/). + +To issue a certificate, the following cr of `cert-manager` is used: + +- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/). + +- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/). + +**ProxySQL CRD Specification:** + +KubeDB uses the following cr fields to enable SSL/TLS encryption in `ProxySQL`. + +- `spec:` + - `tls:` + - `issuerRef` + - `certificates` + +Read about the fields in details from [proxysql concept](/docs/v2024.1.31/guides/proxysql/concepts/proxysql/index.md/#spectls), + +`KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets using `Issuer/ClusterIssuers` specification. These certificates secrets including `ca.crt`, `tls.crt` and `tls.key` etc. are used to configure `ProxySQL` server, exporter etc. respectively. + +## How TLS/SSL configures in ProxySQL + +The following figure shows how `KubeDB` enterprise is used to configure TLS/SSL in ProxySQL. Open the image in a new tab to see the enlarged version. + +
+ ProxySQL TLS +
Fig: Deploy ProxySQL with TLS/SSL
+
+ +Deploying ProxySQL with TLS/SSL configuration process consists of the following steps: + +1. At first, a user creates an `Issuer/ClusterIssuer` cr. + +2. Then the user creates a `ProxySQL` cr. + +3. `KubeDB` community operator watches for the `ProxySQL` cr. + +4. When it finds one, it creates `Secret`, `Service`, etc. for the `ProxySQL` server. + +5. `KubeDB` Ops Manager watches for `ProxySQL`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a). + +6. When it finds all the resources(`ProxySQL`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using `tls.issuerRef` and `tls.certificates` field specification from `ProxySQL` cr. + +7. `cert-manager` watches for certificates. + +8. When it finds one, it creates certificate secrets `tls-secrets`(server, client, exporter secrets, etc.) that hold the actual self-signed certificate. + +9. `KubeDB` community operator watches for the Certificate secrets `tls-secrets`. + +10. When it finds all the tls-secret, it creates a `StatefulSet` so that ProxySQL server is configured with TLS/SSL. + +In the next doc, we are going to show a step by step guide on how to configure a `ProxySQL` server with TLS/SSL. diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/_index.md b/content/docs/v2024.1.31/guides/proxysql/update-version/_index.md new file mode 100644 index 0000000000..54483b8473 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/update-version/_index.md @@ -0,0 +1,22 @@ +--- +title: Updating +menu: + docs_v2024.1.31: + identifier: guides-proxysql-updating + name: UpdateVersion + parent: guides-proxysql + weight: 45 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/proxyops-upgrade.yaml b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/proxyops-upgrade.yaml new file mode 100644 index 0000000000..e9cfa7164a --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/proxyops-upgrade.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ProxySQLOpsRequest +metadata: + name: proxyops-update + namespace: demo +spec: + type: UpdateVersion + proxyRef: + name: proxy-server + updateVersion: + targetVersion: "2.4.4-debian" \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-mysql.yaml b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-mysql.yaml new file mode 100644 index 0000000000..c884c6f678 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-mysql.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: MySQL +metadata: + name: mysql-server + namespace: demo +spec: + version: "5.7.44" + replicas: 3 + topology: + mode: GroupReplication + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: WipeOut \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-proxysql.yaml b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-proxysql.yaml new file mode 100644 index 0000000000..b2795bf06b --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/examples/sample-proxysql.yaml
@@ -0,0 +1,12 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/index.md b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/index.md
new file mode 100644
index 0000000000..3b10c8b85a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/proxysql/update-version/cluster/index.md
@@ -0,0 +1,195 @@
+---
+title: Updating ProxySQL Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: guides-proxysql-updating-cluster
+    name: Demo
+    parent: guides-proxysql-updating
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Update Version of ProxySQL Cluster
+
+This guide will show you how to use the `KubeDB` Enterprise operator to update the version of a `ProxySQL` cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql)
+  - [Cluster](/docs/v2024.1.31/guides/proxysql/clustering/overview)
+  - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest)
+  - [Updating Overview](/docs/v2024.1.31/guides/proxysql/update-version/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+We also need a MySQL backend for the ProxySQL server, so we are creating one with the below YAML.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MySQL
+metadata:
+  name: mysql-server
+  namespace: demo
+spec:
+  version: "5.7.44"
+  replicas: 3
+  topology:
+    mode: GroupReplication
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/update-version/cluster/examples/sample-mysql.yaml
+mysql.kubedb.com/mysql-server created
+```
+
+After applying the above YAML, wait for the MySQL object to become Ready.
+
+## Prepare ProxySQL Cluster
+
+Now, we are going to deploy a `ProxySQL` cluster with version `2.3.2-debian`.
+
+### Deploy ProxySQL cluster
+
+In this section, we are going to deploy a ProxySQL Cluster. Then, in the next section we will update the version of the instance using the `ProxySQLOpsRequest` CRD.
Below is the YAML of the `ProxySQL` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ProxySQL
+metadata:
+  name: proxy-server
+  namespace: demo
+spec:
+  version: "2.3.2-debian"
+  replicas: 3
+  backend:
+    name: mysql-server
+  syncUsers: true
+  terminationPolicy: WipeOut
+```
+
+Let's create the `ProxySQL` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/update-version/cluster/examples/sample-proxysql.yaml
+proxysql.kubedb.com/proxy-server created
+```
+
+Now, wait until the created `proxy-server` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get proxysql -n demo
+NAME           VERSION        STATUS   AGE
+proxy-server   2.3.2-debian   Ready    3m15s
+```
+
+We are now ready to apply the `ProxySQLOpsRequest` CR to update this database.
+
+## Update ProxySQL Version
+
+Here, we are going to update the `ProxySQL` cluster from `2.3.2-debian` to `2.4.4-debian`.
+
+### Create ProxySQLOpsRequest:
+
+In order to update the database cluster, we have to create a `ProxySQLOpsRequest` CR with our desired version that is supported by `KubeDB`. Below is the YAML of the `ProxySQLOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ProxySQLOpsRequest
+metadata:
+  name: proxyops-update
+  namespace: demo
+spec:
+  type: UpdateVersion
+  proxyRef:
+    name: proxy-server
+  updateVersion:
+    targetVersion: "2.4.4-debian"
+```
+
+Here,
+
+- `spec.proxyRef.name` specifies that we are performing the operation on the `proxy-server` ProxySQL database.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version of the database, `2.4.4-debian`.
+
+Let's create the `ProxySQLOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/proxysql/update-version/cluster/examples/proxyops-update.yaml
+proxysqlopsrequest.ops.kubedb.com/proxyops-update created
+```
+
+### Verify ProxySQL version updated successfully
+
+If everything goes well, the `KubeDB` Enterprise operator will update the image of the `ProxySQL` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for the `ProxySQLOpsRequest` to be `Successful`. Run the following command to watch the `ProxySQLOpsRequest` CR,
+
+```bash
+$ kubectl get proxysqlopsrequest -n demo
+Every 2.0s: kubectl get proxysqlopsrequest -n demo
+NAME              TYPE            STATUS       AGE
+proxyops-update   UpdateVersion   Successful   84s
+```
+
+We can see from the above output that the `ProxySQLOpsRequest` has succeeded.
+
+Now, we are going to verify whether the `ProxySQL` and the related `StatefulSets` and their `Pods` have the new version image. Let's check,
+
+```bash
+$ kubectl get proxysql -n demo proxy-server -o=jsonpath='{.spec.version}{"\n"}'
+2.4.4-debian
+
+$ kubectl get sts -n demo proxy-server -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+kubedb/proxysql:2.4.4-debian@sha256....
+
+$ kubectl get pods -n demo proxy-server-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+kubedb/proxysql:2.4.4-debian@sha256....
+```
+
+You can see from the above that our `ProxySQL` cluster has been updated to the new version, so the update process has completed successfully.
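+
+If you want to follow the intermediate steps of the update, you can also describe the ops request while it is progressing. This is optional, and the exact conditions shown will vary by cluster:
+
+```bash
+# Optional: inspect the phases the operator went through (output not shown here).
+$ kubectl describe proxysqlopsrequest -n demo proxyops-update
+```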
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete proxysql -n demo proxy-server +$ kubectl delete proxysqlopsrequest -n demo proxyops-update +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/overview/images/proxysql-update.png b/content/docs/v2024.1.31/guides/proxysql/update-version/overview/images/proxysql-update.png new file mode 100644 index 0000000000..a6c9067b44 Binary files /dev/null and b/content/docs/v2024.1.31/guides/proxysql/update-version/overview/images/proxysql-update.png differ diff --git a/content/docs/v2024.1.31/guides/proxysql/update-version/overview/index.md b/content/docs/v2024.1.31/guides/proxysql/update-version/overview/index.md new file mode 100644 index 0000000000..48dcb94d13 --- /dev/null +++ b/content/docs/v2024.1.31/guides/proxysql/update-version/overview/index.md @@ -0,0 +1,65 @@ +--- +title: Updating ProxySQL Overview +menu: + docs_v2024.1.31: + identifier: guides-proxysql-updating-overview + name: Overview + parent: guides-proxysql-updating + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# updating ProxySQL version Overview + +This guide will give you an overview on how KubeDB Ops Manager update the version of `ProxySQL` instance. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [ProxySQL](/docs/v2024.1.31/guides/proxysql/concepts/proxysql) + - [ProxySQLOpsRequest](/docs/v2024.1.31/guides/proxysql/concepts/opsrequest) + +## How update Process Works + +The following diagram shows how KubeDB Ops Manager used to update the version of `ProxySQL`. Open the image in a new tab to see the enlarged version. + +
+  updating Process of ProxySQL +
Fig: updating Process of ProxySQL
+
+ +The updating process consists of the following steps: + +1. At first, a user creates a `ProxySQL` Custom Resource (CR). + +2. `KubeDB` Community operator watches the `ProxySQL` CR. + +3. When the operator finds a `ProxySQL` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the version of the `ProxySQL` the user creates a `ProxySQLOpsRequest` CR with the desired version. + +5. `KubeDB` Enterprise operator watches the `ProxySQLOpsRequest` CR. + +6. When it finds a `ProxySQLOpsRequest` CR, it halts the `ProxySQL` object which is referred from the `ProxySQLOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `ProxySQL` object during the updating process. + +7. By looking at the target version from `ProxySQLOpsRequest` CR, `KubeDB` Enterprise operator updates the images of all the `StatefulSets`. After each image update, the operator performs some checks such as if the oplog is synced and database size is almost same or not. + +8. After successfully updating the `StatefulSets` and their `Pods` images, the `KubeDB` Enterprise operator updates the image of the `ProxySQL` object to reflect the updated state of the server. + +9. After successfully updating of `ProxySQL` object, the `KubeDB` Enterprise operator resumes the `ProxySQL` object so that the `KubeDB` Community operator can resume its usual operations. + +In the next doc, we are going to show a step by step guide on updating of a ProxySQL using update operation. \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/README.md b/content/docs/v2024.1.31/guides/redis/README.md new file mode 100644 index 0000000000..aed0ab57c2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/README.md @@ -0,0 +1,67 @@ +--- +title: Redis +menu: + docs_v2024.1.31: + identifier: rd-readme-redis + name: Redis + parent: rd-redis-guides + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +url: /docs/v2024.1.31/guides/redis/ +aliases: +- /docs/v2024.1.31/guides/redis/README/ +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +## Supported Redis Features +| Features | Community | Enterprise | +|------------------------------------------------------------------------------------|:---------:|:----------:| +| Clustering | ✓ | ✓ | +| Sentinel | ✓ | ✓ | +| Standalone | ✓ | ✓ | +| Authentication & Autorization | ✓ | ✓ | +| Persistent Volume | ✓ | ✓ | +| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ | ✓ | +| Instant Backup (Sentinel and Standalone Mode) | ✓ | ✓ | +| Scheduled Backup (Sentinel and Standalone Mode) | ✓ | ✓ | +| Builtin Prometheus Discovery | ✓ | ✓ | +| Using Prometheus operator | ✓ | ✓ | +| Automated Version Update | ✗ | ✓ | +| Automatic Vertical Scaling | ✗ | ✓ | +| Automated Horizontal Scaling | ✗ | ✓ | +| Automated db-configure Reconfiguration | ✗ | ✓ | +| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ | ✓ | +| Automated Volume Expansion | ✗ | ✓ | +| Autoscaling (vertically) | ✗ | ✓ | + + +## Life Cycle of a Redis Object + +

+  lifecycle +

+ +## User Guide + +- [Quickstart Redis](/docs/v2024.1.31/guides/redis/quickstart/quickstart) with KubeDB Operator. +- [Deploy Redis Cluster](/docs/v2024.1.31/guides/redis/clustering/redis-cluster) using KubeDB. +- Monitor your Redis server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Monitor your Redis server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus). +- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB. +- Use [kubedb cli](/docs/v2024.1.31/guides/redis/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis). +- Detail concepts of [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/redis/_index.md b/content/docs/v2024.1.31/guides/redis/_index.md new file mode 100644 index 0000000000..f90846b685 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/_index.md @@ -0,0 +1,22 @@ +--- +title: Redis +menu: + docs_v2024.1.31: + identifier: rd-redis-guides + name: Redis + parent: guides + weight: 10 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/_index.md b/content/docs/v2024.1.31/guides/redis/autoscaler/_index.md new file mode 100644 index 0000000000..87189d26bb --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/_index.md @@ -0,0 +1,22 @@ +--- +title: Autoscaling +menu: + docs_v2024.1.31: + identifier: rd-auto-scaling + name: Autoscaling + parent: rd-redis-guides + weight: 47 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/compute/_index.md b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/_index.md new file mode 100644 index 0000000000..7c6803e0d5 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/_index.md @@ -0,0 +1,22 @@ +--- +title: Compute Autoscaling +menu: + docs_v2024.1.31: + identifier: rd-compute-auto-scaling + name: Compute Autoscaling + parent: rd-auto-scaling + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/compute/overview.md b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/overview.md new file mode 100644 index 0000000000..7e3ba7131c --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/overview.md @@ -0,0 +1,68 @@ +--- +title: Redis Compute Autoscaling Overview +menu: + docs_v2024.1.31: + identifier: rd-auto-scaling-overview + name: Overview + parent: rd-compute-auto-scaling + weight: 10 
+menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Redis Compute Resource Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database compute resources i.e. cpu and memory using `redisautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) + - [RedisAutoscaler](/docs/v2024.1.31/guides/redis/concepts/autoscaler) + - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest) + +## How Compute Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `Redis` database components. Open the image in a new tab to see the enlarged version. + +
+  Compute Auto Scaling process of Redis +
Fig: Compute Auto Scaling process of Redis
+
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `Redis`/`RedisSentinel` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `Redis`/`RedisSentinel` CRO. + +3. When the operator finds a `Redis`/`RedisSentinel` CRO, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to set up autoscaling of the `Redis` database the user creates a `RedisAutoscaler` CRO with desired configuration. + +5. Then, in order to set up autoscaling of the `RedisSentinel` database the user creates a `RedisSentinelAutoscaler` CRO with desired configuration. + +6. `KubeDB` Autoscaler operator watches the `RedisAutoscaler` && `RedisSentinelAutoscaler` CRO. + +7. `KubeDB` Autoscaler operator generates recommendation using the modified version of kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for the database, as specified in the `RedisAutoscaler`/`RedisSentinelAutoscaler` CRO. + +8. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `RedisOpsRequest`/`RedisSentinelOpsRequest` CRO to scale the database to match the recommendation generated. + +9. `KubeDB` Ops-manager operator watches the `RedisOpsRequest`/`RedisSentinelOpsRequest` CRO. + +10. Then the `KubeDB` ops-manager operator will scale the database component vertically as specified on the `RedisOpsRequest`/`RedisSentinelOpsRequest` CRO. + +In the next docs, we are going to show a step-by-step guide on Autoscaling of various Redis database components using `RedisAutoscaler`/`RedisSentinelAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/compute/redis.md b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/redis.md new file mode 100644 index 0000000000..12cb21daec --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/redis.md @@ -0,0 +1,372 @@ +--- +title: Redis Autoscaling +menu: + docs_v2024.1.31: + identifier: rd-auto-scaling-standalone + name: Redis Autoscaling + parent: rd-compute-auto-scaling + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Autoscaling the Compute Resource of a Redis Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a Redis standalone database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
+  - [RedisAutoscaler](/docs/v2024.1.31/guides/redis/concepts/autoscaler)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/redis/autoscaler/compute/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Standalone Database
+
+Here, we are going to deploy a `Redis` standalone using a version supported by the `KubeDB` operator. Then we are going to apply a `RedisAutoscaler` to set up autoscaling.
+
+#### Deploy Redis standalone
+
+In this section, we are going to deploy a Redis standalone database with version `6.2.14`. Then, in the next section we will set up autoscaling for this database using the `RedisAutoscaler` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+> If you want to autoscale Redis in `Cluster` or `Sentinel` mode, just deploy a Redis database in the respective mode; the rest of the steps are the same.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: rd-standalone
+  namespace: demo
+spec:
+  version: "6.2.14"
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Redis` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/autoscaling/compute/rd-standalone.yaml
+redis.kubedb.com/rd-standalone created
+```
+
+Now, wait until `rd-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get rd -n demo
+NAME            VERSION   STATUS   AGE
+rd-standalone   6.2.14    Ready    2m53s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo rd-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the Redis resources,
+```bash
+$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the Redis instance.
+
+We are now ready to apply the `RedisAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute (cpu and memory) autoscaling using a RedisAutoscaler Object.
+
+#### Create RedisAutoscaler Object
+
+In order to set up compute resource autoscaling for this standalone database, we have to create a `RedisAutoscaler` CRO with our desired configuration.
Below is the YAML of the `RedisAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RedisAutoscaler
+metadata:
+  name: rd-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: rd-standalone
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    standalone:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+> If you want to autoscale Redis in Cluster mode, the field in `spec.compute` should be `cluster`, and for Sentinel it should be `sentinel`. The subfields are the same inside `spec.compute.standalone`, `spec.compute.cluster` and `spec.compute.sentinel`.
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on the `rd-standalone` database.
+- `spec.compute.standalone.trigger` specifies that compute resource autoscaling is enabled for this database.
+- `spec.compute.standalone.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling is initiated.
+- `spec.compute.standalone.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.standalone.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.standalone.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.standalone.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.standalone.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 2 fields. Know more about them here: [timeout](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest#spectimeout), [apply](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest#specapply).
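+
+As a quick worked example of the `resourceDiffPercentage` gate above (illustrative numbers, not taken from a real cluster): with a current CPU request of 400m and a recommendation of 460m, the relative difference is (460 - 400) / 400 = 15%, which is below our configured 20%, so no ops request would be created. The arithmetic can be reproduced in the shell:
+
+```bash
+# Illustrative only: relative difference between the current (400m) and
+# recommended (460m) CPU request, expressed as a percentage.
+$ awk 'BEGIN { printf "%.0f%%\n", (460 - 400) / 400 * 100 }'
+15%
+```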
+
+Let's create the `RedisAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/autoscaling/compute/rd-as-standalone.yaml
+redisautoscaler.autoscaling.kubedb.com/rd-as created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `redisautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get redisautoscaler -n demo
+NAME    AGE
+rd-as   102s
+
+$ kubectl describe redisautoscaler rd-as -n demo
+Name:         rd-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         RedisAutoscaler
+Metadata:
+  Creation Timestamp:  2023-02-09T10:02:26Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:compute:
+          .:
+          f:standalone:
+            .:
+            f:containerControlledValues:
+            f:controlledResources:
+            f:maxAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:minAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:podLifeTimeThreshold:
+            f:resourceDiffPercentage:
+            f:trigger:
+        f:databaseRef:
+        f:opsRequestOptions:
+          .:
+          f:apply:
+          f:timeout:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2023-02-09T10:02:26Z
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:checkpoints:
+        f:vpas:
+    Manager:         kubedb-autoscaler
+    Operation:       Update
+    Subresource:     status
+    Time:            2023-02-09T10:02:29Z
+  Resource Version:  839366
+  UID:               5a5dedc1-fbef-4afa-93f3-0ca8dfb8a30b
+Spec:
+  Compute:
+    Standalone:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:                     1
+        Memory:                  1Gi
+      Min Allowed:
+        Cpu:                     400m
+        Memory:                  400Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  rd-standalone
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  3m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+    Last Update Time:  2023-02-09T10:03:29Z
+    Memory Histogram:
+    Ref:
+      Container Name:   redis
+      Vpa Object Name:  rd-standalone
+    Version:            v3
+  Vpas:
+    Conditions:
+      Last Transition Time:  2023-02-09T10:02:29Z
+      Status:                False
+      Type:                  RecommendationProvided
+    Recommendation:
+    Vpa Name:  rd-standalone
+Events:        <none>
+```
+So, the `redisautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. Our autoscaler operator continuously watches the generated recommendation and creates a `redisopsrequest` based on the recommendations, if the database pods need to be scaled up or down.
+
+Let's watch the `redisopsrequest` in the demo namespace to see if any `redisopsrequest` object is created. After some time you'll see that a `redisopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME                         TYPE              STATUS        AGE
+rdops-rd-standalone-q2zozm   VerticalScaling   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME                         TYPE              STATUS       AGE
+rdops-rd-standalone-q2zozm   VerticalScaling   Successful   68s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
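+
+To see what the operator actually changed, you can optionally describe the generated ops request as well; note that the generated name will differ in your cluster:
+
+```bash
+# Optional: view the vertical scaling details recorded on the ops request.
+$ kubectl describe redisopsrequest -n demo rdops-rd-standalone-q2zozm
+```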
+
+Now, we are going to verify from the Pod and the Redis YAML whether the resources of the standalone database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo rd-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully auto-scaled the resources of the Redis standalone database.
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo rd/rd-standalone -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/rd-standalone patched
+
+$ kubectl delete rd -n demo rd-standalone
+redis.kubedb.com "rd-standalone" deleted
+
+$ kubectl delete redisautoscaler -n demo rd-as
+redisautoscaler.autoscaling.kubedb.com "rd-as" deleted
+```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/compute/sentinel.md b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/sentinel.md
new file mode 100644
index 0000000000..35fa764995
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/autoscaler/compute/sentinel.md
@@ -0,0 +1,401 @@
+---
+title: Sentinel Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: rd-auto-scaling-sentinel
+    name: Sentinel Autoscaling
+    parent: rd-compute-auto-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Autoscaling the Compute Resource of a Sentinel
+
+This guide will show you how to use `KubeDB` to autoscale the compute resources, i.e. cpu and memory, of a RedisSentinel instance.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RedisSentinel](/docs/v2024.1.31/guides/redis/concepts/redissentinel)
+  - [RedisAutoscaler](/docs/v2024.1.31/guides/redis/concepts/autoscaler)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Compute Resource Autoscaling Overview](/docs/v2024.1.31/guides/redis/autoscaler/compute/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Sentinel
+
+Here, we are going to deploy a `RedisSentinel` instance using a version supported by the `KubeDB` operator.
Then we are going to apply a `RedisSentinelAutoscaler` to set up autoscaling.
+
+#### Deploy RedisSentinel
+
+In this section, we are going to deploy a RedisSentinel instance with version `6.2.14`. Then, in the next section we will set up autoscaling for this database using the `RedisSentinelAutoscaler` CRD. Below is the YAML of the `RedisSentinel` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RedisSentinel
+metadata:
+  name: sen-demo
+  namespace: demo
+spec:
+  version: "6.2.14"
+  storageType: Durable
+  replicas: 3
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "300Mi"
+        limits:
+          cpu: "200m"
+          memory: "300Mi"
+  terminationPolicy: WipeOut
+```
+
+Let's create the `RedisSentinel` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/autoscaling/compute/sentinel.yaml
+redissentinel.kubedb.com/sen-demo created
+```
+
+Now, wait until `sen-demo` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get redissentinel -n demo
+NAME       VERSION   STATUS   AGE
+sen-demo   6.2.14    Ready    86s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo sen-demo-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+Let's check the RedisSentinel resources,
+```bash
+$ kubectl get redissentinel -n demo sen-demo -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  },
+  "requests": {
+    "cpu": "200m",
+    "memory": "300Mi"
+  }
+}
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the RedisSentinel instance.
+
+We are now ready to apply the `RedisSentinelAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute (cpu and memory) autoscaling using a RedisSentinelAutoscaler Object.
+
+#### Create RedisSentinelAutoscaler Object
+
+In order to set up compute resource autoscaling for this sentinel instance, we have to create a `RedisSentinelAutoscaler` CRO with our desired configuration. Below is the YAML of the `RedisSentinelAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RedisSentinelAutoscaler
+metadata:
+  name: sen-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: sen-demo
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    sentinel:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on the `sen-demo` database.
+- `spec.compute.sentinel.trigger` specifies that compute resource autoscaling is enabled for this database.
+- `spec.compute.sentinel.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling is initiated.
+- `spec.compute.sentinel.resourceDiffPercentage` specifies the minimum resource difference in percentage.
+  If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.sentinel.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.sentinel.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.sentinel.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.sentinel.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 2 fields. Know more about them here: [timeout](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest#spectimeout), [apply](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest#specapply).
+
+If it were an in-memory database, we could also autoscale the in-memory resources using the Redis compute autoscaler.
+
+Let's create the `RedisSentinelAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/compute/autoscaling/sen-as.yaml
+redissentinelautoscaler.autoscaling.kubedb.com/sen-as created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `redissentinelautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get redissentinelautoscaler -n demo
+NAME     AGE
+sen-as   102s
+
+$ kubectl describe redissentinelautoscaler sen-as -n demo
+Name:         sen-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         RedisSentinelAutoscaler
+Metadata:
+  Creation Timestamp:  2023-02-09T11:14:18Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:compute:
+          .:
+          f:sentinel:
+            .:
+            f:containerControlledValues:
+            f:controlledResources:
+            f:maxAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:minAllowed:
+              .:
+              f:cpu:
+              f:memory:
+            f:podLifeTimeThreshold:
+            f:resourceDiffPercentage:
+            f:trigger:
+        f:databaseRef:
+        f:opsRequestOptions:
+          .:
+          f:apply:
+          f:timeout:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2023-02-09T11:14:18Z
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:checkpoints:
+        f:conditions:
+        f:vpas:
+    Manager:         kubedb-autoscaler
+    Operation:       Update
+    Subresource:     status
+    Time:            2023-02-09T11:15:20Z
+  Resource Version:  845618
+  UID:               44da50a4-6e4f-49fa-b7e4-6c7f83c3e6c4
+Spec:
+  Compute:
+    Sentinel:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:                     1
+        Memory:                  1Gi
+      Min Allowed:
+        Cpu:                     400m
+        Memory:                  400Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  sen-demo
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  3m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             10000
+      Reference Timestamp:  2023-02-09T00:00:00Z
+      Total Weight:         0.4150619553793766
+    First Sample Start:     2023-02-09T11:14:17Z
+    Last Sample Start:      2023-02-09T11:14:32Z
+    Last Update Time:       2023-02-09T11:14:35Z
+    Memory Histogram:
+      Reference Timestamp:  2023-02-10T00:00:00Z
+    Ref:
+      Container Name:     redissentinel
+      Vpa Object Name:    sen-demo
+    Total Samples Count:  3
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2023-02-09T11:15:20Z
+    Message:               Successfully created RedisSentinelOpsRequest demo/rdsops-sen-demo-5emii6
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2023-02-09T11:14:35Z
      Status:                True
      Type:                  RecommendationProvided
    Recommendation:
      Container Recommendations:
        Container Name:  redissentinel
        Lower Bound:
          Cpu:     400m
          Memory:  400Mi
        Target:
          Cpu:     400m
          Memory:  400Mi
        Uncapped Target:
          Cpu:     100m
          Memory:  262144k
        Upper Bound:
          Cpu:     1
          Memory:  1Gi
    Vpa Name:      sen-demo
+Events:           <none>
+```
+So, the `redissentinelautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. Our autoscaler operator continuously watches the generated recommendation and creates a `redissentinelopsrequest` based on the recommendations, if the database pods need to be scaled up or down.
+
+Let's watch the `redissentinelopsrequest` in the demo namespace to see if any `redissentinelopsrequest` object is created. After some time you'll see that a `redissentinelopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get redissentinelopsrequest -n demo
+Every 2.0s: kubectl get redissentinelopsrequest -n demo
+NAME                     TYPE              STATUS        AGE
+rdsops-sen-demo-5emii6   VerticalScaling   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get redissentinelopsrequest -n demo
+Every 2.0s: kubectl get redissentinelopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+rdsops-sen-demo-5emii6   VerticalScaling   Successful   10s
+```
+
+We can see from the above output that the `RedisSentinelOpsRequest` has succeeded.
+
+Now, we are going to verify from the Pod and the RedisSentinel YAML whether the resources of the sentinel have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sen-demo-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get redissentinel -n demo sen-demo -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully auto-scaled the resources of the RedisSentinel instance.
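+
+If you want to see the recommendation this decision was based on, you can read it straight from the autoscaler status. The jsonpath below is illustrative and follows the field names shown in the describe output above:
+
+```bash
+# Optional: print the recommendation currently recorded on the autoscaler.
+$ kubectl get redissentinelautoscaler -n demo sen-as -o jsonpath='{.status.vpas[0].recommendation}{"\n"}'
+```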
+ + + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl patch -n demo redissentinel/sen-demo -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +redissentinel.kubedb.com/sen-demo patched + +$ kubectl delete redissentinel -n demo sen-demo +redissentinel.kubedb.com "sen-demo" deleted + +$ kubectl delete redissentinelautoscaler -n demo sen-as +redissentinelautoscaler.autoscaling.kubedb.com "sen-as" deleted +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/storage/_index.md b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/_index.md new file mode 100644 index 0000000000..c45f5ace5b --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/_index.md @@ -0,0 +1,22 @@ +--- +title: Storage Autoscaling +menu: + docs_v2024.1.31: + identifier: rd-storage-auto-scaling + name: Storage Autoscaling + parent: rd-auto-scaling + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/storage/overview.md b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/overview.md new file mode 100644 index 0000000000..0e6d163918 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/overview.md @@ -0,0 +1,68 @@ +--- +title: Redis Storage Autoscaling Overview +menu: + docs_v2024.1.31: + identifier: rd-storage-auto-scaling-overview + name: Overview + parent: rd-storage-auto-scaling + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Redis Vertical Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database storage using `redisautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) + - [RedisAutoscaler](/docs/v2024.1.31/guides/redis/concepts/autoscaler) + - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest) + +## How Storage Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `Redis` database components. Open the image in a new tab to see the enlarged version. + +
+  Storage Auto Scaling process of Redis +
Fig: Storage Auto Scaling process of Redis
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `Redis` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Redis` CR.
+
+3. When the operator finds a `Redis` CR, it creates the required number of `StatefulSets` and related resources like secrets, services, etc.
+
+4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration.
+
+5. Then, in order to set up storage autoscaling of the various components (i.e. standalone, cluster, sentinel) of the `Redis` database, the user creates a `RedisAutoscaler` CRO with the desired configuration.
+
+6. `KubeDB` Autoscaler operator watches the `RedisAutoscaler` CRO.
+
+7. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if their usage exceeds the specified threshold.
+   If the usage exceeds the specified threshold, then `KubeDB` Autoscaler operator creates a `RedisOpsRequest` to expand the storage of the database.
+
+8. `KubeDB` Ops-manager operator watches the `RedisOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will expand the storage of the database component as specified in the `RedisOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of various Redis database components using the `RedisAutoscaler` CRD. diff --git a/content/docs/v2024.1.31/guides/redis/autoscaler/storage/redis.md b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/redis.md new file mode 100644 index 0000000000..934c9a3f99 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/autoscaler/storage/redis.md @@ -0,0 +1,290 @@
+---
+title: Redis Autoscaling
+menu:
+  docs_v2024.1.31:
+    identifier: rd-storage-auto-scaling-standalone
+    name: Redis Autoscaling
+    parent: rd-storage-auto-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Storage Autoscaling of a Redis Standalone Database
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a Redis standalone database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
+  - [RedisAutoscaler](/docs/v2024.1.31/guides/redis/concepts/autoscaler)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Storage Autoscaling Overview](/docs/v2024.1.31/guides/redis/autoscaler/storage/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Standalone Database
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to `true`, so this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm).
+
+Now, we are going to deploy a `Redis` standalone using a supported version by `KubeDB` operator. Then we are going to apply `RedisAutoscaler` to set up autoscaling.
+
+#### Deploy Redis standalone
+
+> If you want to autoscale Redis in `Cluster` or `Sentinel` mode, just deploy a Redis database in the respective mode; the rest of the steps are the same.
+
+
+In this section, we are going to deploy a Redis standalone database with version `6.2.14`. Then, in the next section we will set up autoscaling for this database using the `RedisAutoscaler` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: rd-standalone
+  namespace: demo
+spec:
+  version: "6.2.14"
+  storageType: Durable
+  storage:
+    storageClassName: topolvm-provisioner
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `Redis` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/autoscaling/storage/rd-standalone.yaml
+redis.kubedb.com/rd-standalone created
+```
+
+Now, wait until `rd-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get rd -n demo
+NAME            VERSION   STATUS   AGE
+rd-standalone   6.2.14    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
+pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428   1Gi        RWO            Delete           Bound    demo/datadir-rd-standalone-0   topolvm-provisioner            7m41s
+```
+
+You can see the statefulset has 1Gi storage, and the capacity of the persistent volume is also 1Gi.
+
+We are now ready to apply the `RedisAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a RedisAutoscaler Object.
+
+#### Create RedisAutoscaler Object
+
+In order to set up storage autoscaling for this standalone database, we have to create a `RedisAutoscaler` CRO with our desired configuration.
Below is the YAML of the `RedisAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RedisAutoscaler
+metadata:
+  name: rd-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: rd-standalone
+  storage:
+    standalone:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+> If you want to autoscale Redis in Cluster mode, the field in `spec.storage` should be `cluster` and for sentinel it should be `sentinel`. The subfields are the same inside `spec.storage.standalone`, `spec.storage.cluster` and `spec.storage.sentinel`
+
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are setting up storage autoscaling for the `rd-standalone` database.
+- `spec.storage.standalone.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.standalone.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.standalone.scalingThreshold` specifies the scaling threshold; storage will be expanded by `50%` of the current amount.
+- It has another field `spec.storage.standalone.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. The default value is `Online`.
+
+Let's create the `RedisAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/autoscaling/storage/rd-as.yaml
+redisautoscaler.autoscaling.kubedb.com/rd-as created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `redisautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get redisautoscaler -n demo
+NAME    AGE
+rd-as   102s
+
+$ kubectl describe redisautoscaler rd-as -n demo
+Name: rd-as
+Namespace: demo
+Labels: 
+Annotations: 
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: RedisAutoscaler
+Metadata:
+ Creation Timestamp: 2023-02-09T11:02:26Z
+ Generation: 1
+ Managed Fields:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:databaseRef:
+ .:
+ f:name:
+ f:storage:
+ .:
+ f:standalone:
+ .:
+ f:scalingThreshold:
+ f:trigger:
+ f:usageThreshold:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2023-02-09T11:02:26Z
+ Resource Version: 134423
+ Self Link: /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/redisautoscalers/rd-as
+ UID: 999a2dc9-7eb7-4ed2-9e90-d3f8b21c091a
+Spec:
+ Database Ref:
+ Name: rd-standalone
+ Storage:
+ Standalone:
+ Scaling Threshold: 50
+ Trigger: On
+ Usage Threshold: 60
+Events: 
+```
+So, the `redisautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see whether storage autoscaling works.
+
+Let's exec into the database pod and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo rd-standalone-0 -- bash
+root@rd-standalone-0:/# df -h /data
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0  1014M  334M  681M  33% /data/db
+root@rd-standalone-0:/# dd if=/dev/zero of=/data/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.359202 s, 1.5 GB/s
+root@rd-standalone-0:/# df -h /data
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0  1014M  835M  180M  83% /data/db
+```
+
+So, from the above output we can see that the storage usage is 83%, which exceeds the `usageThreshold` of 60%.
+
+Let's watch the `redisopsrequest` in the demo namespace to see if any `redisopsrequest` object is created. After some time you'll see that a `redisopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME                         TYPE              STATUS        AGE
+rdops-rd-standalone-p27c11   VolumeExpansion   Progressing   26s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME                         TYPE              STATUS       AGE
+rdops-rd-standalone-p27c11   VolumeExpansion   Successful   73s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+
+Now, we are going to verify from the `StatefulSet` and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1594884096"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
+pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428   2Gi        RWO            Delete           Bound    demo/datadir-rd-standalone-0   topolvm-provisioner            26m
+```
+
+Here, the statefulset now requests `1594884096` bytes (roughly `1.5Gi`), i.e. about 50% more than the initial `1Gi`, which matches the `scalingThreshold` of `50%`; the persistent volume shows `2Gi` as the provisioner rounds the requested size up.
+
+The above output verifies that we have successfully autoscaled the volume of the Redis standalone database.
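+
+You can also confirm the expansion from the PVC itself. A quick check, assuming the claim name `datadir-rd-standalone-0` from the `kubectl get pv` output above (the reported size is illustrative):
+
+```bash
+# Read the expanded capacity directly from the PVC status; the claim name is
+# taken from the sample output above and may differ in your cluster.
+$ kubectl get pvc -n demo datadir-rd-standalone-0 -o jsonpath='{.status.capacity.storage}'
+2Gi
+```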
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl patch -n demo rd/rd-standalone -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +redis.kubedb.com/rd-standalone patched + +$ kubectl delete rd -n demo rd-standalone +redis.kubedb.com "rd-standalone" deleted + +$ kubectl delete redisautoscaler -n demo rd-as +redisautoscaler.autoscaling.kubedb.com "rd-as" deleted +``` diff --git a/content/docs/v2024.1.31/guides/redis/backup/_index.md b/content/docs/v2024.1.31/guides/redis/backup/_index.md new file mode 100644 index 0000000000..281cc0604e --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/_index.md @@ -0,0 +1,22 @@ +--- +title: Backup and Restore Redis +menu: + docs_v2024.1.31: + identifier: rd-guides-redis-backup + name: Backup + parent: rd-redis-guides + weight: 50 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/backupblueprint.yaml b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/backupblueprint.yaml new file mode 100644 index 0000000000..ec76d548e6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/backupblueprint.yaml @@ -0,0 +1,17 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupBlueprint +metadata: + name: redis-backup-template +spec: + # ============== Blueprint for Repository ========================== + backend: + gcs: + bucket: stash-testing + prefix: redis-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME} + storageSecretName: gcs-secret + # ============== Blueprint for BackupConfiguration ================= + schedule: "*/5 * * * *" + retentionPolicy: + name: 'keep-last-5' + keepLast: 5 + prune: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-1.yaml b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-1.yaml new file mode 100644 index 0000000000..3342db0e00 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-1.yaml @@ -0,0 +1,18 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: sample-redis-1 + namespace: demo-1 + annotations: + stash.appscode.com/backup-blueprint: redis-backup-template +spec: + version: 6.0.20 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-2.yaml b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-2.yaml new file mode 100644 index 0000000000..f6933b9887 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-2.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: sample-redis-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: redis-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: 6.0.20 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete \ No newline at 
end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-3.yaml b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-3.yaml new file mode 100644 index 0000000000..d98942d0c6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/examples/sample-redis-3.yaml @@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis-3
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: redis-backup-template
+    params.stash.appscode.com/args: "-db 0"
+spec:
+  version: 6.0.20
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-1.png b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-1.png new file mode 100644 index 0000000000..fafe699957 Binary files /dev/null and b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-1.png differ diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-2.png b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-2.png new file mode 100644 index 0000000000..3ca8153635 Binary files /dev/null and b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-2.png differ diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-3.png b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-3.png new file mode 100644 index 0000000000..6da0f056c5 Binary files /dev/null and b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/images/sample-redis-3.png differ diff --git a/content/docs/v2024.1.31/guides/redis/backup/auto-backup/index.md b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/index.md new file mode 100644 index 0000000000..25f74bea63 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/auto-backup/index.md @@ -0,0 +1,680 @@
+---
+title: Redis Auto-Backup | Stash
+description: Backup Redis using Stash Auto-Backup
+menu:
+  docs_v2024.1.31:
+    identifier: rd-auto-backup-kubedb
+    name: Auto-Backup
+    parent: rd-guides-redis-backup
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup Redis using Stash Auto-Backup
+
+Stash can be configured to automatically back up any Redis database in your cluster. Stash enables cluster administrators to deploy backup blueprints ahead of time so that database owners can easily back up their databases with just a few annotations.
+
+In this tutorial, we are going to show how you can configure a backup blueprint for Redis databases in your cluster and back them up with a few annotations.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- If you are not familiar with how Stash backs up and restores Redis databases, please check the following guide [here](/docs/v2024.1.31/guides/redis/backup/overview/).
+- If you are not familiar with how auto-backup works in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/overview/).
+- If you are not familiar with the available auto-backup options for databases in Stash, please check the following guide [here](https://stash.run/docs/latest/guides/auto-backup/database/).
+
+You should be familiar with the following `Stash` concepts:
+
+- [BackupBlueprint](https://stash.run/docs/latest/concepts/crds/backupblueprint/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [BackupSession](https://stash.run/docs/latest/concepts/crds/backupsession/)
+- [Repository](https://stash.run/docs/latest/concepts/crds/repository/)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+
+
+In this tutorial, we are going to back up three different Redis databases in three different namespaces named `demo-1`, `demo-2`, and `demo-3`. Create the namespaces as below if you haven't done it already.
+
+```bash
+❯ kubectl create ns demo-1
+namespace/demo-1 created
+
+❯ kubectl create ns demo-2
+namespace/demo-2 created
+
+❯ kubectl create ns demo-3
+namespace/demo-3 created
+```
+
+When you install Stash, it installs the necessary addons to back up Redis. Verify that the Redis addons were installed properly using the following command.
+
+```bash
+❯ kubectl get tasks.stash.appscode.com | grep redis
+redis-backup-5.0.13    62m
+redis-backup-6.2.5     62m
+redis-restore-5.0.13   62m
+redis-restore-6.2.5    62m
+```
+
+## Prepare Backup Blueprint
+
+To back up a Redis database using Stash, you have to create a `Secret` containing the backend credentials, a `Repository` containing the backend information, and a `BackupConfiguration` containing the schedule and target information. A `BackupBlueprint` allows you to specify a template for the `Repository` and the `BackupConfiguration`.
+
+The `BackupBlueprint` is a non-namespaced CRD. So, once you have created a `BackupBlueprint`, you can use it to back up any Redis database in any namespace just by creating the storage `Secret` in that namespace and adding a few annotations to your Redis CRO. Then, Stash will automatically create a `Repository` and a `BackupConfiguration` according to the template to back up the database.
+
+Below is the `BackupBlueprint` object that we are going to use in this tutorial,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupBlueprint
+metadata:
+  name: redis-backup-template
+spec:
+  # ============== Blueprint for Repository ==========================
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: redis-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
+    storageSecretName: gcs-secret
+  # ============== Blueprint for BackupConfiguration =================
+  schedule: "*/5 * * * *"
+  retentionPolicy:
+    name: 'keep-last-5'
+    keepLast: 5
+    prune: true
+```
+Here, we are using a GCS bucket as our backend. We are providing `gcs-secret` at the `storageSecretName` field. Hence, we have to create a secret named `gcs-secret` with the access credentials of our bucket in every namespace where we want to enable backup through this blueprint.
+
+Notice the `prefix` field of the `backend` section. We have used some variables in the form `${VARIABLE_NAME}`.
Stash will automatically resolve those variables from the database information to make the backend prefix unique for each database instance.
+
+Let's create the `BackupBlueprint` we have shown above,
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/auto-backup/examples/backupblueprint.yaml
+backupblueprint.stash.appscode.com/redis-backup-template created
+```
+
+Now, we are ready to back up our Redis databases using a few annotations. You can check the available auto-backup annotations for a database from [here](https://stash.run/docs/latest/guides/auto-backup/database/#available-auto-backup-annotations-for-database).
+
+
+## Auto-backup with default configurations
+
+In this section, we are going to back up a Redis database in the `demo-1` namespace. We are going to use the default configurations specified in the `BackupBlueprint`.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in `demo-1` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ echo -n 'changeit' > RESTIC_PASSWORD
+❯ echo -n '' > GOOGLE_PROJECT_ID
+❯ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+❯ kubectl create secret generic -n demo-1 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Create Database
+
+Now, we are going to create a Redis CRO in `demo-1` namespace. Below is the YAML of the Redis object that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis-1
+  namespace: demo-1
+  annotations:
+    stash.appscode.com/backup-blueprint: redis-backup-template
+spec:
+  version: 6.0.20
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+Notice the `annotations` section. We are pointing to the `BackupBlueprint` that we have created earlier through the `stash.appscode.com/backup-blueprint` annotation. Stash will watch this annotation and create a `Repository` and a `BackupConfiguration` according to the `BackupBlueprint`.
+
+Let's create the above Redis CRO,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/auto-backup/examples/sample-redis-1.yaml
+redis.kubedb.com/sample-redis-1 created
+```
+
+Now, let's insert some sample data into it.
+
+```bash
+❯ export PASSWORD=$(kubectl get secrets -n demo-1 sample-redis-1-auth -o jsonpath='{.data.\password}' | base64 -d)
+❯ kubectl exec -it -n demo-1 sample-redis-1-0 -- redis-cli -a $PASSWORD
+Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+127.0.0.1:6379> set key1 value1
+OK
+127.0.0.1:6379> get key1
+"value1"
+127.0.0.1:6379> exit
+```
+
+### Verify Auto-backup configured
+
+In this section, we are going to verify whether Stash has created the respective `Repository` and `BackupConfiguration` for our Redis database or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our Redis or not.
+
+```bash
+❯ kubectl get repository -n demo-1
+NAME                 INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-redis-1                                                               22s
+```
+
+Now, let's check the YAML of the `Repository`.
+ +```yaml +❯ kubectl get repository -n demo-1 app-sample-redis-1 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-sample-redis-1 + namespace: demo-1 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: redis-backup/demo-1/redis/sample-redis-1 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our Redis in `demo-1` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-1 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-redis-1 redis-backup-6.2.5 */5 * * * * Ready 76s +``` + +Now, let's check the YAML of the `BackupConfiguration`. + + +```yaml +❯ kubectl get backupconfiguration -n demo-1 app-sample-redis-1 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-redis-1 + namespace: demo-1 + ... +spec: + driver: Restic + repository: + name: app-sample-redis-1 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis-1 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-01-29T13:59:57Z" + message: Repository demo-1/app-sample-redis-1 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-01-29T13:59:57Z" + message: Backend Secret demo-1/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-01-29T13:59:57Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-redis-1 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-01-29T13:59:57Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 + +``` + +Notice the `target` section. Stash has automatically added the Redis as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-1 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-redis-1-1627567808 BackupConfiguration app-sample-redis-1 0s +app-sample-redis-1-1627567808 BackupConfiguration app-sample-redis-1 Running 22s +app-sample-redis-1-1627567808 BackupConfiguration app-sample-redis-1 Succeeded 1m28.696008079s 88s +``` + +Once the backup has been completed successfully, you should see the backed up data has been stored in the bucket at the directory pointed by the `prefix` field of the `Repository`. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ +## Auto-backup with a custom schedule + +In this section, we are going to backup a Redis database of `demo-2` namespace. This time, we are going to overwrite the default schedule used in the `BackupBlueprint`. + +### Create Storage Secret + +At first, let's create the `gcs-secret` in `demo-2` namespace with the access credentials to our GCS bucket. + +```bash +❯ kubectl create secret generic -n demo-2 gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +### Deploy Database + +Let's deploy a Redis database named `sample-redis-2` in the `demo-2` namespace. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: sample-redis-2 + namespace: demo-2 + annotations: + stash.appscode.com/backup-blueprint: redis-backup-template + stash.appscode.com/schedule: "*/3 * * * *" +spec: + version: 6.0.20 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + terminationPolicy: Delete +``` + +Notice the `annotations` section. This time, we have passed a schedule via `stash.appscode.com/schedule` annotation along with the `stash.appscode.com/backup-blueprint` annotation. + +Let's create the above Redis CRD, + +```bash +❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/auto-backup/examples/sample-redis-2.yaml +redis.kubedb.com/sample-redis-2 created +``` + +Now, let's insert some sample data into it. + +```bash +❯ export PASSWORD=$(kubectl get secrets -n demo-2 sample-redis-2-auth -o jsonpath='{.data.\password}' | base64 -d) +❯ kubectl exec -it -n demo-2 sample-redis-2-0 -- redis-cli -a $PASSWORD +Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. +127.0.0.1:6379> set key1 value1 +OK +127.0.0.1:6379> get key1 +"value1" +127.0.0.1:6379> exit +``` + +### Verify Auto-backup configured + +Now, let's verify whether the auto-backup has been configured properly or not. + +#### Verify Repository + +At first, let's verify whether Stash has created a `Repository` for our Redis or not. + +```bash +❯ kubectl get repository -n demo-2 +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +app-sample-redis-2 29s +``` + +Now, let's check the YAML of the `Repository`. + +```yaml +❯ kubectl get repository -n demo-2 app-sample-redis-2 -o yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: app-sample-redis-2 + namespace: demo-2 + ... +spec: + backend: + gcs: + bucket: stash-testing + prefix: redis-backup/demo-2/redis/sample-redis-2 + storageSecretName: gcs-secret +``` + +Here, you can see that Stash has resolved the variables in `prefix` field and substituted them with the equivalent information from this new database. + +#### Verify BackupConfiguration + +If everything goes well, Stash should create a `BackupConfiguration` for our Redis in `demo-2` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command, + +```bash +❯ kubectl get backupconfiguration -n demo-2 +NAME TASK SCHEDULE PAUSED PHASE AGE +app-sample-redis-2 redis-backup-6.2.5 */3 * * * * Ready 64s +``` + +Now, let's check the YAML of the `BackupConfiguration`. 
+ +```yaml +❯ kubectl get backupconfiguration -n demo-2 app-sample-redis-2 -o yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: app-sample-redis-2 + namespace: demo-2 + ... +spec: + driver: Restic + repository: + name: app-sample-redis-2 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/3 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis-2 + tempDir: {} +status: + conditions: + - lastTransitionTime: "2022-01-29T14:17:31Z" + message: Repository demo-2/app-sample-redis-2 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2022-01-29T14:17:31Z" + message: Backend Secret demo-2/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2022-01-29T14:17:31Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-redis-2 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2022-01-29T14:17:31Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `schedule` section. This time the `BackupConfiguration` has been created with the schedule we have provided via annotation. + +Also, notice the `target` section. Stash has automatically added the new Redis as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-2 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-redis-2-1627568283 BackupConfiguration app-sample-redis-2 Running 86s +app-sample-redis-2-1627568283 BackupConfiguration app-sample-redis-2 Succeeded 1m33.522226054s 93s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+
+
+## Auto-backup with custom parameters
+
+In this section, we are going to back up a Redis database in the `demo-3` namespace. This time, we are going to pass some parameters for the Task through the annotations.
+
+### Create Storage Secret
+
+At first, let's create the `gcs-secret` in `demo-3` namespace with the access credentials to our GCS bucket.
+
+```bash
+❯ kubectl create secret generic -n demo-3 gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+### Deploy Database
+
+Let's deploy a Redis database named `sample-redis-3` in the `demo-3` namespace.
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis-3
+  namespace: demo-3
+  annotations:
+    stash.appscode.com/backup-blueprint: redis-backup-template
+    params.stash.appscode.com/args: "-db 0"
+spec:
+  version: 6.0.20
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: Delete
+```
+
+Notice the `annotations` section. This time, we have passed an argument via the `params.stash.appscode.com/args` annotation along with the `stash.appscode.com/backup-blueprint` annotation.
+
+Let's create the above Redis CRD,
+
+```bash
+❯ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/auto-backup/examples/sample-redis-3.yaml
+redis.kubedb.com/sample-redis-3 created
+```
+
+Now, let's insert some sample data into it.
+
+```bash
+❯ export PASSWORD=$(kubectl get secrets -n demo-3 sample-redis-3-auth -o jsonpath='{.data.\password}' | base64 -d)
+❯ kubectl exec -it -n demo-3 sample-redis-3-0 -- redis-cli -a $PASSWORD
+Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+127.0.0.1:6379> set key1 value1
+OK
+127.0.0.1:6379> get key1
+"value1"
+127.0.0.1:6379> exit
+```
+
+### Verify Auto-backup configured
+
+Now, let's verify whether the auto-backup resources have been created or not.
+
+#### Verify Repository
+
+At first, let's verify whether Stash has created a `Repository` for our Redis or not.
+
+```bash
+❯ kubectl get repository -n demo-3
+NAME                 INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+app-sample-redis-3                                                               29s
+```
+
+Now, let's check the YAML of the `Repository`.
+
+```yaml
+❯ kubectl get repository -n demo-3 app-sample-redis-3 -o yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: app-sample-redis-3
+  namespace: demo-3
+  ...
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: redis-backup/demo-3/redis/sample-redis-3
+    storageSecretName: gcs-secret
+```
+
+Here, you can see that Stash has resolved the variables in the `prefix` field and substituted them with the equivalent information from this new database.
+
+#### Verify BackupConfiguration
+
+If everything goes well, Stash should create a `BackupConfiguration` for our Redis in `demo-3` namespace and the phase of that `BackupConfiguration` should be `Ready`. Verify the `BackupConfiguration` crd by the following command,
+
+```bash
+❯ kubectl get backupconfiguration -n demo-3
+NAME                 TASK                 SCHEDULE      PAUSED   PHASE   AGE
+app-sample-redis-3   redis-backup-6.2.5   */5 * * * *            Ready   62s
+```
+
+Now, let's check the YAML of the `BackupConfiguration`.
+
+```yaml
+❯ kubectl get backupconfiguration -n demo-3 app-sample-redis-3 -o yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: app-sample-redis-3
+  namespace: demo-3
+  ...
+spec: + driver: Restic + repository: + name: app-sample-redis-3 + retentionPolicy: + keepLast: 5 + name: keep-last-5 + prune: true + runtimeSettings: {} + schedule: '*/5 * * * *' + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis-3 + task: + params: + - name: args + value: "-db 0" + tempDir: {} +status: + conditions: + - lastTransitionTime: "2021-07-29T14:23:58Z" + message: Repository demo-3/app-sample-redis-3 exist. + reason: RepositoryAvailable + status: "True" + type: RepositoryFound + - lastTransitionTime: "2021-07-29T14:23:58Z" + message: Backend Secret demo-3/gcs-secret exist. + reason: BackendSecretAvailable + status: "True" + type: BackendSecretFound + - lastTransitionTime: "2021-07-29T14:23:58Z" + message: Backup target appcatalog.appscode.com/v1alpha1 appbinding/sample-redis-3 + found. + reason: TargetAvailable + status: "True" + type: BackupTargetFound + - lastTransitionTime: "2021-07-29T14:23:58Z" + message: Successfully created backup triggering CronJob. + reason: CronJobCreationSucceeded + status: "True" + type: CronJobCreated + observedGeneration: 1 +``` + +Notice the `task` section. The `args` parameter that we had passed via annotations has been added to the `params` section. + +Also, notice the `target` section. Stash has automatically added the new Redis as the target of this `BackupConfiguration`. + +#### Verify Backup + +Now, let's wait for a backup run to complete. You can watch for `BackupSession` as below, + +```bash +❯ kubectl get backupsession -n demo-3 -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +app-sample-redis-3-1627568709 BackupConfiguration app-sample-redis-3 Running 20s +app-sample-redis-3-1627568709 BackupConfiguration app-sample-redis-3 Succeeded 1m43.931692282s 103s +``` + +Once the backup has been completed successfully, you should see that Stash has created a new directory as pointed by the `prefix` field of the new `Repository` and stored the backed up data there. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ +## Cleanup + +To cleanup the resources created by this tutorial, run the following commands, + +```bash +# cleanup sample-redis-1 resources +❯ kubectl delete redis sample-redis-1 -n demo-1 +❯ kubectl delete repository -n demo-1 --all + +# cleanup sample-redis-2 resources +❯ kubectl delete redis sample-redis-2 -n demo-2 +❯ kubectl delete repository -n demo-2 --all + +# cleanup sample-redis-3 resources +❯ kubectl delete redis sample-redis-3 -n demo-3 +❯ kubectl delete repository -n demo-3 --all + +# cleanup BackupBlueprint +❯ kubectl delete backupblueprint redis-backup-template +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-backup-job.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-backup-job.yaml new file mode 100644 index 0000000000..f9fca0a4a2 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-backup-job.yaml @@ -0,0 +1,27 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-redis-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-restore-job.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-restore-job.yaml new file mode 100644 index 0000000000..76e83e5034 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/limits-for-restore-job.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/multiple-retention-policy.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/multiple-retention-policy.yaml new file mode 100644 index 0000000000..976478d8ce --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/multiple-retention-policy.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-redis-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + retentionPolicy: + name: sample-redis-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-backup.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-backup.yaml new file mode 100644 index 0000000000..731db56dd4 --- /dev/null +++ 
b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-backup.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-redis-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + task: + params: + - name: args + value: "-db 1" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-restore.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-restore.yaml new file mode 100644 index 0000000000..20cd1fa1ec --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/passing-args-to-restore.yaml @@ -0,0 +1,19 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + task: + params: + - name: args + value: "--pipe-timeout 300" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + rules: + - snapshots: [latest] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/restore-specific-snapshot.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/restore-specific-snapshot.yaml new file mode 100644 index 0000000000..8b64211a82 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/restore-specific-snapshot.yaml @@ -0,0 +1,15 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + rules: + - snapshots: [4bc21d6f] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-backup-as-a-specific-user.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-backup-as-a-specific-user.yaml new file mode 100644 index 0000000000..5910e760b1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-backup-as-a-specific-user.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-redis-backup + namespace: demo +spec: + schedule: "*/2 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-restore-as-a-specific-user.yaml b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-restore-as-a-specific-user.yaml new file mode 100644 index 0000000000..2a882b5c4c --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/examples/run-restore-as-a-specific-user.yaml @@ -0,0 +1,20 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + 
kind: AppBinding
+      name: sample-redis
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  rules:
+    - snapshots: [latest] \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/backup/customization/index.md b/content/docs/v2024.1.31/guides/redis/backup/customization/index.md new file mode 100644 index 0000000000..82ea4223d8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/customization/index.md @@ -0,0 +1,285 @@
+---
+title: Redis Backup Customization | Stash
+description: Customizing Redis Backup and Restore process with Stash
+menu:
+  docs_v2024.1.31:
+    identifier: rd-backup-customization-kubedb
+    name: Customizing Backup & Restore Process
+    parent: rd-guides-redis-backup
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Customizing Backup and Restore Process
+
+Stash provides rich customization support for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations.
+
+
+## Customizing Backup Process
+
+In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup process as a specific user, specifying Memory/CPU limits for the backup job, etc.
+
+### Passing arguments to the backup process
+
+Stash Redis addon uses [redis-dump-go](https://github.com/yannh/redis-dump-go) for backup. You can pass arguments to `redis-dump-go` through the `args` param under the `task.params` section.
+
+The below example shows how you can pass `-db 1` to back up only the database with index 1.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  task:
+    params:
+      - name: args
+        value: "-db 1"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Running backup job as a specific user
+
+If your cluster requires running the backup job as a specific user, you can provide `securityContext` under the `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  runtimeSettings:
+    pod:
+      securityContext:
+        runAsUser: 0
+        runAsGroup: 0
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Specifying Memory/CPU limit/request for the backup job
+
+If you want to specify the Memory/CPU limit/request for your backup job, you can specify the `resources` field under the `runtimeSettings.container` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/2 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  runtimeSettings:
+    container:
+      resources:
+        requests:
+          cpu: "200m"
+          memory: "1Gi"
+        limits:
+          cpu: "200m"
+          memory: "1Gi"
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+### Using multiple retention policies
+
+You can also specify multiple retention policies for your backed up data. For example, you may want to keep a few daily snapshots, a few weekly snapshots, a few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  retentionPolicy:
+    name: sample-redis-retention
+    keepLast: 5
+    keepDaily: 10
+    keepWeekly: 20
+    keepMonthly: 50
+    keepYearly: 100
+    prune: true
+```
+
+To know more about the available options for retention policies, please visit [here](https://stash.run/docs/latest/concepts/crds/backupconfiguration/#specretentionpolicy).
+
+## Customizing Restore Process
+
+Stash uses `redis-cli` during the restore process. In this section, we are going to show how you can pass arguments to the restore process, restore a specific snapshot, run the restore job as a specific user, etc.
+
+### Passing arguments to the restore process
+
+Similar to the backup process, you can pass arguments to the restore process through the `args` params under the `task.params` section. Here, we have passed the `--pipe-timeout` argument to `redis-cli`.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-redis-restore
+  namespace: demo
+spec:
+  task:
+    params:
+      - name: args
+        value: "--pipe-timeout 300"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  rules:
+    - snapshots: [latest]
+```
+
+### Restore specific snapshot
+
+You can also restore a specific snapshot. At first, list the available snapshots as below,
+
+```bash
+❯ kubectl get snapshots -n demo
+NAME                ID         REPOSITORY   HOSTNAME   CREATED AT
+gcs-repo-4bc21d6f   4bc21d6f   gcs-repo     host-0     2022-01-12T14:54:27Z
+gcs-repo-f0ac7cbd   f0ac7cbd   gcs-repo     host-0     2022-01-12T14:56:26Z
+gcs-repo-9210ebb6   9210ebb6   gcs-repo     host-0     2022-01-12T14:58:27Z
+gcs-repo-0aff8890   0aff8890   gcs-repo     host-0     2022-01-12T15:00:28Z
+```
+
+>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot).
+
+You can use the respective ID of the snapshot to restore that snapshot.
+
+The below example shows how you can pass a specific snapshot ID through the `snapshots` field of the `rules` section.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-redis-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  rules:
+    - snapshots: [4bc21d6f]
+```
+
+>Please, do not specify multiple snapshots here.
Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore. + +### Running restore job as a specific user + +You can provide `securityContext` under `runtimeSettings.pod` section to run the restore job as a specific user. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + rules: + - snapshots: [latest] +``` + +### Specifying Memory/CPU limit/request for the restore job + +Similar to the backup process, you can also provide `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-redis-restore + namespace: demo +spec: + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-redis + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] +``` diff --git a/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-backup.svg b/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-backup.svg new file mode 100644 index 0000000000..58769ecaa1 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-backup.svg @@ -0,0 +1,987 @@ + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-restore.svg b/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-restore.svg new file mode 100644 index 0000000000..1ad280075f --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/overview/images/redis-logical-restore.svg @@ -0,0 +1,857 @@ + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/content/docs/v2024.1.31/guides/redis/backup/overview/index.md b/content/docs/v2024.1.31/guides/redis/backup/overview/index.md new file mode 100644 index 0000000000..faddf47de3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/backup/overview/index.md @@ -0,0 +1,107 @@ +--- +title: Backup & Restore Redis Using Stash +menu: + docs_v2024.1.31: + identifier: rd-backup-overview + name: Overview + parent: rd-guides-redis-backup + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: 
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+{{< notice type="warning" message="Please install [Stash](https://stash.run/docs/latest/setup/install/stash/) to try this feature. Database backup with Stash is already included in the KubeDB license. So, you don't need a separate license for Stash." >}}
+
+# Redis Backup & Restore Overview
+
+KubeDB uses [Stash](https://stash.run) to backup and restore databases. Stash by AppsCode is a cloud native data backup and recovery solution for Kubernetes workloads. Stash utilizes [restic](https://github.com/restic/restic) to securely backup stateful applications to any cloud or on-prem storage backends (for example, S3, GCS, Azure Blob storage, Minio, NetApp, Dell EMC etc.).
+
+*Fig: Backup KubeDB Databases Using Stash*
+
+# How Stash Backs Up & Restores Redis Databases
+
+Stash supports backup and restore operations for many databases. This guide will give you an overview of how the Redis database backup and restore process works in Stash.
+
+## Logical Backup
+
+Stash supports taking logical backups of Redis databases using [redis-dump-go](https://github.com/yannh/redis-dump-go). It is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small.
+
+### How Logical Backup Works
+
+The following diagram shows how Stash takes a logical backup of a Redis database. Open the image in a new tab to see the enlarged version.
+
+*Fig: Redis Logical Backup Overview*
+
+The backup process consists of the following steps:
+
+1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored.
+
+2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend.
+
+3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/v2024.1.31/guides/redis/concepts/appbinding) crd of the desired database. The `BackupConfiguration` object also specifies the `Task` to use to backup the database.
+
+4. Stash operator watches for `BackupConfiguration` crds.
+
+5. Once the Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in the `BackupConfiguration` object to trigger a backup periodically.
+
+6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd.
+
+7. Stash operator also watches for `BackupSession` crds.
+
+8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to backup.
+
+9. Then, it creates the Job to backup the targeted database.
+
+10. The backup Job reads the necessary information to connect with the database from the `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and the Storage Secret respectively.
+
+11. Then, the Job dumps the targeted database and uploads the output to the backend. Stash pipes the output of the dump command to the uploading process. Hence, the backup Job does not require a large volume to hold the entire dump output.
+
+12. Finally, when the backup is complete, the Job sends Prometheus metrics to the Pushgateway running inside the Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure.
+
+### How Restore from Logical Backup Works
+
+The following diagram shows how Stash restores a Redis database from a logical backup. Open the image in a new tab to see the enlarged version.
+
+*Fig: Redis Logical Restore Process Overview*
+
+The restore process consists of the following steps:
+
+1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired database where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target.
+
+2. Stash operator watches for `RestoreSession` objects.
+
+3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore.
+
+4. Then, it creates the Job to restore the target.
+
+5. The Job reads the necessary information to connect with the database from the respective `AppBinding` crd. It also reads backend information and access credentials from the `Repository` crd and the Storage Secret respectively.
+
+6. Then, the Job downloads the backed up data from the backend and injects it into the desired database. Stash pipes the downloaded data to the respective database tool to inject into the database. Hence, the restore Job does not require a large volume to download the entire backup data inside it.
+
+7. Finally, when the restore process is complete, the Job sends Prometheus metrics to the Pushgateway and updates the `RestoreSession` status to reflect restore completion.
+
+## Next Steps
+
+- Backup your Redis database using Stash following the guide from [here](/docs/v2024.1.31/guides/redis/backup/standalone/).
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/backupconfiguration.yaml b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/backupconfiguration.yaml
new file mode 100644
index 0000000000..45deece8f8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/backupconfiguration.yaml
@@ -0,0 +1,18 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/redis.yaml b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/redis.yaml
new file mode 100644
index 0000000000..9d26c58287
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/redis.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis-quickstart
+  namespace: demo
+spec:
+  version: 6.0.20
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/repository.yaml b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/repository.yaml
new file mode 100644
index 0000000000..dde2612beb
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/repository.yaml
@@ -0,0 +1,11 @@
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: /demo/redis/sample-redis
+    storageSecretName: gcs-secret
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/restoresession.yaml b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/restoresession.yaml
new file mode 100644
index 0000000000..964f2969f5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/backup/standalone/examples/restoresession.yaml
@@ -0,0 +1,15 @@
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-redis-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  rules:
+  - snapshots: [latest]
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/images/sample-redis-backup.png b/content/docs/v2024.1.31/guides/redis/backup/standalone/images/sample-redis-backup.png
new file mode 100644
index 0000000000..c646174c77
Binary files /dev/null and b/content/docs/v2024.1.31/guides/redis/backup/standalone/images/sample-redis-backup.png differ
diff --git a/content/docs/v2024.1.31/guides/redis/backup/standalone/index.md b/content/docs/v2024.1.31/guides/redis/backup/standalone/index.md
new file mode 100644
index 0000000000..c5949f3493
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/backup/standalone/index.md
@@ -0,0 +1,570 @@
+---
+title: Logical Backup & Restore Redis | Stash
+description: Take logical backup of Redis database using Stash
+menu:
+  docs_v2024.1.31:
+    identifier: rd-backup-standalone
+    name: Standalone
+    parent: rd-guides-redis-backup
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+# Backup and Restore Redis database using Stash
+
+Stash 0.9.0+ supports backup and restoration of Redis databases. This guide will show you how you can backup and restore your Redis database with Stash.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.
+- Install KubeDB in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+- Install Stash in your cluster following the steps [here](https://stash.run/docs/latest/setup/install/stash/).
+- Install the Stash `kubectl` plugin following the steps [here](https://stash.run/docs/latest/setup/install/kubectl-plugin/).
+- If you are not familiar with how Stash backs up and restores Redis databases, please check the overview guide [here](/docs/v2024.1.31/guides/redis/backup/overview/).
+
+You have to be familiar with the following custom resources:
+
+- [AppBinding](/docs/v2024.1.31/guides/redis/concepts/appbinding)
+- [Function](https://stash.run/docs/latest/concepts/crds/function/)
+- [Task](https://stash.run/docs/latest/concepts/crds/task/)
+- [BackupConfiguration](https://stash.run/docs/latest/concepts/crds/backupconfiguration/)
+- [RestoreSession](https://stash.run/docs/latest/concepts/crds/restoresession/)
+
+To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create the `demo` namespace if you haven't created it already.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Backup Redis
+
+This section will demonstrate how to backup a Redis database. Here, we are going to deploy a Redis database using KubeDB. Then, we are going to backup this database into a GCS bucket.
Finally, we are going to restore the backed-up data into another Redis database.
+
+### Deploy Sample Redis Database
+
+Let's deploy a sample Redis database and insert some data into it.
+
+**Create Redis CRD:**
+
+Below is the YAML of a sample Redis crd that we are going to create for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: demo
+spec:
+  version: 6.0.20
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Create the above `Redis` crd,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/standalone/examples/redis.yaml
+redis.kubedb.com/sample-redis created
+```
+
+KubeDB will deploy a Redis database according to the above specification. It will also create the necessary secrets and services to access the database.
+
+Let's check if the database is ready to use,
+
+```bash
+❯ kubectl get rd -n demo
+NAME           VERSION   STATUS   AGE
+sample-redis   6.0.20    Ready    58s
+```
+
+The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands,
+
+```bash
+❯ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-redis
+NAME                  TYPE                       DATA   AGE
+sample-redis-auth     kubernetes.io/basic-auth   2      90s
+sample-redis-config   Opaque                     1      90s
+
+❯ kubectl get service -n demo -l=app.kubernetes.io/instance=sample-redis
+NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+sample-redis        ClusterIP   10.96.179.49   <none>        6379/TCP   116s
+sample-redis-pods   ClusterIP   None           <none>        6379/TCP   116s
+```
+
+Here, we have to use the service `sample-redis` and the secret `sample-redis-auth` to connect with the database.
+
+### Insert Sample Data
+
+Now, we are going to exec into the database pod and create some sample data. KubeDB has created a secret with access credentials. Let's find out the credentials from the Secret,
+
+```yaml
+❯ kubectl get secret -n demo sample-redis-auth -o yaml
+apiVersion: v1
+data:
+  password: Q3l4cjttTzE3OEsuMCQ3Nw==
+  username: cm9vdA==
+kind: Secret
+metadata:
+  creationTimestamp: "2022-02-04T05:59:53Z"
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-redis
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: redises.kubedb.com
+  name: sample-redis-auth
+  namespace: demo
+  resourceVersion: "422952"
+  uid: 58e3ac2b-51fe-4845-8bb1-959e51f52015
+type: kubernetes.io/basic-auth
+```
+
+Here, we are going to use the `password` to authenticate and insert the sample data.
+
+First, let's export the password as an environment variable to make the further commands re-usable.
+
+```bash
+export PASSWORD=$(kubectl get secrets -n demo sample-redis-auth -o jsonpath='{.data.\password}' | base64 -d)
+```
+
+Now, let's exec into the database pod and insert some sample data,
+
+```bash
+❯ kubectl exec -it -n demo sample-redis-0 -- redis-cli -a $PASSWORD
+Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+# insert some key value pairs
+127.0.0.1:6379> set key1 value1
+OK
+127.0.0.1:6379> set key2 value2
+OK
+# check the inserted data
+127.0.0.1:6379> get key1
+"value1"
+127.0.0.1:6379> get key2
+"value2"
+# exit from redis-cli
+127.0.0.1:6379> exit
+```
+
+We have successfully deployed a Redis database and inserted some sample data into it. Now, we are ready to backup our database into our desired backend using Stash.
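+
+If you prefer not to open an interactive shell, you can also run a one-off check. The following sketch assumes the same pod name and `$PASSWORD` variable used above:
+
+```bash
+# Verify a key non-interactively (redis-cli prints its password warning on stderr)
+❯ kubectl exec -n demo sample-redis-0 -- redis-cli -a "$PASSWORD" get key1
+"value1"
+```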
+
+## Prepare for Backup
+
+In this section, we are going to prepare the necessary resources (i.e. database connection information, backend information, etc.) before backup.
+
+### Verify Stash Redis Addon Installed
+
+When you install Stash, it automatically installs all the official database addons. Verify that it has installed the Redis addons using the following command.
+
+```bash
+$ kubectl get tasks.stash.appscode.com | grep redis
+redis-backup-5.0.13    1h
+redis-backup-6.2.5     1h
+redis-restore-5.0.13   1h
+redis-restore-6.2.5    1h
+```
+
+### Ensure AppBinding
+
+Stash needs to know how to connect with the database. An `AppBinding` provides exactly this information. It holds the Service and Secret information of the database. You have to point to the respective `AppBinding` as the target of backup instead of the database itself.
+
+Verify that the `AppBinding` has been created successfully using the following command,
+
+```bash
+❯ kubectl get appbindings -n demo
+NAME           TYPE               VERSION   AGE
+sample-redis   kubedb.com/redis   6.0.20    2m54s
+```
+
+Let's check the YAML of the above `AppBinding`,
+
+```bash
+❯ kubectl get appbindings -n demo sample-redis -o yaml
+```
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: sample-redis
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: redises.kubedb.com
+  name: sample-redis
+  namespace: demo
+  ...
+spec:
+  clientConfig:
+    service:
+      name: sample-redis
+      port: 6379
+      scheme: redis
+  parameters:
+    apiVersion: config.kubedb.com/v1alpha1
+    kind: RedisConfiguration
+    stash:
+      addon:
+        backupTask:
+          name: redis-backup-6.2.5
+        restoreTask:
+          name: redis-restore-6.2.5
+  secret:
+    name: sample-redis-auth
+  type: kubedb.com/redis
+  version: 6.0.20
+```
+
+Stash requires the following fields to be set in the AppBinding's `spec` section:
+
+- `spec.clientConfig.service.name` specifies the name of the service that connects to the database.
+- `spec.secret` specifies the name of the secret that holds the necessary credentials to access the database.
+- `spec.parameters.stash` specifies the Stash addons that will be used to backup and restore this database.
+- `spec.type` specifies the type of the app that this AppBinding is pointing to. A KubeDB-generated AppBinding follows the `<app group>/<app resource type>` format.
+
+We will use this `AppBinding` later to connect with this database.
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](https://stash.run/docs/latest/guides/backends/overview/).
+
+**Create Storage Secret:**
+
+At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` object with the information of your desired bucket.
Below is the YAML of the `Repository` object we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: /demo/redis/sample-redis
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/standalone/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to backup our database into our GCS bucket.
+
+### Backup
+
+To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our desired database. Then Stash will create a CronJob to periodically backup the database.
+
+#### Create BackupConfiguration
+
+Below is the YAML for the `BackupConfiguration` object we are going to use to backup the `sample-redis` database we have deployed earlier,
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: BackupConfiguration
+metadata:
+  name: sample-redis-backup
+  namespace: demo
+spec:
+  schedule: "*/5 * * * *"
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  retentionPolicy:
+    name: keep-last-5
+    keepLast: 5
+    prune: true
+```
+
+Here,
+
+- `.spec.schedule` specifies that we want to backup the database at 5-minute intervals.
+- `.spec.repository.name` specifies the `Repository` CR name we have created earlier with the backend information.
+- `.spec.target.ref` refers to the `AppBinding` object that holds the connection information of our targeted database.
+- `.spec.retentionPolicy` specifies a policy indicating how we want to clean up the old backups.
+
+Let's create the `BackupConfiguration` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/standalone/examples/backupconfiguration.yaml
+backupconfiguration.stash.appscode.com/sample-redis-backup created
+```
+
+#### Verify Backup Setup Successful
+
+If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration,
+
+```bash
+$ kubectl get backupconfiguration -n demo
+NAME                  TASK                 SCHEDULE      PAUSED   PHASE   AGE
+sample-redis-backup   redis-backup-6.2.5   */5 * * * *            Ready   11s
+```
+
+#### Verify CronJob
+
+Stash will create a CronJob with the schedule specified in the `spec.schedule` field of the `BackupConfiguration` object.
+
+Verify that the CronJob has been created using the following command,
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                               SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-redis-backup   */5 * * * *   False     0        <none>          14s
+```
+
+#### Wait for BackupSession
+
+The `sample-redis-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object.
+
+Now, wait for a schedule to appear.
Run the following command to watch for a `BackupSession` object,
+
+```bash
+❯ kubectl get backupsession -n demo -w
+NAME                             INVOKER-TYPE          INVOKER-NAME          PHASE       DURATION          AGE
+sample-redis-backup-1627490702   BackupConfiguration   sample-redis-backup                                 0s
+sample-redis-backup-1627490702   BackupConfiguration   sample-redis-backup   Running                       0s
+sample-redis-backup-1627490702   BackupConfiguration   sample-redis-backup   Succeeded   1m18.098555424s   78s
+```
+
+Here, the phase `Succeeded` means that the backup process has been completed successfully.
+
+#### Verify Backup
+
+Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command,
+
+```bash
+$ kubectl get repository -n demo gcs-repo
+NAME       INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
+gcs-repo   true        1.327 MiB   1                60s                      8m
+```
+
+Now, if we navigate to the GCS bucket, we will see that the backed up data has been stored in the `demo/redis/sample-redis` directory, as specified by the `.spec.backend.gcs.prefix` field of the `Repository` object.
+
+*Fig: Backup data in GCS Bucket*
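+
+If you want to see the individual snapshots rather than the aggregated `Repository` status, you can list them as well. The label-based filtering below follows the Stash `Snapshot` docs linked earlier; snapshot names and IDs will differ in your cluster:
+
+```bash
+# List the snapshots stored in the gcs-repo Repository
+❯ kubectl get snapshots -n demo -l repository=gcs-repo
+```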
+
+> Note: Stash keeps all the backed up data encrypted. So, the data in the backend will not make any sense until it is decrypted.
+
+## Restore Redis
+
+If you have followed the previous sections properly, you should have a successful logical backup of your Redis database. Now, we are going to show how you can restore the database from the backed up data.
+
+### Restore Into the Same Database
+
+You can restore your data into the same database you have backed up from, or into a different database in the same cluster or a different cluster. In this section, we are going to show you how to restore into the same database, which may be necessary when you have accidentally deleted any data from the running database.
+
+#### Temporarily Pause Backup
+
+At first, let's stop taking any further backup of the database so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused.
+
+Let's pause the `sample-redis-backup` BackupConfiguration,
+
+```bash
+❯ kubectl patch backupconfiguration -n demo sample-redis-backup --type="merge" --patch='{"spec": {"paused": true}}'
+backupconfiguration.stash.appscode.com/sample-redis-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to pause the `BackupConfiguration`,
+
+```bash
+❯ kubectl stash pause backup -n demo --backupconfig=sample-redis-backup
+BackupConfiguration demo/sample-redis-backup has been paused successfully.
+```
+
+Verify that the `BackupConfiguration` has been paused,
+
+```bash
+❯ kubectl get backupconfiguration -n demo sample-redis-backup
+NAME                  TASK                 SCHEDULE      PAUSED   PHASE   AGE
+sample-redis-backup   redis-backup-6.2.5   */5 * * * *   true     Ready   4h47m
+```
+
+Notice the `PAUSED` column. The value `true` in this field means that the `BackupConfiguration` has been paused.
+
+Stash will also suspend the respective CronJob.
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                               SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-redis-backup   */5 * * * *   True      0        113s            4h48m
+```
+
+#### Simulate Disaster
+
+Now, let's simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the sample data we have inserted earlier.
+
+```bash
+❯ kubectl exec -it -n demo sample-redis-0 -- redis-cli -a $PASSWORD
+Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+# delete the sample data
+127.0.0.1:6379> del key1 key2
+(integer) 2
+# verify that the sample data has been deleted
+127.0.0.1:6379> get key1
+(nil)
+127.0.0.1:6379> get key2
+(nil)
+127.0.0.1:6379> exit
+```
+
+#### Create RestoreSession
+
+To restore the database, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted database.
+
+Here is the YAML of the `RestoreSession` object that we are going to use for restoring our `sample-redis` database.
+
+```yaml
+apiVersion: stash.appscode.com/v1beta1
+kind: RestoreSession
+metadata:
+  name: sample-redis-restore
+  namespace: demo
+spec:
+  repository:
+    name: gcs-repo
+  target:
+    ref:
+      apiVersion: appcatalog.appscode.com/v1alpha1
+      kind: AppBinding
+      name: sample-redis
+  rules:
+  - snapshots: [latest]
+```
+
+Here,
+
+- `.spec.repository.name` specifies the `Repository` object that holds the backend information where our backed up data has been stored.
+- `.spec.target.ref` refers to the respective `AppBinding` of the `sample-redis` database.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the database.
+
+Let's create the `RestoreSession` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/redis/backup/standalone/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-redis-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,
+
+```bash
+❯ kubectl get restoresession -n demo -w
+NAME                   REPOSITORY   PHASE       DURATION        AGE
+sample-redis-restore   gcs-repo     Running                     6s
+sample-redis-restore   gcs-repo     Running                     16s
+sample-redis-restore   gcs-repo     Succeeded                   16s
+sample-redis-restore   gcs-repo     Succeeded   16.324570911s   16s
+```
+
+The `Succeeded` phase means that the restore process has been completed successfully.
+
+#### Verify Restored Data
+
+Now, let's exec into the database pod and verify whether the actual data has been restored or not,
+
+```bash
+❯ kubectl exec -it -n demo sample-redis-0 -- redis-cli -a $PASSWORD
+Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+127.0.0.1:6379> get key1
+"value1"
+127.0.0.1:6379> get key2
+"value2"
+127.0.0.1:6379> exit
+```
+
+Hence, we can see from the above output that the deleted data has been restored successfully from the backup.
+
+#### Resume Backup
+
+Since our data has been restored successfully, we can now resume our usual backup process. Resume the `BackupConfiguration` using the following command,
+
+```bash
+❯ kubectl patch backupconfiguration -n demo sample-redis-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-redis-backup patched
+```
+
+Or you can use the Stash `kubectl` plugin to resume the `BackupConfiguration`,
+
+```bash
+❯ kubectl stash resume -n demo --backupconfig=sample-redis-backup
+BackupConfiguration demo/sample-redis-backup has been resumed successfully.
+```
+
+Verify that the `BackupConfiguration` has been resumed,
+
+```bash
+❯ kubectl get backupconfiguration -n demo sample-redis-backup
+NAME                  TASK                 SCHEDULE      PAUSED   PHASE   AGE
+sample-redis-backup   redis-backup-6.2.5   */5 * * * *   false    Ready   4h54m
+```
+
+Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob should also be resumed now.
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                               SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-redis-backup   */5 * * * *   False     0        3m24s           4h54m
+```
+
+Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger on the next schedule.
+
+### Restore Into Different Database of the Same Namespace
+
+If you want to restore the backed up data into a different database of the same namespace, you need to have another `AppBinding` pointing to the desired database. Then, you have to create the `RestoreSession` pointing to the new `AppBinding`.
+
+### Restore Into Different Namespace
+
+If you want to restore into a different namespace of the same cluster, you have to create the Repository, backend Secret, and AppBinding in the desired namespace. You can use the [Stash kubectl plugin](https://stash.run/docs/latest/guides/cli/kubectl-plugin/) to easily copy the resources into a new namespace. Then, you have to create the `RestoreSession` object in the desired namespace pointing to the Repository and AppBinding of that namespace.
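+
+For example, copying the backend access resources with the plugin might look like the following sketch. The target namespace `demo-2` is hypothetical, and you should check `kubectl stash cp --help` for the exact subcommands and flags available in your plugin version:
+
+```bash
+# Copy the Repository (the plugin also copies the referenced backend Secret) into another namespace
+❯ kubectl stash cp repository gcs-repo -n demo --to-namespace=demo-2
+```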
+
+### Restore Into Different Cluster
+
+If you want to restore into a different cluster, you have to install Stash in the desired cluster, along with the Stash Redis addon. Then, you have to create the Repository, backend Secret, and AppBinding in that cluster. Finally, you have to create the `RestoreSession` object in that cluster pointing to its Repository and AppBinding.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo backupconfiguration sample-redis-backup
+kubectl delete -n demo restoresession sample-redis-restore
+kubectl delete -n demo repository gcs-repo
+# delete the database resources
+kubectl delete redis sample-redis -n demo
+# delete the namespace
+kubectl delete ns demo
+```
diff --git a/content/docs/v2024.1.31/guides/redis/cli/_index.md b/content/docs/v2024.1.31/guides/redis/cli/_index.md
new file mode 100755
index 0000000000..7606f166c8
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/cli/_index.md
@@ -0,0 +1,22 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: rd-cli-redis
+    name: Cli
+    parent: rd-redis-guides
+    weight: 100
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
diff --git a/content/docs/v2024.1.31/guides/redis/cli/cli.md b/content/docs/v2024.1.31/guides/redis/cli/cli.md
new file mode 100644
index 0000000000..faf55ded56
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/cli/cli.md
@@ -0,0 +1,315 @@
+---
+title: CLI | KubeDB
+menu:
+  docs_v2024.1.31:
+    identifier: rd-cli-cli
+    name: Quickstart
+    parent: rd-cli-redis
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Manage KubeDB objects using CLIs
+
+## KubeDB CLI
+
+KubeDB comes with its own CLI, called the `kubedb` CLI. `kubedb` can be used to manage any KubeDB object, and it also performs various validations to improve the UX. To install the KubeDB CLI on your workstation, follow the steps [here](/docs/v2024.1.31/setup/README).
+
+### How to Create Objects
+
+`kubectl create` creates a database CRD object in the `default` namespace by default. The following command will create a Redis object as specified in `redis-demo.yaml`.
+
+```bash
+$ kubectl create -f redis-demo.yaml
+redis.kubedb.com/redis-demo created
+```
+
+You can provide a namespace with the `--namespace` flag. The provided namespace should match the namespace specified in the input file.
+
+```bash
+$ kubectl create -f redis-demo.yaml --namespace=kube-system
+redis.kubedb.com/redis-demo created
+```
+
+The `kubectl create` command also considers `stdin` as input.
+
+```bash
+cat redis-demo.yaml | kubectl create -f -
+```
+
+### How to List Objects
+
+The `kubectl get` command allows users to list or find any KubeDB object.
To list all Redis objects in the `default` namespace, run the following command:
+
+```bash
+$ kubectl get redis
+NAME         VERSION   STATUS    AGE
+redis-demo   4.0-v1    Running   13s
+redis-dev    4.0-v1    Running   13s
+redis-prod   4.0-v1    Running   13s
+redis-qa     4.0-v1    Running   13s
+```
+
+To get the YAML of an object, use the `--output=yaml` flag.
+
+```yaml
+$ kubectl get redis redis-demo --output=yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  creationTimestamp: 2018-10-01T08:14:27Z
+  finalizers:
+  - kubedb.com
+  generation: 1
+  labels:
+    kubedb: cli-demo
+  name: redis-demo
+  namespace: demo
+  resourceVersion: "18201"
+  selfLink: /apis/kubedb.com/v1alpha2/namespaces/default/redises/redis-demo
+  uid: 039aeaa1-c552-11e8-9ba7-0800274bef12
+spec:
+  mode: Standalone
+  podTemplate:
+    controller: {}
+    metadata: {}
+    spec:
+      resources: {}
+  replicas: 1
+  storage:
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  terminationPolicy: Halt
+  version: 4.0-v1
+status:
+  observedGeneration: 1$7916315637361465932
+  phase: Running
+```
+
+To get the JSON of an object, use the `--output=json` flag.
+
+```bash
+kubectl get redis redis-demo --output=json
+```
+
+To list all KubeDB objects, use the following command:
+
+```bash
+$ kubectl get all -o wide
+NAME                          VERSION   STATUS    AGE
+redis.kubedb.com/redis-demo   4.0-v1    Running   3m
+redis.kubedb.com/redis-dev    4.0-v1    Running   3m
+redis.kubedb.com/redis-prod   4.0-v1    Running   3m
+redis.kubedb.com/redis-qa    4.0-v1     Running   3m
+```
+
+The `--output=wide` flag is used to print additional information.
+
+The list command supports short names for each object type. You can use it like `kubectl get <short-name>`. Below are the short names for KubeDB objects:
+
+- Redis: `rd`
+- DormantDatabase: `drmn`
+
+You can print labels with objects. The following command will list all Redis objects with their corresponding labels.
+
+```bash
+$ kubectl get rd --show-labels
+NAME         VERSION   STATUS    AGE   LABELS
+redis-demo   4.0-v1    Running   4m    kubedb=cli-demo
+```
+
+To print only object names, run the following command:
+
+```bash
+$ kubectl get all -o name
+redis/redis-demo
+redis/redis-dev
+redis/redis-prod
+redis/redis-qa
+```
+
+### How to Describe Objects
+
+The `kubectl dba describe` command allows users to describe any KubeDB object. The following command will describe the Redis server `redis-demo` with relevant information.
+
+```bash
+$ kubectl dba describe rd redis-demo
+Name:               redis-demo
+Namespace:          default
+CreationTimestamp:  Mon, 01 Oct 2018 14:14:27 +0600
+Labels:             kubedb=cli-demo
+Annotations:        <none>
+Replicas:           1  total
+Status:             Running
+StorageType:        Durable
+Volume:
+  StorageClass:  standard
+  Capacity:      1Gi
+  Access Modes:  RWO
+
+StatefulSet:
+  Name:               redis-demo
+  CreationTimestamp:  Mon, 01 Oct 2018 14:14:31 +0600
+  Labels:             kubedb=cli-demo
+                      app.kubernetes.io/name=redises.kubedb.com
+                      app.kubernetes.io/instance=redis-demo
+  Annotations:        <none>
+  Replicas:           824640807604 desired | 1 total
+  Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
+
+Service:
+  Name:        redis-demo
+  Labels:      app.kubernetes.io/name=redises.kubedb.com
+               app.kubernetes.io/instance=redis-demo
+  Annotations: <none>
+  Type:        ClusterIP
+  IP:          10.102.148.196
+  Port:        db  6379/TCP
+  TargetPort:  db/TCP
+  Endpoints:   172.17.0.4:6379
+
+No Snapshots.
+
+Events:
+  Type    Reason      Age   From            Message
+  ----    ------      ----  ----            -------
+  Normal  Successful  5m    Redis operator  Successfully created Service
+  Normal  Successful  5m    Redis operator  Successfully created StatefulSet
+  Normal  Successful  5m    Redis operator  Successfully created Redis
+  Normal  Successful  5m    Redis operator  Successfully patched StatefulSet
+  Normal  Successful  5m    Redis operator  Successfully patched Redis
+```
+
+The `kubectl dba describe` command provides the following basic information about a Redis server:
+
+- StatefulSet
+- Storage (Persistent Volume)
+- Service
+- Monitoring system (if available)
+
+To hide events on a KubeDB object, use the flag `--show-events=false`.
+
+To describe all Redis objects in the `default` namespace, use the following command,
+
+```bash
+kubectl dba describe rd
+```
+
+To describe all Redis objects from every namespace, provide the `--all-namespaces` flag.
+
+```bash
+kubectl dba describe rd --all-namespaces
+```
+
+To describe all KubeDB objects from every namespace, use the following command:
+
+```bash
+kubectl dba describe all --all-namespaces
+```
+
+You can also describe KubeDB objects with matching labels. The following command will describe all Redis objects with the specified labels from every namespace.
+
+```bash
+kubectl dba describe rd --all-namespaces --selector='group=dev'
+```
+
+To learn about various options of the `describe` command, please visit [here](/docs/v2024.1.31/reference/cli/kubectl-dba_describe).
+
+### How to Edit Objects
+
+The `kubectl edit` command allows users to directly edit any KubeDB object. It will open the editor defined by the _KUBEDB_EDITOR_ or _EDITOR_ environment variables, or fall back to `nano`.
+
+Let's edit an existing running Redis object to set up [Monitoring](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus). The following command will open Redis `redis-demo` in the editor.
+
+```bash
+$ kubectl edit rd redis-demo
+#spec:
+#  monitor:
+#    agent: prometheus.io/builtin
+
+redis "redis-demo" edited
+```
+
+#### Edit Restrictions
+
+Various fields of a KubeDB object can't be edited using the `edit` command. The following fields are restricted from updates for all KubeDB objects:
+
+- apiVersion
+- kind
+- metadata.name
+- metadata.namespace
+
+If a StatefulSet exists for a Redis server, the following fields can't be modified either:
+
+- spec.storageType
+- spec.storage
+- spec.podTemplate.spec.nodeSelector
+- spec.podTemplate.spec.env
+
+For DormantDatabase, `spec.origin` can't be edited using `kubectl edit`.
+
+### How to Delete Objects
+
+The `kubectl delete` command will delete an object in the `default` namespace by default unless a namespace is provided. The following command will delete the Redis `redis-dev` in the default namespace.
+
+```bash
+$ kubectl delete redis redis-dev
+redis.kubedb.com "redis-dev" deleted
+```
+
+You can also use YAML files to delete objects. The following command will delete a Redis object using the type and name specified in `redis-demo.yaml`.
+
+```bash
+$ kubectl delete -f redis-demo.yaml
+redis.kubedb.com "redis-dev" deleted
+```
+
+The `kubectl delete` command also takes input from `stdin`.
+
+```bash
+cat redis-demo.yaml | kubectl delete -f -
+```
+
+To delete databases with matching labels, use the `--selector` flag. The following command will delete Redis objects with the label `redis.app.kubernetes.io/instance=redis-demo`.
+
+```bash
+kubectl delete redis -l redis.app.kubernetes.io/instance=redis-demo
+```
+
+## Using Kubectl
+
+You can use kubectl with KubeDB objects like any other CRDs.
Below are some common examples of using kubectl with KubeDB objects.
+
+```bash
+# List objects
+$ kubectl get redis
+$ kubectl get redis.kubedb.com
+
+# Delete objects
+$ kubectl delete redis <name>
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Redis server [here](/docs/v2024.1.31/guides/redis/README).
+- Learn how to use custom configuration in Redis with KubeDB [here](/docs/v2024.1.31/guides/redis/configuration/using-config-file).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/redis/clustering/_index.md b/content/docs/v2024.1.31/guides/redis/clustering/_index.md
new file mode 100755
index 0000000000..a15273a759
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/clustering/_index.md
@@ -0,0 +1,22 @@
+---
+title: Redis Clustering
+menu:
+  docs_v2024.1.31:
+    identifier: rd-clustering-redis
+    name: Clustering
+    parent: rd-redis-guides
+    weight: 25
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
diff --git a/content/docs/v2024.1.31/guides/redis/clustering/overview.md b/content/docs/v2024.1.31/guides/redis/clustering/overview.md
new file mode 100644
index 0000000000..9070512a51
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/clustering/overview.md
@@ -0,0 +1,205 @@
+---
+title: Redis Cluster Overview
+menu:
+  docs_v2024.1.31:
+    identifier: rd-clustering-overview
+    name: Overview
+    parent: rd-clustering-redis
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Redis Cluster
+
+Redis Cluster is the native sharding solution for Redis, which provides automatic partitioning of data across multiple Redis nodes. It enables horizontal scalability, allowing Redis to handle larger amounts of data by splitting data across multiple Redis instances. Each Redis instance in a Redis Cluster can operate as a master or slave, providing failover and redundancy for increased availability and reliability. Redis Cluster uses a hash-slot-based sharding algorithm to distribute keys across nodes, ensuring that the data is evenly balanced and minimizing the overhead of rebalancing when nodes are added or removed. Redis Cluster also provides a distributed system for managing and maintaining the cluster, ensuring that data is always consistent and available even in the event of node failures.
+
+So in practical terms, what do you get with Redis Cluster?
+
+- **Horizontal Scalability**: Redis Cluster allows Redis to handle larger amounts of data by splitting the data across multiple nodes, enabling linear scalability as more nodes are added to the cluster.
+
+- **High Availability**: Redis Cluster provides automatic failover and redundancy, ensuring that data is always available and that the cluster can continue operating even in the event of node failures.
+
+- **Load Balancing**: Redis Cluster distributes keys evenly across nodes, ensuring that the cluster is balanced and reducing the overhead of rebalancing when nodes are added or removed.
+
+- **Consistent Data**: Redis Cluster provides a distributed system for managing and maintaining the cluster, ensuring that data is always consistent and available.
+
+- **Easy Administration**: Redis Cluster provides a centralized management interface for the cluster, making it easier to manage and monitor large-scale Redis setups.
+
+- **Fast Performance**: Redis Cluster provides fast, in-memory data access, enabling fast and responsive applications.
+
+- **Simplified Operations**: Redis Cluster eliminates the need for manual sharding, enabling a simpler and more automated way to scale Redis.
+
+![redis-cluster](/docs/v2024.1.31/images/redis/redis-cluster.png)
+
+## Redis Cluster TCP ports
+
+Every Redis Cluster node requires two open TCP connections: the normal Redis TCP port used to serve clients, for example 6379, plus the port obtained by adding 10000 to the data port, so 16379 in the example.
+
+This second *high* port is used for the Cluster bus, a node-to-node communication channel using a binary protocol. The Cluster bus is used by nodes for failure detection, configuration updates, failover authorization, and so forth. Clients should never try to communicate with the cluster bus port, but always with the normal Redis command port. However, make sure you open both ports in your firewall, otherwise Redis Cluster nodes will not be able to communicate.
+
+The command port and cluster bus port offset is fixed and is always 10000.
+
+Note that for a Redis Cluster to work properly you need, for each node:
+
+1. The normal client communication port (usually 6379) used to communicate with clients to be open to all the clients that need to reach the cluster, plus all the other cluster nodes (which use the client port for key migrations).
+2. The cluster bus port (the client port + 10000) to be reachable from all the other cluster nodes.
+
+If you don't open both TCP ports, your cluster will not work as expected.
+
+The cluster bus uses a different, binary protocol for node-to-node data exchange, which is more suited to exchanging information between nodes using little bandwidth and processing time.
+
+Reference: https://redis.io/docs/management/scaling/#redis-cluster-101
+
+## Redis Cluster data sharding
+
+Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a **hash slot**.
+
+There are 16384 hash slots in Redis Cluster in total, and to compute the hash slot of a given key, it simply takes the CRC16 of the key modulo 16384.
+
+Every node in a Redis Cluster is responsible for a subset of the hash slots, so, for example, you may have a cluster with 3 nodes, where:
+
+- Node A contains hash slots from 0 to 5500.
+- Node B contains hash slots from 5501 to 11000.
+- Node C contains hash slots from 11001 to 16383.
+
+This allows adding and removing nodes in the cluster easily. For example, if one wants to add a new node D, they need to move some hash slots from nodes A, B, and C to D. Similarly, if they want to remove node A from the cluster, they can just move the hash slots served by A to B and C. When node A is empty, they can remove it from the cluster completely.
+
+Because moving hash slots from one node to another does not require stopping operations, adding and removing nodes, or changing the percentage of hash slots held by nodes, does not require any downtime.
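+
+You can ask any node of a running cluster which slot a key maps to: the `CLUSTER KEYSLOT` command computes exactly this CRC16 modulo 16384 mapping. The key and slot value below are taken from the Redis command reference:
+
+```bash
+# CRC16("somekey") mod 16384, i.e. the slot this key will always live in
+❯ redis-cli cluster keyslot somekey
+(integer) 11058
+```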
+
+Reference: https://redis.io/docs/management/scaling/#redis-cluster-101
+
+## Redis Cluster master-slave model
+
+In order to ensure availability when a subset of master nodes are failing or are not able to communicate with the majority of nodes, Redis Cluster uses a master-slave model where every hash slot has from 1 (the master itself) to N replicas (N-1 additional replica nodes).
+
+In our example cluster with nodes A, B, and C, if node B fails the cluster is not able to continue, since we no longer have a way to serve hash slots in the range 5501-11000.
+
+However, when the cluster is created (or at a later time), we add a slave node to every master, so that the final cluster is composed of A, B, and C, which are master nodes, and A1, B1, and C1, which are slave nodes. This way, the system is able to continue if node B fails.
+
+Node B1 replicates B, so if B fails, the cluster will promote node B1 as the new master and will continue to operate correctly.
+
+However, note that if nodes B and B1 fail at the same time, Redis Cluster is not able to continue to operate.
+
+Reference: https://redis.io/docs/management/scaling/#redis-cluster-101
+
+## Redis Cluster consistency guarantees
+
+Redis Cluster is not able to guarantee **strong consistency**. In practical terms, this means that under certain conditions it is possible that Redis Cluster will lose writes that were acknowledged by the system to the client.
+
+The first reason why Redis Cluster can lose writes is that it uses asynchronous replication. This means that during writes the following happens:
+
+- Your client writes to the master B.
+- The master B replies OK to your client.
+- The master B propagates the write to its replicas B1, B2, and B3.
+
+As you can see, B does not wait for an acknowledgement from B1, B2, and B3 before replying to the client, since this would be a prohibitive latency penalty for Redis. So if your client writes something, B acknowledges the write but crashes before being able to send the write to its replicas, and one of the replicas (that did not receive the write) is promoted to master, the write is lost forever.
+
+This is **very similar to what happens** with most databases that are configured to flush data to disk every second, so it is a scenario you are already able to reason about because of past experiences with traditional database systems not involving distributed systems. Similarly, you can improve consistency by forcing the database to flush data to disk before replying to the client, but this usually results in prohibitively low performance. That would be the equivalent of synchronous replication in the case of Redis Cluster.
+
+Basically, there is a trade-off to be made between performance and consistency.
+
+Redis Cluster has support for synchronous writes when absolutely needed, implemented via the [WAIT](https://redis.io/commands/wait) command. This makes losing writes a lot less likely; however, note that Redis Cluster does not implement strong consistency even when synchronous replication is used: it is always possible, under more complex failure scenarios, that a slave that was not able to receive the write is elected as master.
+
+There is another notable scenario where Redis Cluster will lose writes, which happens during a network partition where a client is isolated with a minority of instances including at least a master.
+
+Take as an example our 6-node cluster composed of A, B, C, A1, B1, and C1, with 3 masters and 3 replicas. There is also a client, that we will call Z1.
+
+After a partition occurs, it is possible that on one side of the partition we have A, C, A1, B1, and C1, and on the other side we have B and Z1.
+
+Z1 is still able to write to B, which will accept its writes. If the partition heals in a very short time, the cluster will continue normally. However, if the partition lasts enough time for B1 to be promoted to master on the majority side of the partition, the writes that Z1 is sending to B will be lost.
+
+Reference: https://redis.io/docs/management/scaling/#redis-cluster-101
+
+## Redis Cluster configuration parameters
+
+Let's introduce the configuration parameters that Redis Cluster introduces in the `redis.conf` file. Some will be obvious, others will become clearer as you continue reading.
+
+- **cluster-enabled `<yes/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand-alone instance as usual.
+- **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user-editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed to disk as a result of some message reception.
+- **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable without being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time will stop accepting queries.
+- **cluster-slave-validity-factor `<factor>`**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no slave able to failover it. In that case the cluster will become available again only when the original master rejoins the cluster.
+- **cluster-migration-barrier `<count>`**: Minimum number of replicas a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information.
+- **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
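+
+These parameters can also be inspected on a live node with `CONFIG GET` (and the mutable ones tuned with `CONFIG SET`). The value shown below is the stock `redis.conf` default; your deployment may use a different one:
+
+```bash
+# Read the current node timeout (in milliseconds)
+❯ redis-cli config get cluster-node-timeout
+1) "cluster-node-timeout"
+2) "15000"
+```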
+
+Reference: https://redis.io/docs/management/scaling/#redis-cluster-configuration-parameters
+
+For more parameters, see [here](http://download.redis.io/redis-stable/redis.conf).
+
+## Redis Cluster main components
+
+- **Keys distribution model**: The key space is split into 16384 slots, effectively setting an upper limit for the cluster size of 16384 master nodes (however, the suggested max size of a cluster is in the order of ~1000 nodes).
+
+  Each master node in a cluster handles a subset of the 16384 hash slots. The cluster is **stable** when there is no cluster reconfiguration in progress (i.e. where hash slots are being moved from one node to another). When the cluster is stable, a single hash slot will be served by a single node (however, the serving node can have one or more replicas that will replace it in the case of net splits or failures, and that can be used in order to scale read operations where reading stale data is acceptable).
+
+  Reference: https://redis.io/docs/management/scaling/
+
+- **Keys hash tags**: There is an exception to the computation of the hash slot that is used in order to implement **hash tags**. Hash tags are a way to ensure that multiple keys are allocated in the same hash slot. This is used in order to implement multi-key operations in Redis Cluster.
+
+  Reference: https://redis.io/docs/management/scaling/
+
+- **Cluster nodes' attributes**: Every node has a unique name in the cluster. The node name is the hex representation of a 160-bit random number, obtained the first time a node is started (usually using /dev/urandom). The node will save its ID in the node configuration file, and will use the same ID forever, or at least as long as the node configuration file is not deleted by the system administrator, or a *hard reset* is requested via the [CLUSTER RESET](https://redis.io/commands/cluster-reset) command.
+
+  A detailed [explanation of all the node fields](http://redis.io/commands/cluster-nodes) is described in the [CLUSTER NODES](https://redis.io/commands/cluster-nodes) documentation.
+
+  The following is sample output of the [CLUSTER NODES](https://redis.io/commands/cluster-nodes) command sent to a master node in a small cluster of three nodes.
+
+  ```bash
+  $ redis-cli cluster nodes
+  d1861060fe6a534d42d8a19aeb36600e18785e04 127.0.0.1:6379 myself - 0 1318428930 1 connected 0-1364
+  3886e65cc906bfd9b1f7e7bde468726a052d1dae 127.0.0.1:6380 master - 1318428930 1318428931 2 connected 1365-2729
+  d289c575dcbc4bdd2931585fd4339089e461a27d 127.0.0.1:6381 master - 1318428931 1318428931 3 connected 2730-4095
+  ```
+
+  Reference: https://redis.io/docs/management/scaling/
+
+- **The Cluster bus**: Every Redis Cluster node has an additional TCP port for receiving incoming connections from other Redis Cluster nodes. This port is at a fixed offset from the normal TCP port used to receive incoming connections from clients. To obtain the Redis Cluster port, 10000 should be added to the normal commands port. For example, if a Redis node is listening for client connections on port 6379, the Cluster bus port 16379 will also be opened.
+
+  Reference: https://redis.io/docs/management/scaling/
+
+- **Cluster topology**: Redis Cluster is a full mesh where every node is connected with every other node using a TCP connection.
+
+  In a cluster of N nodes, every node has N-1 outgoing TCP connections and N-1 incoming connections.
+
+  These TCP connections are kept alive all the time and are not created on demand.
When a node expects a pong reply in response to a ping in the cluster bus, before waiting long enough to mark the node as unreachable, it will try to refresh the connection with the node by reconnecting from scratch.
+
+  Reference: https://redis.io/docs/management/scaling/
+
+- **Nodes handshake**: Nodes always accept connections on the cluster bus port, and even reply to pings when received, even if the pinging node is not trusted. However, all other packets will be discarded by the receiving node if the sending node is not considered part of the cluster.
+
+  A node will accept another node as part of the cluster only in two ways:
+
+  - If a node presents itself with a `MEET` message. A meet message is exactly like a [PING](https://redis.io/commands/ping) message but forces the receiver to accept the node as part of the cluster. Nodes will send `MEET` messages to other nodes **only if** the system administrator requests this via the following command:
+
+    ```bash
+    $ CLUSTER MEET ip port
+    ```
+
+  - A node will also register another node as part of the cluster if a node that is already trusted gossips about this other node. So if A knows B, and B knows C, eventually B will send gossip messages to A about C. When this happens, A will register C as part of the network, and will try to connect with C.
+
+  Reference: https://redis.io/docs/management/scaling/
+
+## Next Steps
+
+- [Deploy Redis Cluster](/docs/v2024.1.31/guides/redis/clustering/redis-cluster) using KubeDB.
+- Detailed concepts of the [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- Detailed concepts of the [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/redis/clustering/redis-cluster.md b/content/docs/v2024.1.31/guides/redis/clustering/redis-cluster.md
new file mode 100644
index 0000000000..6192699e3b
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/clustering/redis-cluster.md
@@ -0,0 +1,399 @@
+---
+title: Redis Cluster Guide
+menu:
+  docs_v2024.1.31:
+    identifier: rd-cluster
+    name: Clustering Guide
+    parent: rd-clustering-redis
+    weight: 15
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# KubeDB - Redis Cluster
+
+This tutorial will show you how to use KubeDB to provision a Redis cluster.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read the [Redis clustering concept](/docs/v2024.1.31/guides/redis/clustering/overview) to learn about Redis clustering.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
Run the following command to prepare your cluster for this tutorial:

  ```bash
  $ kubectl create ns demo
  namespace/demo created
  ```

> Note: The yaml files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Deploy Redis Cluster

To deploy a Redis Cluster, specify the `spec.mode` and `spec.cluster` fields in the `Redis` CRD.

The following is an example `Redis` object which creates a Redis cluster with three master nodes, each of which has one replica node.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: redis-cluster
  namespace: demo
spec:
  version: 6.2.14
  mode: Cluster
  cluster:
    master: 3
    replicas: 1
  storageType: Durable
  storage:
    resources:
      requests:
        storage: 1Gi
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
  terminationPolicy: Halt
```

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/clustering/demo-1.yaml
redis.kubedb.com/redis-cluster created
```

Here,

- `spec.mode` specifies the mode for Redis. Here we have used `Cluster` to tell the operator that we want to deploy Redis in cluster mode.
- `spec.cluster` represents the cluster configuration.
  - `master` denotes the number of master nodes.
  - `replicas` denotes the number of replica nodes per master.
- `spec.storage` specifies the StorageClass of the PVCs dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration. You can specify any StorageClass available in your cluster with appropriate resource requests.

The KubeDB operator watches for `Redis` objects using the Kubernetes API. When a `Redis` object is created, the KubeDB operator will create a new StatefulSet and a Service with the matching Redis object name. The KubeDB operator will also create a governing service for StatefulSets named `kubedb`, if one is not already present.
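While the operator provisions the cluster, you can watch the `Redis` object until it becomes ready. This is a small sketch using standard `kubectl` flags; the resource names come from the example above:

```bash
# watch the Redis object; press Ctrl+C once the STATUS column shows Ready
$ kubectl get rd -n demo redis-cluster -w
```

Once it is ready, the created resources can be listed: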
+ +```bash +$ kubectl get rd -n demo +NAME VERSION STATUS AGE +redis-cluster 6.2.14 Ready 82s + + +$ kubectl get statefulset -n demo +NAME READY AGE +redis-cluster-shard0 2/2 92s +redis-cluster-shard1 2/2 88s +redis-cluster-shard2 2/2 84s + +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +data-redis-cluster-shard0-0 Bound pvc-4dd44ddd-06d8-4f2d-bb57-4324c3385d06 1Gi RWO standard 112s +data-redis-cluster-shard0-1 Bound pvc-fb431bb5-036d-4bd8-a89d-4b2477136c1c 1Gi RWO standard 105s +data-redis-cluster-shard1-0 Bound pvc-1be09fa7-6c26-4d5c-8aae-c0cc99e41c73 1Gi RWO standard 108s +data-redis-cluster-shard1-1 Bound pvc-3206ff9e-1ca3-4cef-846d-f91f60c5d572 1Gi RWO standard 98s +data-redis-cluster-shard2-0 Bound pvc-40ccbe7c-e414-4e7b-b40b-2816f42efa63 1Gi RWO standard 104s +data-redis-cluster-shard2-1 Bound pvc-be02792b-b033-407b-a376-9b34001c561f 1Gi RWO standard 92s + + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-1be09fa7-6c26-4d5c-8aae-c0cc99e41c73 1Gi RWO Delete Bound demo/data-redis-cluster-shard1-0 standard 2m33s +pvc-3206ff9e-1ca3-4cef-846d-f91f60c5d572 1Gi RWO Delete Bound demo/data-redis-cluster-shard1-1 standard 2m21s +pvc-40ccbe7c-e414-4e7b-b40b-2816f42efa63 1Gi RWO Delete Bound demo/data-redis-cluster-shard2-0 standard 2m29s +pvc-4dd44ddd-06d8-4f2d-bb57-4324c3385d06 1Gi RWO Delete Bound demo/data-redis-cluster-shard0-0 standard 2m39s +pvc-be02792b-b033-407b-a376-9b34001c561f 1Gi RWO Delete Bound demo/data-redis-cluster-shard2-1 standard 2m17s +pvc-fb431bb5-036d-4bd8-a89d-4b2477136c1c 1Gi RWO Delete Bound demo/data-redis-cluster-shard0-1 standard 2m30s + +$ kubectl get svc -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +redis-cluster ClusterIP 10.96.115.92 6379/TCP 3m4s +redis-cluster-pods ClusterIP None 6379/TCP 3m4s +``` + +KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. 
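To read the phase as a single value, for example in a script, a JSONPath query works as well (a hedged sketch; `.status.phase` is the field shown in the object below):

```bash
$ kubectl get rd -n demo redis-cluster -o jsonpath='{.status.phase}'
Ready
```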
Run the following command to see the modified `Redis` object: + +```bash +$ kubectl get rd -n demo redis-cluster -o yaml +``` +``` yaml +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"Redis","metadata":{"annotations":{},"name":"redis-cluster","namespace":"demo"},"spec":{"cluster":{"master":3,"replicas":1},"mode":"Cluster","storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"Halt","version":"6.2.14"}} + creationTimestamp: "2023-02-02T11:16:57Z" + finalizers: + - kubedb.com + generation: 2 + name: redis-cluster + namespace: demo + resourceVersion: "493812" + uid: d3809d4b-b244-40a5-9570-77141cb1864b +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: redis-cluster-auth + autoOps: {} + cluster: + master: 3 + replicas: 1 + coordinator: + resources: {} + healthChecker: + failureThreshold: 1 + periodSeconds: 10 + timeoutSeconds: 10 + mode: Cluster + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: redis-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + redis.kubedb.com/shard: ${SHARD_INDEX} + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: redis-cluster + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + redis.kubedb.com/shard: ${SHARD_INDEX} + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: redis-cluster + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: Halt + version: 6.2.14 +status: + conditions: + - lastTransitionTime: "2023-02-02T11:16:57Z" + message: 'The KubeDB operator has started the provisioning of Redis: demo/redis-cluster' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2023-02-02T11:17:31Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2023-02-02T11:17:44Z" + message: 'The Redis: demo/redis-cluster is accepting rdClient requests.' + observedGeneration: 2 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2023-02-02T11:17:54Z" + message: 'The Redis: demo/redis-cluster is ready.' + observedGeneration: 2 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2023-02-02T11:18:14Z" + message: 'The Redis: demo/redis-cluster is successfully provisioned.' 
    observedGeneration: 2
    reason: DatabaseSuccessfullyProvisioned
    status: "True"
    type: Provisioned
  observedGeneration: 2
  phase: Ready
```

## Connection Information

- Hostname/address: you can use any of these
  - Service: `redis-cluster.demo`
  - Pod IP: (`$ kubectl get pod -n demo -l app.kubernetes.io/name=redises.kubedb.com -o yaml | grep podIP`)
- Port: `6379`
- Username: Run the following command to get the _username_,

  ```bash
  $ kubectl get secrets -n demo redis-cluster-auth -o jsonpath='{.data.\username}' | base64 -d
  default
  ```

- Password: Run the following command to get the _password_,

  ```bash
  $ kubectl get secrets -n demo redis-cluster-auth -o jsonpath='{.data.\password}' | base64 -d
  AO8iK)s);o5kQVFs
  ```

Now, you can connect to this database through the service using these credentials.

## Check Cluster Scenario

The operator creates a cluster according to the newly created `Redis` object. This cluster has 3 masters and one replica per master. Every node in the cluster is responsible for a subset of the total **16384** hash slots.

```bash
# first, list the redis pods
$ kubectl get pods --all-namespaces -o jsonpath='{range.items[*]}{.metadata.name} ---------- {.status.podIP}:6379{"\n"}{end}' | grep redis
redis-cluster-shard0-0 ---------- 10.244.0.140:6379
redis-cluster-shard0-1 ---------- 10.244.0.145:6379
redis-cluster-shard1-0 ---------- 10.244.0.144:6379
redis-cluster-shard1-1 ---------- 10.244.0.149:6379
redis-cluster-shard2-0 ---------- 10.244.0.146:6379
redis-cluster-shard2-1 ---------- 10.244.0.150:6379

# enter into any pod's container named redis
$ kubectl exec -it redis-cluster-shard0-0 -n demo -c redis -- bash
/data #

# now inside this container, see which ones are the masters
# and which ones are the replicas
/data # redis-cli -c cluster nodes
d3d7d5924fa4aa7347acb2d4c86f7cd5d18a2950 10.244.0.145:6379@16379 slave f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 0 1675337399550 1 connected
3b4048d43fa982dd246703c899602f5c2472a995 10.244.0.149:6379@16379 slave b49398da2eefac62a3b668a60f36bf4ccc3ccf4f 0 1675337400854 2 connected
b49398da2eefac62a3b668a60f36bf4ccc3ccf4f 10.244.0.144:6379@16379 master - 0 1675337400352 2 connected 5461-10922
31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 10.244.0.146:6379@16379 master - 0 1675337399000 3 connected 10923-16383
6acca34b192445b888649a839bb7537d2cbb1cf4 10.244.0.150:6379@16379 slave 31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 0 1675337400553 3 connected
f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 10.244.0.140:6379@16379 myself,master - 0 1675337398000 1 connected 0-5460
```

Each master has been assigned a subset of the slots from 0 to 16383, and each master has one replica following it.

## Data Availability

Now, you can connect to this database through [redis-cli](https://redis.io/topics/rediscli). In this tutorial, we will insert data, and we will see whether we can get the data from any other node (any master or replica) or not.

> Read the comments written for the following commands. They contain the instructions and explanations of the commands.
+ +```bash +# here the hash slot for key 'hello' is 866 which is in 1st node +# named 'redis-cluster-shard0-0' (0-5460) +$ kubectl exec -it redis-cluster-shard0-0 -n demo -c redis -- redis-cli -c cluster keyslot hello +(integer) 866 + +# connect to any node +$ kubectl exec -it redis-cluster-shard0-0 -n demo -c redis -- bash +/data # + +# now ensure that you are connected to the 1st pod +/data # redis-cli -c -h 10.244.0.140 +10.244.0.140:6379> + +# set 'world' as value for the key 'hello' +10.244.0.140:6379> set hello world +OK +10.244.0.140:6379> exit + +# switch the connection to the replica of the current master and get the data +/data # redis-cli -c -h 10.244.0.145 +10.244.0.145:6379> get hello +-> Redirected to slot [866] located at 10.244.0.140:6379 +"world" +10.244.0.145:6379> exit + +# switch the connection to any other node +# get the data +/data # redis-cli -c -h 10.244.0.146 +10.244.0.146:6379> get hello +-> Redirected to slot [866] located at 10.244.0.140:6379 +"world" +10.244.0.146:6379> exit +``` + +## Automatic Failover + +To test automatic failover, we will force a master node to sleep for a period. Since the master node (`pod`) becomes unavailable, the rest of the members will elect a replica (one of its replica in case of more than one replica under this master) of this master node as the new master. When the old master comes back, it will join the cluster as the new replica of the new master. + +> Read the comment written for the following commands. They contain the instructions and explanations of the commands. + +```bash +# connect to any node and get the master nodes info +$ kubectl exec -it redis-cluster-shard0-0 -n demo -c redis -- bash +/data # redis-cli -c cluster nodes | grep master +b49398da2eefac62a3b668a60f36bf4ccc3ccf4f 10.244.0.144:6379@16379 master - 0 1675338070000 2 connected 5461-10922 +31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 10.244.0.146:6379@16379 master - 0 1675338070000 3 connected 10923-16383 +f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 10.244.0.140:6379@16379 myself,master - 0 1675338070000 1 connected 0-5460 + +# let's sleep node 10.244.0.144 with the `DEBUG SLEEP` command +/data # redis-cli -h 10.244.0.144 debug sleep 120 +OK + +# now again connect to a node and get the master nodes info +$ kubectl exec -it redis-cluster-shard0-0 -n demo -c redis -- bash +/data # redis-cli -c cluster nodes | grep master +3b4048d43fa982dd246703c899602f5c2472a995 10.244.0.149:6379@16379 master - 0 1675338334000 4 connected 5461-10922 +31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 10.244.0.146:6379@16379 master - 0 1675338335355 3 connected 10923-16383 +f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 10.244.0.140:6379@16379 myself,master - 0 1675338334000 1 connected 0-5460 + + +/data # redis-cli -c cluster nodes +d3d7d5924fa4aa7347acb2d4c86f7cd5d18a2950 10.244.0.145:6379@16379 slave f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 0 1675338355429 1 connected +3b4048d43fa982dd246703c899602f5c2472a995 10.244.0.149:6379@16379 master - 0 1675338355530 4 connected 5461-10922 +b49398da2eefac62a3b668a60f36bf4ccc3ccf4f 10.244.0.144:6379@16379 slave 3b4048d43fa982dd246703c899602f5c2472a995 0 1675338353521 4 connected +31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 10.244.0.146:6379@16379 master - 0 1675338355000 3 connected 10923-16383 +6acca34b192445b888649a839bb7537d2cbb1cf4 10.244.0.150:6379@16379 slave 31d3f90e1bde3835ca7b08ae8b145b230d9b1ba8 0 1675338355000 3 connected +f9af25d8db7bb742346b0130fb1cc749ffcd4d1e 10.244.0.140:6379@16379 myself,master - 0 1675338355000 1 connected 0-5460 + +/data # 
exit
```

Notice that 10.244.0.149 is the new master and 10.244.0.144 has become the replica of 10.244.0.149.

## Cleaning up

First, set the termination policy to `WipeOut` so that everything created by the KubeDB operator for this Redis instance is deleted. Then delete the Redis instance to clean up what you created in this tutorial.

```bash
$ kubectl patch -n demo rd/redis-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
redis.kubedb.com/redis-cluster patched

$ kubectl delete rd redis-cluster -n demo
redis.kubedb.com "redis-cluster" deleted
```

## Next Steps

- Deploy [Redis Sentinel](/docs/v2024.1.31/guides/redis/sentinel/overview).
- Monitor your Redis database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
- Monitor your Redis database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB.
- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
- Detail concepts of [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog).
diff --git a/content/docs/v2024.1.31/guides/redis/concepts/_index.md b/content/docs/v2024.1.31/guides/redis/concepts/_index.md
new file mode 100755
index 0000000000..f2fc4b9c79
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/concepts/_index.md
@@ -0,0 +1,22 @@
---
title: Redis Concepts
menu:
  docs_v2024.1.31:
    identifier: rd-concepts-redis
    name: Concepts
    parent: rd-redis-guides
    weight: 20
menu_name: docs_v2024.1.31
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

diff --git a/content/docs/v2024.1.31/guides/redis/concepts/appbinding.md b/content/docs/v2024.1.31/guides/redis/concepts/appbinding.md
new file mode 100644
index 0000000000..7b84d1022a
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/concepts/appbinding.md
@@ -0,0 +1,194 @@
---
title: AppBinding CRD
menu:
  docs_v2024.1.31:
    identifier: rd-appbinding-concepts
    name: AppBinding
    parent: rd-concepts-redis
    weight: 30
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# AppBinding

## What is AppBinding

An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://blog.byte.builders/post/the-case-for-appbinding).

If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
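For a KubeDB-managed database, you can confirm that the operator created the `AppBinding` by listing the objects in the database's namespace. A minimal sketch, assuming a Redis named `redis1` in the `demo` namespace as in the example below; the printed columns may vary by version:

```bash
# list AppBindings created alongside KubeDB-managed databases
$ kubectl get appbindings -n demo
NAME     TYPE               VERSION   AGE
redis1   kubedb.com/redis   6.2.14    5m
```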
+ +KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. + +An `AppBinding` object created by `KubeDB` for Redis database is shown below, + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"Redis","metadata":{"annotations":{},"name":"redis1","namespace":"demo"},"spec":{"authSecret":{"externallyManaged":false,"name":"redis1-auth"},"autoOps":{"disabled":true},"cluster":{"master":3,"replicas":1},"configSecret":{"name":"rd-custom-config"},"disableAuth":false,"halted":false,"healthChecker":{"disableWriteCheck":false,"failureThreshold":2,"periodSeconds":15,"timeoutSeconds":10},"mode":"Cluster","monitor":{"agent":"prometheus.io/operator","prometheus":{"serviceMonitor":{"interval":"10s","labels":{"app":"kubedb"}}}},"podTemplate":{"controller":{"annotations":{"passMe":"ToStatefulSet"}},"metadata":{"annotations":{"passMe":"ToDatabasePod"}},"spec":{"args":["--loglevel verbose"],"env":[{"name":"ENV_VARIABLE","value":"value"}],"imagePullSecrets":[{"name":"regcred"}],"resources":{"limits":{"cpu":"500m","memory":"128Mi"},"requests":{"cpu":"250m","memory":"64Mi"}},"serviceAccountName":"my-service-account"}},"serviceTemplates":[{"alias":"primary","metadata":{"annotations":{"passMe":"ToService"}},"spec":{"ports":[{"name":"http","port":9200}],"type":"NodePort"}}],"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"terminationPolicy":"Halt","tls":{"certificates":[{"alias":"client","emailAddresses":["abc@appscode.com"],"subject":{"organizations":["kubedb"]}},{"alias":"server","emailAddresses":["abc@appscode.com"],"subject":{"organizations":["kubedb"]}}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"redis-ca-issuer"}},"version":"6.2.14"}} + creationTimestamp: "2023-02-01T05:27:19Z" + generation: 1 + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: redis1 + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + name: redis1 + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Redis + name: redis1 + uid: a01272d3-97b6-4e8c-912f-67eff07e3811 + resourceVersion: "398775" + uid: 336988b4-5805-48ac-9d06-e3375fa4c435 +spec: + appRef: + apiGroup: kubedb.com + kind: Redis + name: redis1 + namespace: demo + clientConfig: + service: + name: redis1 + port: 6379 + scheme: rediss + parameters: + apiVersion: config.kubedb.com/v1alpha1 + clientCertSecret: + name: redis1-client-cert + kind: RedisConfiguration + stash: + addon: + backupTask: + name: redis-backup-6.2.5 + restoreTask: + name: redis-restore-6.2.5 + secret: + name: redis1-auth + tlsSecret: + name: redis1-client-cert + type: kubedb.com/redis + version: 6.2.14 + +``` + +Here, we are going to describe the sections of an `AppBinding` crd. 

### AppBinding `Spec`

An `AppBinding` object has the following fields in the `spec` section:

#### spec.type

`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of the [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.

This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to a `redis` resource under the `kubedb.com` group.

Here, the variables are parsed as follows:

| Variable              | Usage                                                                                                                          |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e: `kubedb.com`).                                        |
| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `redis`).                          |
| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/redis`). |

#### spec.secret

`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.

This secret must contain the following keys:

Redis:

| Key        | Usage                                           |
| ---------- | ----------------------------------------------- |
| `username` | Username of the target database.                |
| `password` | Password for the user specified by `username`.  |

PostgreSQL:

| Key                 | Usage                                                |
| ------------------- | ---------------------------------------------------- |
| `POSTGRES_USER`     | Username of the target database.                     |
| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |

MySQL:

| Key        | Usage                                           |
| ---------- | ----------------------------------------------- |
| `username` | Username of the target database.                |
| `password` | Password for the user specified by `username`.  |

MongoDB:

| Key        | Usage                                           |
| ---------- | ----------------------------------------------- |
| `username` | Username of the target database.                |
| `password` | Password for the user specified by `username`.  |

Elasticsearch:

| Key              | Usage                   |
| ---------------- | ----------------------- |
| `ADMIN_USERNAME` | Admin username          |
| `ADMIN_PASSWORD` | Password for admin user |

#### spec.appRef
`spec.appRef` refers to the underlying application. It has 4 fields: `apiGroup`, `kind`, `name` & `namespace`.

#### spec.clientConfig

`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.

You can configure the following fields in the `spec.clientConfig` section:

- **spec.clientConfig.url**

  `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.

> Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+ +- **spec.clientConfig.service** + + If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object. + + - **name :** `name` indicates the name of the service that connects with the target database. + - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database. + - **port :** `port` specifies the port where the target database is running. + +- **spec.clientConfig.insecureSkipTLSVerify** + + `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead. + +- **spec.clientConfig.caBundle** + + `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database. + +## Next Steps + +- Learn how to use KubeDB to manage various databases [here](/docs/v2024.1.31/guides/README). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/redis/concepts/autoscaler.md b/content/docs/v2024.1.31/guides/redis/concepts/autoscaler.md new file mode 100644 index 0000000000..7f825cd562 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/concepts/autoscaler.md @@ -0,0 +1,155 @@ +--- +title: RedisAutoscaler CRD +menu: + docs_v2024.1.31: + identifier: rd-autoscaler-concepts + name: RedisAutoscaler + parent: rd-concepts-redis + weight: 40 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# RedisAutoscaler + +## What is RedisAutoscaler + +`RedisAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [Redis](https://www.redis.io/) compute resources and storage of database components in a Kubernetes native way. + +## RedisAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `RedisAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. 

Here is a sample `RedisAutoscaler` CRD for autoscaling different components of a database:

**Sample `RedisAutoscaler` for standalone database:**

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: RedisAutoscaler
metadata:
  name: standalone-autoscaler
  namespace: demo
spec:
  databaseRef:
    name: redis-standalone
  opsRequestOptions:
    apply: IfReady
    timeout: 5m
  compute:
    standalone:
      trigger: "On"
      podLifeTimeThreshold: 5m
      minAllowed:
        cpu: 600m
        memory: 600Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
  storage:
    standalone:
      trigger: "On"
      usageThreshold: 25
      scalingThreshold: 20
```

Here is a sample `RedisSentinelAutoscaler` CRD for autoscaling different components of a database:

**Sample `RedisSentinelAutoscaler` for standalone database:**

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: RedisSentinelAutoscaler
metadata:
  name: sentinel-autoscaler
  namespace: demo
spec:
  databaseRef:
    name: sentinel
  opsRequestOptions:
    apply: IfReady
    timeout: 5m
  compute:
    sentinel:
      trigger: "On"
      podLifeTimeThreshold: 5m
      minAllowed:
        cpu: 600m
        memory: 600Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
      resourceDiffPercentage: 10
```

Here, we are going to describe the various sections of the `RedisAutoscaler` and `RedisSentinelAutoscaler` crds.

A `RedisAutoscaler` object has the following fields in the `spec` section.

### spec.databaseRef

`spec.databaseRef` is a required field that points to the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) object for which the autoscaling will be performed. This field consists of the following sub-field:

- **spec.databaseRef.name :** specifies the name of the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) object.

### spec.opsRequestOptions
These are the options to pass to the internally created opsRequest CRD. `opsRequestOptions` has three fields. They have been described in detail [here](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest#specreadinesscriteria).

### spec.compute

`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-fields:

- `spec.compute.standalone` indicates the desired compute autoscaling configuration for standalone mode in a Redis database.
- `spec.compute.cluster` indicates the desired compute autoscaling configuration for cluster mode in a Redis database.
- `spec.compute.sentinel` indicates the desired compute autoscaling configuration for sentinel mode in a Redis database.

`RedisSentinelAutoscaler` has only the `spec.compute.sentinel` field.

All of them have the following sub-fields:

- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
- `minAllowed` specifies the minimal amount of resources that will be recommended; the default is no minimum.
- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum.
- `controlledResources` specifies which types of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value, in percent. If the difference percentage is greater than this value, then autoscaling will be triggered.
- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
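When a compute trigger fires, the autoscaler acts by creating an ops request using the options from `opsRequestOptions`. A hedged sketch of how this can be observed; the generated name and output shown here are illustrative:

```bash
# ops requests generated by the autoscaler show up next to manually created ones
$ kubectl get redisopsrequest -n demo
NAME                          TYPE              STATUS       AGE
rdops-redis-standalone-xyz1   VerticalScaling   Successful   2m
```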
### spec.storage

`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:

- `spec.storage.standalone` indicates the desired storage autoscaling configuration for standalone mode in a Redis database.
- `spec.storage.cluster` indicates the desired storage autoscaling configuration for cluster mode in a Redis database.
- `spec.storage.sentinel` indicates the desired storage autoscaling configuration for sentinel mode in a Redis database.

`RedisSentinelAutoscaler` does not have a `spec.storage` section.

All of them have the following sub-fields:

- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, then storage autoscaling will be triggered.
- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
- `expansionMode` indicates the volume expansion mode.

## Next Steps

- Learn about Redis crd [here](/docs/v2024.1.31/guides/redis/concepts/redis).
- Deploy your first Redis database with KubeDB by following the guide [here](/docs/v2024.1.31/guides/redis/quickstart/quickstart).
diff --git a/content/docs/v2024.1.31/guides/redis/concepts/catalog.md b/content/docs/v2024.1.31/guides/redis/concepts/catalog.md
new file mode 100644
index 0000000000..c70ffa4f66
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/concepts/catalog.md
@@ -0,0 +1,131 @@
---
title: RedisVersion CRD
menu:
  docs_v2024.1.31:
    identifier: rd-catalog-concepts
    name: RedisVersion
    parent: rd-concepts-redis
    weight: 20
menu_name: docs_v2024.1.31
section_menu_id: guides
info:
  autoscaler: v0.26.0
  cli: v0.41.0
  dashboard: v0.17.0
  installer: v2024.1.31
  ops-manager: v0.28.0
  provisioner: v0.41.0
  schema-manager: v0.17.0
  ui-server: v0.17.0
  version: v2024.1.31
  webhook-server: v0.17.0
---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# RedisVersion

## What is RedisVersion

`RedisVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for a [Redis](https://redis.io/) database deployed with KubeDB in a Kubernetes native way.

When you install KubeDB, a `RedisVersion` custom resource will be created automatically for every supported Redis version. You have to specify the name of the `RedisVersion` crd in the `spec.version` field of the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) crd. Then, KubeDB will use the docker images specified in the `RedisVersion` crd to create your expected database.

Using a separate crd for specifying respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This will also allow the users to use a custom image for the database.
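You can list the `RedisVersion` objects installed in your cluster to see the valid values for `spec.version`. A small sketch; the exact rows and columns depend on the installed catalog and operator version:

```bash
# list the RedisVersion catalog entries created at install time
$ kubectl get redisversion
NAME     VERSION   DB_IMAGE       DEPRECATED   AGE
6.2.14   6.2.14    redis:6.2.14                2d
7.0.14   7.0.14    redis:7.0.14                2d
```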

## RedisVersion Specification

As with all other Kubernetes objects, a RedisVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: RedisVersion
metadata:
  annotations:
    meta.helm.sh/release-name: kubedb
    meta.helm.sh/release-namespace: kubedb
  labels:
    app.kubernetes.io/instance: kubedb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubedb-catalog
    app.kubernetes.io/version: v2023.01.17
    helm.sh/chart: kubedb-catalog-v2023.01.17
  name: 6.2.14
spec:
  coordinator:
    image: kubedb/redis-coordinator:v0.9.1
  db:
    image: redis:6.2.14
  exporter:
    image: kubedb/redis_exporter:1.9.0
  initContainer:
    image: kubedb/redis-init:0.7.0
  podSecurityPolicies:
    databasePolicyName: redis-db
  stash:
    addon:
      backupTask:
        name: redis-backup-6.2.5
      restoreTask:
        name: redis-restore-6.2.5
  version: 6.2.14
```

### metadata.name

`metadata.name` is a required field that specifies the name of the `RedisVersion` crd. You have to specify this name in the `spec.version` field of the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) crd.

We follow this convention for naming RedisVersion crds:

- Name format: `{Original Redis image version}-{modification tag}`

We modify the original Redis docker image to support Redis clustering and re-tag the image with v1, v2 etc. modification tags. An image with a higher modification tag will have more features than the images with lower modification tags. Hence, it is recommended to use the RedisVersion crd with the highest modification tag to enjoy the latest features.

### spec.version

`spec.version` is a required field that specifies the original version of the Redis server that has been used to build the docker image specified in the `spec.db.image` field.

### spec.deprecated

`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.

The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.

### spec.db.image

`spec.db.image` is a required field that specifies the docker image which will be used by the StatefulSet created by the KubeDB operator to run the expected Redis server.

### spec.initContainer.image

`spec.initContainer.image` is a required field that specifies the image for the init container.

### spec.exporter.image

`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.

### spec.stash

This holds the Backup & Restore task definitions, where a `TaskRef` has a `Name` & `Params` section. Params specifies a list of parameters to pass to the task.
To learn more, visit the [stash documentation](https://stash.run/).

### spec.updateConstraints
updateConstraints specifies the constraints that need to be considered during a version update. Here `allowList` contains the versions that are allowed for updating from the current version.
An empty `allowList` indicates that all versions are accepted except those in the `denyList`.
On the other hand, `denyList` contains all the rejected versions for the update request. An empty list indicates no version is rejected.
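Before requesting an update, you can inspect the constraints recorded on a version directly from the catalog object. A hedged example; the output is empty unless constraints are set on that version:

```bash
# print the update constraints of a specific RedisVersion, if any
$ kubectl get redisversion 6.2.14 -o jsonpath='{.spec.updateConstraints}'
```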
+ +### spec.podSecurityPolicies.databasePolicyName + +`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. To use a user-defined policy, the name of the policy has to be set in `spec.podSecurityPolicies` and in the list of allowed policy names in KubeDB operator like below: + +```bash +helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \ + --namespace kubedb --create-namespace \ + --set additionalPodSecurityPolicies[0]=custom-db-policy \ + --set-file global.license=/path/to/the/license.txt \ + --wait --burst-limit=10000 --debug +``` + +## Next Steps + +- Learn about Redis crd [here](/docs/v2024.1.31/guides/redis/concepts/redis). +- Deploy your first Redis server with KubeDB by following the guide [here](/docs/v2024.1.31/guides/redis/quickstart/quickstart). diff --git a/content/docs/v2024.1.31/guides/redis/concepts/redis.md b/content/docs/v2024.1.31/guides/redis/concepts/redis.md new file mode 100644 index 0000000000..6f48699f38 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/concepts/redis.md @@ -0,0 +1,413 @@ +--- +title: Redis CRD +menu: + docs_v2024.1.31: + identifier: rd-redis-concepts + name: Redis + parent: rd-concepts-redis + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Redis + +## What is Redis + +`Redis` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Redis](https://redis.io/) in a Kubernetes native way. You only need to describe the desired database configuration in a Redis object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## Redis Spec + +As with all other Kubernetes objects, a Redis needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Redis object. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + name: redis1 + namespace: demo +spec: + autoOps: + disabled: true + version: 6.2.14 + mode: Cluster + cluster: + master: 3 + replicas: 1 + disableAuth: false + authSecret: + name: redis1-auth + externallyManaged: false + tls: + issuerRef: + name: redis-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - kubedb + emailAddresses: + - abc@appscode.com + - alias: server + subject: + organizations: + - kubedb + emailAddresses: + - abc@appscode.com + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + app: kubedb + interval: 10s + configSecret: + name: rd-custom-config + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + controller: + annotations: + passMe: ToStatefulSet + spec: + serviceAccountName: my-service-account + schedulerName: my-scheduler + nodeSelector: + disktype: ssd + imagePullSecrets: + - name: myregistrykey + args: + - "--loglevel verbose" + env: + - name: ENV_VARIABLE + value: "value" + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + serviceTemplates: + - alias: primary + metadata: + annotations: + passMe: ToService + spec: + type: NodePort + ports: + - name: http + port: 9200 + terminationPolicy: Halt + halted: false + healthChecker: + periodSeconds: 15 + timeoutSeconds: 10 + failureThreshold: 2 + disableWriteCheck: false +``` + +### spec.autoOps +AutoOps is an optional field to control the generation of version update & TLS-related recommendations. + +### spec.version + +`spec.version` is a required field specifying the name of the [RedisVersion](/docs/v2024.1.31/guides/redis/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `RedisVersion` crds, + +- `4.0.6-v2`, `4.0.11`, `6.2.14`, `5.0.14` +- `6.0.20`, `6.2.14`, `6.2.14` `6.2.14` +- `7.0.4`, `7.0.14`, `7.0.6` + +### spec.mode + +`spec.mode` specifies the mode in which Redis server instance(s) will be deployed. The possible values are either `"Standalone"`, `"Cluster"` and `"Sentinel""`. The default value is `"Standalone"`. + +- ***Standalone***: In this mode, the operator to starts a standalone Redis server. + +- ***Cluster***: In this mode, the operator will deploy Redis cluster. + +- ***Sentinel***: In this mode, the operator will deploy a Redis Sentinel Cluster. The `RedisSentinel` instances need exist before deploying Redis in Sentinel mode. + +When `spec.mode` is set to `Sentinel`, `spec.sentinelRef.name` and `spec.sentinelRef.namespace` fields needs to be set to give reference to Sentinel instance + + +### spec.cluster + +If `spec.mode` is set to `"Cluster"`, users can optionally provide a cluster specification. Currently, the following two parameters can be configured: + +- `spec.cluster.master`: specifies the number of Redis master nodes. It must be greater or equal to 3. If not set, the operator set it to 3. +- `spec.cluster.replicas`: specifies the number of replica nodes per master. It must be greater than 0. If not set, the operator set it to 1. 

KubeDB uses `PodDisruptionBudget` to ensure that a majority of these cluster replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained and no data loss occurs.

If `spec.mode` is set to `"Cluster"`, then the `spec.replicas` field is ignored.

### spec.sentinelRef
The `spec.sentinelRef` field is only used when `spec.mode` is `Sentinel`. It refers to an already created RedisSentinel instance that will monitor the Redis instance.
This field has the following subfields:

- `spec.sentinelRef.name`: specifies the name of the RedisSentinel instance.
- `spec.sentinelRef.namespace`: specifies the namespace of the RedisSentinel instance.

### spec.disableAuth

`spec.disableAuth` is an optional field that decides whether the Redis instance will be secured by auth or not.

### spec.authSecret

`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `redis` superuser. If not set, the KubeDB operator creates a new Secret `{redis-object-name}-auth` for storing the password for the `redis` superuser.

We can use this field in 3 modes.
1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the Redis object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true.
```yaml
authSecret:
  name: <secret-name>
  externallyManaged: true
```

2. Specifying the secret name only. In this case, you need to specify the secret name when creating the Redis object using `spec.authSecret.name`. `externallyManaged` is false by default.
```yaml
authSecret:
  name: <secret-name>
```

3. Let KubeDB do everything for you. In this case, no work is needed from you.

The auth secret contains a `username` key and a `password` key which contain the username and password, respectively, for the `redis` superuser.

Example:

```bash
$ kubectl create secret generic redis1-auth -n demo \
--from-literal=username=jhon-doe \
--from-literal=password=6q8u_2jMOW-OOZXk
secret "redis1-auth" created
```

```yaml
apiVersion: v1
data:
  password: NnE4dV8yak1PVy1PT1pYaw==
  username: amhvbi1kb2U=
kind: Secret
metadata:
  name: redis1-auth
  namespace: demo
type: Opaque
```

Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).

### spec.tls

`spec.tls` specifies the TLS/SSL configurations for Redis. KubeDB uses the [cert-manager](https://cert-manager.io/) v1 api to provision and manage TLS certificates.

The following fields are configurable in the `spec.tls` section:

- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.

  - `apiGroup` is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
  - `kind` is the type of resource that is being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.

- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
  - `alias` represents the identifier of the certificate.
It has the following possible value: + - `server` is used for server certificate identification. + - `client` is used for client certificate identification. + - `metrics-exporter` is used for metrics exporter certificate identification. + - `secretName` (optional) specifies the k8s secret name that holds the certificates. +> This field is optional. If the user does not specify this field, the default secret name will be created in the following format: `--cert`. + + - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible field, + - `organizations` (optional) are the list of different organization names to be used on the Certificate. + - `organizationalUnits` (optional) are the list of different organization unit name to be used on the Certificate. + - `countries` (optional) are the list of country names to be used on the Certificate. + - `localities` (optional) are the list of locality names to be used on the Certificate. + - `provinces` (optional) are the list of province names to be used on the Certificate. + - `streetAddresses` (optional) are the list of a street address to be used on the Certificate. + - `postalCodes` (optional) are the list of postal code to be used on the Certificate. + - `serialNumber` (optional) is a serial number to be used on the Certificate. + You can find more details from [Here](https://golang.org/pkg/crypto/x509/pkix/#Name) + - `duration` (optional) is the period during which the certificate is valid. + - `renewBefore` (optional) is a specifiable time before expiration duration. + - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate. + - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate. + - `uris` (optional) is a list of URI Subject Alternative Names to be set in the Certificate. + - `emailAddresses` (optional) is a list of email Subject Alternative Names to be set in the Certificate. + - `privateKey` (optional) specifies options to control private keys used for the Certificate. + - `encoding` (optional) is the private key cryptography standards (PKCS) encoding for this certificate's private key to be encoded in. If provided, allowed values are "pkcs1" and "pkcs8" standing for PKCS#1 and PKCS#8, respectively. It defaults to PKCS#1 if not specified. + + +### spec.storage + +Since 0.10.0-rc.0, If you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. + +- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on. +- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes. +- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs. 
+ +To learn how to configure `spec.storage`, please visit the links below: + +- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims + +### spec.monitor + +Redis managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more, + +- [Monitor Redis with builtin Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) +- [Monitor Redis with Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator) + +### spec.configSecret + +`spec.configSecret` is an optional field that allows users to provide custom configuration for Redis. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). So you can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk` etc. To learn more about how to use a custom configuration file see [here](/docs/v2024.1.31/guides/redis/configuration/using-config-file). + +### spec.podTemplate + +KubeDB allows providing a template for database pod through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for Redis server. + +KubeDB accept following fields to set in `spec.podTemplate:` + +- metadata: + - annotations (pod's annotation) +- controller: + - annotations (statefulset's annotation) +- spec: + - args + - env + - resources + - initContainers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle + +You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). +Uses of some field of `spec.podTemplate` is described below, + +#### spec.podTemplate.spec.args + `spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to database installation. + +### spec.podTemplate.spec.env + +`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Redis docker image. + + +#### spec.podTemplate.spec.imagePullSecret + +`KubeDB` provides the flexibility of deploying Redis server from a private Docker registry. To learn how to deploy Redis from a private registry, please visit [here](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry). + +#### spec.podTemplate.spec.nodeSelector + +`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) . + +#### spec.podTemplate.spec.serviceAccountName + + `serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine tune role based access control. + + If this field is left empty, the KubeDB operator will create a service account name matching Redis crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. 
+ + If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + + If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/redis/custom-rbac/using-custom-rbac) to grant necessary permissions in this scenario. + +#### spec.podTemplate.spec.resources + +`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/). + +### spec.serviceTemplates + +You can also provide a template for the services created by KubeDB operator for Redis server through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services. + +KubeDB allows following fields to set in `spec.serviceTemplates`: +- `alias` represents the identifier of the service. It has the following possible value: + - `primary` is used for the primary service identification. + - `standby` is used for the secondary service identification. + - `stats` is used for the exporter service identification. + +- metadata: + - annotations +- spec: + - type + - ports + - clusterIP + - externalIPs + - loadBalancerIP + - loadBalancerSourceRanges + - externalTrafficPolicy + - healthCheckNodePort + - sessionAffinityConfig + +See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail. + +### spec.terminationPolicy + +`terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of `Redis` crd or which resources KubeDB should keep or delete when you delete `Redis` crd. KubeDB provides following four termination policies: + +- DoNotTerminate +- Halt +- Delete (`Default`) +- WipeOut + +When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement `DoNotTerminate` feature. If admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. + +Following table show what KubeDB does when you delete Redis crd for different termination policies, + +| Behavior | DoNotTerminate | Halt | Delete | WipeOut | +| ----------------------------------- | :------------: | :------: | :------: | :------: | +| 1. Block Delete operation | ✓ | ✗ | ✗ | ✗ | +| 2. Delete StatefulSet | ✗ | ✓ | ✓ | ✓ | +| 3. Delete Services | ✗ | ✓ | ✓ | ✓ | +| 4. Delete PVCs | ✗ | ✗ | ✓ | ✓ | +| 5. Delete Secrets | ✗ | ✗ | ✗ | ✓ | +| 6. Delete Snapshots | ✗ | ✗ | ✗ | ✓ | +| 7. Delete Snapshot data from bucket | ✗ | ✗ | ✗ | ✓ | +If you don't specify `spec.terminationPolicy` KubeDB uses `Delete` termination policy by default. + +### spec.halted +Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted. + +## spec.healthChecker +It defines the attributes for the health checker. +- `spec.healthChecker.periodSeconds` specifies how often to perform the health check. 
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out. +- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed. +- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not. + +Know details about KubeDB Health checking from this [blog post](https://blog.byte.builders/post/kubedb-health-checker/). + +## Next Steps + +- Learn how to use KubeDB to run a Redis server [here](/docs/v2024.1.31/guides/redis/README). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/redis/concepts/redisopsrequest.md b/content/docs/v2024.1.31/guides/redis/concepts/redisopsrequest.md new file mode 100644 index 0000000000..3d47b8e3f4 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/concepts/redisopsrequest.md @@ -0,0 +1,295 @@ +--- +title: OpsRequests CRD +menu: + docs_v2024.1.31: + identifier: guides-redis-concepts-opsrequest + name: OpsRequest + parent: rd-concepts-redis + weight: 25 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# RedisOpsRequest + +## What is RedisOpsRequest + +`RedisOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Redis](https://www.redis.io/) administrative operations like database version updating, horizontal scaling, vertical scaling, etc. in a Kubernetes native way. + +## RedisOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `RedisOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `RedisOpsRequest` CRs for different administrative operations is given below, + +Sample `RedisOpsRequest` for updating database: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: update-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: standalone-redis + updateVersion: + targetVersion: 7.0.14 +``` + +Sample `RedisOpsRequest` for horizontal scaling: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: up-horizontal-redis-ops + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: redis-cluster + horizontalScaling: + master: 5 + replicas: 2 +``` + + +## What is RedisSentinelOpsRequest + +`RedisSentinelOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Redis](https://www.redis.io/) administrative operations like database version updating, horizontal scaling, vertical scaling, reconfiguring TLS etc. in a Kubernetes native way. +The spec in `RedisOpsRequest` and `RedisSentinelOpsRequest` similar which will be described below. 
+
+Sample `RedisSentinelOpsRequest` for vertical scaling:
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisSentinelOpsRequest
+metadata:
+  name: redisops-vertical
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: sentinel-tls
+  verticalScaling:
+    redissentinel:
+      resources:
+        requests:
+          memory: "300Mi"
+          cpu: "200m"
+        limits:
+          memory: "800Mi"
+          cpu: "500m"
+```
+
+Here, we are going to describe the various sections of the `RedisOpsRequest` and `RedisSentinelOpsRequest` CRs.
+
+### RedisOpsRequest `Spec`
+
+A `RedisOpsRequest` object has the following fields in the `spec` section.
+
+#### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) object where the administrative operations will be applied. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) object.
+
+#### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `RedisOpsRequest`.
+
+- `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Restart`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `ReplaceSentinel` (Only in Sentinel Mode)
+
+`Reconfigure` and `ReplaceSentinel` ops requests cannot be performed through a `RedisSentinelOpsRequest`.
+
+>You can perform only one type of operation on a single `RedisOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `RedisOpsRequest` CRs. First, create a `RedisOpsRequest` for updating. Once it is completed, you can create another `RedisOpsRequest` for scaling. You should not create two `RedisOpsRequest` CRs simultaneously.
+
+#### spec.updateVersion
+
+If you want to update your Redis version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [RedisVersion](/docs/v2024.1.31/guides/redis/concepts/catalog) CR that contains the Redis version information to which you want to update.
+
+>You can only update between Redis versions. KubeDB does not support downgrades for Redis.
+
+#### spec.horizontalScaling
+
+If you want to scale up or scale down your Redis cluster, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-fields:
+
+- `spec.horizontalScaling.replicas` indicates the desired number of replicas for your Redis instance after scaling. For example, if your cluster currently has 4 replicas and you want to add 2 more replicas, then you have to specify 6 in the `spec.horizontalScaling.replicas` field. Similarly, if you want to remove one replica, you have to specify 3 in the `spec.horizontalScaling.replicas` field.
+- `spec.horizontalScaling.master` indicates the desired number of masters for your Redis cluster. It is only applicable for Cluster Mode.
+
+#### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `Redis` resources like `cpu` and `memory` that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.redis` indicates the `Redis` server resources.
It has the below structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request for the `Redis` container, the scheduler uses this information to decide which node to place the Pod's container on, and when you specify a resource limit for the `Redis` container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+- `spec.verticalScaling.exporter` indicates the `exporter` container resources. It has the same structure as `spec.verticalScaling.redis`, and you can scale its resources the same way as for the `redis` container.
+
+>You can increase/decrease resources for both the `redis` container and the `exporter` container on a single `RedisOpsRequest` CR.
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field lets users specify a timeout for those steps of the ops request (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+### spec.apply
+This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
+Use `IfReady` if you want to process the opsRequest only when the database is Ready. And use `Always` if you want to process the opsRequest irrespective of the database state.
+
+
+### RedisOpsRequest `Status`
+
+After creating the ops request, a `status` section is added to the RedisOpsRequest CR. The YAML looks like the following:
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"RedisOpsRequest","metadata":{"annotations":{},"name":"redisops-vertical","namespace":"demo"},"spec":{"databaseRef":{"name":"standalone-redis"},"type":"VerticalScaling","verticalScaling":{"redis":{"limits":{"cpu":"500m","memory":"800Mi"},"requests":{"cpu":"200m","memory":"300Mi"}}}}}
+  creationTimestamp: "2023-02-02T09:14:01Z"
+  generation: 1
+  name: redisops-vertical
+  namespace: demo
+  resourceVersion: "483411"
+  uid: 12c45d9c-daea-472d-be61-b88505cb755d
+spec:
+  apply: IfReady
+  databaseRef:
+    name: standalone-redis
+  type: VerticalScaling
+  verticalScaling:
+    redis:
+      resources:
+        limits:
+          cpu: 500m
+          memory: 800Mi
+        requests:
+          cpu: 200m
+          memory: 300Mi
+status:
+  conditions:
+  - lastTransitionTime: "2023-02-02T09:14:01Z"
+    message: Redis ops request is vertically scaling database
+    observedGeneration: 1
+    reason: VerticalScaling
+    status: "True"
+    type: VerticalScaling
+  - lastTransitionTime: "2023-02-02T09:14:01Z"
+    message: Successfully updated StatefulSets Resources
+    observedGeneration: 1
+    reason: UpdateStatefulSetResources
+    status: "True"
+    type: UpdateStatefulSetResources
+  - lastTransitionTime: "2023-02-02T09:14:12Z"
+    message: Successfully Restarted Pods With Resources
+    observedGeneration: 1
+    reason: RestartedPodsWithResources
+    status: "True"
+    type: RestartedPodsWithResources
+  - lastTransitionTime: "2023-02-02T09:14:12Z"
+    message: Successfully Vertically Scaled Database
+    observedGeneration: 1
+    reason: Successful
+    status: "True"
+    type: Successful
+  observedGeneration: 1
+  phase: Successful
+```
+
+`.status` describes the current state of the `RedisOpsRequest` operation.
It has the following fields:
+
+#### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `RedisOpsRequest`. It can have the following three values:
+
+| Phase      | Meaning                                                                           |
+| ---------- | --------------------------------------------------------------------------------- |
+| Successful | KubeDB has successfully performed the operation requested in the RedisOpsRequest |
+| Failed     | KubeDB has failed the operation requested in the RedisOpsRequest                 |
+| Denied     | KubeDB has denied the operation requested in the RedisOpsRequest                 |
+
+#### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `RedisOpsRequest` controller.
+
+#### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `RedisOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. RedisOpsRequest has the following types of conditions:
+
+| Type                | Meaning                                                                            |
+|---------------------|------------------------------------------------------------------------------------|
+| `Progressing`       | Specifies that the operation is now progressing                                    |
+| `Successful`        | Specifies that the operation on the database has succeeded                         |
+| `HaltDatabase`      | Specifies that the database has been halted by the operator                        |
+| `ResumeDatabase`    | Specifies that the database has been resumed by the operator                       |
+| `Failure`           | Specifies that the operation on the database has failed                            |
+| `Scaling`           | Specifies that a scaling operation on the database has started                     |
+| `VerticalScaling`   | Specifies that vertical scaling has been performed successfully on the database    |
+| `HorizontalScaling` | Specifies that horizontal scaling has been performed successfully on the database  |
+| `UpdateVersion`     | Specifies that a version update of the database has been performed successfully    |
+
+- The `status` field is a string, with possible values `"True"`, `"False"`, and `"Unknown"`.
+  - `status` will be `"True"` if the current transition has succeeded.
+  - `status` will be `"False"` if the current transition has failed.
+  - `status` will be `"Unknown"` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
It has the following possible values:
+
+| Reason                                   | Meaning                                                                           |
+|------------------------------------------|------------------------------------------------------------------------------------|
+| `OpsRequestProgressingStarted`           | Operator has started the OpsRequest processing                                    |
+| `OpsRequestFailedToProgressing`          | Operator has failed to start the OpsRequest processing                            |
+| `SuccessfullyHaltedDatabase`             | Database has been successfully halted by the operator                             |
+| `FailedToHaltDatabase`                   | Operator has failed to halt the database                                          |
+| `SuccessfullyResumedDatabase`            | Database has been successfully resumed to perform its usual operation             |
+| `FailedToResumedDatabase`                | Operator has failed to resume the database                                        |
+| `DatabaseVersionupdatingStarted`         | Operator has started updating the database version                                |
+| `SuccessfullyUpdatedDatabaseVersion`     | Operator has successfully updated the database version                            |
+| `FailedToUpdateDatabaseVersion`          | Operator has failed to update the database version                                |
+| `HorizontalScalingStarted`               | Operator has started the horizontal scaling                                       |
+| `SuccessfullyPerformedHorizontalScaling` | Operator has successfully performed horizontal scaling                            |
+| `FailedToPerformHorizontalScaling`       | Operator has failed to perform horizontal scaling                                 |
+| `VerticalScalingStarted`                 | Operator has started the vertical scaling                                         |
+| `SuccessfullyPerformedVerticalScaling`   | Operator has successfully performed vertical scaling                              |
+| `FailedToPerformVerticalScaling`         | Operator has failed to perform vertical scaling                                   |
+| `OpsRequestProcessedSuccessfully`        | Operator has successfully completed the operation requested by the OpsRequest CR  |
+
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/content/docs/v2024.1.31/guides/redis/concepts/redissentinel.md b/content/docs/v2024.1.31/guides/redis/concepts/redissentinel.md
new file mode 100644
index 0000000000..c54b1c2bdb
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/concepts/redissentinel.md
@@ -0,0 +1,424 @@
+---
+title: RedisSentinel CRD
+menu:
+  docs_v2024.1.31:
+    identifier: rd-redissentinel-concepts
+    name: RedisSentinel
+    parent: rd-concepts-redis
+    weight: 12
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# RedisSentinel
+
+## What is RedisSentinel
+
+`RedisSentinel` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [Redis](https://redis.io/) in a Kubernetes native way. You only need to describe the desired database configuration in a RedisSentinel object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## RedisSentinel Spec
+
+As with all other Kubernetes objects, a RedisSentinel needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example RedisSentinel object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RedisSentinel
+metadata:
+  name: sentinel1
+  namespace: demo
+spec:
+  autoOps:
+    disabled: true
+  version: 6.2.14
+  replicas: 3
+  disableAuth: false
+  authSecret:
+    name: sentinel1-auth
+    externallyManaged: false
+  tls:
+    issuerRef:
+      name: redis-ca-issuer
+      kind: ClusterIssuer
+      apiGroup: "cert-manager.io"
+    certificates:
+      - alias: client
+        subject:
+          organizations:
+            - kubedb
+        emailAddresses:
+          - abc@appscode.com
+      - alias: server
+        subject:
+          organizations:
+            - kubedb
+        emailAddresses:
+          - abc@appscode.com
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          app: kubedb
+        interval: 10s
+  podTemplate:
+    metadata:
+      annotations:
+        passMe: ToDatabasePod
+    controller:
+      annotations:
+        passMe: ToStatefulSet
+    spec:
+      serviceAccountName: my-service-account
+      imagePullSecrets:
+        - name: regcred
+      args:
+        - "--loglevel verbose"
+      env:
+        - name: ENV_VARIABLE
+          value: "value"
+      resources:
+        requests:
+          memory: "64Mi"
+          cpu: "250m"
+        limits:
+          memory: "128Mi"
+          cpu: "500m"
+  serviceTemplates:
+    - alias: primary
+      metadata:
+        annotations:
+          passMe: ToService
+      spec:
+        type: NodePort
+        ports:
+          - name: http
+            port: 9200
+  terminationPolicy: Halt
+  halted: false
+  healthChecker:
+    periodSeconds: 15
+    timeoutSeconds: 10
+    failureThreshold: 2
+    disableWriteCheck: false
+```
+
+### spec.autoOps
+AutoOps is an optional field to control the generation of version update & TLS-related recommendations.
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [RedisVersion](/docs/v2024.1.31/guides/redis/concepts/catalog) crd where the docker images are specified. RedisSentinel is supported for the following Redis versions:
+
+- `6.2.14`
+- `7.0.4`, `7.0.6`, `7.0.14`
+
+### spec.disableAuth
+
+`spec.disableAuth` is an optional field that decides whether the RedisSentinel instance will be secured by auth or not.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `redis` superuser. If not set, the KubeDB operator creates a new Secret `{redissentinel-object-name}-auth` for storing the password for the `redis` superuser.
+
+We can use this field in three modes:
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the RedisSentinel object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true.
+```yaml
+authSecret:
+  name: <your-secret-name>
+  externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the RedisSentinel object using `spec.authSecret.name`. `externallyManaged` is false by default.
+```yaml
+authSecret:
+  name: <your-secret-name>
+```
+
+3. Letting KubeDB do everything for you. In this case, there is no work for you.
+
+AuthSecret contains a `user` key and a `password` key, which contain the `username` and `password` respectively for the `redis` superuser.
+
+Example:
+
+```bash
+$ kubectl create secret generic sentinel1-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "sentinel1-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  username: amhvbi1kb2U=
+kind: Secret
+metadata:
+  name: sentinel1-auth
+  namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for the RedisSentinel. KubeDB uses the [cert-manager](https://cert-manager.io/) v1 api to provision and manage TLS certificates.
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+  - `apiGroup` is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+  - `kind` is the type of resource that is being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
+  - `alias` represents the identifier of the certificate. It has the following possible values:
+    - `server` is used for server certificate identification.
+    - `client` is used for client certificate identification.
+    - `metrics-exporter` is used for metrics exporter certificate identification.
+  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
+> This field is optional. If the user does not specify this field, the default secret name will be created in the following format: `--cert`.
+
+  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields:
+    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+    - `organizationalUnits` (optional) are the list of different organization unit names to be used on the Certificate.
+    - `countries` (optional) are the list of country names to be used on the Certificate.
+    - `localities` (optional) are the list of locality names to be used on the Certificate.
+    - `provinces` (optional) are the list of province names to be used on the Certificate.
+    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
+    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
+    - `serialNumber` (optional) is a serial number to be used on the Certificate.
+      You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+  - `duration` (optional) is the period during which the certificate is valid.
+  - `renewBefore` (optional) is a specifiable time before expiration duration.
+  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+  - `uris` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+  - `emailAddresses` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
+  - `privateKey` (optional) specifies options to control private keys used for the Certificate.
+    - `encoding` (optional) is the private key cryptography standards (PKCS) encoding for this certificate's private key to be encoded in. If provided, allowed values are "pkcs1" and "pkcs8" standing for PKCS#1 and PKCS#8, respectively. It defaults to PKCS#1 if not specified.
+
+The Redis object that refers to this RedisSentinel object will be watched by it; therefore, in order for them to connect in TLS-enabled mode, both objects must use the same issuer. Likewise, if TLS is disabled on the RedisSentinel object, it must also be disabled on the Redis object.
+Set `spec.tls.issuerRef.kind` to `ClusterIssuer` if you want your RedisSentinel object and Redis object to be in different namespaces.
+Both `Issuer` and `ClusterIssuer` can be used if both instances are in the same namespace.
+
+### spec.storage
+
+Since 0.10.0-rc.0, if you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.monitor
+
+RedisSentinel managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,
+
+- [Monitor Redis with builtin Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus)
+- [Monitor Redis with Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator)
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the RedisSentinel server.
+
+KubeDB accepts the following fields to be set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+- controller:
+  - annotations (statefulset's annotation)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+Uses of some fields of `spec.podTemplate` are described below.
+
+#### spec.podTemplate.spec.args
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database server.
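+
+For illustration, a minimal sketch of passing an extra argument through the pod template is given below. The `--loglevel verbose` flag is borrowed from the sample object above; any flag you pass this way must be a valid argument for the sentinel process.
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      # extra command-line arguments for the database container
+      args:
+        - "--loglevel verbose"
+```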
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Redis docker image.
+
+Note that KubeDB does not allow updating the environment variables. If you try to update environment variables, the KubeDB operator will reject the request with the following error:
+
+```ini
+Error from server (BadRequest): error when applying patch:
+...
+for: "./redis.yaml": admission webhook "redis.validators.kubedb.com" denied the request: precondition failed for:
+...
+At least one of the following was changed:
+apiVersion
+kind
+name
+namespace
+spec.storage
+spec.podTemplate.spec.nodeSelector
+spec.podTemplate.spec.env
+```
+
+#### spec.podTemplate.spec.imagePullSecret
+
+`KubeDB` provides the flexibility of deploying the Redis server from a private Docker registry. To learn how to deploy Redis from a private registry, please visit [here](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry).
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+ `serviceAccountName` is an optional field supported by the KubeDB operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role-based access control.
+
+ If this field is left empty, the KubeDB operator will create a service account with a name matching the RedisSentinel crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+ If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+ If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually. Follow the guide [here](/docs/v2024.1.31/guides/redis/custom-rbac/using-custom-rbac) to grant necessary permissions in this scenario.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the RedisSentinel server through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services; a minimal sketch is given after the field list below.
+
+KubeDB allows the following fields to be set in `spec.serviceTemplates`:
+
+- metadata:
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.
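+
+For example, a sketch that exposes the primary service as a `NodePort`, mirroring the `serviceTemplates` entry of the sample object above, might look like this:
+
+```yaml
+spec:
+  serviceTemplates:
+    - alias: primary
+      metadata:
+        annotations:
+          passMe: ToService   # propagated to the generated Service
+      spec:
+        type: NodePort
+        ports:
+          - name: http
+            port: 9200
+```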
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility to either `nullify` (reject) the delete operation of the `RedisSentinel` crd or to control which resources KubeDB should keep or delete when you delete the `RedisSentinel` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the RedisSentinel crd for different termination policies:
+
+| Behavior                  | DoNotTerminate |   Halt   |  Delete  | WipeOut  |
+| ------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation |       ✓        |    ✗     |    ✗     |    ✗     |
+| 2. Delete StatefulSet     |       ✗        |    ✓     |    ✓     |    ✓     |
+| 3. Delete Services        |       ✗        |    ✓     |    ✓     |    ✓     |
+| 4. Delete PVCs            |       ✗        |    ✗     |    ✓     |    ✓     |
+| 5. Delete Secrets         |       ✗        |    ✗     |    ✗     |    ✓     |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
+
+### spec.halted
+Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
+
+## spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know more about KubeDB health checking from this [blog post](https://blog.byte.builders/post/kubedb-health-checker/).
+
+## Sample Redis instance
+A YAML for a sample Redis instance that can be monitored by this RedisSentinel instance is given below:
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis1
+  namespace: demo
+spec:
+  version: 6.2.14
+  replicas: 3
+  sentinelRef:
+    name: sentinel1
+    namespace: demo
+  mode: Sentinel
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      name: redis-ca-issuer
+      kind: ClusterIssuer
+    certificates:
+      - alias: server
+        subject:
+          organizations:
+            - kubedb:server
+        dnsNames:
+          - localhost
+        ipAddresses:
+          - "127.0.0.1"
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+  terminationPolicy: WipeOut
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Redis server [here](/docs/v2024.1.31/guides/redis/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/redis/configuration/_index.md b/content/docs/v2024.1.31/guides/redis/configuration/_index.md
new file mode 100755
index 0000000000..03fd1fae92
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/configuration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Redis with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: rd-configuration
+    name: Custom Configuration
+    parent: rd-redis-guides
+    weight: 30
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/redis/configuration/using-config-file.md b/content/docs/v2024.1.31/guides/redis/configuration/using-config-file.md
new file mode 100644
index 0000000000..2dc150f89e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/configuration/using-config-file.md
@@ -0,0 +1,178 @@
+---
+title: Run Redis with Custom Configuration
+menu:
+  docs_v2024.1.31:
+    identifier: rd-using-config-file-configuration
+    name: Config File
+    parent: rd-configuration
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using Custom Configuration File
+
+KubeDB supports providing custom configuration for Redis. This tutorial will show you how to use KubeDB to run Redis with a custom configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+
+  $ kubectl get ns demo
+  NAME    STATUS  AGE
+  demo    Active  5s
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+Redis allows configuration via a config file. When the Redis docker image starts, it executes the `redis-server` command. If we provide the path of a `.conf` file as an argument to this command, the Redis server will use the configuration specified in that file. To know more about configuring Redis, see [here](https://redis.io/topics/config).
+
+At first, you have to create a config file named `redis.conf` with your desired configuration. Then you have to put this file into a [secret](https://kubernetes.io/docs/concepts/configuration/secret/). You have to specify this secret in the `spec.configSecret` section while creating the Redis crd. KubeDB will mount this secret into the `/usr/local/etc/redis` directory of the pod, and the `redis.conf` file path will be passed as an argument to the `redis-server` command.
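+
+Conceptually, the database container then starts with the mounted file as its first argument; a sketch (not the exact operator-generated command line) looks like:
+
+```bash
+# the config file from the secret is mounted under /usr/local/etc/redis
+redis-server /usr/local/etc/redis/redis.conf
+```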
+
+In this tutorial, we will configure `databases` and `maxclients` via a custom config file.
+
+## Custom Configuration
+
+At first, let's create a `redis.conf` file setting the `databases` and `maxclients` parameters. The default value of `databases` is 16 and that of `maxclients` is 10000.
+
+```bash
+$ cat <<EOF >redis.conf
+databases 10
+maxclients 425
+EOF
+
+$ cat redis.conf
+databases 10
+maxclients 425
+```
+
+> Note that the config file name must be `redis.conf`.
+
+Now, create a Secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo rd-configuration --from-file=./redis.conf
+secret/rd-configuration created
+```
+
+Verify that the Secret has the configuration file.
+
+```bash
+$ kubectl get secret -n demo rd-configuration -o yaml
+
+apiVersion: v1
+data:
+  redis.conf: ZGF0YWJhc2VzIDEwCm1heGNsaWVudHMgNDI1Cgo=
+kind: Secret
+metadata:
+  creationTimestamp: "2023-02-06T08:55:14Z"
+  name: rd-configuration
+  namespace: demo
+  resourceVersion: "676133"
+  uid: 73c4e8b5-9e9c-45e6-8b83-b6bc6f090663
+type: Opaque
+```
+
+The configuration data is stored base64-encoded in the secret.
+
+Now, create a Redis crd specifying the `spec.configSecret` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/custom-config/redis-custom.yaml
+redis.kubedb.com "custom-redis" created
+```
+
+Below is the YAML for the Redis crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: custom-redis
+  namespace: demo
+spec:
+  version: 6.2.14
+  configSecret:
+    name: rd-configuration
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary statefulset, services, etc. If everything goes well, we will see that a pod with the name `custom-redis-0` has been created.
+
+Check if the database is ready:
+
+```bash
+$ kubectl get redis -n demo
+NAME           VERSION   STATUS   AGE
+custom-redis   6.2.14    Ready    10m
+```
+
+Now, we will check if the database has started with the custom configuration we have provided. We will `exec` into the pod and use the [CONFIG GET](https://redis.io/commands/config-get) command to check the configuration.
+
+```bash
+$ kubectl exec -it -n demo custom-redis-0 -- bash
+root@custom-redis-0:/data# redis-cli
+127.0.0.1:6379> ping
+PONG
+127.0.0.1:6379> config get databases
+1) "databases"
+2) "10"
+127.0.0.1:6379> config get maxclients
+1) "maxclients"
+2) "425"
+127.0.0.1:6379> exit
+root@custom-redis-0:/data#
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo rd/custom-redis -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/custom-redis patched
+
+$ kubectl delete -n demo redis custom-redis
+redis.kubedb.com "custom-redis" deleted
+
+$ kubectl delete -n demo secret rd-configuration
+secret "rd-configuration" deleted
+
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Redis server [here](/docs/v2024.1.31/guides/redis/README).
diff --git a/content/docs/v2024.1.31/guides/redis/custom-rbac/_index.md b/content/docs/v2024.1.31/guides/redis/custom-rbac/_index.md new file mode 100755 index 0000000000..e69c470cc3 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/custom-rbac/_index.md @@ -0,0 +1,22 @@ +--- +title: Run Redis with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: rd-custom-rbac + name: Custom RBAC + parent: rd-redis-guides + weight: 31 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/custom-rbac/using-custom-rbac.md b/content/docs/v2024.1.31/guides/redis/custom-rbac/using-custom-rbac.md new file mode 100644 index 0000000000..23706d9fbf --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/custom-rbac/using-custom-rbac.md @@ -0,0 +1,284 @@ +--- +title: Run Redis with Custom RBAC resources +menu: + docs_v2024.1.31: + identifier: rd-custom-rbac-quickstart + name: Custom RBAC + parent: rd-custom-rbac + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Using Custom RBAC resources + +KubeDB (version 0.13.0 and higher) supports finer user control over role based access permissions provided to a Redis instance. This tutorial will show you how to use KubeDB to run Redis instance with custom RBAC resources. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB allows users to provide custom RBAC resources, namely, `ServiceAccount`, `Role`, and `RoleBinding` for Redis. This is provided via the `spec.podTemplate.spec.serviceAccountName` field in Redis crd. If this field is left empty, the KubeDB operator will create a service account name matching Redis crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account. + +If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account. + +If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. 
Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+This guide will show you how to create a custom `ServiceAccount`, `Role`, and `RoleBinding` for a Redis instance named `quick-redis` to provide the bare minimum access permissions.
+
+## Custom RBAC for Redis
+
+At first, let's create a `ServiceAccount` in the `demo` namespace.
+
+```bash
+$ kubectl create serviceaccount -n demo my-custom-serviceaccount
+serviceaccount/my-custom-serviceaccount created
+```
+
+It should create a service account.
+
+```bash
+$ kubectl get serviceaccount -n demo my-custom-serviceaccount -o yaml
+```
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: "2023-02-06T10:19:00Z"
+  name: my-custom-serviceaccount
+  namespace: demo
+  resourceVersion: "683509"
+  uid: 186702c3-6d84-4ba9-b349-063c4e681622
+secrets:
+  - name: my-custom-serviceaccount-token-vpr84
+```
+
+Now, we need to create a role that has the necessary access permissions for the Redis instance named `quick-redis`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/custom-rbac/rd-custom-role.yaml
+role.rbac.authorization.k8s.io/my-custom-role created
+```
+
+Below is the YAML for the Role we just created.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-custom-role
+  namespace: demo
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - redis-db
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
+```
+
+This permission is required for Redis pods running on PSP enabled clusters.
+
+Now create a `RoleBinding` to bind this `Role` with the already created service account.
+
+```bash
+$ kubectl create rolebinding my-custom-rolebinding --role=my-custom-role --serviceaccount=demo:my-custom-serviceaccount --namespace=demo
+rolebinding.rbac.authorization.k8s.io/my-custom-rolebinding created
+```
+
+It should bind `my-custom-role` and `my-custom-serviceaccount` successfully.
+
+```bash
+$ kubectl get rolebinding -n demo my-custom-rolebinding -o yaml
+```
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  creationTimestamp: "2023-02-06T09:46:26Z"
+  name: my-custom-rolebinding
+  namespace: demo
+  resourceVersion: "680621"
+  uid: 6f74cce7-bb20-4584-bdc1-bdfb3598604f
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: my-custom-role
+subjects:
+  - kind: ServiceAccount
+    name: my-custom-serviceaccount
+    namespace: demo
+```
+
+Now, create a Redis crd setting the `spec.podTemplate.spec.serviceAccountName` field to `my-custom-serviceaccount`.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/custom-rbac/rd-custom-db.yaml
+redis.kubedb.com/quick-redis created
+```
+
+Below is the YAML for the Redis crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: quick-redis
+  namespace: demo
+spec:
+  version: 6.2.14
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we should see that a pod with the name `quick-redis-0` has been created.
+
+Check that the statefulset's pod is running:
+
+```bash
+$ kubectl get pod -n demo quick-redis-0
+NAME            READY   STATUS    RESTARTS   AGE
+quick-redis-0   1/1     Running   0          61s
+```
+
+Check if the database is in Ready state:
+
+```bash
+$ kubectl get redis -n demo
+NAME          VERSION   STATUS   AGE
+quick-redis   6.2.14    Ready    117s
+```
+
+## Reusing Service Account
+
+An existing service account can be reused in another Redis instance. No new access permission is required to run the new Redis instance.
+
+Now, create a Redis crd `minute-redis` using the existing service account name `my-custom-serviceaccount` in the `spec.podTemplate.spec.serviceAccountName` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/custom-rbac/rd-custom-db-two.yaml
+redis.kubedb.com/minute-redis created
+```
+
+Below is the YAML for the Redis crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: minute-redis
+  namespace: demo
+spec:
+  version: 6.2.14
+  podTemplate:
+    spec:
+      serviceAccountName: my-custom-serviceaccount
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVC, statefulset, services, secret, etc. If everything goes well, we should see that a pod with the name `minute-redis-0` has been created.
+
+Check that the statefulset's pod is running:
+
+```bash
+$ kubectl get pod -n demo minute-redis-0
+NAME             READY   STATUS    RESTARTS   AGE
+minute-redis-0   1/1     Running   0          14m
+```
+
+Check if the database is in Ready state:
+
+```bash
+$ kubectl get redis -n demo
+NAME           VERSION   STATUS   AGE
+minute-redis   6.2.14    Ready    76s
+quick-redis    6.2.14    Ready    4m26s
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo rd/quick-redis -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/quick-redis patched
+
+$ kubectl delete -n demo rd/quick-redis
+redis.kubedb.com "quick-redis" deleted
+
+$ kubectl patch -n demo rd/minute-redis -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/minute-redis patched
+
+$ kubectl delete -n demo rd/minute-redis
+redis.kubedb.com "minute-redis" deleted
+
+$ kubectl delete -n demo role my-custom-role
+role.rbac.authorization.k8s.io "my-custom-role" deleted
+
+$ kubectl delete -n demo rolebinding my-custom-rolebinding
+rolebinding.rbac.authorization.k8s.io "my-custom-rolebinding" deleted
+
+$ kubectl delete sa -n demo my-custom-serviceaccount
+serviceaccount "my-custom-serviceaccount" deleted
+
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+If you would like to uninstall the KubeDB operator, please follow the steps [here](/docs/v2024.1.31/setup/README).
+
+## Next Steps
+
+- [Quickstart Redis](/docs/v2024.1.31/guides/redis/quickstart/quickstart) with KubeDB Operator.
+- Monitor your Redis instance with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Monitor your Redis instance with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB.
+- Use [kubedb cli](/docs/v2024.1.31/guides/redis/cli/cli) to manage databases like kubectl for Kubernetes. +- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). + diff --git a/content/docs/v2024.1.31/guides/redis/monitoring/_index.md b/content/docs/v2024.1.31/guides/redis/monitoring/_index.md new file mode 100755 index 0000000000..6613276443 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/monitoring/_index.md @@ -0,0 +1,22 @@ +--- +title: Monitoring Redis +menu: + docs_v2024.1.31: + identifier: rd-monitoring-redis + name: Monitoring + parent: rd-redis-guides + weight: 55 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/monitoring/overview.md b/content/docs/v2024.1.31/guides/redis/monitoring/overview.md new file mode 100644 index 0000000000..a50cccfbd6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/monitoring/overview.md @@ -0,0 +1,117 @@ +--- +title: Redis Monitoring Overview +description: Redis Monitoring Overview +menu: + docs_v2024.1.31: + identifier: rd-monitoring-overview + name: Overview + parent: rd-monitoring-redis + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Redis with KubeDB + +KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring. + +## Overview + +KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. Following diagram shows the logical flow of database monitoring with KubeDB. + +

+*Figure: Database Monitoring Flow*

+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                              |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                 |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                   |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                          |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                    |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                             |
+
+## Sample Configuration
+
+A sample YAML for a Redis crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: sample-redis
+  namespace: databases
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  configSecret: # configure Redis to use password for authentication
+    name: redis-config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+          - --redis.password=$(REDIS_PASSWORD)
+        env:
+          - name: REDIS_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: _name_of_secret_with_redis_password
+                key: password # key with the password
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Assume that the above Redis is configured to use basic authentication. So, the exporter image also needs the password to collect metrics. We have provided it through the `spec.monitor.prometheus.exporter.args` field.
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
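+
+For completeness, the `secretKeyRef` in the sample expects an ordinary Kubernetes secret holding the Redis password under a `password` key. A sketch of creating one is shown below; the secret name `redis-pass` is hypothetical and would replace the `_name_of_secret_with_redis_password` placeholder in the sample above.
+
+```bash
+# hypothetical secret backing the REDIS_PASSWORD env var of the exporter
+$ kubectl create secret generic redis-pass -n databases \
+    --from-literal=password=<your-redis-password>
+```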
+ +## Next Steps + +- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/elasticsearch/monitoring/using-prometheus-operator). +- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/postgres/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/postgres/monitoring/using-prometheus-operator). +- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mysql/monitoring/builtin-prometheus/) and using [Prometheus operator](/docs/v2024.1.31/guides/mysql/monitoring/prometheus-operator/). +- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/mongodb/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/mongodb/monitoring/using-prometheus-operator). +- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/v2024.1.31/guides/memcached/monitoring/using-builtin-prometheus) and using [Prometheus operator](/docs/v2024.1.31/guides/memcached/monitoring/using-prometheus-operator). diff --git a/content/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus.md b/content/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..739172e8a8 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus.md @@ -0,0 +1,368 @@ +--- +title: Monitoring Redis Using Builtin Prometheus Discovery +menu: + docs_v2024.1.31: + identifier: rd-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: rd-monitoring-redis + weight: 20 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Redis with builtin Prometheus + +This tutorial will show you how to monitor Redis server using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/redis/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. 
We are going to deploy the database in the `demo` namespace.
+
+  ```bash
+  $ kubectl create ns monitoring
+  namespace/monitoring created
+
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Redis with Monitoring Enabled
+
+At first, let's deploy a Redis server with monitoring enabled. Below is the Redis object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: builtin-prom-redis
+  namespace: demo
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the Redis crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/monitoring/builtin-prom-redis.yaml
+redis.kubedb.com/builtin-prom-redis created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get rd -n demo builtin-prom-redis
+NAME                 VERSION   STATUS    AGE
+builtin-prom-redis   6.0.20    Running   41s
+```
+
+KubeDB will create a separate stats service with the name `{Redis crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-redis"
+NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+builtin-prom-redis         ClusterIP   10.109.162.108   <none>        6379/TCP    59s
+builtin-prom-redis-stats   ClusterIP   10.106.243.251   <none>        56790/TCP   41s
+```
+
+Here, the `builtin-prom-redis-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-redis-stats
+Name:              builtin-prom-redis-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=redises.kubedb.com
+                   app.kubernetes.io/instance=builtin-prom-redis
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/name=redises.kubedb.com,app.kubernetes.io/instance=builtin-prom-redis
+Type:              ClusterIP
+IP:                10.106.243.251
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.14:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_configs is used to keep only the desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB-supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # keep only the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB carry the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path:" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port:" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add the service namespace as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add the service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add the stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_configs is used to keep only the desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB-supported databases use only the "http" scheme to export metrics. So, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # keep only the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB carry the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path:" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port:" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add the service namespace as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add the service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add the stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC-enabled cluster, you have to grant the necessary RBAC permissions to Prometheus. Let's create the required RBAC resources,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server.
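+
+Before doing so, you can optionally confirm that the ConfigMap and the ServiceAccount created above exist (this quick check is our addition, not part of the original guide):
+
+```bash
+# both objects must be present in the monitoring namespace before the Prometheus pod starts
+$ kubectl get configmap prometheus-config -n monitoring
+$ kubectl get serviceaccount prometheus -n monitoring
+```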
We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in the `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-8568c86d86-95zhn   1/1     Running   0          77s
+```
+
+Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-8568c86d86-95zhn` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `builtin-prom-redis-stats` service as one of the targets.
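+
+If you prefer the command line over the UI, the same information is available from the Prometheus targets API through the port-forward above (an optional check we added; `jq` is assumed to be installed):
+
+```bash
+# list the service label of every active scrape target
+$ curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels.service'
+```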

+  Prometheus Target +

+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `Redis` server `builtin-prom-redis` through stats service `builtin-prom-redis-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +$ kubectl delete -n demo rd/builtin-prom-redis + +$ kubectl delete -n monitoring deployment.apps/prometheus + +$ kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +$ kubectl delete -n monitoring serviceaccount/prometheus +$ kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +$ kubectl delete ns demo +$ kubectl delete ns monitoring +``` + +## Next Steps + +- Monitor your Redis server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator). +- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis). +- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator.md b/content/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator.md new file mode 100644 index 0000000000..faa04e2586 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator.md @@ -0,0 +1,287 @@ +--- +title: Monitoring Redis using Prometheus Operator +menu: + docs_v2024.1.31: + identifier: rd-using-prometheus-operator-monitoring + name: Prometheus Operator + parent: rd-monitoring-redis + weight: 15 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Monitoring Redis Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor Redis server deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/v2024.1.31/guides/redis/monitoring/overview). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. 
If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't have a Prometheus server running yet, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels that a `Prometheus` crd uses to select `ServiceMonitor` objects. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the Redis crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE    NAME         AGE
+monitoring   prometheus   18m
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in the **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in the `monitoring` namespace.
+
+```yaml
+$ kubectl get prometheus -n monitoring prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"monitoring"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+  creationTimestamp: 2019-01-03T13:41:51Z
+  generation: 1
+  labels:
+    prometheus: prometheus
+  name: prometheus
+  namespace: monitoring
+  resourceVersion: "44402"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus
+  uid: 5324ad98-0f5d-11e9-b230-080027f306f3
+spec:
+  replicas: 1
+  resources:
+    requests:
+      memory: 400Mi
+  serviceAccountName: prometheus
+  serviceMonitorSelector:
+    matchLabels:
+      release: prometheus
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, the `release: prometheus` label is used to select `ServiceMonitor` crds. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of the Redis crd.
+
+## Deploy Redis with Monitoring Enabled
+
+At first, let's deploy a Redis server with monitoring enabled. Below is the Redis object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: coreos-prom-redis
+  namespace: demo
+spec:
+  version: 6.0.20
+  terminationPolicy: WipeOut
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using the Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database at a 10-second interval.
+
+Let's create the Redis object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/monitoring/coreos-prom-redis.yaml
+redis.kubedb.com/coreos-prom-redis created
+```
+
+Now, wait for the database to go into the `Running` state.
+
+```bash
+$ kubectl get rd -n demo coreos-prom-redis
+NAME                VERSION   STATUS    AGE
+coreos-prom-redis   6.0.20    Running   15s
+```
+
+KubeDB will create a separate stats service with the name `{Redis crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-redis"
+NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
+coreos-prom-redis         ClusterIP   10.110.70.53   <none>        6379/TCP    35s
+coreos-prom-redis-stats   ClusterIP   10.99.161.76   <none>        56790/TCP   31s
+```
+
+Here, the `coreos-prom-redis-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo coreos-prom-redis-stats
+Name:              coreos-prom-redis-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/name=redises.kubedb.com
+                   app.kubernetes.io/instance=coreos-prom-redis
+Annotations:       monitoring.appscode.com/agent: prometheus.io/operator
+Selector:          app.kubernetes.io/name=redises.kubedb.com,app.kubernetes.io/instance=coreos-prom-redis
+Type:              ClusterIP
+IP:                10.99.161.76
+Port:              prom-http  56790/TCP
+TargetPort:        prom-http/TCP
+Endpoints:         172.17.0.7:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `monitoring` namespace that selects the endpoints of the `coreos-prom-redis-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n monitoring
+NAME                            AGE
+kubedb-demo-coreos-prom-redis   1m
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in the `spec.monitor` section of the Redis crd.
+
+```yaml
+$ kubectl get servicemonitor -n monitoring kubedb-demo-coreos-prom-redis -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: 2019-01-03T15:55:23Z
+  generation: 1
+  labels:
+    release: prometheus
+    monitoring.appscode.com/service: coreos-prom-redis-stats.demo
+  name: kubedb-demo-coreos-prom-redis
+  namespace: monitoring
+  resourceVersion: "54802"
+  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubedb-demo-coreos-prom-redis
+  uid: fafceb49-0f6f-11e9-b230-080027f306f3
+spec:
+  endpoints:
+  - honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: prom-http
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: redises.kubedb.com
+      app.kubernetes.io/instance: coreos-prom-redis
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the Redis crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-redis-stats` service. It also targets the `prom-http` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                      READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-0   3/3     Running   1          63m
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `prom-http` endpoint of the `coreos-prom-redis-stats` service as one of the targets.
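+
+As before, you can also inspect the targets from the command line through the same port-forward (an optional check we added; `jq` is assumed to be installed):
+
+```bash
+# show the scrape URL and health of each active target
+$ curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {scrapeUrl, health}'
+```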

+  Prometheus Target +

+
+Check the `endpoint` and `service` labels marked by the red rectangle. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands
+
+```bash
+# cleanup database
+kubectl delete -n demo rd/coreos-prom-redis
+
+# cleanup prometheus resources
+kubectl delete -n monitoring prometheus prometheus
+kubectl delete -n monitoring clusterrolebinding prometheus
+kubectl delete -n monitoring clusterrole prometheus
+kubectl delete -n monitoring serviceaccount prometheus
+kubectl delete -n monitoring service prometheus-operated
+
+# cleanup prometheus operator resources
+kubectl delete -n monitoring deployment prometheus-operator
+kubectl delete -n monitoring serviceaccount prometheus-operator
+kubectl delete clusterrolebinding prometheus-operator
+kubectl delete clusterrole prometheus-operator
+
+# delete namespace
+kubectl delete ns monitoring
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your Redis server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
+- Detail concepts of [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog).
+- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/redis/private-registry/_index.md b/content/docs/v2024.1.31/guides/redis/private-registry/_index.md
new file mode 100755
index 0000000000..3fb342d159
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/private-registry/_index.md
@@ -0,0 +1,22 @@
+---
+title: Run Redis using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: rd-private-registry-redis
+    name: Private Registry
+    parent: rd-redis-guides
+    weight: 35
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/redis/private-registry/using-private-registry.md b/content/docs/v2024.1.31/guides/redis/private-registry/using-private-registry.md
new file mode 100644
index 0000000000..ad64a99de5
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/private-registry/using-private-registry.md
@@ -0,0 +1,192 @@
+---
+title: Run Redis using Private Registry
+menu:
+  docs_v2024.1.31:
+    identifier: rd-using-private-registry-private-registry
+    name: Quickstart
+    parent: rd-private-registry-redis
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Using private Docker registry
+
+KubeDB operator supports using a private Docker registry.
This tutorial will show you how to use KubeDB to run a Redis server using private Docker images.
+
+## Before You Begin
+
+- Read the [concept of Redis Version Catalog](/docs/v2024.1.31/guides/redis/concepts/catalog) to learn the detailed concepts of the `RedisVersion` object.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+- You will also need a private Docker [registry](https://docs.docker.com/registry/) or [private repository](https://docs.docker.com/docker-hub/repos/#private-repositories). In this tutorial we will use a private repository on [Docker Hub](https://hub.docker.com/).
+
+- You have to push the required images from KubeDB's [Docker hub account](https://hub.docker.com/r/kubedb/) into your private registry. For Redis, push the `INITCONTAINER_IMAGE`, `DB_IMAGE` and `EXPORTER_IMAGE` of the following RedisVersions, where `deprecated` is not true, to your private registry.
+
+```bash
+$ kubectl get redisversions -n kube-system -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,INITCONTAINER_IMAGE:.spec.initContainer.image,DB_IMAGE:.spec.db.image,EXPORTER_IMAGE:.spec.exporter.image
+NAME       VERSION   INITCONTAINER_IMAGE       DB_IMAGE                EXPORTER_IMAGE
+4.0.11     4.0.11    kubedb/redis-init:0.7.0   kubedb/redis:4.0.11     kubedb/redis_exporter:v0.21.1
+4.0.6-v2   4.0.6     kubedb/redis-init:0.7.0   kubedb/redis:4.0.6-v2   kubedb/redis_exporter:v0.21.1
+5.0.14     5.0.14    kubedb/redis-init:0.7.0   redis:5.0.14            kubedb/redis_exporter:1.9.0
+6.2.14     5.0.3     kubedb/redis-init:0.7.0   kubedb/redis:6.2.14     kubedb/redis_exporter:v0.21.1
+6.0.20     6.0.20    kubedb/redis-init:0.7.0   kubedb/redis:6.0.20     kubedb/redis_exporter:1.9.0
+6.2.14     6.2.14    kubedb/redis-init:0.7.0   redis:6.2.14            kubedb/redis_exporter:1.9.0
+7.0.4      7.0.4     kubedb/redis-init:0.7.0   redis:7.0.4             kubedb/redis_exporter:1.9.0
+7.0.14     7.0.14    kubedb/redis-init:0.7.0   redis:7.0.14            kubedb/redis_exporter:1.9.0
+7.0.6      7.0.6     kubedb/redis-init:0.7.0   redis:7.0.6             kubedb/redis_exporter:1.9.0
+```
+
+  Docker hub repositories:
+
+  - [kubedb/operator](https://hub.docker.com/r/kubedb/operator)
+  - [kubedb/redis](https://hub.docker.com/r/kubedb/redis)
+  - [kubedb/redis_exporter](https://hub.docker.com/r/kubedb/redis_exporter)
+
+- Update the KubeDB catalog for the private Docker registry. For example:
+
+  ```yaml
+  apiVersion: catalog.kubedb.com/v1alpha1
+  kind: RedisVersion
+  metadata:
+    name: 6.2.14
+  spec:
+    db:
+      image: PRIVATE_DOCKER_REGISTRY:6.2.14
+    exporter:
+      image: PRIVATE_DOCKER_REGISTRY:1.9.0
+    podSecurityPolicies:
+      databasePolicyName: redis-db
+    version: 6.2.14
+  ```
+
+## Create ImagePullSecret
+
+ImagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It allows you to specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.
+
+Run the following command, substituting the appropriate uppercase values, to create an image pull secret for your private Docker registry:
+
+```bash
+$ kubectl create secret docker-registry -n demo myregistrykey \
+  --docker-server=DOCKER_REGISTRY_SERVER \
+  --docker-username=DOCKER_USER \
+  --docker-email=DOCKER_EMAIL \
+  --docker-password=DOCKER_PASSWORD
+secret/myregistrykey created
+```
+
+If you wish to follow other ways to pull private images, see the [official docs](https://kubernetes.io/docs/concepts/containers/images/) of Kubernetes.
+
+NB: If you are using `kubectl` 1.9.0, update to 1.9.1 or later to avoid this [issue](https://github.com/kubernetes/kubernetes/issues/57427).
+
+## Install KubeDB operator
+
+When installing the KubeDB operator, set the flags `--docker-registry` and `--image-pull-secret` to the appropriate values. Follow the steps to [install KubeDB operator](/docs/v2024.1.31/setup/README) properly in the cluster so that it points to the DOCKER_REGISTRY you wish to pull images from.
+
+## Deploy Redis server from Private Registry
+
+While deploying `Redis` from a private repository, you have to add the `myregistrykey` secret to the `Redis` object's `spec.podTemplate.spec.imagePullSecrets`.
+Below is the Redis CRD object we will create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis-pvt-reg
+  namespace: demo
+spec:
+  version: 6.2.14
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  podTemplate:
+    spec:
+      imagePullSecrets:
+      - name: myregistrykey
+```
+
+Now run the command to deploy this `Redis` object:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/private-registry/demo-2.yaml
+redis.kubedb.com/redis-pvt-reg created
+```
+
+To check if the images were pulled successfully from the repository, see if the `Redis` is in the `Running` state:
+
+```bash
+$ kubectl get pods -n demo -w
+NAME              READY   STATUS              RESTARTS   AGE
+redis-pvt-reg-0   0/1     Pending             0          0s
+redis-pvt-reg-0   0/1     Pending             0          0s
+redis-pvt-reg-0   0/1     ContainerCreating   0          0s
+redis-pvt-reg-0   1/1     Running             0          2m
+
+
+$ kubectl get rd -n demo
+NAME            VERSION   STATUS    AGE
+redis-pvt-reg   6.2.14    Running   40s
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo rd/redis-pvt-reg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo rd/redis-pvt-reg
+
+kubectl patch -n demo drmn/redis-pvt-reg -p '{"spec":{"wipeOut":true}}' --type="merge"
+kubectl delete -n demo drmn/redis-pvt-reg
+
+kubectl delete ns demo
+```
+
+Or, run the commands one at a time and verify the output:
+
+```bash
+$ kubectl patch -n demo rd/redis-pvt-reg -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/redis-pvt-reg patched
+
+$ kubectl delete -n demo rd/redis-pvt-reg
+redis.kubedb.com "redis-pvt-reg" deleted
+
+$ kubectl delete -n demo secret myregistrykey
+secret "myregistrykey" deleted
+
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+## Next Steps
+
+- Monitor your Redis server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Monitor your Redis server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
+- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- Detail concepts of [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog).
+- Want to hack on KubeDB?
Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING). diff --git a/content/docs/v2024.1.31/guides/redis/quickstart/_index.md b/content/docs/v2024.1.31/guides/redis/quickstart/_index.md new file mode 100755 index 0000000000..a188170563 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/quickstart/_index.md @@ -0,0 +1,22 @@ +--- +title: Redis Quickstart +menu: + docs_v2024.1.31: + identifier: rd-quickstart-redis + name: Quickstart + parent: rd-redis-guides + weight: 15 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/quickstart/quickstart.md b/content/docs/v2024.1.31/guides/redis/quickstart/quickstart.md new file mode 100644 index 0000000000..1bdf8390ab --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/quickstart/quickstart.md @@ -0,0 +1,479 @@ +--- +title: Redis Quickstart +menu: + docs_v2024.1.31: + identifier: rd-quickstart-quickstart + name: Overview + parent: rd-quickstart-redis + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Redis QuickStart + +This tutorial will show you how to use KubeDB to run a Redis server. + +

+  lifecycle +

+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in the cluster.
+
+  ```bash
+  $ kubectl get storageclasses
+  NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+  standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  4h
+  ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial:
+
+  ```bash
+  $ kubectl create namespace demo
+  namespace/demo created
+
+  $ kubectl get namespaces
+  NAME    STATUS   AGE
+  demo    Active   10s
+  ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Available RedisVersion
+
+When you install KubeDB, it creates a `RedisVersion` crd for every supported Redis version. Check them with:
+
+```bash
+$ kubectl get redisversions
+NAME       VERSION   DB_IMAGE                DEPRECATED   AGE
+4.0.11     4.0.11    kubedb/redis:4.0.11                  7h31m
+4.0.6-v2   4.0.6     kubedb/redis:4.0.6-v2                7h31m
+5.0.14     5.0.14    redis:5.0.14                         7h31m
+6.2.14     5.0.3     kubedb/redis:6.2.14                  7h31m
+6.0.20     6.0.20    kubedb/redis:6.0.20                  7h31m
+6.2.14     6.2.14    redis:6.2.14                         7h31m
+7.0.4      7.0.4     redis:7.0.4                          7h31m
+7.0.14     7.0.14    redis:7.0.14                         7h31m
+7.0.6      7.0.6     redis:7.0.6                          7h31m
+```
+
+## Create a Redis server
+
+KubeDB implements a `Redis` CRD to define the specification of a Redis server. Below is the `Redis` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis-quickstart
+  namespace: demo
+spec:
+  version: 6.2.14
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: DoNotTerminate
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/quickstart/demo-1.yaml
+redis.kubedb.com/redis-quickstart created
+```
+
+Here,
+
+- `spec.version` is the name of the RedisVersion crd where the docker images are specified. In this tutorial, a Redis 6.2.14 database is created.
+- `spec.storageType` specifies the type of storage that will be used for the Redis server. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Redis server using an `emptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the PVC spec that will be dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of `Redis` crd or which resources KubeDB should keep or delete when you delete `Redis` crd. If admission webhook is enabled, It prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/redis/concepts/redis#specterminationpolicy) + +> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in storage.resources.requests field. Don't specify limits here. PVC does not get resized automatically. + +KubeDB operator watches for `Redis` objects using Kubernetes api. When a `Redis` object is created, KubeDB operator will create a new StatefulSet and a Service with the matching Redis object name. KubeDB operator will also create a governing service for StatefulSets with the name `kubedb`, if one is not already present. + +```bash +$ kubectl get rd -n demo +NAME VERSION STATUS AGE +redis-quickstart 6.2.14 Running 1m + +$ kubectl describe rd -n demo redis-quickstart +Name: redis-quickstart +Namespace: demo +CreationTimestamp: Tue, 31 May 2022 10:31:38 +0600 +Labels: +Annotations: +Replicas: 1 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: DoNotTerminate + +StatefulSet: + Name: redis-quickstart + CreationTimestamp: Tue, 31 May 2022 10:31:38 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=redis-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=redises.kubedb.com + Annotations: + Replicas: 824644335612 desired | 1 total + Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: redis-quickstart + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=redis-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=redises.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.216.57 + Port: primary 6379/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.58:6379 + +Service: + Name: redis-quickstart-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=redis-quickstart + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=redises.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 6379/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.58:6379 + +AppBinding: + Metadata: + Creation Timestamp: 2022-05-31T04:31:38Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: redis-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + Name: redis-quickstart + Namespace: demo + Spec: + Client Config: + Service: + Name: redis-quickstart + Port: 6379 + Scheme: redis + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: RedisConfiguration + Stash: + Addon: + Backup Task: + Name: redis-backup-6.2.5 + Restore Task: + Name: redis-restore-6.2.5 + Secret: + Name: redis-quickstart-auth + Type: kubedb.com/redis + Version: 6.2.14 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 2m Redis Operator Successfully created governing service + Normal Successful 2m Redis Operator Successfully created Service + Normal Successful 2m Redis Operator Successfully created appbinding + + +$ kubectl get statefulset -n demo +NAME READY AGE +redis-quickstart 1/1 1m 
+ +$ kubectl get pvc -n demo +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +data-redis-quickstart-0 Bound pvc-6e457226-c53f-11e8-9ba7-0800274bef12 1Gi RWO standard 2m + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-6e457226-c53f-11e8-9ba7-0800274bef12 1Gi RWO Delete Bound demo/data-redis-quickstart-0 standard 2m + +$ kubectl get service -n demo +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +redis-quickstart-pods ClusterIP None 2m +redis-quickstart ClusterIP 10.108.149.205 6379/TCP 2m +``` + +KubeDB operator sets the `status.phase` to `Running` once the database is successfully created. Run the following command to see the modified Redis object: + +```bash +$ kubectl get rd -n demo redis-quickstart -o yaml +apiVersion: kubedb.com/v1alpha2 +kind: Redis +metadata: + creationTimestamp: "2022-05-31T04:31:38Z" + finalizers: + - kubedb.com + generation: 2 + name: redis-quickstart + namespace: demo + resourceVersion: "63624" + uid: 7ffc9d73-94df-4475-9656-a382f380c293 +spec: + allowedSchemas: + namespaces: + from: Same + authSecret: + name: redis-quickstart-auth + coordinator: + resources: {} + mode: Standalone + podTemplate: + controller: {} + metadata: {} + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: redis-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + namespaces: + - demo + topologyKey: kubernetes.io/hostname + weight: 100 + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: redis-quickstart + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: redises.kubedb.com + namespaces: + - demo + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 50 + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + serviceAccountName: redis-quickstart + replicas: 1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: Delete + version: 6.2.14 +status: + conditions: + - lastTransitionTime: "2022-05-31T04:31:38Z" + message: 'The KubeDB operator has started the provisioning of Redis: demo/redis-quickstart' + reason: DatabaseProvisioningStartedSuccessfully + status: "True" + type: ProvisioningStarted + - lastTransitionTime: "2022-05-31T04:31:43Z" + message: All desired replicas are ready. + reason: AllReplicasReady + status: "True" + type: ReplicaReady + - lastTransitionTime: "2022-05-31T04:31:48Z" + message: 'The Redis: demo/redis-quickstart is accepting rdClient requests.' + observedGeneration: 2 + reason: DatabaseAcceptingConnectionRequest + status: "True" + type: AcceptingConnection + - lastTransitionTime: "2022-05-31T04:31:48Z" + message: 'The Redis: demo/redis-quickstart is ready.' + observedGeneration: 2 + reason: ReadinessCheckSucceeded + status: "True" + type: Ready + - lastTransitionTime: "2022-05-31T04:31:48Z" + message: 'The Redis: demo/redis-quickstart is successfully provisioned.' + observedGeneration: 2 + reason: DatabaseSuccessfullyProvisioned + status: "True" + type: Provisioned + observedGeneration: 2 + phase: Ready + +``` + +Now, you can connect to this database through [redis-cli](https://redis.io/topics/rediscli). In this tutorial, we are connecting to the Redis server from inside of pod. 
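+
+If you would rather connect from your workstation, you can port-forward the service and use a locally installed `redis-cli` (this variant is our sketch; the guide itself connects from inside the pod):
+
+```bash
+# forward the primary service port to localhost
+$ kubectl port-forward -n demo svc/redis-quickstart 6379
+
+# then, in another terminal
+$ redis-cli -h 127.0.0.1 -p 6379 ping
+```
+
+The in-pod session looks like this: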
+
+```bash
+$ kubectl exec -it -n demo redis-quickstart-0 -- sh
+
+/data > redis-cli
+
+127.0.0.1:6379> ping
+PONG
+
+# save data
+127.0.0.1:6379> SET mykey "Hello"
+OK
+
+# view data
+127.0.0.1:6379> GET mykey
+"Hello"
+
+127.0.0.1:6379> exit
+
+/data > exit
+```
+
+## DoNotTerminate Property
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete rd redis-quickstart -n demo
+Error from server (BadRequest): admission webhook "redis.validators.kubedb.com" denied the request: redis "redis-quickstart" can't be halted. To delete, change spec.terminationPolicy
+```
+
+Now, run `kubectl edit rd redis-quickstart -n demo` to set `spec.terminationPolicy` to `Halt`. Then you will be able to delete/halt the database.
+
+Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/redis/concepts/redis#specterminationpolicy)
+
+## Halt Database
+
+When [TerminationPolicy](/docs/v2024.1.31/guides/redis/concepts/redis#specterminationpolicy) is set to `Halt`, and you delete the redis object, the KubeDB operator will delete the StatefulSet and its pods but leave the PVCs, secrets and database backup (snapshots) intact. Learn details of all `TerminationPolicy` [here](/docs/v2024.1.31/guides/redis/concepts/redis#specterminationpolicy).
+
+You can also keep the redis object and halt the database to resume it again later. If you halt the database, the KubeDB operator will delete the statefulsets and services but will keep the redis object, pvcs, secrets and backup (snapshots).
+
+To halt the database, first you have to set the terminationPolicy to `Halt` in the existing database. You can use the below command to set the terminationPolicy to `Halt`, if it is not already set.
+
+```bash
+$ kubectl patch -n demo rd/redis-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+redis.kubedb.com/redis-quickstart patched
+```
+
+Then, you have to set `spec.halted` as true to set the database in a `Halted` state. You can use the below command.
+
+```bash
+$ kubectl patch -n demo rd/redis-quickstart -p '{"spec":{"halted":true}}' --type="merge"
+redis.kubedb.com/redis-quickstart patched
+```
+
+After that, KubeDB will delete the statefulsets and services, and you can see the database Phase as `Halted`.
+
+Now, you can run the following command to get all redis resources in the demo namespace,
+
+```bash
+$ kubectl get redis,secret,pvc -n demo
+NAME                                VERSION   STATUS   AGE
+redis.kubedb.com/redis-quickstart   6.2.14    Halted   5m26s
+
+NAME                                 TYPE                                  DATA   AGE
+secret/default-token-rs764           kubernetes.io/service-account-token   3      6h54m
+secret/redis-quickstart-auth         kubernetes.io/basic-auth              2      5m26s
+secret/redis-quickstart-config       Opaque                                1      5m26s
+secret/root-secret                   kubernetes.io/tls                     3      6h19m
+secret/sh.helm.release.v1.vault.v1   helm.sh/release.v1                    1      176m
+secret/vault-client-certs            kubernetes.io/tls                     3      22s
+secret/vault-server-certs            kubernetes.io/tls                     3      22s
+
+NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/data-redis-quickstart-0   Bound    pvc-ee1c2fd3-4c0e-4dad-812b-8f83e20284f8   1Gi        RWO            standard       5m24s
+```
+
+## Resume Halted Redis
+
+Now, to resume the database, i.e. to get the same database setup back again, you have to set `spec.halted` as false. You can use the below command.
+
+```bash
+$ kubectl patch -n demo rd/redis-quickstart -p '{"spec":{"halted":false}}' --type="merge"
+redis.kubedb.com/redis-quickstart patched
+```
+
+When the database is resumed successfully, you can see the database Status is set to `Ready`.
+
+```bash
+$ kubectl get rd -n demo
+NAME               VERSION   STATUS   AGE
+redis-quickstart   6.2.14    Ready    7m52s
+```
+
+Now, if you exec into the pod again and look for the previous data, you will see that all the data persists.
+
+```bash
+$ kubectl exec -it -n demo redis-quickstart-0 -- sh
+
+/data > redis-cli
+
+127.0.0.1:6379> ping
+PONG
+
+# view data
+127.0.0.1:6379> GET mykey
+"Hello"
+
+127.0.0.1:6379> exit
+
+/data > exit
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo rd/redis-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/redis-quickstart patched
+
+$ kubectl delete -n demo rd/redis-quickstart
+redis.kubedb.com "redis-quickstart" deleted
+
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for a production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume a database from a previous one. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will delete everything created by KubeDB for a particular Redis crd when you delete the crd. For more details about termination policy, please visit [here](/docs/v2024.1.31/guides/redis/concepts/redis#specterminationpolicy).
+
+## Next Steps
+
+- Monitor your Redis server with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Monitor your Redis server with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
+- Use [private Docker registry](/docs/v2024.1.31/guides/redis/private-registry/using-private-registry) to deploy Redis with KubeDB.
+- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- Detail concepts of [RedisVersion object](/docs/v2024.1.31/guides/redis/concepts/catalog).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.1.31/CONTRIBUTING).
diff --git a/content/docs/v2024.1.31/guides/redis/reconfigure-tls/_index.md b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/_index.md new file mode 100644 index 0000000000..2870b76ab6 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/_index.md @@ -0,0 +1,22 @@ +--- +title: Reconfigure Redis TLS/SSL +menu: + docs_v2024.1.31: + identifier: rd-reconfigure-tls + name: Reconfigure TLS/SSL + parent: rd-redis-guides + weight: 46 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/reconfigure-tls/overview.md b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/overview.md new file mode 100644 index 0000000000..085513bebe --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/overview.md @@ -0,0 +1,68 @@ +--- +title: Reconfiguring TLS of Redis +menu: + docs_v2024.1.31: + identifier: rd-reconfigure-tls-overview + name: Overview + parent: rd-reconfigure-tls + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Reconfiguring TLS of Redis Database + +This guide will give an overview on how KubeDB Ops-manager operator reconfigures TLS configuration i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates of a `Redis` database. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) + - [RedisSentinel](/docs/v2024.1.31/guides/redis/concepts/redissentinel) + - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest) + +## How Reconfiguring Redis TLS Configuration Process Works + +The following diagram shows how KubeDB Ops-manager operator reconfigures TLS of a `Redis` database. Open the image in a new tab to see the enlarged version. + +
+  Reconfiguring TLS process of Redis +
Fig: Reconfiguring TLS process of Redis
+
+
+The Reconfiguring Redis/RedisSentinel TLS process consists of the following steps:
+
+1. At first, a user creates a `Redis`/`RedisSentinel` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `Redis` and `RedisSentinel` CR.
+
+3. When the operator finds a `Redis`/`RedisSentinel` CR, it creates the required number of `StatefulSets` and related necessary stuff like appbindings, services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `Redis` database, the user creates a `RedisOpsRequest` CR with the desired TLS configuration.
+
+5. Then, in order to reconfigure the TLS configuration (rotate certificate, update certificate) of the `RedisSentinel` database, the user creates a `RedisSentinelOpsRequest` CR with the desired TLS configuration.
+
+6. `KubeDB` Enterprise operator watches the `RedisOpsRequest` and `RedisSentinelOpsRequest` CR.
+
+7. When it finds a `RedisOpsRequest` CR, it halts the `Redis` object which is referred from the `RedisOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `Redis` object during the reconfiguring process.
+
+8. When it finds a `RedisSentinelOpsRequest` CR, it halts the `RedisSentinel` object which is referred from the `RedisSentinelOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `RedisSentinel` object during the reconfiguring process.
+
+9. By looking at the desired TLS configuration in the `RedisOpsRequest`/`RedisSentinelOpsRequest` CR, the `KubeDB` Enterprise operator will add, remove, update or rotate the TLS configuration based on the Ops Request YAML.
+
+10. After successfully reconfiguring the `Redis`/`RedisSentinel` object, the `KubeDB` Enterprise operator resumes the `Redis`/`RedisSentinel` object so that the `KubeDB` Community operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on reconfiguring TLS of a Redis database using the `ReconfigureTLS` operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/reconfigure-tls/sentinel.md b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/sentinel.md
new file mode 100644
index 0000000000..9bc3573e97
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/sentinel.md
@@ -0,0 +1,533 @@
+---
+title: Reconfigure Redis Sentinel TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: rd-reconfigure-tls-sentinel
+    name: Sentinel
+    parent: rd-reconfigure-tls
+    weight: 30
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure Redis TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS/SSL certificates, i.e. adding, removing, updating and rotating them, for an existing Redis database via a RedisOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to a Redis database
+
+Here we are going to reconfigure TLS of Redis in Sentinel Mode. First we are going to deploy a RedisSentinel instance and a Redis instance. Then we are going to add TLS to them.
+
+### Deploy RedisSentinel without TLS
+
+In this section, we are going to deploy a `RedisSentinel` instance. Below is the YAML of the `RedisSentinel` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RedisSentinel
+metadata:
+  name: sen-sample
+  namespace: demo
+spec:
+  version: 6.2.14
+  replicas: 3
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+  terminationPolicy: DoNotTerminate
+```
+
+Let's create the `RedisSentinel` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/sentinel.yaml
+redissentinel.kubedb.com/sen-sample created
+```
+
+Now, wait until `sen-sample` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get redissentinel -n demo
+NAME         VERSION   STATUS   AGE
+sen-sample   6.2.14    Ready    5m20s
+```
+
+### Deploy Redis without TLS
+
+In this section, we are going to deploy a Redis database in Sentinel mode without TLS. In the next few sections we will reconfigure TLS using the `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: rd-sample
+  namespace: demo
+spec:
+  version: 6.2.14
+  replicas: 3
+  sentinelRef:
+    name: sen-sample
+    namespace: demo
+  mode: Sentinel
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: "standard"
+    accessModes:
+    - ReadWriteOnce
+  terminationPolicy: DoNotTerminate
+```
+
+Let's create the `Redis` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-sentinel.yaml
+redis.kubedb.com/rd-sample created
+```
+
+Now, wait until `rd-sample` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get rd -n demo
+Every 2.0s: kubectl get rd -n demo
+NAME        VERSION   STATUS   AGE
+rd-sample   6.2.14    Ready    88s
+```
+
+Now, we can connect to this database through redis-cli and verify that TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+
+root@rd-sample-0:/data# redis-cli
+
+127.0.0.1:6379> config get tls-cert-file
+1) "tls-cert-file"
+2) ""
+127.0.0.1:6379> exit
+root@rd-sample-0:/data#
+```
+
+We can verify from the above output that TLS is disabled for this database.
+
+## Create Issuer/ ClusterIssuer
+
+Now, we are going to create an example `ClusterIssuer` that will be used to enable SSL/TLS in Redis. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `ClusterIssuer`.
+
+## Create Issuer/ ClusterIssuer
+
+Now, we are going to create an example `ClusterIssuer` that will be used to enable SSL/TLS in Redis. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `ClusterIssuer`.
+
+- Start off by generating a CA certificate using openssl.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now create a ca-secret using the certificate files you have just generated. The secret should be created in the `cert-manager` namespace so that the `ClusterIssuer` can use it.
+
+```bash
+$ kubectl create secret tls redis-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=cert-manager
+```
+
+Now, create a `ClusterIssuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: redis-ca-issuer
+spec:
+  ca:
+    secretName: redis-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/clusterissuer.yaml
+clusterissuer.cert-manager.io/redis-ca-issuer created
+```
+
+### Create RedisOpsRequest
+
+There are two basic things to keep in mind when securing Redis using TLS in Sentinel mode.
+
+- Either both the Sentinel instance and the Redis database should have TLS enabled, or both should have TLS disabled.
+- If TLS is enabled, both the Sentinel instance and the Redis database should use the same `Issuer`. If they are in different namespaces, the certificates should be signed using a `ClusterIssuer` so that the same issuer can be shared.
+
+Currently, both Sentinel and Redis have TLS disabled. If we want to add TLS to the Redis database, we need to reference the name/namespace of a Sentinel which has TLS enabled. If no Sentinel is found with the given name/namespace, the KubeDB operator will create one.
+
+In order to add TLS to the database, we have to create a `RedisOpsRequest` CRO with our created issuer. Below is the YAML of the `RedisOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: rd-sample
+  tls:
+    sentinel:
+      ref:
+        name: sen-demo-tls
+        namespace: demo
+      removeUnusedSentinel: true
+    issuerRef:
+      apiGroup: cert-manager.io
+      name: redis-ca-issuer
+      kind: ClusterIssuer
+    certificates:
+      - alias: client
+        subject:
+          organizations:
+            - redis
+          organizationalUnits:
+            - client
+```
+
+Here,
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.sentinel.ref` specifies the new sentinel which will monitor the redis after adding tls. If it does not exist, KubeDB will create one with the given issuer.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/redis/concepts/redis#spectls).
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-add-tls.yaml
+redisopsrequest.ops.kubedb.com/rd-add-tls created
+```
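+
+While the request is being processed, its progress can also be followed from the object's status; a sketch (`status.phase` is assumed to move from `Progressing` to `Successful`, and `kubectl describe` shows the individual conditions):
+
+```bash
+$ kubectl get redisopsrequest -n demo rd-add-tls -o jsonpath='{.status.phase}{"\n"}'
+Progressing
+
+$ kubectl describe redisopsrequest -n demo rd-add-tls
+```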
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME         TYPE             STATUS       AGE
+rd-add-tls   ReconfigureTLS   Successful   9m
+```
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+
+Let's check if a new sentinel named `sen-demo-tls` has been created,
+```bash
+$ kubectl get redissentinel -n demo
+NAME           VERSION   STATUS   AGE
+sen-demo-tls   6.2.14    Ready    17m
+```
+
+Now, connect to this database by exec-ing into a pod and verify if `tls` has been set up as intended.
+
+```bash
+$ kubectl describe secret -n demo rd-sample-client-cert
+Name:         rd-sample-client-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=rd-sample
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=redises.kubedb.com
+Annotations:  cert-manager.io/alt-names:
+              cert-manager.io/certificate-name: rd-sample-client-cert
+              cert-manager.io/common-name: default
+              cert-manager.io/ip-sans:
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: ClusterIssuer
+              cert-manager.io/issuer-name: redis-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:   1139 bytes
+tls.crt:  1168 bytes
+tls.key:  1675 bytes
+```
+
+Now, let's exec into a redis container and check the certificate files used to connect from a redis shell,
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+
+root@rd-sample-0:/data# ls /certs
+ca.crt  client.crt  client.key  server.crt  server.key
+
+root@rd-sample-0:/data# redis-cli --tls --cert "/certs/client.crt" --key "/certs/client.key" --cacert "/certs/ca.crt" config get tls-cert-file
+1) "tls-cert-file"
+2) "/certs/server.crt"
+```
+
+Now, we can connect to the redis using the tls certificates and write some data,
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+# Trying to connect without tls certificates
+root@rd-sample-0:/data# redis-cli
+127.0.0.1:6379>
+127.0.0.1:6379> set hello world
+# Can not write data
+Error: Connection reset by peer
+
+# Trying to connect with tls certificates
+root@rd-sample-0:/data# redis-cli --tls --cert "/certs/client.crt" --key "/certs/client.key" --cacert "/certs/ca.crt"
+127.0.0.1:6379>
+127.0.0.1:6379> set hello world
+OK
+127.0.0.1:6379> exit
+```
+
+## Rotate Certificate
+
+Now we are going to rotate the certificates of the sentinel and the database.
+
+### Create RedisOpsRequest
+
+Now we are going to rotate certificates using a RedisOpsRequest. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-ops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: rd-sample
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-ops-rotate.yaml
+redisopsrequest.ops.kubedb.com/rd-ops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+rd-ops-rotate   ReconfigureTLS   Successful   5m5s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
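+
+Rotation can be double-checked by reading the new certificate's validity window straight from the secret; a sketch using openssl (the secret name follows the `<db-name>-client-cert` convention shown earlier, and your dates will differ):
+
+```bash
+$ kubectl get secret -n demo rd-sample-client-cert -o jsonpath='{.data.tls\.crt}' \
+    | base64 -d | openssl x509 -noout -startdate -enddate
+```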
+
+### Create RedisSentinelOpsRequest
+
+Now we are going to rotate the sentinel's certificates using a RedisSentinelOpsRequest. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisSentinelOpsRequest
+metadata:
+  name: sen-ops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: sen-demo-tls
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sen-demo-tls` sentinel.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our sentinel.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this sentinel.
+
+Let's create the `RedisSentinelOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/sen-ops-rotate.yaml
+redissentinelopsrequest.ops.kubedb.com/sen-ops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `RedisSentinelOpsRequest` to be `Successful`. Run the following command to watch the `RedisSentinelOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redissentinelopsrequest -n demo
+Every 2.0s: kubectl get redissentinelopsrequest -n demo
+NAME             TYPE             STATUS       AGE
+sen-ops-rotate   ReconfigureTLS   Successful   78s
+```
+
+We can see from the above output that the `RedisSentinelOpsRequest` has succeeded.
+
+
+## Remove TLS from the Database
+
+Now, we are going to remove TLS from this database using a RedisOpsRequest.
+
+Currently, both Sentinel and Redis have TLS enabled. If we want to remove TLS from the Redis database, we need to reference the name/namespace of a Sentinel which
+has TLS disabled. If no Sentinel is found with the given name/namespace, the KubeDB operator will create one.
+
+### Create RedisOpsRequest
+
+Below is the YAML of the `RedisOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-ops-remove
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: rd-sample
+  tls:
+    sentinel:
+      ref:
+        name: sen-sample
+        namespace: demo
+      removeUnusedSentinel: true
+    remove: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.sentinel.ref` specifies the new sentinel which will monitor the redis after removing tls. If it does not exist, KubeDB will create a sentinel with the given name/namespace.
+- `spec.tls.remove` specifies that we want to remove tls from this database.
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/sen-ops-remove.yaml
+redisopsrequest.ops.kubedb.com/rd-ops-remove created
+```
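+
+Once the request completes, the database's sentinel reference should point at the TLS-disabled sentinel; a quick sketch to confirm,
+
+```bash
+$ kubectl get redis -n demo rd-sample -o jsonpath='{.spec.sentinelRef.name}{"\n"}'
+sen-sample
+```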
+
+#### Verify TLS Removed Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+rd-ops-remove   ReconfigureTLS   Successful   2m5s
+```
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+
+Let's check if a new sentinel named `sen-sample` has been created,
+```bash
+$ kubectl get redissentinel -n demo
+NAME         VERSION   STATUS   AGE
+sen-sample   6.2.14    Ready    7m56s
+```
+
+Now, let's exec into the database primary node and check whether TLS is disabled or not.
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+#
+root@rd-sample-0:/data# redis-cli
+
+127.0.0.1:6379> config get tls-cert-file
+1) "tls-cert-file"
+2) ""
+127.0.0.1:6379> exit
+root@rd-sample-0:/data#
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+# Delete Redis and RedisOpsRequest
+$ kubectl patch -n demo rd/rd-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/rd-sample patched
+
+$ kubectl delete -n demo redis rd-sample
+redis.kubedb.com "rd-sample" deleted
+
+$ kubectl delete -n demo redisopsrequest rd-add-tls rd-ops-remove rd-ops-rotate
+redisopsrequest.ops.kubedb.com "rd-add-tls" deleted
+redisopsrequest.ops.kubedb.com "rd-ops-remove" deleted
+redisopsrequest.ops.kubedb.com "rd-ops-rotate" deleted
+
+# Delete RedisSentinel and RedisSentinelOpsRequest
+$ kubectl patch -n demo redissentinel/sen-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redissentinel.kubedb.com/sen-sample patched
+
+$ kubectl delete -n demo redissentinel sen-sample
+redissentinel.kubedb.com "sen-sample" deleted
+
+$ kubectl delete -n demo redissentinelopsrequests sen-ops-rotate
+redissentinelopsrequest.ops.kubedb.com "sen-ops-rotate" deleted
+```
+
+## Next Steps
+
+- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- [Backup and Restore](/docs/v2024.1.31/guides/redis/backup/overview/) Redis databases using Stash.
+- Monitor your Redis database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Monitor your Redis database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
diff --git a/content/docs/v2024.1.31/guides/redis/reconfigure-tls/standalone.md b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/standalone.md
new file mode 100644
index 0000000000..8f96636b5d
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/reconfigure-tls/standalone.md
@@ -0,0 +1,499 @@
+---
+title: Reconfigure Redis TLS/SSL Encryption
+menu:
+  docs_v2024.1.31:
+    identifier: rd-reconfigure-tls-standalone
+    name: Standalone and Cluster
+    parent: rd-reconfigure-tls
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Reconfigure Redis TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring, i.e. adding, removing, updating and rotating TLS/SSL certificates, for an existing Redis database via a RedisOpsRequest.
This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/redis](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/redis) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to a Redis database
+
+> In this tutorial, we are going to reconfigure TLS of Redis in Standalone mode. For Cluster mode, the process is the same; a Redis database in Cluster mode
+needs to be deployed instead of Standalone mode, and the RedisOpsRequest CR fields are the same for both.
+
+Here, we are going to create a Redis database without TLS and then reconfigure the database to use TLS.
+
+### Deploy Redis without TLS
+
+In this section, we are going to deploy a Redis Standalone database without TLS. In the next few sections we will reconfigure TLS using `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: rd-sample
+  namespace: demo
+spec:
+  version: "6.2.14"
+  mode: Standalone
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `Redis` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/redis-standalone.yaml
+redis.kubedb.com/rd-sample created
+```
+
+Now, wait until `rd-sample` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get rd -n demo
+Every 2.0s: kubectl get rd -n demo
+NAME        VERSION   STATUS   AGE
+rd-sample   6.2.14    Ready    88s
+```
+
+Now, we can connect to this database through redis-cli and verify that TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+
+root@rd-sample-0:/data# redis-cli
+
+127.0.0.1:6379> config get tls-cert-file
+1) "tls-cert-file"
+2) ""
+127.0.0.1:6379> exit
+root@rd-sample-0:/data#
+```
+
+We can verify from the above output that TLS is disabled for this database.
+
+### Create Issuer/ ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used to enable SSL/TLS in Redis. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating a CA certificate using openssl.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
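+
+Before handing the CA to cert-manager, it can be sanity-checked with openssl; a sketch (your validity dates will differ):
+
+```bash
+$ openssl x509 -in ca.crt -noout -subject -dates
+subject=CN = ca, O = kubedb
+```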
+
+- Now we are going to create a ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls redis-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/redis-ca created
+```
+
+Now, let's create an `Issuer` using the `redis-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: redis-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: redis-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/issuer.yaml
+issuer.cert-manager.io/redis-ca-issuer created
+```
+
+### Create RedisOpsRequest
+
+In order to add TLS to the database, we have to create a `RedisOpsRequest` CRO with our created issuer. Below is the YAML of the `RedisOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: rd-sample
+  tls:
+    issuerRef:
+      name: redis-ca-issuer
+      kind: Issuer
+      apiGroup: "cert-manager.io"
+    certificates:
+      - alias: client
+        subject:
+          organizations:
+            - redis
+          organizationalUnits:
+            - client
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/v2024.1.31/guides/redis/concepts/redis#spectls).
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-add-tls.yaml
+redisopsrequest.ops.kubedb.com/rd-add-tls created
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME         TYPE             STATUS       AGE
+rd-add-tls   ReconfigureTLS   Successful   9m
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
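+
+Because the certificates are issued through cert-manager, readiness can also be checked on the corresponding `Certificate` resource; a sketch (the exact columns depend on your cert-manager version):
+
+```bash
+$ kubectl get certificates -n demo
+NAME                    READY   SECRET                  AGE
+rd-sample-client-cert   True    rd-sample-client-cert   2m
+```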
+
+Now, connect to this database by exec-ing into a pod and verify if `tls` has been set up as intended.
+
+```bash
+$ kubectl describe secret -n demo rd-sample-client-cert
+Name:         rd-sample-client-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=rd-sample
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=redises.kubedb.com
+Annotations:  cert-manager.io/alt-names:
+              cert-manager.io/certificate-name: rd-sample-client-cert
+              cert-manager.io/common-name: default
+              cert-manager.io/ip-sans:
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: redis-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:   1147 bytes
+tls.crt:  1127 bytes
+tls.key:  1679 bytes
+```
+
+Now, let's exec into a redis container and check the certificate files used to connect from a redis shell,
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+
+root@rd-sample-0:/data# ls /certs
+ca.crt  client.crt  client.key  server.crt  server.key
+
+root@rd-sample-0:/data# redis-cli --tls --cert "/certs/client.crt" --key "/certs/client.key" --cacert "/certs/ca.crt" config get tls-cert-file
+1) "tls-cert-file"
+2) "/certs/server.crt"
+```
+
+Now, we can connect to the redis using the tls certificates and write some data,
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+# Trying to connect without tls certificates
+root@rd-sample-0:/data# redis-cli
+127.0.0.1:6379>
+127.0.0.1:6379> set hello world
+# Can not write data
+Error: Connection reset by peer
+
+# Trying to connect with tls certificates
+root@rd-sample-0:/data# redis-cli --tls --cert "/certs/client.crt" --key "/certs/client.key" --cacert "/certs/ca.crt"
+127.0.0.1:6379>
+127.0.0.1:6379> set hello world
+OK
+127.0.0.1:6379> exit
+```
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database.
+
+### Create RedisOpsRequest
+
+Now we are going to rotate certificates using a RedisOpsRequest. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-ops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: rd-sample
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-ops-rotate.yaml
+redisopsrequest.ops.kubedb.com/rd-ops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+rd-ops-rotate   ReconfigureTLS   Successful   5m5s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+
+## Change Issuer/ClusterIssuer
+
+Now, we are going to change the issuer of this database.
+
+- Let's create a new ca certificate and key using a different subject `CN=ca-updated,O=kubedb-updated`.
+ +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +Generating a RSA private key +..............................................................+++++ +......................................................................................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a new ca-secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls redis-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/redis-new-ca created +``` + +Now, Let's create a new `Issuer` using the `redis-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: rd-new-issuer + namespace: demo +spec: + ca: + secretName: redis-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/new-issuer.yaml +issuer.cert-manager.io/rd-new-issuer created +``` + +### Create RedisOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `RedisOpsRequest` CRO with the newly created issuer. Below is the YAML of the `RedisOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + issuerRef: + name: rd-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. + +Let's create the `RedisOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-change-issuer.yaml +redisopsrequest.ops.kubedb.com/rd-change-issuer created +``` + +#### Verify Issuer is changed successfully + +Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch `RedisOpsRequest` CRO, + +```bash +$ kubectl get redisopsrequest -n demo +Every 2.0s: kubectl get redisopsrequest -n demo +NAME TYPE STATUS AGE +rd-change-issuer ReconfigureTLS Successful 4m65s +``` + +We can see from the above output that the `RedisOpsRequest` has succeeded. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a RedisOpsRequest. + +### Create RedisOpsRequest + +Below is the YAML of the `RedisOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RedisOpsRequest +metadata: + name: rd-ops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: rd-sample + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `rd-sample` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.remove` specifies that we want to remove tls from this database. 
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/reconfigure-tls/rd-ops-remove.yaml
+redisopsrequest.ops.kubedb.com/rd-ops-remove created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CRO,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+Every 2.0s: kubectl get redisopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+rd-ops-remove   ReconfigureTLS   Successful   105s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+
+Now, let's exec into the database primary node and check whether TLS is disabled or not.
+
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- bash
+#
+root@rd-sample-0:/data# redis-cli
+
+127.0.0.1:6379> config get tls-cert-file
+1) "tls-cert-file"
+2) ""
+127.0.0.1:6379> exit
+root@rd-sample-0:/data#
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo redis/rd-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+redis.kubedb.com/rd-sample patched
+
+$ kubectl delete redis -n demo rd-sample
+redis.kubedb.com/rd-sample deleted
+
+$ kubectl delete issuer -n demo redis-ca-issuer rd-new-issuer
+issuer.cert-manager.io "redis-ca-issuer" deleted
+issuer.cert-manager.io "rd-new-issuer" deleted
+
+$ kubectl delete redisopsrequest -n demo rd-add-tls rd-ops-remove rd-ops-rotate rd-change-issuer
+redisopsrequest.ops.kubedb.com "rd-add-tls" deleted
+redisopsrequest.ops.kubedb.com "rd-ops-remove" deleted
+redisopsrequest.ops.kubedb.com "rd-ops-rotate" deleted
+redisopsrequest.ops.kubedb.com "rd-change-issuer" deleted
+```
+
+## Next Steps
+
+- Detail concepts of [Redis object](/docs/v2024.1.31/guides/redis/concepts/redis).
+- [Backup and Restore](/docs/v2024.1.31/guides/redis/backup/overview/) Redis databases using Stash.
+- Monitor your Redis database with KubeDB using [out-of-the-box Prometheus operator](/docs/v2024.1.31/guides/redis/monitoring/using-prometheus-operator).
+- Monitor your Redis database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/v2024.1.31/guides/redis/monitoring/using-builtin-prometheus).
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/_index.md b/content/docs/v2024.1.31/guides/redis/scaling/_index.md
new file mode 100644
index 0000000000..84edd61daa
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Scaling Redis
+menu:
+  docs_v2024.1.31:
+    identifier: rd-scaling
+    name: Scaling
+    parent: rd-redis-guides
+    weight: 43
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/_index.md b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..7963f0fd69
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,22 @@
+---
+title: Horizontal Scaling
+menu:
+  docs_v2024.1.31:
+    identifier: rd-horizontal-scaling
+    name: Horizontal Scaling
+    parent: rd-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/cluster.md b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/cluster.md
new file mode 100644
index 0000000000..e939908777
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/cluster.md
@@ -0,0 +1,212 @@
+---
+title: Horizontal Scaling Redis Cluster
+menu:
+  docs_v2024.1.31:
+    identifier: rd-horizontal-scaling-cluster
+    name: Cluster
+    parent: rd-horizontal-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Horizontal Scale Redis Cluster
+
+This guide will give an overview on how the KubeDB Ops-manager operator scales the masters and replicas of a `Redis` database in Cluster mode up or down.
+
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Cluster
+
+Here, we are going to deploy a `Redis` cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare Redis Cluster Database
+
+Now, we are going to deploy a `Redis` cluster database with version `6.2.14`.
+
+### Deploy Redis Cluster
+
+In this section, we are going to deploy a Redis cluster database. Then, in the next section, we will scale the database using `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis-cluster
+  namespace: demo
+spec:
+  version: 6.2.14
+  mode: Cluster
+  cluster:
+    master: 3
+    replicas: 2
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: "1Gi"
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+  terminationPolicy: Halt
+```
+
+Let's create the `Redis` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/rd-cluster.yaml
+redis.kubedb.com/redis-cluster created
+```
+
+Now, wait until `redis-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get redis -n demo
+NAME            VERSION   STATUS   AGE
+redis-cluster   6.2.14    Ready    7m
+```
+
+Let's check the number of masters and replicas this database has from the Redis object,
+
+```bash
+$ kubectl get redis -n demo redis-cluster -o json | jq '.spec.cluster.master'
+3
+$ kubectl get redis -n demo redis-cluster -o json | jq '.spec.cluster.replicas'
+2
+```
+
+Now let's connect to redis-cluster using `redis-cli` and verify the master and replica count of the cluster,
+```bash
+$ kubectl exec -it -n demo redis-cluster-shard0-0 -c redis -- redis-cli -c cluster nodes | grep master
+914e68b97816a9aae0ee90e68b918a096baf479b 10.244.0.159:6379@16379 myself,master - 0 1675770134000 1 connected 0-5460
+a70923f477d7b37ce3c0beb7ed891f6501ac48ef 10.244.0.165:6379@16379 master - 0 1675770134111 3 connected 10923-16383
+94ee446e08494f1c5c826e03151dd1889585140e 10.244.0.162:6379@16379 master - 0 1675770134813 2 connected 5461-10922
+
+$ kubectl exec -it -n demo redis-cluster-shard0-0 -c redis -- redis-cli -c cluster nodes | grep slave | wc -l
+6
+```
+
+We can see from the above output that there are 3 masters and each master has 2 replicas, so there are 6 replicas in total in the cluster. Each master and its two replicas belong to a shard.
+
+We are now ready to apply the `RedisOpsRequest` CR to scale this database horizontally.
+
+### Horizontal Scaling
+
+Here, we are going to scale up the masters and scale down the replicas of the redis cluster to reach the desired numbers of masters and replicas.
+
+#### Create RedisOpsRequest
+
+In order to scale up the masters and scale down the replicas of the redis cluster, we have to create a `RedisOpsRequest` CR with our desired number of masters and replicas. Below is the YAML of the `RedisOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: redisops-horizontal
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: redis-cluster
+  horizontalScaling:
+    master: 4
+    replicas: 1
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `redis-cluster` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+
+- `spec.horizontalScaling.master` specifies the desired number of masters after scaling.
+- `spec.horizontalScaling.replicas` specifies the desired number of replicas after scaling.
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/horizontal-cluster.yaml
+redisopsrequest.ops.kubedb.com/redisops-horizontal created
+```
+
+#### Verify Redis Cluster masters and replicas updated successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the masters and replicas of the `Redis` object and the related `StatefulSets`.
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CR,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo redisops-horizontal
+NAME                  TYPE                STATUS       AGE
+redisops-horizontal   HorizontalScaling   Successful   6m11s
+```
+
+Now, we are going to verify whether the number of masters and replicas of the redis cluster has been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get redis -n demo redis-cluster -o json | jq '.spec.cluster.master'
+4
+$ kubectl get redis -n demo redis-cluster -o json | jq '.spec.cluster.replicas'
+1
+```
+
+Now let's connect to redis-cluster using `redis-cli` and verify the master and replica count of the cluster,
+```bash
+$ kubectl exec -it -n demo redis-cluster-shard0-0 -c redis -- redis-cli -c cluster nodes | grep master
+94a9278454d934d4b5058d3e49b4bca14ff88975 10.244.0.176:6379@16379 master - 0 1675770403000 6 connected 0-1364 5461-6826 10923-12287
+914e68b97816a9aae0ee90e68b918a096baf479b 10.244.0.159:6379@16379 myself,master - 0 1675770403000 1 connected 1365-5460
+a70923f477d7b37ce3c0beb7ed891f6501ac48ef 10.244.0.165:6379@16379 master - 0 1675770404571 3 connected 12288-16383
+94ee446e08494f1c5c826e03151dd1889585140e 10.244.0.162:6379@16379 master - 0 1675770403667 2 connected 6827-10922
+
+$ kubectl exec -it -n demo redis-cluster-shard0-0 -c redis -- redis-cli -c cluster nodes | grep slave | wc -l
+4
+```
+
+The above output verifies that we have successfully scaled up the masters and scaled down the replicas of the Redis cluster database. The hash slots have also been
+redistributed among the 4 masters.
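+
+To see the resulting slot ranges per master at a glance, `redis-cli` can also print the cluster's slot map; a sketch (`cluster shards` is the newer equivalent on Redis 7+):
+
+```bash
+$ kubectl exec -it -n demo redis-cluster-shard0-0 -c redis -- redis-cli -c cluster slots
+```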
+ +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash + +$ kubectl patch -n demo rd/redis-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +redis.kubedb.com/redis-cluster patched + +$ kubectl delete -n demo redis redis-cluster +redis.kubedb.com "redis-cluster" deleted + +$ kubectl delete -n demo redisopsrequest redisops-horizontal +redisopsrequest.ops.kubedb.com "redisops-horizontal " deleted +``` \ No newline at end of file diff --git a/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/overview.md b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/overview.md new file mode 100644 index 0000000000..b8be82a323 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/overview.md @@ -0,0 +1,69 @@ +--- +title: Redis Horizontal Scaling Overview +menu: + docs_v2024.1.31: + identifier: rd-horizontal-scaling-overview + name: Overview + parent: rd-horizontal-scaling + weight: 10 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Redis Horizontal Scaling + +This guide will give an overview on how KubeDB Ops Manager scales up or down of `Redis` cluster database for both the number of replicas and masters. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis) + - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest) + +## How Horizontal Scaling Process Works + +The following diagram shows how KubeDB Ops Manager scales up or down `Redis` database components. Open the image in a new tab to see the enlarged version. + +
+  Horizontal scaling process of Redis +
Fig: Horizontal scaling process of Redis
+
+
+The scaling process consists of the following steps:
+
+1. At first, a user creates a `Redis`/`RedisSentinel` Custom Resource (CR).
+
+2. `KubeDB` Community operator watches the `Redis` and `RedisSentinel` CR.
+
+3. When the operator finds a `Redis`/`RedisSentinel` CR, it creates the required number of `StatefulSets` and related resources such as appbindings, services, etc.
+
+4. Then, in order to scale the number of replicas or masters for the `Redis` cluster database, the user creates a `RedisOpsRequest` CR with the desired information.
+
+5. Then, in order to scale the number of replicas for the `RedisSentinel` instance, the user creates a `RedisSentinelOpsRequest` CR with the desired information.
+
+6. `KubeDB` Enterprise operator watches the `RedisOpsRequest` and `RedisSentinelOpsRequest` CR.
+
+7. When it finds a `RedisOpsRequest` CR, it halts the `Redis` object which is referred from the `RedisOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `Redis` object during the scaling process.
+
+8. When it finds a `RedisSentinelOpsRequest` CR, it halts the `RedisSentinel` object which is referred from the `RedisSentinelOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `RedisSentinel` object during the scaling process.
+
+9. Then the Redis Ops-manager operator will scale the related StatefulSet Pods to reach the expected number of masters and/or replicas defined in the RedisOpsRequest or RedisSentinelOpsRequest CR.
+
+10. After successfully scaling the replicas of the related StatefulSet Pods, the KubeDB Ops-manager operator updates the number of replicas/masters in the Redis/RedisSentinel object to reflect the updated state.
+
+11. After successfully updating the `Redis`/`RedisSentinel` object, the `KubeDB` Enterprise operator resumes the `Redis`/`RedisSentinel` object so that the `KubeDB` Community operator can resume its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on scaling a Redis database using a horizontal scaling operation.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/sentinel.md b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/sentinel.md
new file mode 100644
index 0000000000..0e03b86f4f
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/sentinel.md
@@ -0,0 +1,341 @@
+---
+title: Horizontal Scaling Redis Sentinel
+menu:
+  docs_v2024.1.31:
+    identifier: rd-horizontal-scaling-sentinel
+    name: Sentinel
+    parent: rd-horizontal-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.1.31/README).
+
+# Horizontal Scale of Redis Sentinel
+
+This guide will give an overview on how the KubeDB Ops-manager operator scales a `Redis` database and a `RedisSentinel` instance up or down.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
+  - [RedisSentinel](/docs/v2024.1.31/guides/redis/concepts/redissentinel)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Horizontal Scaling Overview](/docs/v2024.1.31/guides/redis/scaling/horizontal-scaling/overview).
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+### Prepare Redis Sentinel Database
+
+Now, we are going to deploy a `RedisSentinel` instance with version `6.2.14` and a `Redis` database with version `6.2.14`. Then, in the next section, we are going to apply horizontal scaling on the sentinel and the database using `RedisSentinelOpsRequest` and `RedisOpsRequest` CRD.
+
+### Deploy RedisSentinel
+
+In this section, we are going to deploy a `RedisSentinel` instance. Below is the YAML of the `RedisSentinel` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RedisSentinel
+metadata:
+  name: sen-sample
+  namespace: demo
+spec:
+  version: 6.2.14
+  replicas: 5
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+  terminationPolicy: DoNotTerminate
+```
+
+Let's create the `RedisSentinel` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/sentinel.yaml
+redissentinel.kubedb.com/sen-sample created
+```
+
+Now, wait until `sen-sample` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get redissentinel -n demo
+NAME         VERSION   STATUS   AGE
+sen-sample   6.2.14    Ready    5m20s
+```
+
+Let's check the number of replicas this sentinel has from the RedisSentinel object,
+
+```bash
+$ kubectl get redissentinel -n demo sen-sample -o json | jq '.spec.replicas'
+5
+```
+
+### Deploy Redis
+
+In this section, we are going to deploy a `Redis` instance which will be monitored by the previously created `sen-sample`. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: rd-sample
+  namespace: demo
+spec:
+  version: 6.2.14
+  replicas: 3
+  sentinelRef:
+    name: sen-sample
+    namespace: demo
+  mode: Sentinel
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+  terminationPolicy: DoNotTerminate
+```
+
+Let's create the `Redis` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/rd-sentinel.yaml
+redis.kubedb.com/rd-sample created
+```
+
+Now, wait until `rd-sample` has status `Ready`,
i.e.,
+
+```bash
+$ kubectl get redis -n demo
+NAME        VERSION   STATUS   AGE
+rd-sample   6.2.14    Ready    2m11s
+```
+Let's check the number of replicas this database has from the Redis object,
+```bash
+$ kubectl get redis -n demo rd-sample -o json | jq '.spec.replicas'
+3
+```
+
+Now let's connect to redis with redis-cli to check the replication configuration,
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- redis-cli info replication
+# Replication
+role:master
+connected_slaves:2
+slave0:ip=rd-sample-1.rd-sample-pods.demo.svc,port=6379,state=online,offset=35478,lag=0
+slave1:ip=rd-sample-2.rd-sample-pods.demo.svc,port=6379,state=online,offset=35478,lag=0
+master_failover_state:no-failover
+master_replid:4ac5cc7292e84c6d1b69d3732869557f2854db2d
+master_replid2:0000000000000000000000000000000000000000
+master_repl_offset:35492
+second_repl_offset:-1
+repl_backlog_active:1
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:1
+repl_backlog_histlen:35492
+```
+
+Additionally, the sentinel monitoring can be checked with the following command:
+```bash
+kubectl exec -it -n demo sen-sample-0 -c redissentinel -- redis-cli -p 26379 sentinel masters
+```
+
+We are now ready to apply the `RedisSentinelOpsRequest` CR to horizontally scale the sentinel and the `RedisOpsRequest` CR to horizontally scale the database.
+
+### Horizontal Scale RedisSentinel
+
+Here, we are going to scale down the replica count of the sentinel to the desired number of replicas.
+
+#### Create RedisSentinelOpsRequest
+
+In order to scale the replicas of the sentinel, we have to create a `RedisSentinelOpsRequest` CR with our desired number of replicas. Below is the YAML of the `RedisSentinelOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisSentinelOpsRequest
+metadata:
+  name: sen-ops-horizontal
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: sen-sample
+  horizontalScaling:
+    replicas: 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `sen-sample` RedisSentinel instance.
+- `spec.type` specifies that we are going to perform `HorizontalScaling` on our sentinel.
+- `spec.horizontalScaling.replicas` specifies the desired number of replicas after scaling.
+
+Let's create the `RedisSentinelOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/horizontal-sentinel.yaml
+redissentinelopsrequest.ops.kubedb.com/sen-ops-horizontal created
+```
+
+#### Verify RedisSentinel replicas updated successfully
+
+If everything goes well, `KubeDB` Enterprise operator will scale down the replicas of the `RedisSentinel` object.
+
+Let's wait for `RedisSentinelOpsRequest` to be `Successful`. Run the following command to watch the `RedisSentinelOpsRequest` CR,
+
+```bash
+$ watch kubectl get redissentinelopsrequest -n demo
+Every 2.0s: kubectl get redissentinelopsrequest -n demo
+NAME                 TYPE                STATUS       AGE
+sen-ops-horizontal   HorizontalScaling   Successful   5m27s
+```
+
+We can see from the above output that the `RedisSentinelOpsRequest` has succeeded.
+
+Let's check the number of replicas this sentinel has from the RedisSentinel object,
+
+```bash
+$ kubectl get redissentinel -n demo sen-sample -o json | jq '.spec.replicas'
+3
+```
+
+The above output verifies that we have successfully scaled down the replicas of the sentinel instance.
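+
+It is also worth confirming that the remaining sentinels still reach quorum for the monitored master; a sketch (the monitored master's name is an assumption here — run `sentinel masters` first to look it up):
+
+```bash
+$ kubectl exec -it -n demo sen-sample-0 -c redissentinel -- redis-cli -p 26379 sentinel ckquorum rd-sample
+OK 3 usable Sentinels. Quorum and failover authorization can be reached
+```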
+
+### Horizontal Scale Redis
+
+Here, we are going to scale up the replicas of the redis database to the desired number of replicas.
+
+#### Create RedisOpsRequest
+
+In order to scale the replicas of the redis database, we have to create a `RedisOpsRequest` CR with our desired number of replicas. Below is the YAML of the `RedisOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RedisOpsRequest
+metadata:
+  name: rd-ops-horizontal
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: rd-sample
+  horizontalScaling:
+    replicas: 5
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `rd-sample` Redis database.
+- `spec.type` specifies that we are going to perform `HorizontalScaling` on our database.
+- `spec.horizontalScaling.replicas` specifies the desired number of replicas after scaling.
+
+Let's create the `RedisOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/horizontal-scaling/horizontal-redis-sentinel.yaml
+redisopsrequest.ops.kubedb.com/rd-ops-horizontal created
+```
+
+#### Verify Redis replicas updated successfully
+
+If everything goes well, `KubeDB` Enterprise operator will scale up the replicas of the `Redis` object.
+
+Let's wait for `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CR,
+
+```bash
+$ watch kubectl get redisopsrequest -n demo
+NAME                TYPE                STATUS       AGE
+rd-ops-horizontal   HorizontalScaling   Successful   4m4s
+```
+
+We can see from the above output that the `RedisOpsRequest` has succeeded.
+Now, we are going to verify whether the number of replicas of the redis database has been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get redis -n demo rd-sample -o json | jq '.spec.replicas'
+5
+```
+
+Now let's connect to redis with redis-cli to check the replication configuration,
+```bash
+$ kubectl exec -it -n demo rd-sample-0 -c redis -- redis-cli info replication
+# Replication
+role:master
+connected_slaves:4
+slave0:ip=rd-sample-1.rd-sample-pods.demo.svc,port=6379,state=online,offset=325651,lag=1
+slave1:ip=rd-sample-2.rd-sample-pods.demo.svc,port=6379,state=online,offset=325651,lag=1
+slave2:ip=rd-sample-3.rd-sample-pods.demo.svc,port=6379,state=online,offset=325651,lag=1
+slave3:ip=rd-sample-4.rd-sample-pods.demo.svc,port=6379,state=online,offset=325651,lag=1
+master_failover_state:no-failover
+master_replid:4871c4756eebbadc7f2c56a4dd1dff11e20a04ba
+master_replid2:0000000000000000000000000000000000000000
+master_repl_offset:325651
+second_repl_offset:-1
+repl_backlog_active:1
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:1
+repl_backlog_histlen:325651
+```
+
+The above output verifies that we have successfully scaled up the replicas of the redis database. There is 1 master and 4 connected replicas. So, the Ops Request
+scaled up the replicas to 5.
+ +Additionally, the sentinel monitoring can be checked with following command : +```bash +kubectl exec -it -n demo sen-sample-0 -c redissentinel -- redis-cli -p 26379 sentinel masters +``` + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +# Delete Redis and RedisOpsRequest +$ kubectl patch -n demo rd/rd-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +redis.kubedb.com/rd-sample patched + +$ kubectl delete -n demo redis rd-sample +redis.kubedb.com "rd-sample" deleted + +$ kubectl delete -n demo redisopsrequest rd-ops-horizontal +redisopsrequest.ops.kubedb.com "rd-ops-horizontal" deleted + +# Delete RedisSentinel and RedisSentinelOpsRequest +$ kubectl patch -n demo redissentinel/sen-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" +redissentinel.kubedb.com/sen-sample patched + +$ kubectl delete -n demo redissentinel sen-sample +redissentinel.kubedb.com "sen-sample" deleted + +$ kubectl delete -n demo redissentinelopsrequests sen-ops-horizontal +redissentinelopsrequest.ops.kubedb.com "sen-ops-horizontal" deleted +``` diff --git a/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/_index.md b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..d37471b950 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/_index.md @@ -0,0 +1,22 @@ +--- +title: Vertical Scaling +menu: + docs_v2024.1.31: + identifier: rd-vertical-scaling + name: Vertical Scaling + parent: rd-scaling + weight: 20 +menu_name: docs_v2024.1.31 +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + diff --git a/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/cluster.md b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/cluster.md new file mode 100644 index 0000000000..714af50738 --- /dev/null +++ b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/cluster.md @@ -0,0 +1,236 @@ +--- +title: Vertical Scaling Redis Cluster +menu: + docs_v2024.1.31: + identifier: rd-vertical-scaling-cluster + name: Cluster + parent: rd-vertical-scaling + weight: 30 +menu_name: docs_v2024.1.31 +section_menu_id: guides +info: + autoscaler: v0.26.0 + cli: v0.41.0 + dashboard: v0.17.0 + installer: v2024.1.31 + ops-manager: v0.28.0 + provisioner: v0.41.0 + schema-manager: v0.17.0 + ui-server: v0.17.0 + version: v2024.1.31 + webhook-server: v0.17.0 +--- + +> New to KubeDB? Please start [here](/docs/v2024.1.31/README). + +# Vertical Scale Redis Cluster + +This guide will show you how to use `KubeDB` Enterprise operator to update the resources of a Redis cluster database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README). 
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
+  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
+  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Cluster
+
+Here, we are going to deploy a `Redis` cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare Redis Cluster Database
+
+Now, we are going to deploy a `Redis` cluster database with version `7.0.14`.
+
+### Deploy Redis Cluster
+
+In this section, we are going to deploy a Redis cluster database. Then, in the next section, we will update the resources of the database using `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Redis
+metadata:
+  name: redis-cluster
+  namespace: demo
+spec:
+  version: 7.0.14
+  mode: Cluster
+  cluster:
+    master: 3
+    replicas: 1
+  storageType: Durable
+  storage:
+    resources:
+      requests:
+        storage: "1Gi"
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "100m"
+          memory: "100Mi"
+  terminationPolicy: Halt
+```
+
+Let's create the `Redis` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/rd-cluster.yaml
+redis.kubedb.com/redis-cluster created
+```
+
+Now, wait until `redis-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get redis -n demo
+NAME            VERSION   STATUS   AGE
+redis-cluster   7.0.14    Ready    7m
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo redis-cluster-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "100Mi"
+  },
+  "requests": {
+    "cpu": "100m",
+    "memory": "100Mi"
+  }
+}
+
+$ kubectl get pod -n demo redis-cluster-shard1-1 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "100Mi"
+  },
+  "requests": {
+    "cpu": "100m",
+    "memory": "100Mi"
+  }
+}
+```
+
+We can see from the above output that there are some default resources set by the operator for the pods across all shards, and the scheduler will choose the most suitable node to place the containers of the Pods.
+
+We are now ready to apply the `RedisOpsRequest` CR to update the resources of this database.
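+
+If the metrics-server is available in the cluster, current usage can be compared against these requests before choosing new values; a sketch (the instance label follows the convention KubeDB puts on database pods):
+
+```bash
+$ kubectl top pod -n demo -l app.kubernetes.io/instance=redis-cluster
+```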
Below is the YAML of the `RedisOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RedisOpsRequest
metadata:
  name: redisops-vertical
  namespace: demo
spec:
  type: VerticalScaling
  databaseRef:
    name: redis-cluster
  verticalScaling:
    redis:
      resources:
        requests:
          memory: "300Mi"
          cpu: "200m"
        limits:
          memory: "800Mi"
          cpu: "500m"
```

Here,

- `spec.databaseRef.name` specifies that we are performing a vertical scaling operation on the `redis-cluster` database.
- `spec.type` specifies that we are performing `VerticalScaling` on our database.
- `spec.verticalScaling.redis` specifies the desired resources after scaling.

Let's create the `RedisOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/vertical-cluster.yaml
redisopsrequest.ops.kubedb.com/redisops-vertical created
```

#### Verify Redis Cluster resources updated successfully

If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `Redis` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CR,

```bash
$ watch kubectl get redisopsrequest -n demo redisops-vertical
NAME                TYPE              STATUS       AGE
redisops-vertical   VerticalScaling   Successful   6m11s
```

We can see from the above output that the `RedisOpsRequest` has succeeded.

Now, we are going to verify from the Pod yaml whether the resources of the cluster database have been updated to meet the desired state. Let's check,

```bash
$ kubectl get pod -n demo redis-cluster-shard0-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "800Mi"
  },
  "requests": {
    "cpu": "200m",
    "memory": "300Mi"
  }
}

$ kubectl get pod -n demo redis-cluster-shard1-1 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "800Mi"
  },
  "requests": {
    "cpu": "200m",
    "memory": "300Mi"
  }
}
```

The above output verifies that we have successfully scaled up the resources of the Redis cluster database.

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl patch -n demo rd/redis-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
redis.kubedb.com/redis-cluster patched

$ kubectl delete -n demo redis redis-cluster
redis.kubedb.com "redis-cluster" deleted

$ kubectl delete -n demo redisopsrequest redisops-vertical
redisopsrequest.ops.kubedb.com "redisops-vertical" deleted
```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview.md b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview.md
new file mode 100644
index 0000000000..aa9c8838c4
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview.md
@@ -0,0 +1,67 @@
+---
+title: Redis Vertical Scaling Overview
+menu:
+  docs_v2024.1.31:
+    identifier: rd-vertical-scaling-overview
+    name: Overview
+    parent: rd-vertical-scaling
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
+> New to KubeDB?
Please start [here](/docs/v2024.1.31/README).

# Redis Vertical Scaling Overview

This guide will give you an overview of how the KubeDB Ops Manager updates the resources (for example, CPU and memory) of a `Redis` database.

## Before You Begin

- You should be familiar with the following `KubeDB` concepts:
  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)

## How Vertical Scaling Process Works

The following diagram shows how the KubeDB Ops Manager updates the resources of the `Redis` database. Open the image in a new tab to see the enlarged version.

<figure align="center">
  <figcaption align="center">Fig: Vertical scaling process of Redis</figcaption>
</figure>
The updating process consists of the following steps:

1. At first, a user creates a `Redis`/`RedisSentinel` Custom Resource (CR).

2. The `KubeDB` Community operator watches the `Redis` and `RedisSentinel` CRs.

3. When the operator finds a `Redis`/`RedisSentinel` CR, it creates the required number of `StatefulSets` and related resources such as AppBindings, Services, etc.

4. Then, in order to update the resources of the `Redis` database, the user creates a `RedisOpsRequest` CR with the desired resource values.

5. Likewise, in order to update the resources of the `RedisSentinel` instance, the user creates a `RedisSentinelOpsRequest` CR with the desired resource values.

6. The `KubeDB` Enterprise operator watches the `RedisOpsRequest` and `RedisSentinelOpsRequest` CRs.

7. When it finds a `RedisOpsRequest` CR, it halts the `Redis` object referred to by the `RedisOpsRequest`, so that the `KubeDB` Community operator doesn't perform any operations on the `Redis` object during the updating process.

8. When it finds a `RedisSentinelOpsRequest` CR, it halts the `RedisSentinel` object referred to by the `RedisSentinelOpsRequest`, so that the `KubeDB` Community operator doesn't perform any operations on the `RedisSentinel` object during the updating process.

9. After successfully updating the resources of the StatefulSet replicas, the `KubeDB` Enterprise operator updates the `Redis`/`RedisSentinel` object to reflect the updated state.

10. After the `Redis`/`RedisSentinel` object is successfully updated, the `KubeDB` Enterprise operator resumes it so that the `KubeDB` Community operator can resume its usual operations.

In the next docs, we are going to show a step-by-step guide on vertically scaling a Redis database using the `RedisOpsRequest` CRD.
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/sentinel.md b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/sentinel.md
new file mode 100644
index 0000000000..4dffc09d83
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/sentinel.md
@@ -0,0 +1,349 @@
+---
+title: Vertical Scaling Sentinel Redis
+menu:
+  docs_v2024.1.31:
+    identifier: rd-vertical-scaling-sentinel
+    name: Sentinel
+    parent: rd-vertical-scaling
+    weight: 40
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Vertical Scale of Redis Sentinel

This guide will show you how to use the `KubeDB` Enterprise operator to perform vertical scaling of `Redis` in Sentinel mode and `RedisSentinel`.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).
- You should be familiar with the following `KubeDB` concepts:
  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
  - [RedisSentinel](/docs/v2024.1.31/guides/redis/concepts/redissentinel)
  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> **Note:** YAML files used in this tutorial are stored in the [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.

### Prepare Redis Sentinel Database

Now, we are going to deploy a `RedisSentinel` instance with version `6.2.14` and a `Redis` database with version `6.2.14`. Then, in the next section, we are going to apply vertical scaling on the sentinel and the database using the `RedisOpsRequest` CRD.

### Deploy RedisSentinel

In this section, we are going to deploy a `RedisSentinel` instance. Below is the YAML of the `RedisSentinel` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RedisSentinel
metadata:
  name: sen-sample
  namespace: demo
spec:
  version: 6.2.14
  replicas: 3
  storageType: Durable
  storage:
    resources:
      requests:
        storage: 1Gi
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
  podTemplate:
    spec:
      resources:
        requests:
          cpu: "100m"
          memory: "100Mi"
  terminationPolicy: DoNotTerminate
```

Let's create the `RedisSentinel` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/sentinel.yaml
redissentinel.kubedb.com/sen-sample created
```

Now, wait until `sen-sample` has status `Ready`, i.e.,

```bash
$ kubectl get redissentinel -n demo
NAME         VERSION   STATUS   AGE
sen-sample   6.2.14    Ready    5m20s
```

Let's check the Pod containers resources,

```bash
$ kubectl get pod -n demo sen-sample-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "memory": "100Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "100Mi"
  }
}
```

### Deploy Redis

In this section, we are going to deploy a `Redis` instance which will be monitored by the previously created `sen-sample`. Below is the YAML of the `Redis` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: rd-sample
  namespace: demo
spec:
  version: 6.2.14
  replicas: 3
  sentinelRef:
    name: sen-sample
    namespace: demo
  mode: Sentinel
  storageType: Durable
  storage:
    resources:
      requests:
        storage: 1Gi
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
  podTemplate:
    spec:
      resources:
        requests:
          cpu: "100m"
          memory: "100Mi"
  terminationPolicy: DoNotTerminate
```

Let's create the `Redis` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/rd-sentinel.yaml
redis.kubedb.com/rd-sample created
```

Now, wait until `rd-sample` has status `Ready`,
i.e.,

```bash
$ kubectl get redis -n demo
NAME        VERSION   STATUS   AGE
rd-sample   6.2.14    Ready    2m11s
```

Let's check the Pod containers resources,

```bash
$ kubectl get pod -n demo rd-sample-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "memory": "100Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "100Mi"
  }
}
```

We are now ready to apply the `RedisSentinelOpsRequest` CR to vertically scale the sentinel and the `RedisOpsRequest` CR to vertically scale the database.

### Vertical Scale RedisSentinel

Here, we are going to update the resources of the sentinel to meet the desired resources after scaling.

#### Create RedisSentinelOpsRequest

In order to update the resources of the sentinel, we have to create a `RedisSentinelOpsRequest` CR with our desired resources. Below is the YAML of the `RedisSentinelOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RedisSentinelOpsRequest
metadata:
  name: sen-ops-vertical
  namespace: demo
spec:
  type: VerticalScaling
  databaseRef:
    name: sen-sample
  verticalScaling:
    redissentinel:
      resources:
        requests:
          memory: "300Mi"
          cpu: "200m"
        limits:
          memory: "800Mi"
          cpu: "500m"
```

Here,

- `spec.databaseRef.name` specifies that we are performing the operation on the `sen-sample` RedisSentinel instance.
- `spec.type` specifies that we are going to perform `VerticalScaling` on our database.
- `spec.verticalScaling.redissentinel` specifies the desired resources after scaling.

Let's create the `RedisSentinelOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/vertical-sentinel.yaml
redissentinelopsrequest.ops.kubedb.com/sen-ops-vertical created
```

#### Verify RedisSentinel resources updated successfully

If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `RedisSentinel` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RedisSentinelOpsRequest` to be `Successful`. Run the following command to watch the `RedisSentinelOpsRequest` CR,

```bash
$ watch kubectl get redissentinelopsrequest -n demo
Every 2.0s: kubectl get redissentinelopsrequest -n demo
NAME               TYPE              STATUS       AGE
sen-ops-vertical   VerticalScaling   Successful   5m27s
```

We can see from the above output that the `RedisSentinelOpsRequest` has succeeded.

Now, we are going to verify from the Pod yaml whether the resources of the sentinel have been updated to meet the desired state. Let's check,

```bash
$ kubectl get pod -n demo sen-sample-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "800Mi"
  },
  "requests": {
    "cpu": "200m",
    "memory": "300Mi"
  }
}
```

The above output verifies that we have successfully scaled up the resources of the sentinel instance.

### Vertical Scale Redis

Here, we are going to update the resources of the Redis database to meet the desired resources after scaling.

#### Create RedisOpsRequest

In order to update the resources of the database, we have to create a `RedisOpsRequest` CR with our desired resources.
Below is the YAML of the `RedisOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RedisOpsRequest
metadata:
  name: rd-ops-vertical
  namespace: demo
spec:
  type: VerticalScaling
  databaseRef:
    name: rd-sample
  verticalScaling:
    redis:
      resources:
        requests:
          memory: "300Mi"
          cpu: "200m"
        limits:
          memory: "800Mi"
          cpu: "500m"
```

Here,

- `spec.databaseRef.name` specifies that we are performing the operation on the `rd-sample` Redis database.
- `spec.type` specifies that we are going to perform `VerticalScaling` on our database.
- `spec.verticalScaling.redis` specifies the desired resources after scaling.

Let's create the `RedisOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/vertical-redis-sentinel.yaml
redisopsrequest.ops.kubedb.com/rd-ops-vertical created
```

#### Verify Redis resources updated successfully

If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `Redis` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CR,

```bash
$ watch kubectl get redisopsrequest -n demo
NAME              TYPE              STATUS       AGE
rd-ops-vertical   VerticalScaling   Successful   4m4s
```

We can see from the above output that the `RedisOpsRequest` has succeeded.

Now, we are going to verify from the Pod yaml whether the resources of the database have been updated to meet the desired state. Let's check,

```bash
$ kubectl get pod -n demo rd-sample-0 -o json | jq '.spec.containers[].resources'
{}
{
  "limits": {
    "cpu": "500m",
    "memory": "800Mi"
  },
  "requests": {
    "cpu": "200m",
    "memory": "300Mi"
  }
}
```

Here `jq` prints one object per container; the empty `{}` belongs to a container in the Pod that has no resources configured, and the second object shows the updated resources of the database container.

The above output verifies that we have successfully scaled up the resources of the Redis database.
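As an extra sanity check, you can confirm that the scaled values are also present in the StatefulSet's Pod template, since any Pod recreated later inherits its resources from there. The snippet below is a minimal sketch, assuming the StatefulSet is named `rd-sample` and the database container is named `redis`; adjust both names to match your cluster.

```bash
# Hedged cross-check: print the resources recorded in the StatefulSet's Pod
# template. The names "rd-sample" and "redis" are assumptions, not guaranteed.
$ kubectl get statefulset -n demo rd-sample -o json \
    | jq '.spec.template.spec.containers[] | select(.name == "redis") | .resources'
```

If the template still shows the old values, the ops request most likely has not finished reconciling yet.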
## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
# Delete Redis and RedisOpsRequest
$ kubectl patch -n demo rd/rd-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
redis.kubedb.com/rd-sample patched

$ kubectl delete -n demo redis rd-sample
redis.kubedb.com "rd-sample" deleted

$ kubectl delete -n demo redisopsrequest rd-ops-vertical
redisopsrequest.ops.kubedb.com "rd-ops-vertical" deleted

# Delete RedisSentinel and RedisSentinelOpsRequest
$ kubectl patch -n demo redissentinel/sen-sample -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
redissentinel.kubedb.com/sen-sample patched

$ kubectl delete -n demo redissentinel sen-sample
redissentinel.kubedb.com "sen-sample" deleted

$ kubectl delete -n demo redissentinelopsrequests sen-ops-vertical
redissentinelopsrequest.ops.kubedb.com "sen-ops-vertical" deleted
```
diff --git a/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/standalone.md b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/standalone.md
new file mode 100644
index 0000000000..98ad4f907e
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/standalone.md
@@ -0,0 +1,208 @@
+---
+title: Vertical Scaling Standalone Redis
+menu:
+  docs_v2024.1.31:
+    identifier: rd-vertical-scaling-standalone
+    name: Standalone
+    parent: rd-vertical-scaling
+    weight: 20
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Vertical Scale Standalone Redis

This guide will show you how to use the `KubeDB` Enterprise operator to update the resources of a standalone Redis database.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/v2024.1.31/setup/README).

- You should be familiar with the following `KubeDB` concepts:
  - [Redis](/docs/v2024.1.31/guides/redis/concepts/redis)
  - [RedisOpsRequest](/docs/v2024.1.31/guides/redis/concepts/redisopsrequest)
  - [Vertical Scaling Overview](/docs/v2024.1.31/guides/redis/scaling/vertical-scaling/overview)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> **Note:** YAML files used in this tutorial are stored in the [docs/examples/redis](/docs/v2024.1.31/examples/redis) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.

## Apply Vertical Scaling on Standalone

Here, we are going to deploy a `Redis` standalone using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it.

### Prepare Redis Standalone Database

Now, we are going to deploy a `Redis` standalone database with version `6.2.14`.

### Deploy Redis Standalone

In this section, we are going to deploy a Redis standalone database.
Then, in the next section we will update the resources of the database using the `RedisOpsRequest` CRD. Below is the YAML of the `Redis` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: redis-quickstart
  namespace: demo
spec:
  version: 6.2.14
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  podTemplate:
    spec:
      resources:
        requests:
          cpu: "100m"
          memory: "100Mi"
```

Let's create the `Redis` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/rd-standalone.yaml
redis.kubedb.com/redis-quickstart created
```

Now, wait until `redis-quickstart` has status `Ready`, i.e.,

```bash
$ kubectl get redis -n demo
NAME               VERSION   STATUS   AGE
redis-quickstart   6.2.14    Ready    2m30s
```

Let's check the Pod containers resources,

```bash
$ kubectl get pod -n demo redis-quickstart-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "memory": "100Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "100Mi"
  }
}
```

We can see from the above output that the operator has set some default resources. The scheduler will then choose the most suitable node to place the Pod's containers.

We are now ready to apply the `RedisOpsRequest` CR to update the resources of this database.

### Vertical Scaling

Here, we are going to update the resources of the standalone database to meet the desired resources after scaling.

#### Create RedisOpsRequest

In order to update the resources of the database, we have to create a `RedisOpsRequest` CR with our desired resources. Below is the YAML of the `RedisOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RedisOpsRequest
metadata:
  name: redisopsstandalone
  namespace: demo
spec:
  type: VerticalScaling
  databaseRef:
    name: redis-quickstart
  verticalScaling:
    redis:
      resources:
        requests:
          memory: "300Mi"
          cpu: "200m"
        limits:
          memory: "800Mi"
          cpu: "500m"
```

Here,

- `spec.databaseRef.name` specifies that we are performing a vertical scaling operation on the `redis-quickstart` database.
- `spec.type` specifies that we are performing `VerticalScaling` on our database.
- `spec.verticalScaling.redis` specifies the desired resources after scaling.

Let's create the `RedisOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/redis/scaling/vertical-scaling/vertical-standalone.yaml
redisopsrequest.ops.kubedb.com/redisopsstandalone created
```

#### Verify Redis Standalone resources updated successfully

If everything goes well, the `KubeDB` Enterprise operator will update the resources of the `Redis` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RedisOpsRequest` to be `Successful`. Run the following command to watch the `RedisOpsRequest` CR,

```bash
$ watch kubectl get redisopsrequest -n demo redisopsstandalone
NAME                 TYPE              STATUS       AGE
redisopsstandalone   VerticalScaling   Successful   26s
```

We can see from the above output that the `RedisOpsRequest` has succeeded.
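If the request does not reach `Successful` and instead stays in `Progressing` or `Failed`, the conditions and events recorded on the object usually explain which step it is stuck on. A minimal way to inspect them (output omitted here, as it varies from cluster to cluster):

```bash
# Show the conditions and events recorded on the ops request.
$ kubectl describe redisopsrequest -n demo redisopsstandalone
```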
Now, we are going to verify from the Pod yaml whether the resources of the standalone database have been updated to meet the desired state. Let's check,

```bash
$ kubectl get pod -n demo redis-quickstart-0 -o json | jq '.spec.containers[].resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "800Mi"
  },
  "requests": {
    "cpu": "200m",
    "memory": "300Mi"
  }
}
```

The above output verifies that we have successfully scaled up the resources of the Redis standalone database.

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
$ kubectl patch -n demo rd/redis-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
redis.kubedb.com/redis-quickstart patched

$ kubectl delete -n demo redis redis-quickstart
redis.kubedb.com "redis-quickstart" deleted

$ kubectl delete redisopsrequest -n demo redisopsstandalone
redisopsrequest.ops.kubedb.com "redisopsstandalone" deleted
```
\ No newline at end of file
diff --git a/content/docs/v2024.1.31/guides/redis/sentinel/_index.md b/content/docs/v2024.1.31/guides/redis/sentinel/_index.md
new file mode 100644
index 0000000000..0f7681fae0
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/sentinel/_index.md
@@ -0,0 +1,22 @@
+---
+title: Redis Sentinel
+menu:
+  docs_v2024.1.31:
+    identifier: rd-sentinel-redis
+    name: Sentinel
+    parent: rd-redis-guides
+    weight: 25
+menu_name: docs_v2024.1.31
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---
+
diff --git a/content/docs/v2024.1.31/guides/redis/sentinel/overview.md b/content/docs/v2024.1.31/guides/redis/sentinel/overview.md
new file mode 100644
index 0000000000..93a7d04414
--- /dev/null
+++ b/content/docs/v2024.1.31/guides/redis/sentinel/overview.md
@@ -0,0 +1,102 @@
+---
+title: Redis Sentinel Overview
+menu:
+  docs_v2024.1.31:
+    identifier: rd-sentinel-overview
+    name: Overview
+    parent: rd-sentinel-redis
+    weight: 10
+menu_name: docs_v2024.1.31
+section_menu_id: guides
+info:
+  autoscaler: v0.26.0
+  cli: v0.41.0
+  dashboard: v0.17.0
+  installer: v2024.1.31
+  ops-manager: v0.28.0
+  provisioner: v0.41.0
+  schema-manager: v0.17.0
+  ui-server: v0.17.0
+  version: v2024.1.31
+  webhook-server: v0.17.0
+---

> New to KubeDB? Please start [here](/docs/v2024.1.31/README).

# Redis Sentinel

Redis Sentinel is a high-availability solution for Redis, which provides automatic failover and monitoring for Redis instances. It helps to ensure that in the event of a Redis instance failure, Sentinel can detect the failure and automatically promote one of the slaves to be the new master, providing a highly available and self-healing Redis setup. Additionally, Redis Sentinel provides notifications and other tools for monitoring the health of Redis instances and handling failures.

So in practical terms, what do you get with Redis Sentinel?

- **High Availability**: Redis Sentinel provides automatic failover and monitoring, ensuring that in the event of a Redis instance failure, there is always a functioning master available.

- **Self-healing**: Sentinel can detect failures and promote a slave to be the new master, reducing downtime and ensuring continuous operation of the Redis setup.

- **Monitoring**: Sentinel provides a suite of monitoring and notification tools for keeping track of the health of Redis instances and detecting failures.
- **Load balancing**: Sentinel can be used to manage multiple Redis instances and provide load balancing, distributing clients across multiple Redis instances for better performance and reliability.

- **Scalability**: Redis Sentinel can be used to create a scalable Redis setup, allowing multiple Redis instances to be added and managed as needed.

- **Simplified administration**: Redis Sentinel provides a centralized management interface for Redis instances, making it easier to manage and monitor large-scale Redis setups.

![redis-sentinel](/docs/v2024.1.31/images/redis/redis-sentinel.png)

## Redis Sentinel TCP ports

Redis Sentinel instances typically use two TCP ports for communication:

- **Port 26379**: This is the default port used for inter-Sentinel communication, allowing Sentinels to communicate with each other and maintain a consistent view of the Redis cluster.

- **Port 6379**: This is the default port used for communication with Redis instances. Sentinels use this port to monitor and manage Redis instances, including promoting slaves to masters in the event of a failure.

It is possible to configure Redis Sentinel to use different ports, but the default ports are commonly used and are a good starting point for most Redis Sentinel installations.

## Redis Sentinel master-replica model

In Redis Sentinel mode, the master-replica model works as follows:

- **Monitoring**: Redis Sentinels continuously monitor the health of the Redis master and replica instances in the cluster, checking for failures or other issues.

- **Failover**: In the event of a failure of the master instance, Redis Sentinels automatically promote one of the replica instances to be the new master, ensuring that the Redis setup remains available. The Sentinels communicate with each other to coordinate the failover process and ensure that all instances are aware of the change in the cluster's state.

- **Data Replication**: The master instance continuously replicates its data to the replica instances, ensuring that the data remains consistent across all instances in the cluster.

- **Load Balancing**: Redis Sentinels can be configured to distribute read operations across multiple replica instances, improving performance and reducing the load on the master instance.

- **Automatic Recovery**: In the event of a failure, Redis Sentinels can automatically recover the cluster by promoting a new master instance and re-establishing data replication.

Overall, the master-replica model in Redis Sentinel mode helps to ensure high availability, reliability, and scalability for Redis setups by continuously monitoring the health of the cluster and automatically failing over to a replica instance in the event of a failure.

## Redis Sentinel configuration parameters

Redis Sentinel has a number of configuration parameters that can be set to control its behavior. Some of the most important parameters include:

- **`sentinel announce-ip <ip>`**: This parameter sets the IP address that Sentinels should use when announcing themselves to the cluster.
- **`sentinel announce-port <port>`**: This parameter sets the port that Sentinels should use when announcing themselves to the cluster.
- **`sentinel monitor <master-name> <ip> <port> <quorum>`**: This parameter is used to specify the name, IP address, and port of a Redis instance that Sentinels should monitor.
- **`sentinel parallel-syncs <master-name> <numreplicas>`**: This parameter sets the number of replica instances that can be synced in parallel during a failover event.
+- **sentinel down-after-milliseconds