
Releases: mongodb/mongodb-enterprise-kubernetes

MongoDB Enterprise Kubernetes Operator 1.29.0

18 Nov 14:48 · b967163

New Features

  • AppDB: Added support for easy resize. More can be read in the 1.28.0 changelog entry "automated expansion of the PVC".

Bug Fixes

  • MongoDB, AppDB, MongoDBMultiCluster: Fixed a bug where specifying a fractional number for a storage volume's size, such as 1.7Gi, could break the reconciliation loop for that resource with an error like Can't execute update on forbidden fields, even if the underlying Persistent Volume Claim was deployed successfully.
  • MongoDB, MongoDBMultiCluster, OpsManager, AppDB: Increased stability of deployments during TLS rotations. In scenarios where the StatefulSet of the deployment was reconciling and a TLS rotation happened, the deployment would reach a broken state. Deployments will now store the previous TLS certificate alongside the new one.

MongoDB Enterprise Kubernetes Operator 1.28.0

02 Oct 10:23 · e134062

New Features

  • MongoDB: public preview release of multi-Kubernetes-cluster support for sharded clusters. This can be enabled by setting spec.topology=MultiCluster when creating a MongoDB resource with spec.type=ShardedCluster. More details can be found here.
  • MongoDB, MongoDBMultiCluster: support for automated expansion of the PVC.
    More details can be found here.
    Note: Expansion of the PVC is only supported if the storageClass supports expansion.
    Please ensure that the storageClass supports in-place expansion without data loss.
    • MongoDB: this can be done by increasing the size of the PVC in the CRD setting (see the sketch after this list):
      • one PVC - increase: spec.persistence.single.storage
      • multiple PVCs - increase: spec.persistence.multiple.(data/journal/logs).storage
    • MongoDBMultiCluster: this can be done by increasing the storage via the StatefulSet override:
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            storageClassName: <my-class-that-supports-expansion>
            resources:
              requests:
                storage: 2Gi # this is my increased storage
  • OpsManager: Introduced support for Ops Manager 8.0.0
  • MongoDB, MongoDBMulti: support for MongoDB 8.0.0
  • MongoDB, MongoDBMultiCluster, AppDB: changed the default behaviour of setting the featureCompatibilityVersion (FCV) for the database.
    • When upgrading the MongoDB version, the operator now sets the FCV to the version being upgraded from. This allows
      sanity checks to be run before setting the FCV to the upgraded version. More information can be found here.
    • To keep the prior behaviour of always using the MongoDB version as the FCV, set spec.featureCompatibilityVersion: "AlwaysMatchVersion"
  • Docker images are now built with ubi9 as the base image, with the exception of mongodb-enterprise-database-ubi, which is still based on ubi8 to support MongoDB workloads < 6.0.4. The ubi8 image is only used for the default non-static architecture.
    For a full ubi9 setup, the Static Containers architecture should be used instead.
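
As referenced in the PVC expansion item above, a minimal sketch of a single-PVC resize; the field path is copied verbatim from that item, and the 2Gi value is only a placeholder:

    spec:
      persistence:
        single:
          storage: 2Gi # increased size; the storageClass must support in-place expansion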

Bug Fixes

  • MongoDB, AppDB, MongoDBMultiCluster: Fixed a bug where the init container was not getting the default security context, which was flagged by security policies.
  • MongoDBMultiCluster: Fixed a bug where resource validations were not performed as part of the reconcile loop.

MongoDB Enterprise Kubernetes Operator 1.27.0

26 Aug 17:14 · b5daaec

New Features

  • MongoDB: Added support for enabling LogRotation for MongoDB processes, the MonitoringAgent, and the BackupAgent (see the sketch after this list). More can be found in the following documentation.

    • spec.agent.mongod.logRotation to configure the mongoDB processes
    • spec.agent.mongod.auditLogRotation to configure the mongoDB processes' audit logs
    • spec.agent.backupAgent.logRotation to configure the backup agent
    • spec.agent.monitoringAgent.logRotation to configure the monitoring agent
    • spec.agent.readinessProbe.environmentVariables to configure the environment variables the readinessProbe runs with.
      That also applies to settings related to the logRotation,
      the supported environment settings can be found here.
    • the same applies for AppDB:
      • you can configure AppDB via spec.applicationDatabase.agent.mongod.logRotation
    • Please note: for sharded clusters we only support configuring logRotation under spec.agent
      and not per process type (mongos, configsvr, etc.)
  • OpsManager: Added support for replacing the logback.xml, which configures general logging settings such as log rotation

    • spec.logging.LogBackAccessRef points at a ConfigMap/key with the logback access configuration file to mount on the Pod
      • the key of the configmap has to be logback-access.xml
    • spec.logging.LogBackRef points at a ConfigMap/key with the logback configuration file to mount on the Pod
      • the key of the configmap has to be logback.xml
  • OpsManager: Added support for configuring votes, priority and tags for application database nodes in the multi-cluster topology under the spec.applicationDatabase.clusterSpecList[i].memberConfig field.
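
As referenced in the LogRotation item above, a rough sketch of the new MongoDB resource fields; the option names (sizeThresholdMB, timeThresholdHrs) are an assumption here, mirroring the Automation Config logRotate settings, and are not confirmed by this note:

    spec:
      agent:
        mongod:
          logRotation:
            sizeThresholdMB: 100  # assumed option name
            timeThresholdHrs: 24  # assumed option name
        backupAgent:
          logRotation:
            sizeThresholdMB: 100
            timeThresholdHrs: 24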

Deprecations

  • AppDB: logRotate for AppDB has been deprecated in favor of the new field
    • spec.applicationDatabase.agent.logRotation has been deprecated in favor of spec.applicationDatabase.agent.mongod.logRotation

Bug Fixes

  • Agent launcher: under some resync scenarios we could end up with corrupted journal data in /journal.
    The agent now makes sure that there is no conflicting journal data and prioritizes the data from /data/journal.

    • To deactivate this behaviour, set the MDB_CLEAN_JOURNAL environment variable on the operator
      to any value other than 1.
  • MongoDB, AppDB, MongoDBMulti: the connection string created by the operator now uses external domains if they are configured.

  • MongoDB: Removed the panic response when a horizon config is configured with fewer entries than the number of members. The operator now signals a
    descriptive error in the status of the MongoDB resource.

  • MongoDB: Fixed a bug where creating a resource in a new project named as a prefix of another project would fail, preventing the MongoDB resource from being created.

MongoDB Enterprise Kubernetes Operator 1.26.0

19 Jun 07:52 · 6f662db

New Features

  • Added the ability to control how many reconciles the operator can perform in parallel,
    by setting MDB_MAX_CONCURRENT_RECONCILES for the operator deployment or operator.maxConcurrentReconciles in the operator's Helm chart (a values sketch follows below).
    If not provided, the default value is 1.
    This significantly improves CPU utilization and vertical scaling of the operator and leads to quicker reconciliation of all managed resources.
    • It might lead to increased load on Ops Manager and the Kubernetes API server in the same time window.
      • Observe the operator's resource usage and adjust operator.resources.requests and operator.resources.limits if needed.
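
A minimal Helm values sketch for the new setting; the numbers are placeholders, not recommendations:

    # values.yaml
    operator:
      maxConcurrentReconciles: 5 # default is 1
      resources:
        requests:
          cpu: "1"
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 2Gi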

Helm Chart

  • New operator.maxConcurrentReconciles parameter. It controls how many reconciles can be performed in parallel by the operator. The default value is 1.
  • New operator.webhook.installClusterRole parameter. It controls whether to install the cluster role allowing the operator to configure admission webhooks. It should be disabled when cluster roles are not allowed. Default value is true.

Bug Fixes

  • MongoDB: Fixed a bug where configuring a MongoDB with multiple entries in spec.agent.startupOptions would cause additional unnecessary reconciliation of the underlying StatefulSet.
  • MongoDB, MongoDBMultiCluster: Fixed a bug where the operator wouldn't watch for changes in the X509 certificates configured for agent authentication.
  • MongoDB: Fixed a bug where boolean flags passed to the agent could not be set to false if their default value was true.

MongoDB Enterprise Kubernetes Operator 1.25.0

01 May 11:22 · 67b067c

Known Issues

  • The mongodb-enterprise-openshift.yaml file released in this version is incomplete: it is missing the operator's ServiceAccount resource. Please use a newer version of the operator or add the service account manually from this commit.

New Features

  • MongoDBOpsManager: Added support for deploying Ops Manager Application on multiple Kubernetes clusters. See documentation for more information.
  • (Public Preview) MongoDB, OpsManager: Introduced opt-in Static Architecture (for all types of deployments) that avoids pulling any binaries at runtime.
    • This feature is recommended only for testing purposes, but will become the default in a later release.
    • You can activate this mode by setting the MDB_DEFAULT_ARCHITECTURE environment variable at the Operator level to static. Alternatively, you can annotate a specific MongoDB or OpsManager Custom Resource with mongodb.com/v1.architecture: "static" (see the sketch after this list).
  • MongoDB: Recover Resource Due to Broken Automation Configuration has been extended to all types of MongoDB resources, now including Sharded Clusters. For more information see https://www.mongodb.com/docs/kubernetes-operator/master/reference/troubleshooting/#recover-resource-due-to-broken-automation-configuration
  • MongoDB, MongoDBMultiCluster: Placeholders in external services.
    • You can now define annotations for external services managed by the operator that contain placeholders which will be automatically replaced with the proper values.
    • Previously, the operator configured the same annotations for all external services created for each pod. Now, with placeholders, the operator can customize
      annotations in each service with values that are relevant and different for the particular pod.
    • To learn more please see the relevant documentation.
  • kubectl mongodb plugin:
    • Added printing of build info when using the plugin.
    • setup command:
      • Added the --image-pull-secrets parameter. If specified, created service accounts will reference the specified secret in the imagePullSecrets field.
      • Improved handling of configurations when the operator is installed in a different namespace from the resources it's watching and when the operator is watching more than one namespace.
      • Optimized roles and permissions setup in member clusters, using a single service account per cluster with a correctly configured Role and RoleBinding (no ClusterRoles necessary) for each watched namespace.
  • OpsManager: Added the spec.internalConnectivity field to allow overrides for the service used by the operator to ensure internal connectivity to the OpsManager pods.
  • Extended the existing event-based reconciliation with a time-based one that is triggered every 24 hours. This ensures all Agents are upgraded in a timely manner.
  • OpenShift / OLM Operator: Removed the requirement for cluster-wide permissions. Previously, the operator needed these permissions to configure admission webhooks. Now, webhooks are automatically configured by OLM.
  • Added optional MDB_WEBHOOK_REGISTER_CONFIGURATION environment variable for the operator. It controls whether the operator should perform automatic admission webhook configuration. Default: true. It's set to false for OLM and OpenShift deployments.
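
To illustrate the per-resource opt-in from the Static Architecture item above, a minimal sketch of annotating a MongoDB Custom Resource (the resource name is a placeholder):

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-replica-set # placeholder name
      annotations:
        mongodb.com/v1.architecture: "static"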

Breaking Change

  • MongoDBOpsManager Stopped testing against Ops Manager 5.0. While it may continue to work, we no longer officially support Ops Manager 5 and customers should move to a later version.

Helm Chart

  • New operator.webhook.registerConfiguration parameter. It controls whether the operator should perform automatic admission webhook configuration (by setting MDB_WEBHOOK_REGISTER_CONFIGURATION environment variable for the operator). Default: true. It's set to false for OLM and OpenShift deployments.
  • Changed the default agent.version to 107.0.0.8502-1, which changes the default agent used in Helm deployments.
  • Added operator.additionalArguments (default: []) allowing additional arguments to be passed to the operator binary.
  • Added operator.createResourcesServiceAccountsAndRoles (default: true) to control whether to install roles and service accounts for MongoDB and Ops Manager resources. When mongodb kubectl plugin is used to configure the operator for multi-cluster deployment, it installs all necessary roles and service accounts. Therefore, in some cases it is required to not install those roles using the operator's helm chart to avoid clashes.
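
Taken together, a hedged values.yaml sketch of the Helm parameters listed above; the values shown are illustrative only:

    operator:
      webhook:
        registerConfiguration: false # e.g. for OLM / OpenShift deployments
      additionalArguments: []
      createResourcesServiceAccountsAndRoles: false # when the kubectl mongodb plugin installs roles and service accounts
    agent:
      version: 107.0.0.8502-1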

Bug Fixes

  • MongoDBMultiCluster: The fields spec.externalAccess.externalDomain and spec.clusterSpecList[*].externalAccess.externalDomains were reported as required even though they weren't
    used. Validation was triggered prematurely when the spec.externalAccess structure was defined. Now, the uniqueness of external domains will only be checked when the external domains are
    actually defined in spec.externalAccess.externalDomain or spec.clusterSpecList[*].externalAccess.externalDomains.
  • MongoDB: Fixed a bug where, upon deleting a MongoDB resource, the controlledFeature policies were not unset on the related OpsManager/CloudManager instance, making cleanup in the UI impossible in the case of losing the Kubernetes operator.
  • OpsManager: The admin-key Secret is no longer deleted when removing the OpsManager Custom Resource. This enables easier Ops Manager re-installation.
  • MongoDB ReadinessProbe: Fixed the misleading readinessProbe error message: "... kubelet Readiness probe failed:...". This affects all MongoDB deployments.
  • Operator: Fixed cases where the operator sometimes skipped TLS verification while communicating with Ops Manager, even if TLS verification was activated.

Improvements

  • Kubectl plugin: The released plugin binaries are now signed; the signatures are published with the release assets. Our public key is available at this address. The binaries are also notarized for macOS.
  • Released images signed: All container images published for the enterprise operator are cryptographically signed. This is visible on our Quay registry and can be verified using our public key, which is available at this address.

MongoDB Enterprise Kubernetes Operator 1.24.0

21 Dec 10:22 · 0d2de32

New Features

  • MongoDBOpsManager: Added support for the upcoming 7.0.x series of Ops Manager Server.

Bug Fixes

  • Fixed a bug that prevented terminating backups correctly.

MongoDB Enterprise CLI 1.23.0

13 Nov 11:14 · d5a1a6a

MongoDB Enterprise Kubernetes Operator 1.23.0

Warnings and Breaking Changes

  • Starting from 1.23 component image version numbers will be aligned to the MongoDB Enterprise Operator release tag. This allows clear identification of all images related to a specific version of the Operator. This affects the following images:
    • quay.io/mongodb/mongodb-enterprise-database-ubi
    • quay.io/mongodb/mongodb-enterprise-init-database-ubi
    • quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
    • quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
  • Removed spec.exposedExternally in favor of spec.externalAccess from the MongoDB Custom Resource. spec.exposedExternally was deprecated in operator version 1.19.
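
A minimal migration sketch, assuming an empty spec.externalAccess block is enough to enable external services in the simplest case:

    # before (field removed in 1.23):
    spec:
      exposedExternally: true

    # after:
    spec:
      externalAccess: {}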

Bug Fixes

  • Fixed a bug with scaling a multi-cluster replica set in the case of losing connectivity to a member cluster. The fix addresses both the manual and automated recovery procedures.
  • Fixed a bug where changing the names of the automation agent and MongoDB audit logs prevented them from being sent to Kubernetes pod logs. There are no longer restrictions on MongoDB audit log file names (mentioned in the previous release).
  • New log types from the mongodb-enterprise-database container are now streamed to Kubernetes logs.
    • New log types:
      • agent-launcher-script
      • monitoring-agent
      • backup-agent
    • The rest of available log types:
      • automation-agent-verbose
      • automation-agent-stderr
      • automation-agent
      • mongodb
      • mongodb-audit
  • MongoDBUser: Fixed a bug where Spec.MongoDBResourceRef.Namespace was ignored. This prevented storing user resources in a different namespace from the MongoDB resource.

MongoDB Enterprise Kubernetes Operator 1.22.0

09 Oct 08:04 · 042bf91

MongoDB Enterprise Kubernetes Operator 1.22.0

Breaking Changes

  • All Resources: The Operator no longer uses the "Reconciling" state. In most cases it has been replaced with "Pending" and a descriptive message.

Deprecations

None

Bug Fixes

  • MongoDB: Fixed support for setting autoTerminateOnDeletion=true for sharded clusters. This setting makes sure that the operator stops and terminates the backup before the cleanup.
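
A hedged sketch of the setting, assuming the flag lives under spec.backup alongside the other backup options:

    spec:
      backup:
        mode: enabled
        autoTerminateOnDeletion: true # assumed path; stops and terminates the backup before cleanup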

New Features

  • MongoDB: An Automatic Recovery mechanism has been introduced for MongoDB resources and is turned on by default. If a Custom Resource remains in a Pending or Failed state for a longer period of time (controlled by the MDB_AUTOMATIC_RECOVERY_BACKOFF_TIME_S environment variable at the Operator Pod spec level; the default is 20 minutes),
    the Automation Config is pushed to Ops Manager. This helps prevent a deadlock where the Automation Config cannot be pushed because the StatefulSet is not ready, and the StatefulSet is not ready because of a broken Automation Config.
    The behavior can be turned off by setting the MDB_AUTOMATIC_RECOVERY_ENABLE environment variable to false (see the sketch after this list).
  • MongoDB: MongoDB audit logs can now be routed to Kubernetes pod logs.
    • Ensure MongoDB audit logs are written to the /var/log/mongodb-mms-automation/mongodb-audit.log file. The pod monitors this file and tails its content to Kubernetes logs.
    • Use the following example configuration in MongoDB resource to send audit logs to k8s logs:
    spec:
      additionalMongodConfig:
        auditLog:
          destination: file
          format: JSON
          path: /var/log/mongodb-mms-automation/mongodb-audit.log
    
    • Audit log entries are tagged with the "mongodb-audit" key in pod logs. Extract audit log entries with the following example command:
    kubectl logs -c mongodb-enterprise-database replica-set-0 | jq -r 'select(.logType == "mongodb-audit") | .contents'
    
  • MongoDBOpsManager: Improved handling of unreachable clusters in AppDB Multi-Cluster resources
    • In the last release, the operator required a healthy connection to the cluster to scale down processes, which could block the reconcile process if there was a full-cluster outage.
    • Now, the operator will still successfully manage the remaining healthy clusters, as long as they have a majority of votes to elect a primary.
    • The associated processes of an unreachable cluster are not automatically removed from the automation config and replica set configuration. These processes will only be removed under the following conditions:
      • The corresponding cluster is deleted from spec.applicationDatabase.clusterSpecList or has zero members specified.
      • When deleted, the operator scales down the replica set by removing processes tied to that cluster one at a time.
  • MongoDBOpsManager: Added support for configuring logRotate on the automation agent for AppDB.
  • MongoDBOpsManager: systemLog can now be configured to a destination other than the default /var/log/mongodb-mms-automation.
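
As referenced in the Automatic Recovery item above, a sketch of the relevant operator Deployment environment variables; the container name and values are illustrative only:

    spec:
      template:
        spec:
          containers:
            - name: mongodb-enterprise-operator # assumed container name
              env:
                - name: MDB_AUTOMATIC_RECOVERY_ENABLE
                  value: "true" # default behaviour
                - name: MDB_AUTOMATIC_RECOVERY_BACKOFF_TIME_S
                  value: "1200" # 20 minutes, the default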

MongoDB Enterprise Kubernetes Operator 1.21.0

25 Aug 16:15 · feac69b

MongoDB Enterprise Kubernetes Operator 1.21.0

Breaking changes

  • The environment variable to track the operator namespace has been renamed from CURRENT_NAMESPACE to NAMESPACE. If you set this variable manually via YAML files, you should update this environment variable name while upgrading the operator deployment.

Bug fixes

  • Fixed a bug where passing labels via the StatefulSet override mechanism did not override them on the actual StatefulSet.

New Feature

  • Support for a Labels and Annotations Wrapper for the following CRDs: mongodb, mongodbmulti and opsmanager
    • In addition to the specWrapper for StatefulSets, we now support overriding metadata.Labels and metadata.Annotations via the MetadataWrapper.

MongoDBOpsManager Resource

New Features

  • Support for configuring OpsManager with a highly available applicationDatabase across multiple Kubernetes clusters by introducing the following fields:
    • om.spec.applicationDatabase.topology, which can be one of MultiCluster and SingleCluster.
    • om.spec.applicationDatabase.clusterSpecList for configuring the list of Kubernetes clusters on which the application database will be deployed (see the sketch after this list). For extended considerations for the multi-cluster AppDB configuration, check the official guide and the OpsManager resource specification.
      The implementation is backwards compatible with single-cluster deployments of AppDB, by defaulting om.spec.applicationDatabase.topology to SingleCluster. Existing OpsManager resources do not need to be modified to upgrade to this version of the operator.
  • Support for providing a list of custom certificates for S3-based backups via the secret references spec.backup.[]s3Stores.customCertificateSecretRefs and spec.backup.[]s3OpLogStores.customCertificateSecretRefs
    • The list consists of single-certificate entries, each referencing a secret containing a certificate authority.
    • We do not support adding multiple certificates in a chain. In that case, only the first certificate in the chain is imported.
    • Note:
      • If providing a list of customCertificateSecretRefs, then those certificates will be used instead of the default certificates set up in the JVM trust store (in Ops Manager or Cloud Manager).
      • If none are provided, the default JVM trust store certificates will be used instead.
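
As referenced above, a minimal sketch of a multi-cluster application database; the cluster names and member counts are placeholders, and the clusterSpecList entry shape (clusterName, members) is an assumption here:

    spec:
      applicationDatabase:
        topology: MultiCluster
        clusterSpecList:
          - clusterName: cluster-1.example # placeholder
            members: 2
          - clusterName: cluster-2.example # placeholder
            members: 1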

Breaking changes

  • The appdb-ca is no longer automatically added to the JVM Trust Store (in Ops Manager or Cloud Manager). Due to a bug introduced in version 1.17.0, automatically adding these certificates to the JVM Trust Store has not worked since that version.
    • This will only impact you if:
      • You are using the same custom certificate for both appdb-ca and for your S3 compatible backup store
      • AND: You are using an operator prior to 1.17.0 (where automated inclusion in the JVM Trust Store worked) OR had a workaround (such as mounting your own trust store to OM)
    • If you do need to use the same custom certificate for both appdb-ca and for your S3 compatible backup store then you now need to utilise spec.backup.[]s3Config.customCertificateSecretRefs (introduced in this release and covered below in the release notes) to specify the certificate authority for use for backups.
    • The appdb-ca is the certificate authority saved in the configmap specified under om.spec.applicationDatabase.security.tls.ca.

Bug fixes

  • Allowed setting an arbitrary port number in spec.externalConnectivity.port when LoadBalancer service type is used for exposing Ops Manager instance externally.
  • The operator is now able to import the appdb-ca, which consists of a bundle of certificate authorities, into the ops-manager JVM trust store. Previously, the keystore had two problems:
    • It was immutable.
    • Only the first certificate authority out of the bundle was imported into the trust store.
    • Both could lead to certificates being rejected by Ops Manager during requests to it.

Deprecation

  • The settings spec.backup.[]s3Stores.customCertificate and spec.backup.[]s3OpLogStores.customCertificate are deprecated in favor of spec.backup.[]s3OpLogStores.[]customCertificateSecretRefs and spec.backup.[]s3Stores.[]customCertificateSecretRefs
    • Previously, when enabling customCertificate, the operator would use the appdb-ca as the custom certificate. Now, this should be explicitly set via customCertificateSecretRefs (see the sketch below).
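
As referenced above, a hedged sketch of the replacement field; the store name and secret name are placeholders, and each entry is assumed to be a standard secret key reference with name and key:

    spec:
      backup:
        s3Stores:
          - name: my-s3-store # placeholder
            customCertificateSecretRefs:
              - name: my-ca-secret # placeholder
                key: ca.crt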

MongoDB Enterprise CLI 1.21.0-mcli

25 Aug 16:35 · 67f85b9
Pre-release
Update release-multicluster-cli.yaml