diff --git a/content/docs/2.15/operate/_index.md b/content/docs/2.15/Operator Guide/_index.md
similarity index 95%
rename from content/docs/2.15/operate/_index.md
rename to content/docs/2.15/Operator Guide/_index.md
index dac7c02f2..009527701 100644
--- a/content/docs/2.15/operate/_index.md
+++ b/content/docs/2.15/Operator Guide/_index.md
@@ -1,5 +1,5 @@
 +++
-title = "Operate"
+title = "Operator Guide"
 description = "Guidance and requirements for operating KEDA"
 weight = 1
 +++
diff --git a/content/docs/2.15/operate/admission-webhooks.md b/content/docs/2.15/Operator Guide/admission-webhooks.md
similarity index 100%
rename from content/docs/2.15/operate/admission-webhooks.md
rename to content/docs/2.15/Operator Guide/admission-webhooks.md
diff --git a/content/docs/2.15/Operator Guide/caching-metrics.md b/content/docs/2.15/Operator Guide/caching-metrics.md
new file mode 100644
index 000000000..d092a72e8
--- /dev/null
+++ b/content/docs/2.15/Operator Guide/caching-metrics.md
@@ -0,0 +1,12 @@
++++
+title = "Caching Metrics"
+weight = 600
++++
+
+## Caching Metrics
+
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes that behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.
+
+Enabling this feature can significantly reduce the load on the scaler service.
+
+This feature is not supported for the `cpu`, `memory` or `cron` scalers.
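+
+For example, caching can be enabled per trigger. The snippet below is a minimal sketch that assumes the `useCachedMetrics` trigger field from the ScaledObject specification; the names and values are illustrative only:
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: example-scaledobject
+spec:
+  scaleTargetRef:
+    name: example-deployment
+  pollingInterval: 30            # the cache is refreshed on this interval
+  triggers:
+    - type: prometheus
+      useCachedMetrics: true     # serve HPA metric requests from the cache
+      metadata:
+        serverAddress: http://prometheus.monitoring:9090
+        query: sum(rate(http_requests_total[2m]))
+        threshold: "100"
+```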
deployments" +weight = 600 ++++ + +## Pausing autoscaling + +It can be useful to instruct KEDA to pause the autoscaling of objects, to do to cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. + +This is preferable to deleting the resource because it removes the instances it is running from operation without touching the applications themselves. When ready, you can then reenable scaling. + +You can pause autoscaling by adding this annotation to your `ScaledObject` definition: + + +```yaml +metadata: + annotations: + autoscaling.keda.sh/paused-replicas: "0" + autoscaling.keda.sh/paused: "true" +``` + +The presence of these annotations will pause autoscaling no matter what number of replicas is provided. + +The annotation `autoscaling.keda.sh/paused` will pause scaling immediately and use the current instance count while the annotation `autoscaling.keda.sh/paused-replicas: ""` will scale your current workload to specified amount of replicas and pause autoscaling. You can set the value of replicas for an object to be paused to any arbitrary number. + +Typically, either one or the other is being used given they serve a different purpose/scenario. However, if both `paused` and `paused-replicas` are set, KEDA will scale your current workload to the number specified count in `paused-replicas` and then pause autoscaling. + +To unpause (reenable) autoscaling again, remove all paused annotations from the `ScaledObject` definition. If you paused with `autoscaling.keda.sh/paused`, you can unpause by setting the annotation to `false`. diff --git a/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md b/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md new file mode 100644 index 000000000..ad3b61036 --- /dev/null +++ b/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md @@ -0,0 +1,26 @@ ++++ +title = "Pause Auto-Scaling jobs" +weight = 600 ++++ + +## Pausing autoscaling + +It can be useful to instruct KEDA to pause the autoscaling of objects, to do to cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. + +This is preferable to deleting the resource because it removes the instances it is running from operation without touching the applications themselves. When ready, you can then reenable scaling. + +You can pause autoscaling by adding this annotation to your `ScaledJob` definition: + +```yaml +metadata: + annotations: + autoscaling.keda.sh/paused: true +``` + +To reenable autoscaling, remove the annotation from the `ScaledJob` definition or set the value to `false`. + +```yaml +metadata: + annotations: + autoscaling.keda.sh/paused: false +``` diff --git a/content/docs/2.15/Operator Guide/prevention-rules.md b/content/docs/2.15/Operator Guide/prevention-rules.md new file mode 100644 index 000000000..328e64cd5 --- /dev/null +++ b/content/docs/2.15/Operator Guide/prevention-rules.md @@ -0,0 +1,23 @@ ++++ +title = "Prevention Rules" +description = "Rules to prevent misconfigurations and ensure proper scaling behavior" +weight = 600 ++++ + +There are some several misconfiguration scenarios that can produce scaling problems in productive workloads, for example: in Kubernetes a single workload should never be scaled by 2 or more HPA because that will produce conflicts and unintended behaviors. + +Some errors with data format can be detected during the model validation, but these misconfigurations can't be detected in that step because the model is correct indeed. 
diff --git a/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md b/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md
new file mode 100644
index 000000000..ad3b61036
--- /dev/null
+++ b/content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md
@@ -0,0 +1,26 @@
++++
+title = "Pause Auto-Scaling of Jobs"
+weight = 600
++++
+
+## Pausing autoscaling
+
+It can be useful to instruct KEDA to pause the autoscaling of objects, for example to carry out cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.
+
+This is preferable to deleting the resource because it takes the instances it is running out of operation without touching the applications themselves. When ready, you can then re-enable scaling.
+
+You can pause autoscaling by adding this annotation to your `ScaledJob` definition:
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused: "true"
+```
+
+To re-enable autoscaling, remove the annotation from the `ScaledJob` definition or set the value to `false`.
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused: "false"
+```
diff --git a/content/docs/2.15/Operator Guide/prevention-rules.md b/content/docs/2.15/Operator Guide/prevention-rules.md
new file mode 100644
index 000000000..328e64cd5
--- /dev/null
+++ b/content/docs/2.15/Operator Guide/prevention-rules.md
@@ -0,0 +1,23 @@
++++
+title = "Prevention Rules"
+description = "Rules to prevent misconfigurations and ensure proper scaling behavior"
+weight = 600
++++
+
+There are several misconfiguration scenarios that can produce scaling problems in production workloads. For example, in Kubernetes a single workload should never be scaled by two or more HPAs, because that produces conflicts and unintended behavior.
+
+Some data-format errors can be detected during model validation, but these misconfigurations can't be caught at that step because the model itself is valid. To detect them early, before they reach the data plane, admission webhooks validate all incoming (KEDA) resources (new or updated) and reject any resource that doesn't match the rules below.
+
+### Prevention Rules
+
+KEDA will block all incoming changes to `ScaledObject` that don't match these rules:
+
+- The scaled workload (`scaledobject.spec.scaleTargetRef`) is already autoscaled by other sources (another ScaledObject or an HPA).
+- CPU and/or memory triggers are used and the scaled workload doesn't have resource requests defined. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
+- CPU and/or memory triggers are **the only triggers used** and the ScaledObject defines `minReplicaCount: 0`. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
+- In the case of multiple triggers where a `name` is **specified**, the name must be **unique** (it is not allowed to have multiple triggers with the same name).
+
+KEDA will block all incoming changes to `TriggerAuthentication`/`ClusterTriggerAuthentication` that don't match these rules:
+
+- The specified identity ID for Azure AD Workload Identity and/or Pod Identity is empty. (A default/unset identity ID will be passed through.)
+  > NOTE: This only applies if the `TriggerAuthentication`/`ClusterTriggerAuthentication` is overriding the default identityId provided to KEDA during installation.
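+
+For example, a `ScaledObject` like the following sketch would be rejected by the admission webhook, because a CPU trigger is the only trigger while `minReplicaCount` is `0` (all names are illustrative):
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: example-scaledobject
+spec:
+  scaleTargetRef:
+    name: example-deployment
+  minReplicaCount: 0      # not allowed when cpu/memory are the only triggers
+  triggers:
+    - type: cpu
+      metricType: Utilization
+      metadata:
+        value: "60"
+```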
diff --git a/content/docs/2.15/operate/prometheus.md b/content/docs/2.15/Operator Guide/prometheus.md
similarity index 100%
rename from content/docs/2.15/operate/prometheus.md
rename to content/docs/2.15/Operator Guide/prometheus.md
diff --git a/content/docs/2.15/operate/security.md b/content/docs/2.15/Operator Guide/security.md
similarity index 100%
rename from content/docs/2.15/operate/security.md
rename to content/docs/2.15/Operator Guide/security.md
diff --git a/content/docs/2.16/concepts/troubleshooting.md b/content/docs/2.15/Operator Guide/troubleshooting.md
similarity index 92%
rename from content/docs/2.16/concepts/troubleshooting.md
rename to content/docs/2.15/Operator Guide/troubleshooting.md
index 68b19bf15..0d70c9517 100644
--- a/content/docs/2.16/concepts/troubleshooting.md
+++ b/content/docs/2.15/Operator Guide/troubleshooting.md
@@ -22,3 +22,7 @@ kubectl logs -n keda {keda-pod-name} -c keda-operator
 ## Reporting issues
 
 If you are having issues or hitting a potential bug, please file an issue [in the KEDA GitHub repo](https://github.com/kedacore/keda/issues/new/choose) with details, logs, and steps to reproduce the behavior.
+
+## Common issues and their solutions
+
+{{< troubleshooting >}}
\ No newline at end of file
diff --git a/content/docs/2.16/operate/_index.md b/content/docs/2.16/Operator Guide/_index.md
similarity index 95%
rename from content/docs/2.16/operate/_index.md
rename to content/docs/2.16/Operator Guide/_index.md
index dac7c02f2..009527701 100644
--- a/content/docs/2.16/operate/_index.md
+++ b/content/docs/2.16/Operator Guide/_index.md
@@ -1,5 +1,5 @@
 +++
-title = "Operate"
+title = "Operator Guide"
 description = "Guidance and requirements for operating KEDA"
 weight = 1
 +++
diff --git a/content/docs/2.16/operate/admission-webhooks.md b/content/docs/2.16/Operator Guide/admission-webhooks.md
similarity index 100%
rename from content/docs/2.16/operate/admission-webhooks.md
rename to content/docs/2.16/Operator Guide/admission-webhooks.md
diff --git a/content/docs/2.16/Operator Guide/caching-metrics.md b/content/docs/2.16/Operator Guide/caching-metrics.md
new file mode 100644
index 000000000..d092a72e8
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/caching-metrics.md
@@ -0,0 +1,12 @@
++++
+title = "Caching Metrics"
+weight = 600
++++
+
+## Caching Metrics
+
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes that behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.
+
+Enabling this feature can significantly reduce the load on the scaler service.
+
+This feature is not supported for the `cpu`, `memory` or `cron` scalers.
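+
+For example, caching can be enabled per trigger. The snippet below is a minimal sketch that assumes the `useCachedMetrics` trigger field from the ScaledObject specification; the names and values are illustrative only:
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: example-scaledobject
+spec:
+  scaleTargetRef:
+    name: example-deployment
+  pollingInterval: 30            # the cache is refreshed on this interval
+  triggers:
+    - type: prometheus
+      useCachedMetrics: true     # serve HPA metric requests from the cache
+      metadata:
+        serverAddress: http://prometheus.monitoring:9090
+        query: sum(rate(http_requests_total[2m]))
+        threshold: "100"
+```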
diff --git a/content/docs/2.16/operate/cloud-events.md b/content/docs/2.16/Operator Guide/cloud-events.md
similarity index 100%
rename from content/docs/2.16/operate/cloud-events.md
rename to content/docs/2.16/Operator Guide/cloud-events.md
diff --git a/content/docs/2.15/operate/cluster.md b/content/docs/2.16/Operator Guide/cluster.md
similarity index 99%
rename from content/docs/2.15/operate/cluster.md
rename to content/docs/2.16/Operator Guide/cluster.md
index 6d0fd7b11..23b367d23 100644
--- a/content/docs/2.15/operate/cluster.md
+++ b/content/docs/2.16/Operator Guide/cluster.md
@@ -16,6 +16,7 @@ As a reference, this compatibility matrix shows supported k8s versions per KEDA
 
 | KEDA  | Kubernetes    |
 | ----- | ------------- |
+| v2.16 | TBD           |
 | v2.15 | v1.28 - v1.30 |
 | v2.14 | v1.27 - v1.29 |
 | v2.13 | v1.27 - v1.29 |
diff --git a/content/docs/2.16/operate/istio-integration.md b/content/docs/2.16/Operator Guide/istio-integration.md
similarity index 100%
rename from content/docs/2.16/operate/istio-integration.md
rename to content/docs/2.16/Operator Guide/istio-integration.md
diff --git a/content/docs/2.16/operate/metrics-server.md b/content/docs/2.16/Operator Guide/metrics-server.md
similarity index 100%
rename from content/docs/2.16/operate/metrics-server.md
rename to content/docs/2.16/Operator Guide/metrics-server.md
diff --git a/content/docs/2.16/Operator Guide/migration.md b/content/docs/2.16/Operator Guide/migration.md
new file mode 100644
index 000000000..3e52fd485
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/migration.md
@@ -0,0 +1,211 @@
++++
+title = "Migration Guide"
++++
+
+## Migrating from KEDA v1 to v2
+
+Please note that you **cannot** run both KEDA v1 and v2 on the same Kubernetes cluster. You need to [uninstall](../../1.5/deploy) KEDA v1 first in order to [install](../deploy) and use KEDA v2.
+
+> 💡 **NOTE:** When uninstalling KEDA v1, make sure the v1 CRDs are uninstalled from the cluster as well.
+
+KEDA v2 uses a new API namespace for its Custom Resource Definitions (CRDs): `keda.sh` instead of `keda.k8s.io`, and introduces a new Custom Resource for scaling of Jobs. See the full details on KEDA Custom Resources [here](../concepts/#custom-resources-crd).
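+
+Before installing v2, you can verify that no v1 CRDs (API group `keda.k8s.io`) are left on the cluster, for example:
+
+```sh
+# should print nothing once KEDA v1 and its CRDs have been fully removed
+kubectl get crd | grep keda.k8s.io
+```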
+
+Here's an overview of what's changed:
+
+- [Scaling of Deployments](#scaling-of-deployments)
+- [Scaling of Jobs](#scaling-of-jobs)
+- [Improved flexibility & usability of trigger metadata](#improved-flexibility--usability-of-trigger-metadata)
+- [Scalers](#scalers)
+- [TriggerAuthentication](#triggerauthentication)
+
+### Scaling of Deployments
+
+In order to scale `Deployments` with KEDA v2, you only need to make a few modifications to your existing v1 `ScaledObject` definitions so that they comply with v2:
+
+- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
+- Rename the property `spec.scaleTargetRef.deploymentName` to `spec.scaleTargetRef.name`
+- Rename the property `spec.scaleTargetRef.containerName` to `spec.scaleTargetRef.envSourceContainerName`
+- The label `deploymentName` (in `metadata.labels`) no longer needs to be specified on a v2 ScaledObject (it was mandatory in older versions of v1)
+
+Please see the examples below or refer to the full [v2 ScaledObject Specification](./reference/scaledobject-spec).
+
+**Example of v1 ScaledObject**
+
+```yaml
+apiVersion: keda.k8s.io/v1alpha1
+kind: ScaledObject
+metadata:
+  name: { scaled-object-name }
+  labels:
+    deploymentName: { deployment-name }
+spec:
+  scaleTargetRef:
+    deploymentName: { deployment-name }
+    containerName: { container-name }
+  pollingInterval: 30
+  cooldownPeriod: 300
+  minReplicaCount: 0
+  maxReplicaCount: 100
+  triggers:
+    # {list of triggers to activate the deployment}
+```
+
+**Example of v2 ScaledObject**
+
+```yaml
+apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
+kind: ScaledObject
+metadata: # <--- labels.deploymentName is not needed
+  name: { scaled-object-name }
+spec:
+  scaleTargetRef:
+    name: { deployment-name } # <--- Property name was changed
+    envSourceContainerName: { container-name } # <--- Property name was changed
+  pollingInterval: 30
+  cooldownPeriod: 300
+  minReplicaCount: 0
+  maxReplicaCount: 100
+  triggers:
+    # {list of triggers to activate the deployment}
+```
+
+### Scaling of Jobs
+
+In order to scale `Jobs` with KEDA v2, you only need to make a few modifications to your existing v1 `ScaledObject` definitions so that they comply with v2:
+
+- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
+- Change the value of the `kind` property from `ScaledObject` to `ScaledJob`
+- Remove the property `spec.scaleType`
+- Remove the properties `spec.cooldownPeriod` and `spec.minReplicaCount`
+
+You can configure `successfulJobsHistoryLimit` and `failedJobsHistoryLimit`, which will remove old job history automatically.
+
+Please see the examples below or refer to the full [v2 ScaledJob Specification](./reference/scaledjob-spec/).
+
+**Example of v1 ScaledObject for Jobs scaling**
+
+```yaml
+apiVersion: keda.k8s.io/v1alpha1
+kind: ScaledObject
+metadata:
+  name: { scaled-object-name }
+spec:
+  scaleType: job
+  jobTargetRef:
+    parallelism: 1
+    completions: 1
+    activeDeadlineSeconds: 600
+    backoffLimit: 6
+    template:
+      # {job template}
+  pollingInterval: 30
+  cooldownPeriod: 300
+  minReplicaCount: 0
+  maxReplicaCount: 100
+  triggers:
+    # {list of triggers to create jobs}
+```
+
+**Example of v2 ScaledJob**
+
+```yaml
+apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
+kind: ScaledJob # <--- Property value was changed
+metadata:
+  name: { scaled-job-name }
+spec: # <--- spec.scaleType is not needed
+  jobTargetRef:
+    parallelism: 1
+    completions: 1
+    activeDeadlineSeconds: 600
+    backoffLimit: 6
+    template:
+      # {job template}
+  pollingInterval: 30 # <--- spec.cooldownPeriod and spec.minReplicaCount are not needed
+  successfulJobsHistoryLimit: 5 # <--- Property is added
+  failedJobsHistoryLimit: 5 # <--- Property is added
+  maxReplicaCount: 100
+  triggers:
+    # {list of triggers to create jobs}
+```
+
+### Improved flexibility & usability of trigger metadata
+
+We've introduced more options to configure trigger metadata to give users more flexibility.
+
+> 💡 **NOTE:** Changes only apply to trigger metadata and don't impact usage of `TriggerAuthentication`.
+
+Here's an overview:
+
+| Scaler | 1.x | 2.0 |
+| -------------------- | --- | --- |
+| `azure-blob` | `connection` (**Default**: `AzureWebJobsStorage`) | `connectionFromEnv` |
+| `azure-monitor` | `activeDirectoryClientId` `activeDirectoryClientPassword` | `activeDirectoryClientId` `activeDirectoryClientIdFromEnv` `activeDirectoryClientPasswordFromEnv` |
+| `azure-queue` | `connection` (**Default**: `AzureWebJobsStorage`) | `connectionFromEnv` |
+| `azure-servicebus` | `connection` | `connectionFromEnv` |
+| `azure-eventhub` | `storageConnection` (**Default**: `AzureWebJobsStorage`) `connection` (**Default**: `EventHub`) | `storageConnectionFromEnv` `connectionFromEnv` |
+| `aws-cloudwatch` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
+| `aws-kinesis-stream` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
+| `aws-sqs-queue` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
+| `kafka` | _(none)_ | _(none)_ |
+| `rabbitmq` | `apiHost` `host` | ~~`apiHost`~~ `host` `hostFromEnv` |
+| `prometheus` | _(none)_ | _(none)_ |
+| `cron` | _(none)_ | _(none)_ |
+| `redis` | `address` `host` `port` `password` | `address` `addressFromEnv` `host` `hostFromEnv` ~~`port`~~ `passwordFromEnv` |
+| `redis-streams` | `address` `host` `port` `password` | `address` `addressFromEnv` `host` `hostFromEnv` ~~`port`~~ `passwordFromEnv` |
+| `gcp-pubsub` | `credentials` | `credentialsFromEnv` |
+| `external` | _(any matching value)_ | _(any matching value with `FromEnv` suffix)_ |
+| `liiklus` | _(none)_ | _(none)_ |
+| `stan` | _(none)_ | _(none)_ |
+| `huawei-cloudeye` | _(none)_ | _(none)_ |
+| `postgresql` | `connection` `password` | `connectionFromEnv` `passwordFromEnv` |
+| `mysql` | `connectionString` `password` | `connectionStringFromEnv` `passwordFromEnv` |
+
+### Scalers
+
+**Azure Service Bus**
+
+- `queueLength` was renamed to `messageCount`
+
+**Kafka**
+
+- The `authMode` property was replaced with the `sasl` and `tls` properties. Please refer to the [documentation](../scalers/apache-kafka/#authentication-parameters) for details on the Kafka authentication parameters.
+
+**RabbitMQ**
+
+In KEDA 2.0 the RabbitMQ scaler has only the `host` parameter, and the protocol for communication can be specified by
+`protocol` (`http` or `amqp`). The default value is `amqp`. The behavior changes only for scalers that were using the
+HTTP protocol.
+
+Example of a RabbitMQ trigger before 2.0:
+
+```yaml
+triggers:
+  - type: rabbitmq
+    metadata:
+      queueLength: "20"
+      queueName: testqueue
+      includeUnacked: "true"
+      apiHost: "https://guest:password@localhost:443/vhostname"
+```
+
+The same trigger in 2.0:
+
+```yaml
+triggers:
+  - type: rabbitmq
+    metadata:
+      queueLength: "20"
+      queueName: testqueue
+      protocol: "http"
+      host: "https://guest:password@localhost:443/vhostname"
+```
+
+### TriggerAuthentication
+
+In order to use authentication via `TriggerAuthentication` with KEDA v2, you only need to make one change:
+
+- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
+
+For more details please refer to the full
+[v2 TriggerAuthentication Specification](../concepts/authentication/#re-use-credentials-and-delegate-auth-with-triggerauthentication)
diff --git a/content/docs/2.16/operate/opentelemetry.md b/content/docs/2.16/Operator Guide/opentelemetry.md
similarity index 100%
rename from content/docs/2.16/operate/opentelemetry.md
rename to content/docs/2.16/Operator Guide/opentelemetry.md
diff --git a/content/docs/2.16/Operator Guide/pause-autoscaling-deployments.md b/content/docs/2.16/Operator Guide/pause-autoscaling-deployments.md
new file mode 100644
index 000000000..e94f0920c
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/pause-autoscaling-deployments.md
@@ -0,0 +1,28 @@
++++
+title = "Pause Auto-Scaling of Deployments"
+weight = 600
++++
+
+## Pausing autoscaling
+
+It can be useful to instruct KEDA to pause the autoscaling of objects, for example to carry out cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.
+
+This is preferable to deleting the resource because it takes the instances it is running out of operation without touching the applications themselves. When ready, you can then re-enable scaling.
+
+You can pause autoscaling by adding these annotations to your `ScaledObject` definition:
+
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused-replicas: "0"
+    autoscaling.keda.sh/paused: "true"
+```
+
+The presence of these annotations will pause autoscaling no matter what number of replicas is provided.
+
+The annotation `autoscaling.keda.sh/paused` will pause scaling immediately and keep the current instance count, while the annotation `autoscaling.keda.sh/paused-replicas` will scale your current workload to the specified number of replicas and then pause autoscaling. You can set the paused replica count to any arbitrary number.
+
+Typically, only one or the other is used, since they serve different purposes/scenarios. However, if both `paused` and `paused-replicas` are set, KEDA will scale your current workload to the count specified in `paused-replicas` and then pause autoscaling.
+
+To unpause (re-enable) autoscaling again, remove all pause annotations from the `ScaledObject` definition. If you paused with `autoscaling.keda.sh/paused`, you can unpause by setting the annotation to `false`.
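+
+If you prefer not to edit the manifest, the annotation can also be toggled with `kubectl`. This is a sketch that assumes a `ScaledObject` named `example-scaledobject`:
+
+```sh
+# pause autoscaling at the current instance count
+kubectl annotate scaledobject example-scaledobject autoscaling.keda.sh/paused="true" --overwrite
+
+# resume autoscaling by removing the annotation
+kubectl annotate scaledobject example-scaledobject autoscaling.keda.sh/paused-
+```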
diff --git a/content/docs/2.16/Operator Guide/pause-autoscaling-jobs.md b/content/docs/2.16/Operator Guide/pause-autoscaling-jobs.md
new file mode 100644
index 000000000..ad3b61036
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/pause-autoscaling-jobs.md
@@ -0,0 +1,26 @@
++++
+title = "Pause Auto-Scaling of Jobs"
+weight = 600
++++
+
+## Pausing autoscaling
+
+It can be useful to instruct KEDA to pause the autoscaling of objects, for example to carry out cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.
+
+This is preferable to deleting the resource because it takes the instances it is running out of operation without touching the applications themselves. When ready, you can then re-enable scaling.
+
+You can pause autoscaling by adding this annotation to your `ScaledJob` definition:
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused: "true"
+```
+
+To re-enable autoscaling, remove the annotation from the `ScaledJob` definition or set the value to `false`.
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused: "false"
+```
diff --git a/content/docs/2.16/Operator Guide/prevention-rules.md b/content/docs/2.16/Operator Guide/prevention-rules.md
new file mode 100644
index 000000000..328e64cd5
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/prevention-rules.md
@@ -0,0 +1,23 @@
++++
+title = "Prevention Rules"
+description = "Rules to prevent misconfigurations and ensure proper scaling behavior"
+weight = 600
++++
+
+There are several misconfiguration scenarios that can produce scaling problems in production workloads. For example, in Kubernetes a single workload should never be scaled by two or more HPAs, because that produces conflicts and unintended behavior.
+
+Some data-format errors can be detected during model validation, but these misconfigurations can't be caught at that step because the model itself is valid. To detect them early, before they reach the data plane, admission webhooks validate all incoming (KEDA) resources (new or updated) and reject any resource that doesn't match the rules below.
+
+### Prevention Rules
+
+KEDA will block all incoming changes to `ScaledObject` that don't match these rules:
+
+- The scaled workload (`scaledobject.spec.scaleTargetRef`) is already autoscaled by other sources (another ScaledObject or an HPA).
+- CPU and/or memory triggers are used and the scaled workload doesn't have resource requests defined. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
+- CPU and/or memory triggers are **the only triggers used** and the ScaledObject defines `minReplicaCount: 0`. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
+- In the case of multiple triggers where a `name` is **specified**, the name must be **unique** (it is not allowed to have multiple triggers with the same name).
+
+KEDA will block all incoming changes to `TriggerAuthentication`/`ClusterTriggerAuthentication` that don't match these rules:
+
+- The specified identity ID for Azure AD Workload Identity and/or Pod Identity is empty. (A default/unset identity ID will be passed through.)
+  > NOTE: This only applies if the `TriggerAuthentication`/`ClusterTriggerAuthentication` is overriding the default identityId provided to KEDA during installation.
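+
+For example, a `ScaledObject` like the following sketch would be rejected by the admission webhook, because a CPU trigger is the only trigger while `minReplicaCount` is `0` (all names are illustrative):
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: example-scaledobject
+spec:
+  scaleTargetRef:
+    name: example-deployment
+  minReplicaCount: 0      # not allowed when cpu/memory are the only triggers
+  triggers:
+    - type: cpu
+      metricType: Utilization
+      metadata:
+        value: "60"
+```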
diff --git a/content/docs/2.16/operate/prometheus.md b/content/docs/2.16/Operator Guide/prometheus.md
similarity index 100%
rename from content/docs/2.16/operate/prometheus.md
rename to content/docs/2.16/Operator Guide/prometheus.md
diff --git a/content/docs/2.16/operate/security.md b/content/docs/2.16/Operator Guide/security.md
similarity index 100%
rename from content/docs/2.16/operate/security.md
rename to content/docs/2.16/Operator Guide/security.md
diff --git a/content/docs/2.16/Operator Guide/troubleshooting.md b/content/docs/2.16/Operator Guide/troubleshooting.md
new file mode 100644
index 000000000..0d70c9517
--- /dev/null
+++ b/content/docs/2.16/Operator Guide/troubleshooting.md
@@ -0,0 +1,28 @@
++++
+title = "Troubleshooting"
+weight = 600
++++
+
+## KEDA logging and telemetry
+
+The first place to look if something isn't behaving correctly is the logs generated by KEDA. After deploying, you should have a pod with two containers running within the namespace (by default: `keda`).
+
+You can view the KEDA operator pod via kubectl:
+
+```sh
+kubectl get pods -n keda
+```
+
+You can view the logs for the keda-operator container with the following:
+
+```sh
+kubectl logs -n keda {keda-pod-name} -c keda-operator
+```
+
+## Reporting issues
+
+If you are having issues or hitting a potential bug, please file an issue [in the KEDA GitHub repo](https://github.com/kedacore/keda/issues/new/choose) with details, logs, and steps to reproduce the behavior.
+
+## Common issues and their solutions
+
+{{< troubleshooting >}}
\ No newline at end of file
diff --git a/content/docs/2.16/troubleshooting.md b/content/docs/2.16/troubleshooting.md
deleted file mode 100644
index 9bc0b05b1..000000000
--- a/content/docs/2.16/troubleshooting.md
+++ /dev/null
@@ -1,6 +0,0 @@
-+++
-title = "Troubleshooting"
-description = "How to address commonly encountered KEDA issues"
-+++
-
-{{< troubleshooting >}}