add options to control otel resources #5

Merged · 7 commits · Oct 15, 2024
30 changes: 30 additions & 0 deletions cluster/README.md
@@ -23,6 +23,33 @@ module "<REGION>-<CLUSTER_NAME>" {
# By default, this module will also deploy the k8s manifests. Set to `false` if planning to deploy with another tool
#deploy_manifests = false

# If you need to set environment variables for the OpenTelemetry Collector, you can do so by setting the `otel_env` variable:
# otel_env = {
# "GOMEMLIMIT" = "2750MiB" # set the memory limit for the OpenTelemetry Collector
# }

# We recommend reading the OpenTelemetry Collector documentation to understand the memory limiter processor configuration: https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md#best-practices

# If you want to customize the memory limiter processor for the OpenTelemetry Collector, you can do so by setting the `otel_memory_limiter` variable:
# otel_memory_limiter = {
# check_interval = "1s"
# limit_percentage = 70
# spike_limit_percentage = 30
# }


# If you want to customize the resources for the OpenTelemetry Collector container, you can do so by setting the `otel_resources` variable:
# otel_resources = {
# requests = {
# cpu = "100m"
# memory = "256Mi"
# }
# limits = {
# cpu = "100m"
# memory = "256Mi"
# }
# }

# When configuring multiple providers for different clusters, you can configure the module to use the correct provider alias:
providers = {
kubernetes = kubernetes.<PROVIDER_ALIAS>
@@ -39,6 +66,9 @@ module "<REGION>-<CLUSTER_NAME>" {
| cluster\_oidc\_issuer\_url | The OIDC Identity issuer URL for the EKS cluster | `string` | n/a | yes |
| ec2\_cluster | Set to true if this is a self-managed k8s cluster running on EC2 (if so, you could also set `cluster_oidc_issuer_url` to an empty string) | `bool` | `false` | no |
| deploy\_manifests | Set to false if you don't want this module to deploy EKS Lens into your cluster | `bool` | `true` | no |
| otel\_env | Environment variables to set for the OpenTelemetry Collector | `map(string)` | `{}` | no |
| otel\_memory\_limiter | Configuration for the memory limiter processor | <pre>object({<br> check_interval = string<br> limit_percentage = number<br> spike_limit_percentage = number<br> })</pre> | <pre>{<br> "check_interval": "1s",<br> "limit_percentage": 70,<br> "spike_limit_percentage": 30<br>}</pre> | no |
| otel\_resources | Resources to set for the OpenTelemetry Collector container | <pre>object({<br> requests = object({<br> cpu = optional(string)<br> memory = optional(string)<br> })<br> limits = object({<br> cpu = optional(string)<br> memory = optional(string)<br> })<br> })</pre> | <pre>{}</pre> | no |

## Outputs

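Taken together, the three new inputs can be set on a module call like this minimal sketch (the module source and name are placeholders, required inputs such as `cluster` and `cluster_oidc_issuer_url` are omitted for brevity, and the `otel_memory_limiter` values shown are the module defaults):

```hcl
module "us-east-1-my-cluster" {
  source = "<MODULE_SOURCE>" # placeholder

  # Extra environment variables for the collector container
  otel_env = {
    "GOMEMLIMIT" = "2750MiB"
  }

  # Memory limiter processor settings (module defaults shown)
  otel_memory_limiter = {
    check_interval         = "1s"
    limit_percentage       = 70
    spike_limit_percentage = 30
  }

  # Kubernetes requests/limits for the collector container
  otel_resources = {
    requests = {
      cpu    = "100m"
      memory = "256Mi"
    }
    limits = {
      cpu    = "100m"
      memory = "256Mi"
    }
  }
}
```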
6 changes: 3 additions & 3 deletions cluster/collector-config.yaml
@@ -12,9 +12,9 @@ processors:
timeout: 30s
send_batch_size: 800
memory_limiter:
check_interval: 1s
limit_percentage: 70
spike_limit_percentage: 30
check_interval: ${check_interval}
limit_percentage: ${limit_percentage}
spike_limit_percentage: ${spike_limit_percentage}
resource/1:
attributes:
- key: service.name
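For context, `templatefile` in cluster/k8s.tf (below) substitutes these placeholders, so with the default `otel_memory_limiter` values the rendered processor section would look roughly like this (an illustrative rendering, not part of the diff):

```yaml
processors:
  batch:
    timeout: 30s
    send_batch_size: 800
  memory_limiter:
    # rendered from the var.otel_memory_limiter defaults
    check_interval: 1s
    limit_percentage: 70
    spike_limit_percentage: 30
```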
16 changes: 16 additions & 0 deletions cluster/k8s.tf
@@ -374,6 +374,9 @@ resource "kubernetes_config_map" "doit_collector_config" {
collector_bucket_name = local.s3_bucket
collector_bucket_prefix = "eks-metrics/${local.account_id}/${local.region}/${var.cluster.name}"
region = local.region
check_interval = var.otel_memory_limiter.check_interval
limit_percentage = var.otel_memory_limiter.limit_percentage
spike_limit_percentage = var.otel_memory_limiter.spike_limit_percentage
}
)}"
}
@@ -488,6 +491,19 @@ resource "kubernetes_deployment" "collector" {
}
}
}

dynamic "env" {
for_each = var.otel_env
content {
name = env.key
value = env.value
}
}

resources {
requests = var.otel_resources.requests
limits = var.otel_resources.limits
}

volume_mount {
mount_path = "/conf"
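As a sketch of the end result: with `otel_env = { "GOMEMLIMIT" = "2750MiB" }` and the `otel_resources` example from the README, the collector container in the rendered pod spec would look roughly like this (illustrative YAML; the container name is assumed for illustration, not taken from the diff):

```yaml
containers:
  - name: collector # assumed name for illustration
    env:
      - name: GOMEMLIMIT
        value: "2750MiB"
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
```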
37 changes: 37 additions & 0 deletions cluster/variables.tf
@@ -35,3 +35,40 @@ variable "permissions_boundary" {
description = "If provided, all IAM roles will be created with this permissions boundary attached."
default = ""
}


// https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md
variable "otel_env" {
type = map(string)
default = {
// "GOMEMLIMIT" = "2750MiB"
}
}

variable "otel_memory_limiter" {
type = object({
check_interval = string
limit_percentage = number
spike_limit_percentage = number
})
default = {
check_interval = "1s"
limit_percentage = 70
spike_limit_percentage = 30
}
}

variable "otel_resources" {
type = object({
limits = optional(object({
cpu = optional(string)
memory = optional(string)
}))
requests = optional(object({
cpu = optional(string)
memory = optional(string)
    }))
  })

default = {}
}
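If stricter input checking is wanted later, a `validation` block on `otel_memory_limiter` could enforce the memory limiter's documented constraint that `spike_limit_percentage` stays below `limit_percentage`. A sketch, not part of this PR:

```hcl
variable "otel_memory_limiter" {
  type = object({
    check_interval         = string
    limit_percentage       = number
    spike_limit_percentage = number
  })
  default = {
    check_interval         = "1s"
    limit_percentage       = 70
    spike_limit_percentage = 30
  }

  validation {
    # Both values must be valid percentages, and the spike limit must be
    # strictly lower than the hard limit, per the memory limiter docs.
    condition = (
      var.otel_memory_limiter.limit_percentage > 0 &&
      var.otel_memory_limiter.limit_percentage <= 100 &&
      var.otel_memory_limiter.spike_limit_percentage >= 0 &&
      var.otel_memory_limiter.spike_limit_percentage < var.otel_memory_limiter.limit_percentage
    )
    error_message = "limit_percentage must be in (0, 100] and spike_limit_percentage must be below limit_percentage."
  }
}
```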