Update to ARM worker node types
Signed-off-by: Kamesh Akella <[email protected]>
kami619 committed Oct 31, 2024
1 parent d2f38ef commit 573749d
Showing 8 changed files with 41 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/README.md
@@ -14,7 +14,7 @@
2. Click on the Run workflow button
3. Fill in the form and click on the Run workflow button
1. Name of the cluster - the name of the cluster that will later be used for other workflows. Default value is `gh-${{ github.repository_owner }}`, which results in `gh-<owner of fork>`.
-2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m5.2xlarge`.
+2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m6g.2xlarge`.
3. Deploy to multiple availability zones in the region - if checked, the cluster will be deployed to multiple availability zones in the region. Default value is `false`.
4. Number of worker nodes to provision - number of compute nodes in the cluster. Default value is `2`.
4. Wait for the workflow to finish.
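
For illustration, the same workflow could also be triggered from the command line with the GitHub CLI. This is only a sketch: the workflow file name and the `computeMachineType` input come from this commit, while the cluster-name input and the fork path are assumptions.

```shell
# Hypothetical gh CLI invocation of the ROSA cluster-create workflow.
# Run it against your fork; clusterName is an assumed input name,
# computeMachineType is the input changed in this commit.
gh workflow run rosa-cluster-create.yml \
  -R <owner-of-fork>/keycloak-benchmark \
  -f clusterName=gh-my-fork \
  -f computeMachineType=m6g.2xlarge
```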
4 changes: 2 additions & 2 deletions .github/workflows/rosa-cluster-create.yml
@@ -11,7 +11,7 @@ on:
type: string
computeMachineType:
description: 'Instance type for the compute nodes'
-default: 'm5.2xlarge'
+default: 'm6g.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
@@ -35,7 +35,7 @@ on:
default: 10.0.0.0/24
computeMachineType:
description: 'Instance type for the compute nodes'
-default: 'm5.2xlarge'
+default: 'm6g.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
@@ -14,7 +14,7 @@ Collecting the CPU usage for refreshing a token is currently performed manually
This setup is run https://github.com/keycloak/keycloak-benchmark/blob/main/.github/workflows/rosa-cluster-auto-provision-on-schedule.yml[daily on a GitHub action schedule]:

* OpenShift 4.15.x deployed on AWS via ROSA with two AWS availability zones in AWS one region.
-* Machinepool with `m5.2xlarge` instances.
+* Machinepool with `m6g.2xlarge` instances.
* Keycloak 25 release candidate build deployed with Operator and 3 pods in each site as an active/passive setup, and Infinispan connecting the two sites.
* Default user password hashing with Argon2 and 5 hash iterations and minimum memory size 7 MiB https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id[as recommended by OWASP].
* Database seeded with 100,000 users and 100,000 clients.
@@ -36,7 +36,7 @@ After the installation process is finished, it creates a new admin user.
CLUSTER_NAME=rosa-kcb
VERSION=4.13.8
REGION=eu-central-1
-COMPUTE_MACHINE_TYPE=m5.2xlarge
+COMPUTE_MACHINE_TYPE=m6g.2xlarge
MULTI_AZ=false
REPLICAS=3
----
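
These variables feed the ROSA provisioning script. As a rough sketch of how they map onto the `rosa` CLI (the actual script likely passes additional options such as networking and STS roles that are not shown here):

```shell
# Minimal sketch of a cluster create call using the variables above;
# the real provisioning script adds further flags (STS roles, networking, etc.).
# MULTI_AZ=true would additionally pass --multi-az.
rosa create cluster --sts \
  --cluster-name "${CLUSTER_NAME}" \
  --version "${VERSION}" \
  --region "${REGION}" \
  --compute-machine-type "${COMPUTE_MACHINE_TYPE}" \
  --replicas "${REPLICAS}"
```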
@@ -85,7 +85,7 @@ The above installation script creates an admin user automatically but in case th
== Scaling the cluster's nodes on demand

The standard setup of nodes might be too small for running a load test; at the same time, using a different instance type and rebuilding the cluster takes a lot of time (about 45 minutes).
-To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m5.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
+To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m6g.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
However, auto-scaling of worker nodes is quite time-consuming as nodes are scaled one by one.

To use different instance types, use `rosa create machinepool` to create additional machine pools
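
As a hedged example of what such an additional pool could look like (the pool name, replica count, and instance type here are arbitrary, not taken from the repository):

```shell
# Example only: add a fixed-size ARM machine pool for a one-off load test,
# mirroring flags used by the "scaling" pool in rosa_create_cluster.sh.
rosa create machinepool -c "${CLUSTER_NAME}" \
  --instance-type m6g.2xlarge \
  --replicas 3 \
  --name load-test \
  --autorepair
```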
2 changes: 1 addition & 1 deletion provision/aws/rosa_create_cluster.sh
@@ -63,7 +63,7 @@ fi

SCALING_MACHINE_POOL=$(rosa list machinepools -c "${CLUSTER_NAME}" -o json | jq -r '.[] | select(.id == "scaling") | .id')
if [[ "${SCALING_MACHINE_POOL}" != "scaling" ]]; then
-rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m5.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
+rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m6g.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
fi

cd ${SCRIPT_DIR}
@@ -14,7 +14,7 @@ spec:
name: quay.io/keycloak/keycloak:nightly
generation: 2
importPolicy:
-importMode: Legacy
+importMode: PreserveOriginal
referencePolicy:
type: Source
{{ end }}
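
The switch from `Legacy` to `PreserveOriginal` keeps the multi-architecture manifest list when the image is imported, so both amd64 and arm64 nodes can pull the matching variant. A quick way to confirm that a tag really is a manifest list (a sketch, assuming `skopeo` and `jq` are available):

```shell
# Inspect the raw manifest; a multi-arch tag reports a manifest list / OCI image
# index mediaType instead of a single-architecture image manifest.
skopeo inspect --raw docker://quay.io/keycloak/keycloak:nightly | jq '.mediaType'
```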
32 changes: 32 additions & 0 deletions provision/minikube/keycloak/templates/keycloak.yaml
@@ -23,6 +23,38 @@ spec:
limits:
{{ if .Values.cpuLimits }}cpu: "{{ .Values.cpuLimits }}"{{end}}
{{ if .Values.memoryLimitsMB }}memory: "{{ .Values.memoryLimitsMB }}M"{{end}}
scheduling:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 10
podAffinityTerm:
labelSelector:
matchLabels:
app: keycloak
app.kubernetes.io/component: server
app.kubernetes.io/instance: keycloak
app.kubernetes.io/managed-by: keycloak-operator
topologyKey: topology.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
podAffinityTerm:
labelSelector:
matchLabels:
app: keycloak
app.kubernetes.io/component: server
app.kubernetes.io/instance: keycloak
app.kubernetes.io/managed-by: keycloak-operator
topologyKey: kubernetes.io/hostname
db:
{{ if or (eq .Values.database "aurora-postgres") (eq .Values.database "postgres") (eq .Values.database "postgres+infinispan") }}
vendor: postgres
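
The added `scheduling` stanza requires arm64 nodes, prefers co-locating Keycloak pods within a zone, and prefers spreading replicas across hosts. A hedged way to sanity-check the result after deployment (the namespace name is an assumption):

```shell
# List arm64 worker nodes, then confirm the Keycloak pods were scheduled onto
# them and spread across different hosts (see the NODE column).
kubectl get nodes -l kubernetes.io/arch=arm64
kubectl get pods -n keycloak -l app=keycloak -o wide
```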
2 changes: 1 addition & 1 deletion provision/opentofu/modules/rosa/hcp/variables.tf
@@ -61,7 +61,7 @@ variable "openshift_version" {

variable "instance_type" {
type = string
-default     = "m5.2xlarge"
+default     = "m6g.2xlarge"
nullable = false
}
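
Since `instance_type` is an ordinary input variable, it can still be overridden per run. A sketch of falling back to the previous x86_64 default (the exact invocation depends on how this module is wired into the root configuration):

```shell
# Example override when applying the OpenTofu configuration that uses this
# module; passing the old x86_64 type restores the previous behaviour.
tofu apply -var="instance_type=m5.2xlarge"
```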
