add user node pool & availability zones #9
Conversation
Can you please run
Hi @nellyk, some review comments.

Force-pushed from 29cf373 to 715659a
```hcl
    min_count  = number
    node_count = number
    os_sku     = string
    mode       = optional(string, "User")
```
Why do we have `mode`? Are the possible values just `User` and `System`? Can the customer really create a `System` node pool by defining it in this variable?
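If the intent is to only allow user pools, one way to enforce that would be a variable validation block, roughly like this (a hedged sketch with a simplified variable shape, not the module's actual definition):

```hcl
variable "node_pools" {
  type = map(object({
    name = string
    mode = optional(string, "User")
  }))
  default = {}

  validation {
    # Reject any entry that tries to set a mode other than "User"
    condition = alltrue([
      for pool in values(var.node_pools) : pool.mode == "User"
    ])
    error_message = "Only mode = \"User\" node pools are supported by this module."
  }
}
```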
variables.tf (Outdated)
```hcl
    enable_auto_scaling = true
    max_count           = 110
    min_count           = 2
    node_count          = 2
```
Why do you set `node_count` if `enable_auto_scaling` is `true`? If the autoscaler has changed the number of nodes, you risk that a Terraform run will revert the current number of nodes back to `node_count`. I believe the correct value for `node_count` is `null` when the autoscaler is active.
Thanks Saverio, I'll update this.
```hcl
  features {}
}
```
extra line
I'll check if this is a problem upstream.
variables.tf (Outdated)
```hcl
    mode            = optional(string, "User")
    os_disk_size_gb = optional(number, null)
    tags            = optional(map(string), {})
    zones           = optional(string)
```
Should this be a `set` of `string`, like here?
https://github.com/Azure/terraform-azurerm-aks/blob/4f7efad06d3471f22e960ad7874b2888e3754f26/variables.tf#L972
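The suggested type change could look like this (a sketch with a simplified variable shape, modeled on the linked upstream definition rather than copied from it):

```hcl
variable "node_pools" {
  type = map(object({
    mode            = optional(string, "User")
    os_disk_size_gb = optional(number, null)
    tags            = optional(map(string), {})
    # set(string) instead of string, so a pool can span multiple zones
    zones           = optional(set(string))
  }))
  default = {}
}
```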
```hcl
    tags  = optional(map(string), {})
    zones = optional(string)
  }))
  default = {
```
This is going to create 3 node pools also in those regions that don't support AZs, correct? Is this what we want?
I've added a condition to only create one node pool if the zone isn't available. Unfortunately, I still have to keep the three node pool values, since I can't use `count` on a list to control the generation of user node pools.
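The fallback being described could be expressed as a local, roughly like this (names are illustrative and `var.availability_zones` is an assumed input, not the module's actual code):

```hcl
locals {
  # Zones available in the target region; assumed to be supplied as an
  # input and empty when the region has no availability-zone support.
  region_zones = try(var.availability_zones, [])

  # One node pool per zone when zones exist; otherwise a single pool
  # whose zone is null, so no zone constraint is applied.
  effective_zones = length(local.region_zones) > 0 ? local.region_zones : [null]
}
```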
main.tf (Outdated)
```diff
@@ -55,6 +63,21 @@ resource "azurerm_management_lock" "this" {
   scope = azurerm_kubernetes_cluster.this.id
 }

 resource "azurerm_kubernetes_cluster_node_pool" "this" {
   # set max nodepools created to 3
   for_each = var.node_pools
```
Looping on `node_pools` is correct, but the intent is to have multiple node pools in the map for multiple workloads (e.g. an ingress node pool and an application node pool). What we need here is a nested loop, which in Terraform is not straightforward: we need to loop once over the `node_pools` map, and each element has to be expanded to one node pool per zone. I would ask the help of @lonegunmanb to understand if this can be implemented with a `local`, and then we loop on the `local` value.
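The kind of local being suggested could be sketched like this (names are illustrative and this is not the module's final code; note also that real AKS pool names have stricter length and character constraints than the keys used here):

```hcl
locals {
  # Expand each entry of var.node_pools into one node pool per zone,
  # keyed "<pool>-<zone>" so for_each gets stable, unique keys.
  node_pools_expanded = merge([
    for pool_key, pool in var.node_pools : {
      for zone in pool.zones :
      "${pool_key}-${zone}" => merge(pool, { zone = zone })
    }
  ]...)
}

resource "azurerm_kubernetes_cluster_node_pool" "this" {
  for_each = local.node_pools_expanded

  name                  = each.key
  kubernetes_cluster_id = azurerm_kubernetes_cluster.this.id
  vm_size               = each.value.vm_size
  zones                 = [each.value.zone]
}
```

The `merge([...]...)` form uses Terraform's expansion operator to flatten the list of per-pool maps into a single map suitable for `for_each`.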
I have implemented a local now: if zones are available, it creates three node pools, each with its own availability zone.
variables.tf (Outdated)
```hcl
    zones = optional(string)
  }))
  default = {
    "1" = {
```
To avoid confusion you could call them "workload1", "workload2", and so on.
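The renamed default keys would look something like this (a sketch of the keys only; the object bodies are placeholders, not the module's actual defaults):

```hcl
  default = {
    # Descriptive keys instead of "1", "2", "3"
    workload1 = { zones = null }
    workload2 = { zones = null }
    workload3 = { zones = null }
  }
```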
Added this when the resource is being created.
Commit history (most recent first):
- 2d3630b nellyk, Mon Mar 4 19:55:30 2024 +0000: review comments
- 861af28 nellyk, Mon Mar 4 19:45:52 2024 +0000: review comments
- 1dc4d1b nellyk, Mon Mar 4 19:40:56 2024 +0000: review comments
- deeee1c nellyk, Mon Mar 4 11:08:26 2024 +0000: review comments
- ab7c134 nellyk, Wed Feb 21 12:06:49 2024 +0000: fix failing test
- 93d2a9f nellyk, Wed Feb 21 11:48:42 2024 +0000: fix failing test
- 1401bf8 nellyk, Wed Feb 21 10:36:29 2024 +0000: add pr review comments
- 715659a nellyk, Fri Feb 9 22:16:17 2024 +0000: set up aks cluster user node pools and availability zone
- 02a8464 nellyk, Tue Feb 6 19:02:21 2024 +0000: add availability zone setup
- 4dda197 nellyk, Tue Feb 6 18:57:21 2024 +0000: add availability zone setup
- bd20d79 (merge of 680b00c and 05cfc97) nellyk, Tue Feb 6 17:16:36 2024 +0000: Merge remote-tracking branch 'origin' into availability-zones
- 680b00c nellyk, Tue Feb 6 17:15:31 2024 +0000: add availability zone setup
- e873a28 nellyk, Tue Feb 6 16:20:04 2024 +0000: add availability zone setup
- 05cfc97 (merge of cfc986e and d4255cd) Saverio Proto, Tue Feb 6 16:38:38 2024 +0100: Merge pull request Azure#3 from nellyk/main (rename to avm-ptn-aks-production)
- bf2c904 nellyk, Tue Feb 6 15:33:52 2024 +0000: add availability zone setup
- cfc986e (merge of 7e4b10b and af7a2cc) Saverio Proto, Tue Feb 6 09:46:25 2024 +0100: Merge pull request #2 from nellyk/main (Initial draft with default node pool)
main.tf (Outdated)
```hcl
resource "azurerm_kubernetes_cluster_node_pool" "this" {
  # if the region has zone create a node pool per zone
  # if the region does not have zone create a single node pool with the zone as null
  # if node pools are not emplty check if the node has a zone if yes then create a node pool per zone otherwise create a single node pool
```
emplty -> empty
Closing in favor of #15.
closes #8
closes #5