HETZNER - Custom Labels on scaled nodes #4604

Closed
ThatDeveloper opened this issue Jan 12, 2022 · 19 comments
Labels
area/cluster-autoscaler, area/provider/hetzner, kind/feature, lifecycle/stale

Comments

@ThatDeveloper

Which component are you using?:
Hetzner

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
More and more developers are running increasingly complex solutions on Kubernetes, and those workloads need labels on the nodes in order to find a place to run (affinity).
It would be great if it were possible to add multiple (n) custom labels to the --nodes= definition that get applied on a scale event. Additionally, a node pool could be selected / preferred when its labels match those of the pod.

Describe the solution you'd like.:
--nodes=1:10:CPX21:FSN1:pool1:[abc.io/services]
--nodes=1:10:CPX51:FSN1:pool1:[abc.io/services,abc.io/regular]
Node pool selection would check whether the requested [label...] entries are present and pick a pool based on that; a scale-up would then apply these labels to the new node.
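
For illustration, a pod that should run only on the first pool could then carry a matching selector. This is a sketch of the proposal, not current behavior; abc.io/services is the hypothetical label from above, assumed to be applied with the value "true":

apiVersion: v1
kind: Pod
metadata:
  name: services-pod
spec:
  # Hypothetical: schedulable (and scale-up-triggering) only where the proposed custom label exists
  nodeSelector:
    abc.io/services: "true"
  containers:
    - name: app
      image: nginx:stable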

Describe any alternative solutions you've considered.:
Working with prebuilt images, but the cluster-autoscaler cannot differentiate between them.

@ThatDeveloper added the kind/feature label Jan 12, 2022
@AzSiAz

AzSiAz commented Jan 12, 2022

The documentation is not fantastic, but you can already scale up based on pool1 in your example.

The only downside: it's not based on Kubernetes labels but on Hetzner node labels, so you can't scale up on custom Kubernetes labels.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: hcloud/node-group
              operator: In
              values:
                - pool1

@ThatDeveloper
Author

Thank you for this information. But how would I do the following:

PROJECT X tells me that its components only get scheduled on nodes with specific labels. Let's call the labels:
L1, L2, L3

L1, L2, and L3 should never be on the same node. Now a pod wants to be scheduled and requires the L2 label. As you mentioned above, these are not "custom Kubernetes labels".

If I do not know which label a node carries, I cannot change anything in the setup process. The Cluster Autoscaler would know, but not the server itself.

@AzSiAz

AzSiAz commented Jan 12, 2022

Unfortunately I'm not there yet either; the only thing I can do is trigger scale up/down of different node pools, and sometimes it just doesn't, so yeah.

@AzSiAz

AzSiAz commented Jan 12, 2022

Well, I give up; I'll just label my nodes with cloud-init, using the first part of the hostname as the value and hcloud/node-group as the key.
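
For anyone replicating this workaround, a minimal #cloud-config sketch. Assumptions: the autoscaler names servers <pool>-<suffix> so the first hostname segment is the pool name, the node joins via kubeadm, and the kubelet unit reads KUBELET_EXTRA_ARGS from /etc/default/kubelet (the Debian/Ubuntu package path):

#cloud-config
runcmd:
  # Each runcmd entry runs in its own shell, so derive the pool name and
  # write the kubelet flag in a single step, before the kubeadm join runs
  - 'POOL="$(hostname | cut -d- -f1)"; echo "KUBELET_EXTRA_ARGS=--node-labels=hcloud/node-group=${POOL}" > /etc/default/kubelet'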

@BlakeB415

BlakeB415 commented Jan 25, 2022

If I'm not mistaken, this is where the labels are added; the Labels field was just left empty instead of being passed the values:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go#L205
I don't have a dev environment for this setup at the moment, so I'm not exactly sure.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 25, 2022
@WebSpider
Contributor

/remove-lifecycle stale

I'm experiencing this as well, and would love a way to attach Kubernetes node labels to provisioned nodes. I'll see if I can borrow some code from other cloud providers, since I'm not very proficient in Go.

@k8s-ci-robot removed the lifecycle/stale label May 19, 2022
@mhmnemati

I need to target a Load Balancer at the scaled nodes; is this possible without custom label support?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Oct 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 14, 2022
@WebSpider
Contributor

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Nov 14, 2022
@nfacha

nfacha commented Feb 2, 2023

Also facing this issue, adding a +1
Bringing attention to @apricote

@apricote
Member

apricote commented Feb 3, 2023

Hi everyone,

It was unclear to me whether this is about labels for Hetzner Cloud Servers or labels for Kubernetes Nodes. The scheduling constraints discussed above relate to Node labels, while the load-balancer target request requires Server labels.

Server Labels

Currently, we only specify the label hcloud/node-group=foobar which can be used to target Hetzner Cloud servers with a load balancer. However, this might not be sufficient once multiple clusters are run in the same project. To improve this, the cluster-autoscaler cloud provider needs to change the label and provide a config interface for users to specify these labels.

Node Labels

Unfortunately, the cluster-autoscaler cloud provider cannot make changes to Node labels. These labels are added by other cluster components (e.g., hcloud-cloud-controller-manager), not created by the cluster-autoscaler.

Custom Node labels can be specified when using kubeadm/kubelet by utilizing the kubelet --node-labels flag. You can achieve this by modifying the cloud-init script passed to the server. However, it appears that you cannot specify different scripts for each node group, limiting the usefulness of this option.
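
For example, with a kubeadm-based bootstrap, the cloud-init script can set the flag through the join configuration. A sketch, with abc.io/services=true as a placeholder label:

apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Becomes kubelet --node-labels=abc.io/services=true on the new node
    node-labels: "abc.io/services=true"

Keep in mind that with the NodeRestriction admission plugin enabled, the kubelet can generally only self-assign labels outside the restricted kubernetes.io/ and k8s.io/ prefixes, so custom prefixes like abc.io work while most kubernetes.io labels do not.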

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 4, 2023
@nfacha

nfacha commented May 4, 2023

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label May 4, 2023
@apricote
Member

As of #6184, it will be possible to tell the Cluster Autoscaler which additional Kubernetes Node labels are added to the Nodes by the cloud-init config.
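
As I read the provider documentation for that change (an assumption worth verifying against the release in use), the per-node-group settings are supplied as a JSON document, base64-encoded into the HCLOUD_CLUSTER_CONFIG environment variable. Shown here as the equivalent YAML so the assumed field names are easy to see:

nodeConfigs:
  pool1:
    # Kubernetes Node labels the autoscaler should associate with this pool
    labels:
      abc.io/services: "true"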

/area provider/hetzner

@k8s-ci-robot added the area/provider/hetzner label Oct 20, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 30, 2024
@apricote
Member

apricote commented Feb 5, 2024

No one has clearly requested Server Labels in the last year, and there is now an option to add Node Labels. I think we can consider this closed until someone comes forward with a request for Server Labels.

/close

@k8s-ci-robot
Contributor

@apricote: Closing this issue.

In response to this:

No one has clearly requested Server Labels in the last year, and there is now an option to add Node Labels. I think we can consider this closed until someone comes forward with a request for Server Labels.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
