Implementation of HorizontalPodAutoscaler #140

Closed
Freyrecorp1 opened this issue Jan 10, 2023 · 12 comments

@Freyrecorp1

One question: has anyone had problems implementing the HPA in their cluster? I implemented it in another cluster and it works fine, but in this one it does not give me real-time CPU usage; it stays at 0.
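For reference, a minimal HPA manifest of the kind being discussed might look like this. This is an illustrative sketch only; the names and thresholds are hypothetical, not taken from this issue:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app        # hypothetical target deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out above 80% average CPU
```

The CPU-utilization target only works if the metrics API is serving real usage data; otherwise the HPA reports unknown/zero usage, as described above.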

@vitobotta
Owner

Hi, did you install metrics server in the cluster? Where/how do you see that CPU usage is zero?
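To check whether metrics-server is installed and serving data, a few standard kubectl commands can help (assuming the common deployment in the kube-system namespace; these must be run against the cluster):

```sh
# Check that metrics-server is deployed and ready (namespace may differ)
kubectl -n kube-system get deployment metrics-server

# Check that the metrics API is registered and Available
kubectl get apiservice v1beta1.metrics.k8s.io

# If metrics are flowing, these report real usage instead of zeros/errors
kubectl top nodes
kubectl top pods
```

If `kubectl top` errors out or the APIService is not Available, the HPA will show zero/unknown CPU usage.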

@Freyrecorp1
Author

I managed to solve it; it was a problem with my metrics file. But a new question came up: when new nodes are added on Hetzner, they do not join the cluster, so the pods in the Pending state still cannot run!

@vitobotta
Owner

> I managed to solve it; it was a problem with my metrics file. But a new question came up: when new nodes are added on Hetzner, they do not join the cluster, so the pods in the Pending state still cannot run!

Are these nodes created with a standard node pool or autoscaled? Which version of the tool did you use to create the cluster? Can you share your config (remove the token!)?
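A quick way to compare what the cluster sees with what the Hetzner Console shows, and to find out why pods are stuck, is (illustrative commands; `<pending-pod>` is a placeholder):

```sh
# Nodes known to the cluster (compare against servers in the Hetzner console)
kubectl get nodes -o wide

# Pods stuck in Pending across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# The Events section explains why the scheduler can't place the pod
kubectl describe pod <pending-pod>
```

If a server exists in the Hetzner Console but never appears in `kubectl get nodes`, it failed to join the cluster.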

@Freyrecorp1
Author

> I managed to solve it; it was a problem with my metrics file. But a new question came up: when new nodes are added on Hetzner, they do not join the cluster, so the pods in the Pending state still cannot run!
>
> Are these nodes created with a standard node pool or autoscaled? Which version of the tool did you use to create the cluster? Can you share your config (remove the token!)?

I use the standard configuration; here is my file!

config_test.txt

@vitobotta
Owner

How many nodes do you have now according to kubectl and how many servers do you see in the Hetzner console?

@Freyrecorp1
Author

5 nodes according to kubectl, but 6 servers in the Hetzner console; the one created when I scale up is not added to the worker node pool.

@Freyrecorp1
Author

[Screenshot from 2023-01-10 16-37-14]
[Screenshot from 2023-01-10 16-37-48]

@vitobotta
Owner

Something doesn't look right: the nodes created automatically by the autoscaler have names starting with the word "autoscaled-", while your CPX31 node's name starts with "big". If that node doesn't show up in kubectl, then please remove it from the Hetzner Console. Then please check the autoscaler pod's logs (in the kube-system namespace). Do you see any errors?
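Checking those logs might look like this (the exact pod/deployment name depends on how the autoscaler was installed, so the selector below is an assumption):

```sh
# Find the autoscaler pod in kube-system (name varies by installation)
kubectl -n kube-system get pods | grep -i autoscaler

# Tail its recent logs and look for errors creating or registering nodes
kubectl -n kube-system logs deploy/cluster-autoscaler --tail=100
```

Errors about failed server creation or nodes not registering would point at why the new server never joins the cluster.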

@Freyrecorp1
Author

Today I have been testing with version 1.0.5, because yesterday I did the tests with version 1.0.2; that must be why it gave this error when creating the new instances on Hetzner. But today I got the error explained in issue #141; the SSH key is without a password!

@vitobotta
Owner

> Today I have been testing with version 1.0.5, because yesterday I did the tests with version 1.0.2; that must be why it gave this error when creating the new instances on Hetzner. But today I got the error explained in issue #141; the SSH key is without a password!

Better to continue in that thread about this since it's a separate issue.

@vitobotta
Owner

Can I close this issue, since we are following up on the timeout issue in another one?

@vitobotta
Owner

Closing since you confirmed in another issue that you got your cluster working. :)
