openscapes: Use a bigger node to account for future event expected usage #3262

Merged

Conversation

GeorgianaElena
Member

For #3217

@github-actions

Merging this PR will trigger the following deployment actions.

Support and Staging deployments

| Cloud Provider | Cluster Name | Upgrade Support? | Reason for Support Redeploy | Upgrade Staging? | Reason for Staging Redeploy |
| --- | --- | --- | --- | --- | --- |
| aws | openscapes | No | | Yes | Following prod hubs require redeploy: prod |

Production deployments

| Cloud Provider | Cluster Name | Hub Name | Reason for Redeploy |
| --- | --- | --- | --- |
| aws | openscapes | prod | Following helm chart values files were modified: prod.values.yaml |

@consideRatio consideRatio changed the title Use a bigger node to account for future event expected usage openscapes: Use a bigger node to account for future event expected usage Oct 12, 2023
Contributor

@consideRatio consideRatio left a comment


More users per node during events - yes!

I think it's great to schedule so that we expect 8 users to fit on a node instead of just 2 for an event. During events we expect a significant number of users, and I figure ending up with around ~4-12 nodes is about right with regard to startup time and the ability to scale up/down in a more fine-grained way to reduce cloud costs.

Warning about just changing node selector

The guarantees and limits are generated based on the specific node size; a node 4 times the size will have slightly more than 4 times the allocatable capacity overall, I think.

If this were done the other way around, where requests (aka guarantees)/limits generated for a larger node were adopted on a smaller node, we could end up with a request of 51% or 101% of a node's capacity etc., making us schedule ineffectively.

So in practice, we must be careful about changing this without re-generating the list, but in this case it's okay, I think.
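To make the packing concern concrete, here is a minimal sketch of the arithmetic, assuming illustrative allocatable-memory figures rather than the actual openscapes node sizes:

```python
# Minimal sketch of the bin-packing concern above. The allocatable-memory
# numbers are illustrative assumptions, not the actual openscapes figures.

def users_per_node(allocatable_gib: float, guarantee_gib: float) -> int:
    """How many user pods fit on a node, given each pod's memory guarantee."""
    return int(allocatable_gib // guarantee_gib)

# A guarantee generated so that 8 users fit on the larger node.
large_node_allocatable = 120.0            # assumed allocatable GiB, larger node
guarantee = large_node_allocatable / 8    # 15 GiB guaranteed per user
print(users_per_node(large_node_allocatable, guarantee))  # 8 -> packs cleanly

# Reusing that same guarantee on a node a quarter of the size: allocatable is
# slightly less than a quarter of the larger node's, so each pod requests ~51%
# of the node, only 1 user fits, and roughly half the node sits idle.
small_node_allocatable = 29.0             # assumed allocatable GiB, smaller node
print(users_per_node(small_node_allocatable, guarantee))  # 1 -> ~14 GiB wasted
```

The same arithmetic with a guarantee just over a node's full allocatable memory gives the "101%" case, where the pod cannot be scheduled on that node at all.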

Related discussions

@GeorgianaElena
Member Author

> When the ~same resource request can be made on two node sizes, we currently only provide one option. Sometimes one is better than the other, so it would be good to be able to configure what requests should use what instance size. This is exactly what is changed here, we want to use the larger node.

Ah, I get it! Thank you @consideRatio. I'm feeling a bit more confident about what we want to achieve with node sharing and how, but it's a lot of info to take in. I would love to have a central place where we document all these different scenarios.
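For context, the idea of configuring which instance size a given resource request should land on can be sketched in KubeSpawner terms roughly as below. This is only an illustration: the actual 2i2c configuration lives in helm chart values files such as prod.values.yaml, and the display name, memory figures, and instance type here are assumptions, not the real openscapes values.

```python
# Rough sketch for a jupyterhub_config.py using KubeSpawner; values are
# illustrative assumptions, not the actual openscapes configuration.
c.KubeSpawner.profile_list = [
    {
        "display_name": "~14.8 GB RAM per user (event sizing)",  # hypothetical label
        "kubespawner_override": {
            # Guarantees/limits generated for one specific node size...
            "mem_guarantee": "14.8G",
            "mem_limit": "14.8G",
            # ...and pinned to that node size via a node selector, so the
            # request is never reused on a differently sized node.
            "node_selector": {"node.kubernetes.io/instance-type": "r5.4xlarge"},
        },
    },
]
```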

@GeorgianaElena GeorgianaElena merged commit dcf1cf3 into 2i2c-org:master Oct 12, 2023
7 checks passed
@GeorgianaElena GeorgianaElena deleted the openscapes-event-node-update branch October 12, 2023 11:18
@github-actions

🎉🎉🎉🎉

Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/runs/6494935375

@consideRatio
Contributor

consideRatio commented Oct 12, 2023

> But would love to have a central place where we document all these different scenarios.

I agree. I would feel more agency to write docs about this if we had clear guidance to go by. I've opened #3177 to come up with preliminary guidance on how many users per node we may want to aim for. With such guidance we can not only document "this is complicated because..." but also "here is a recommendation on what to do".

@GeorgianaElena
Member Author

Thank you @consideRatio! #3177 seems like Q4 material 🚀 🚀 and something I'm looking forward to having <3
