Allow nodevertical to run on existing number of worker nodes in a cluster without requiring 4 worker nodes #45
Comments
@wabouhamad Thanks for opening an issue here. I think it makes more sense for us to control the number of nodes under test via a label than to always place pods on all worker nodes. We are planning to reduce the nodevertical test requirements with #42 and #43. This will allow us to maintain the previous standard of running nodevertical against 4 nodes as well as to run it against smaller clusters in CI. Let me know if this addresses the first portion of this issue. As for renaming the variable, please open a PR for the adjustment. Thanks!
@wabouhamad Did you have any feedback now that #43 is merged? Will you be opening a PR to change the parameter names?
@akrzos I was planning to run a nodevertical test next week; I'll provide feedback after the test run. I can open a PR to change the parameter name if that won't break other tests.
@wabouhamad Any updates?
@akrzos I was able to run nodevertical successfully on less than 2 worker nodes after the PRs for issues #42 and #43 were merged. I will discuss renaming the parameters with @chaitanyaenr, as that impacts the scale CI pipeline properties for nodevertical.
This is related to issue: #42
Instead of requiring 4 worker nodes for nodevertical, it would help if we could run the test by dividing the total pod count by the current number of worker nodes. For example, if NODEVERTICAL_MAXPODS is set to 1000 and the cluster has two worker nodes, we would try to deploy 1000 / 2 = 500 pods per worker node.
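The proposed scheme could be sketched roughly as follows. This is a minimal illustration, not the actual nodevertical implementation; the `oc get nodes` command shown in the comment is one assumed way to count Ready workers, and here the count is hardcoded for demonstration.

```shell
#!/bin/sh
# Hypothetical sketch: spread the total pod count evenly across
# however many worker nodes the cluster currently has, instead of
# assuming a fixed 4-node cluster.

NODEVERTICAL_MAXPODS="${NODEVERTICAL_MAXPODS:-1000}"

# In a real cluster the worker count would be discovered, e.g.:
#   worker_count=$(oc get nodes -l node-role.kubernetes.io/worker --no-headers | wc -l)
# Hardcoded here to match the two-node example above.
worker_count=2

# Integer division: pods to place on each worker node.
pods_per_node=$(( NODEVERTICAL_MAXPODS / worker_count ))

echo "$pods_per_node"
```

With NODEVERTICAL_MAXPODS=1000 and two workers, this prints 500, matching the example above.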
Renaming "NODEVERTICAL_MAXPODS" to NODEVERTICAL_TOTALPODS would also help avoid confusion with the kubeletArguments maxPods setting.