
Automatically set or not set resource quota through the cluster #59

Merged · 4 commits into kvaps:master · Jan 18, 2024

Conversation

pycgo
Contributor

@pycgo pycgo commented Jan 5, 2024

CRI-O 1.28 no longer supports cgroup v1 for CPU allocation when used on CentOS 7. For compatibility, we should remove the CPU restrictions from the script.
#53

@kvaps
Owner

kvaps commented Jan 5, 2024

The requests/limits options were introduced by @arothste-blk in #38
We should decide which defaults are better, but keep both options available.

@pycgo
Contributor Author

pycgo commented Jan 8, 2024

I don’t think the CPU limit is necessary. The actual amount of CPU used is very small, so the tool should be compatible with most environments. Most operating systems, especially older ones, use cgroup v1 rather than v2.
If this tool is not modified, upgrading the systems that k8s runs on would be a big job.
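A quick way to check which cgroup version a host runs is to inspect the filesystem type mounted at `/sys/fs/cgroup` (a sketch, assuming the standard mount layout: `stat -fc %T` reports `cgroup2fs` on cgroup v2 hosts and `tmpfs` on v1 hosts; containers may be mounted differently):

```shell
#!/bin/sh
# Map the filesystem type of /sys/fs/cgroup to a cgroup version.
# Assumes the standard mount point; unknown types fall through.
cgroup_version() {
  case "$1" in
    cgroup2fs) echo "v2" ;;
    tmpfs)     echo "v1" ;;
    *)         echo "unknown" ;;
  esac
}

cgroup_version "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)"
```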

@arothste-blk
Contributor

arothste-blk commented Jan 8, 2024 via email

@kvaps
Owner

kvaps commented Jan 8, 2024

@arothste-blk is there any way to implement any sort of auto-detection for resources requirements?
Could you check something like:

if kubectl run --image test test --dry-run 2>&1 | grep -q 'resources.requests: Required value'; then
  container_cpu="100m"
  container_memory="256Mi"
fi

in your cluster, please?

@pycgo
Contributor Author

pycgo commented Jan 10, 2024

In k8s, create this YAML to test it:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-example
spec:
  hard:
    requests.cpu: 150m
    requests.memory: 200Mi
    limits.cpu: 2
    limits.memory: 4Gi
[root@master ~]# kubectl run --image test test --dry-run
W0110 10:21:43.697889  104817 helpers.go:692] --dry-run is deprecated and can be replaced with --dry-run=client.
pod/test created (dry run)
[root@master ~]# echo $?
0
[root@master ~]# kubectl run --image test test
Error from server (Forbidden): pods "test" is forbidden: failed quota: mem-cpu-example: must specify limits.cpu for: test; limits.memory for: test; requests.cpu for: test; requests.memory for: test
[root@master ~]# echo $?
1
[root@master ~]# 
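The transcript above shows that a client-side dry run passes while a real create is rejected by the quota. The detection idea can be sketched without a cluster by stubbing the kubectl call (`fake_kubectl_dry_run` is a hypothetical stand-in that mimics the Forbidden error shown above):

```shell
#!/bin/sh
# Hypothetical stub mimicking `kubectl run test --image test --dry-run=server`
# in a quota-enforcing namespace: it prints the Forbidden error to stderr
# and exits non-zero, like the real call.
fake_kubectl_dry_run() {
  echo 'Error from server (Forbidden): pods "test" is forbidden: failed quota: mem-cpu-example: must specify limits.cpu for: test' >&2
  return 1
}

# If the error mentions a failed quota, fall back to explicit defaults.
if fake_kubectl_dry_run 2>&1 | grep -q 'failed quota'; then
  container_cpu="100m"
  container_memory="256Mi"
  echo "quota enforced: requesting ${container_cpu}, ${container_memory}"
else
  echo "no quota enforced: leaving resources empty"
fi
```

With the stub in place the script takes the first branch; against a cluster with no ResourceQuota, the dry run succeeds and the else branch runs.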

@pycgo
Contributor Author

pycgo commented Jan 12, 2024

We can use this to test; the pod won't actually be created, but the quota check is still performed:

kubectl run test --image test --dry-run=server
[root@master ~]# kubectl run test --image test --dry-run=server
Error from server (Forbidden): pods "test" is forbidden: failed quota: mem-cpu-example: must specify limits.cpu for: test; limits.memory for: test; requests.cpu for: test; requests.memory for: test

@arothste-blk, can you test this in your cluster?

@kvaps
Owner

kvaps commented Jan 12, 2024

I think the best option is to:

test if resource specification is required

  • if not: do not specify any resources
  • if yes: specify both CPU and MEM

I could implement this by myself, but I have no free time right now. Sorry.

If you want this asap, feel free to add this check into your PR :)

@pycgo pycgo changed the title from “delele cpu limit fix timeout” to “Automatically set or not set resource quota through the cluster” on Jan 17, 2024
@pycgo
Contributor Author

pycgo commented Jan 17, 2024

@kvaps I've changed it; please take a look. Thank you.

Owner

@kvaps kvaps left a comment


Amazing job! Please apply the review suggestions, and here we go.

@@ -140,6 +140,20 @@ else
fi
fi

# test if resource specification is required
set +e
Owner


Suggested change
set +e

Contributor Author

@pycgo pycgo Jan 18, 2024


If we don't set this, the shell exits when grep matches nothing (because of set -e). Can you test it in a cluster with no limits?
I can't find another way to fix it.
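The pitfall being discussed can be reproduced without a cluster: under `set -e`, a pipeline ending in a grep that matches nothing returns non-zero and aborts the script, while a `|| fallback` absorbs the failure (minimal sketch; the JSON values are illustrative):

```shell
#!/bin/sh
set -e

resources_json='"resources": { "limits": { "cpu": "100m" } }'

# grep exits 1 on no match; without the `|| ...` fallback, `set -e`
# would terminate the script right here.
echo 'no quota error in this output' | grep -q 'failed quota' \
  || resources_json='"resources": {}'

echo "$resources_json"
```

Because the failing grep is the left side of `||`, the overall command succeeds and `set -e` never fires, which is exactly the trick the merged one-liner relies on.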

resources_json='"resources": {}'

fi
set -e
Owner


Suggested change
set -e

(Outdated review thread on kubectl-node_shell, resolved.)
@pycgo
Contributor Author

pycgo commented Jan 18, 2024

@kvaps I fixed it and tested it OK; take a look again. Thank you.

Owner

@kvaps kvaps left a comment


Ah, I forgot we use set -e (I confused it with set -x).
But as I can see, you already found an elegant solution!

LGTM

"limits": { "cpu": "'${container_cpu}'", "memory": "'${container_memory}'" },
"requests": { "cpu": "'${container_cpu}'", "memory": "'${container_memory}'" }
}'
$kubectl run test --image test --dry-run=server 2>&1 | grep 'failed quota' --quiet || resources_json='"resources": {}'
Owner


Very tricky, bravo👏

@kvaps kvaps merged commit ef2c556 into kvaps:master Jan 18, 2024