
Set NVIDIA_DRIVER_CAPABILITIES to all when GPU is enabled #19345

Merged · 1 commit · Aug 20, 2024

Conversation

chubei-urus
Contributor

fixes #19318


linux-foundation-easycla bot commented Jul 29, 2024

CLA Signed


The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Jul 29, 2024
@k8s-ci-robot
Contributor

Welcome @chubei-urus!

It looks like this is your first PR to kubernetes/minikube 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/minikube has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 29, 2024
@k8s-ci-robot
Contributor

Hi @chubei-urus. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Jul 29, 2024
@minikube-bot
Collaborator

Can one of the admins verify this patch?

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jul 29, 2024
@chubei-urus
Contributor Author

I'm new to the repo and don't know how this feature should be tested. Many thanks to anyone who can give some pointers!

@medyagh
Member

medyagh commented Jul 29, 2024

Thank you @chubei-urus for creating this PR. Do you mind sharing a before/after example of running a workload, and how you verified that it was NOT using the graphics card before this PR?

@chubei-urus
Contributor Author

Thank you for your quick reply. I'll create a minimal example.

@medyagh
Member

medyagh commented Jul 29, 2024

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 29, 2024
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 19345) |
+----------------+----------+---------------------+
| minikube start | 49.8s    | 49.4s               |
| enable ingress | 26.5s    | 25.0s               |
+----------------+----------+---------------------+

Times for minikube start: 52.0s 46.2s 49.2s 50.8s 50.7s
Times for minikube (PR 19345) start: 51.2s 49.4s 50.6s 48.4s 47.5s

Times for minikube ingress: 29.0s 27.0s 24.9s 27.0s 24.4s
Times for minikube (PR 19345) ingress: 27.5s 24.9s 23.9s 24.9s 24.0s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 19345) |
+----------------+----------+---------------------+
| minikube start | 23.1s    | 22.2s               |
| enable ingress | 21.4s    | 22.1s               |
+----------------+----------+---------------------+

Times for minikube start: 23.9s 23.6s 23.2s 20.9s 23.8s
Times for minikube (PR 19345) start: 21.5s 22.4s 21.4s 21.8s 24.0s

Times for minikube (PR 19345) ingress: 22.7s 21.8s 21.7s 21.7s 22.7s
Times for minikube ingress: 21.2s 21.7s 21.3s 21.7s 21.2s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 19345) |
+----------------+----------+---------------------+
| minikube start | 21.3s    | 21.6s               |
| enable ingress | 48.2s    | 48.1s               |
+----------------+----------+---------------------+

Times for minikube start: 22.8s 20.8s 19.9s 23.6s 19.6s
Times for minikube (PR 19345) start: 19.9s 20.0s 22.6s 23.0s 22.7s

Times for minikube ingress: 48.3s 48.2s 48.2s 48.2s 48.2s
Times for minikube (PR 19345) ingress: 48.2s 48.2s 48.3s 48.2s 47.8s

@minikube-pr-bot

Here are the top 10 failed tests in each environment with the lowest flake rate.


@chubei-urus
Contributor Author

Steps

1. Follow https://minikube.sigs.k8s.io/docs/tutorials/nvidia/ to set up GPU support with the docker driver.
2. `minikube start --gpus all`
3. Create `vulkan.yaml` with the following content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vulkan
spec:
  containers:
  - name: vulkan
    env:
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: "graphics"
    image: dualvtable/vulkan-sample
    resources:
      limits:
        nvidia.com/gpu: 1
  restartPolicy: Never
```

4. `kubectl apply -f vulkan.yaml`
5. Wait for the container to finish, then `kubectl logs vulkan`.

Before

The logs look like:

computeheadless: /build/Vulkan/examples/computeheadless/computeheadless.cpp:181: VulkanExample::VulkanExample(): Assertion `res == VK_SUCCESS' failed.
/build/entrypoint.sh: line 4:    14 Done                    echo 'y'
        15 Aborted                 (core dumped) | ${EXAMPLES}/$i
\n
renderheadless: /build/Vulkan/examples/renderheadless/renderheadless.cpp:211: VulkanExample::VulkanExample(): Assertion `res == VK_SUCCESS' failed.
/build/entrypoint.sh: line 4:    16 Done                    echo 'y'
        17 Aborted                 (core dumped) | ${EXAMPLES}/$i
\n

After

The logs look like:

Running headless compute example
GPU: NVIDIA GeForce RTX 4060 Laptop GPU
Compute input:
0       1       2       3       4       5       6       7       8       9       10      11      12      13      14      15      16      17      18      19      20      21      22      23      24      25      26      27      28      29      30      31 
Compute output:
0       1       1       2       3       5       8       13      21      34      55      89      144     233     377     610     987     1597    2584    4181    6765    10946   17711   28657   46368   75025   121393  196418  317811  514229  832040  1346269 
Finished. Press enter to terminate...\n
Running headless rendering example
GPU: NVIDIA GeForce RTX 4060 Laptop GPU
Framebuffer image saved to headless.ppm
Finished. Press enter to terminate...\n

Tested on

(base) bei@bei-urus:~/minikube$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04 LTS
Release:        24.04
Codename:       noble
(base) bei@bei-urus:~/minikube$ nvidia-smi 
Tue Jul 30 10:08:20 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4060 ...    Off | 00000000:01:00.0  On |                  N/A |
| N/A   42C    P4              10W /  35W |    827MiB /  8188MiB |      2%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2613      G   /usr/lib/xorg/Xorg                          271MiB |
|    0   N/A  N/A      2942      G   /usr/bin/gnome-shell                        172MiB |
|    0   N/A  N/A      3914      G   ...yOnDemand --variations-seed-version       91MiB |
|    0   N/A  N/A      4839      G   ...seed-version=20240729-050126.230000      109MiB |
|    0   N/A  N/A      6026      G   ...erProcess --variations-seed-version      137MiB |
+---------------------------------------------------------------------------------------+

Note that this is not the workload I was running, but I believe it shows the same issue.

@@ -191,7 +191,7 @@ func CreateContainerNode(p CreateParams) error { //nolint to suppress cyclomatic
runArgs = append(runArgs, "--ip", p.IP)
}
if p.GPUs != "" {
runArgs = append(runArgs, "--gpus", "all")
runArgs = append(runArgs, "--gpus", "all", "--env", "NVIDIA_DRIVER_CAPABILITIES=all")
Member

Thank you for adding the example. I found the documentation on this: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/1.10.0/user-guide.html

You are spot on! It says "empty or unset | use default driver capability: utility, compute". I would love to see the example you provided added as an integration test, with the condition that it skips when there is no GPU on the machine, to avoid spamming failures on our CI machines.

Contributor Author

Sure thing. I'll study how integration tests are implemented a bit and try to do that.

Member

@chubei-urus here is an example of an integration test:

https://github.com/medyagh/minikube/blob/abcff1741451c3867f80277115029457ad4fd23f/test/integration/start_stop_delete_test.go#L43

You can simply create a new file called `test/integration/gpu_ml_test.go` and add a new test there.

Then you can have an if statement to skip the test if a GPU is not available on the test machine, for example:

```go
if !hasGPU {
	t.Skip("skipping test since the test machine does not have a GPU")
}
```

By the way, this would also be a good idea for a follow-up PR: if the user's machine does not have a GPU and they try to enable one with --gpus, we could warn them that they are trying to enable --gpus without one.

let me know if you have any questions

@medyagh
Copy link
Member

medyagh commented Aug 20, 2024

@chubei-urus I could merge this PR, and if you like I would love to see a follow-up adding an integration test. #19486

@medyagh
Copy link
Member

medyagh commented Aug 20, 2024

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 20, 2024
@k8s-ci-robot
Copy link
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chubei-urus, medyagh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 20, 2024
@medyagh medyagh merged commit 2957f96 into kubernetes:master Aug 20, 2024
42 of 52 checks passed
@chubei-urus
Copy link
Contributor Author

Thank you. I'd like to add an integration test but have been busy with other things.

Successfully merging this pull request may close these issues.

Enable NVIDIA GPU graphics capabilities