
Update model.py to enable CUDA when available #478

Closed
wants to merge 2 commits

Conversation

Contributor

@FNGarvin FNGarvin commented Nov 22, 2024

Shell out to nvidia-smi for NVidia detection, and add device or gpu args to conman as appropriate for Docker vs. Podman. Imports subprocess (to shell out to nvidia-smi) and shutil (to duplicate the available() functionality and test whether we're using Podman or Docker).

Summary by Sourcery

Enhancements:

  • Enable CUDA support in container setup by detecting NVidia GPUs using nvidia-smi and configuring container arguments accordingly.

Contributor

sourcery-ai bot commented Nov 22, 2024

Reviewer's Guide by Sourcery

This PR adds CUDA GPU support to the container management system by detecting NVIDIA GPUs and configuring the appropriate container runtime arguments for both Docker and Podman environments. The implementation shells out to nvidia-smi for GPU detection and handles container-specific GPU flags differently based on the container runtime being used.

Sequence diagram for GPU detection and container setup

sequenceDiagram
    participant User
    participant Model as model.py
    participant Subprocess
    participant Shutil

    User->>Model: Call setup_container(args)
    Model->>Model: get_gpu()
    Model->>Subprocess: Run nvidia-smi
    alt NVIDIA GPU detected
        Subprocess-->>Model: Return CUDA_VISIBLE_DEVICES, gpu_count
        Model->>Shutil: Check if Podman is available
        alt Podman available
            Model->>Model: Add --device nvidia.com/gpu=all to conman_args
        else Docker available
            Model->>Model: Add --gpus all to conman_args
        end
    else No NVIDIA GPU
        Model->>Model: Return None, None
    end
    Model-->>User: Return conman_args

File-Level Changes

Added NVIDIA GPU detection using the nvidia-smi command (ramalama/model.py)
  • Added a subprocess call to 'nvidia-smi -L' to detect NVIDIA GPUs
  • Returns CUDA_VISIBLE_DEVICES and the count of available GPUs if nvidia-smi succeeds
  • Falls back gracefully if the nvidia-smi command fails

Implemented container runtime-specific GPU configuration (ramalama/model.py)
  • Added detection of the Podman vs. Docker runtime
  • Configured Podman-specific GPU flags using --device nvidia.com/gpu=all
  • Configured Docker-specific GPU flags using --gpus all
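
A minimal sketch of the approach described above (get_gpu and conman_args follow the PR discussion; add_gpu_args is a hypothetical wrapper for illustration, and the exact code in ramalama/model.py may differ):

    import shutil
    import subprocess

    def get_gpu():
        # 'nvidia-smi -L' prints one line per GPU; a zero exit status means an
        # NVIDIA driver (and therefore a usable GPU) is present.
        try:
            output = subprocess.run(['nvidia-smi', '-L'], check=True, capture_output=True)
            return "CUDA_VISIBLE_DEVICES", len(output.stdout.splitlines())
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None, None  # no nvidia-smi, or it failed: assume no NVIDIA GPU

    def add_gpu_args(conman_args):
        # Append the runtime-appropriate GPU flags to the container command line.
        env, count = get_gpu()
        if env is not None:
            if shutil.which("podman"):
                conman_args += ["--device", "nvidia.com/gpu=all"]  # Podman
            else:
                conman_args += ["--gpus", "all"]  # Docker
        return conman_args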


Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey @FNGarvin - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The empty catch block in get_gpu() silently swallows all exceptions. Consider logging errors or handling specific exceptions to avoid masking real issues (see the sketch below).
  • Consider extracting the podman/docker detection logic into a shared utility function, to avoid the noted code duplication with common.py.
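
For instance, a minimal sketch of the first point, catching specific exceptions and recording why nvidia-smi failed (the use of logging here is illustrative, not part of the PR):

    import logging
    import subprocess

    def get_gpu():
        try:
            output = subprocess.run(['nvidia-smi', '-L'], check=True, capture_output=True)
            return "CUDA_VISIBLE_DEVICES", len(output.stdout.splitlines())
        except FileNotFoundError:
            return None, None  # nvidia-smi not installed: no NVIDIA GPU to configure
        except subprocess.CalledProcessError as e:
            # nvidia-smi exists but failed; record why instead of masking the error
            logging.debug("nvidia-smi failed: %s", e.stderr)
            return None, None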
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


try:
    # TODO: I don't currently have access to a PC with multiple NVidia GPUs nor an NVidia Mac... but I *think*
    # that every Linux and Windows machine with a modern NVidia GPU will have nvidia-smi, and that the number
    # of output lines corresponds to the number of zero-indexed GPUs.
    check_output = subprocess.run(['nvidia-smi', '-L'], check=True, capture_output=True)  # shell out to nvidia-smi

suggestion: Redundant error checking with subprocess.run()

Using check=True with manual returncode checking is redundant as check=True will raise CalledProcessError on non-zero exit codes.

        check_output = subprocess.run(['nvidia-smi', '-L'], capture_output=True, check=True)

Shell out to nvidia-smi for NVidia detection, add device or gpu args to conman as appropriate for docker vs podman

Signed-off-by: Fred N. Garvin, Esq. <[email protected]>
import subprocess (to shell out for nvidia-smi) and shutil (to duplicate available() functionality and test whether we're in podman or docker)

Signed-off-by: Fred N. Garvin, Esq. <[email protected]>
try:
    # TODO: I don't currently have access to a PC with multiple NVidia GPUs nor an NVidia Mac... but I *think*
    # that every Linux and Windows machine with a modern NVidia GPU will have nvidia-smi, and that the number
    # of output lines corresponds to the number of zero-indexed GPUs.
    check_output = subprocess.run(['nvidia-smi', '-L'], check=True, capture_output=True)  # shell out to nvidia-smi
Member

Use run_cmd
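
Presumably this refers to the run_cmd helper in ramalama/common.py; the substitution would look something like the line below (assuming run_cmd wraps subprocess.run with check=True and captured output):

    from ramalama.common import run_cmd  # assumed import path for the helper

    check_output = run_cmd(['nvidia-smi', '-L'])  # raises on non-zero exit, like check=True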

Collaborator
@bmahabirbu bmahabirbu commented Nov 25, 2024

We can go a step further and query nvidia-smi itself to get more info! For example, running nvidia-smi --query-gpu=index,memory.total --format=csv,noheader,nounits | sort -t, -k2 -nr | head -n 1 gets us the GPU with the largest VRAM, with its id formatted as "id, vram-in-mb".

We can do something like this

try:
    command = ['nvidia-smi', '--query-gpu=index,memory.total', '--format=csv,noheader,nounits']
    output = run_cmd(command)
    gpus = output.stdout.strip().split('\n')
    # Sort by total memory (second CSV column), largest first
    gpus_sorted = sorted(gpus, key=lambda x: int(x.split(',')[1]), reverse=True)
    # Return the index (first CSV column) of the largest-VRAM GPU
    return "CUDA_VISIBLE_DEVICES", gpus_sorted[0].split(',')[0]
except Exception:  # fall through
    return None, None

Contributor Author

I think that's a fabulous idea. My inclination in general is to avoid shelling out, but I don't think we're going to find a better or more lightweight way to test for the presence of NVidia and CUDA. Probably why all the NVidia Container Toolkit docs seem to use it as a sanity check for installs.

Collaborator

Same, I'd rather not shell out myself, but doing it this way avoids complications across different systems. If we can assume a system has an NVIDIA GPU, then most likely the drivers, along with nvidia-smi, are installed as well.

Right now the Vulkan backend for llama.cpp doesn't have all the functionality that CUDA and hipBLAS do. But later down the line I'd like to switch to Vulkan and use the Vulkan SDK to query GPU data, since it supports AMD, NVIDIA, and Intel graphics.

@rhatdan
Member

rhatdan commented Nov 22, 2024

Thanks @FNGarvin. I found some issues.

@FNGarvin
Contributor Author

FNGarvin commented Dec 3, 2024

Superseded by #490, etc.

@FNGarvin FNGarvin closed this Dec 3, 2024
@FNGarvin FNGarvin deleted the main branch December 17, 2024 15:40