[Feature Request]: HuggingFacePipelineModelHandler Load Large Models on Multiple GPUs #28847

Open

entrpn opened this issue Oct 5, 2023 · 2 comments

Comments


entrpn commented Oct 5, 2023

What would you like to happen?

It would be nice if `HuggingFacePipelineModelHandler` could load a large model across multiple GPUs out of the box, for example Llama 2 70B across multiple L4s or V100s.
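
One way to approximate this today, as a sketch: `HuggingFacePipelineModelHandler` accepts `load_pipeline_args`, which are forwarded to `transformers.pipeline()`, so passing `device_map="auto"` should let accelerate shard the weights across all GPUs visible to a worker. The model name below is illustrative, and whether this behaves well inside Beam workers (rather than needing handler changes) is exactly the open question of this issue.

```python
import apache_beam as beam
import torch
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.huggingface_inference import (
    HuggingFacePipelineModelHandler,
)

# Sketch only: the model name is illustrative; any large causal LM would do.
model_handler = HuggingFacePipelineModelHandler(
    task="text-generation",
    model="meta-llama/Llama-2-70b-chat-hf",
    load_pipeline_args={
        # Forwarded to transformers.pipeline(); device_map="auto" asks
        # accelerate to shard the model across all visible GPUs.
        "device_map": "auto",
        # Half precision halves the per-GPU memory footprint.
        "torch_dtype": torch.float16,
    },
)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | "Prompts" >> beam.Create(["Summarize Apache Beam in one sentence."])
        | "Generate" >> RunInference(model_handler)
    )
```

This requires `accelerate` to be installed on the workers and enough aggregate GPU memory for the sharded weights; without `device_map`, the pipeline defaults to a single device, which is where a 70B model currently fails.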

Issue Priority

Priority: 2 (default / most feature requests should be filed as P2)

Issue Components

  • [x] Component: Python SDK
  • [ ] Component: Java SDK
  • [ ] Component: Go SDK
  • [ ] Component: Typescript SDK
  • [ ] Component: IO connector
  • [ ] Component: Beam YAML
  • [ ] Component: Beam examples
  • [ ] Component: Beam playground
  • [ ] Component: Beam katas
  • [ ] Component: Website
  • [ ] Component: Spark Runner
  • [ ] Component: Flink Runner
  • [ ] Component: Samza Runner
  • [ ] Component: Twister2 Runner
  • [ ] Component: Hazelcast Jet Runner
  • [ ] Component: Google Cloud Dataflow Runner
tvalentyn (Contributor) commented

cc: @riteshghorse @damccorm FYI

riteshghorse self-assigned this Oct 9, 2023

riteshghorse (Contributor) commented

ack

3 participants