What would you like to happen?

It would be nice if HuggingFacePipelineModelHandler could load a large model across multiple GPUs out of the box, for example, Llama 2 70B across multiple L4s or V100s.
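For context on what "out of the box" could mean here: the underlying `transformers.pipeline` already supports sharding a model across all visible GPUs via Accelerate's `device_map="auto"`. Below is a minimal sketch of how that might be wired up today, assuming `load_pipeline_args` is forwarded verbatim to `transformers.pipeline` and that `transformers` and `accelerate` are installed on the workers; the model name is illustrative.

```python
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.huggingface_inference import (
    HuggingFacePipelineModelHandler,
)

# Sketch only: device_map="auto" asks Accelerate to shard the model's
# weights across every GPU visible to the worker process, which is the
# behavior this issue asks the handler to support natively.
model_handler = HuggingFacePipelineModelHandler(
    task="text-generation",
    model="meta-llama/Llama-2-70b-hf",  # illustrative model name
    load_pipeline_args={"device_map": "auto"},
)

with beam.Pipeline() as p:
    _ = (
        p
        | "Prompts" >> beam.Create(["Explain Apache Beam in one sentence."])
        | "Generate" >> RunInference(model_handler)
    )
```

The feature request is for the handler to manage this itself (or document it), so users don't need to know the right pass-through arguments for multi-GPU loading.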
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner