This repository has been archived by the owner on May 28, 2024. It is now read-only.

Queue-Worker System #123

Open
AIApprentice101 opened this issue Jan 22, 2024 · 2 comments

Comments

@AIApprentice101

Thank you for the great package. I'm interested in hosting an LLM on GKE.

For our existing ML applications, we usually implement a queue-worker system (e.g., redis-queue or Celery backed by Redis) to handle long-running background tasks. Does ray-llm have a similar feature implemented under the hood, or do I need to set it up myself?
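For context, this is roughly the pattern we use today, sketched with redis-queue (RQ); the endpoint URL and the generate() task are placeholders for our own services, not ray-llm APIs:

```python
# tasks.py -- minimal queue-worker sketch with RQ; generate() is a placeholder task
# standing in for a long-running LLM inference call.
import requests

def generate(prompt: str) -> str:
    # The slow work happens in a worker process, not in the web request handler.
    resp = requests.post("http://llm-service:8000/generate", json={"prompt": prompt})
    resp.raise_for_status()
    return resp.json()["text"]
```

```python
# api.py -- the request-facing process only enqueues a job and returns its id.
from redis import Redis
from rq import Queue

from tasks import generate

queue = Queue("llm", connection=Redis(host="redis", port=6379))
job = queue.enqueue(generate, "Summarize this document ...", job_timeout=600)
print(job.id)  # clients poll the job's status/result later
```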

@sihanwang41
Collaborator

Hi @AIApprentice101, we don't have this functionality in ray-llm; you'll have to set it up yourself.

For the Redis solution, do you see any issues or pain points? Or is it more about the integration effort?

@AIApprentice101
Author

@sihanwang41 Thank you for your reply. I saw there's an RFC related to integrating a queuing system into Ray Serve: ray-project/ray#32292. So I was wondering if that's something Ray-LLM would consider, especially given that LLM inference usually takes quite a long time to run.

In the meantime, we can set up the queuing system ourselves.
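For completeness, the worker side of the sketch from my first comment would be something like the following (it assumes that tasks module; the queue name and Redis host are again placeholders):

```python
# worker.py -- runs as its own Deployment on GKE, scaled separately from the API pods.
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis(host="redis", port=6379)
# Blocks and processes jobs from the "llm" queue; each job runs the long-running
# generate() task defined in tasks.py.
Worker([Queue("llm", connection=redis_conn)], connection=redis_conn).work()
```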
