
How to run longvila large context, sequence parallel inference? #130

Open
zadeismael opened this issue Aug 27, 2024 · 20 comments

@zadeismael

There are multiple mentions of a multi-modal sequence parallel system for inference that can be seamlessly integrated with HF transformers. However, I am not able to trace this through the codebase or see it exhibited in any of the scripts/examples.

Can the team please:

  1. Point me to the code that enables long-context, sequence parallel inference for generation?
  2. Provide an example script to run this inference (preferably the same script used for the eval metrics mentioned in the paper)?

Mentions of inference in the LongVILA paper:

Section 1:
"For inference, the memory usage of the KV cache will also be a bottleneck when the sequence length is very long; we thus implement the inference mode of our MM-SP to support long-context multi-modal language deployment."
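
For intuition on why the KV cache dominates memory at very long sequence lengths, here is a back-of-the-envelope estimate (a sketch only; the model dimensions below are illustrative assumptions, not LongVILA's actual configuration):

```python
# Rough KV-cache size for a decoder-only LLM (illustrative numbers only).
num_layers = 32        # transformer layers (assumed)
num_kv_heads = 8       # key/value heads, e.g. with grouped-query attention (assumed)
head_dim = 128         # per-head dimension (assumed)
bytes_per_elem = 2     # fp16 / bf16
seq_len = 1_000_000    # ~1M tokens of long video context

# Keys and values (2x) are cached at every layer for every token.
kv_bytes = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem * seq_len
print(f"KV cache: {kv_bytes / 2**30:.1f} GiB")  # ~122 GiB -> far beyond one GPU
```

At that scale the cache alone exceeds a single GPU's memory, which is presumably why the paper shards it with sequence parallelism.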

Section 3.3:
"Thus, we implement sequence parallelism for VLMs' distributed inference. Compared to the training mode, the system needs to additionally maintain tensors (e.g. input tokens and position encodings) that are progressively changing during the decoding phase (Yu et al., 2022). In addition, the system needs to detect signals from the machine that holds the last token and accordingly terminate the distributed process."
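
The termination detection described above could be sketched roughly as follows with torch.distributed. This is my own illustration of the idea, not the MM-SP implementation; the function name `should_stop` and the arguments `last_token_rank` and `eos_id` are assumptions for the example:

```python
import torch
import torch.distributed as dist

def should_stop(next_token, last_token_rank: int, eos_id: int,
                device: str = "cpu") -> bool:
    """Stop check for one sequence-parallel decoding step (sketch).

    Under sequence parallelism, only `last_token_rank` holds the token
    generated in this step, so it alone decides whether decoding is done
    and broadcasts that decision; every rank then leaves the distributed
    decode loop in the same iteration. Pass device="cuda" when using the
    NCCL backend, which requires CUDA tensors.
    """
    stop = torch.zeros(1, dtype=torch.int64, device=device)
    if dist.get_rank() == last_token_rank:
        stop[0] = int(next_token is not None and int(next_token) == eos_id)
    dist.broadcast(stop, src=last_token_rank)  # all ranks learn the decision
    return bool(stop.item())
```

Each rank would call this once per decoding step; the broadcast doubles as the "signal from the machine that holds the last token" that the paper mentions.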

Section 5 (5.1)

@zadeismael changed the title from "How to run longvila large input sequence inference?" to "How to run longvila large context, sequence parallel inference?" on Aug 27, 2024
@Lyken17
Collaborator

Lyken17 commented Aug 28, 2024

@DachengLi1 @yukang2017

@DachengLi1
Collaborator

Hi @zadeismael, thank you for the note! This is an active PR that will be merged very soon (within days).

@hb-jw

hb-jw commented Sep 2, 2024

Hello, I am also very interested in sequence parallel inference. May I ask when you plan to open-source the code for sequence parallel inference?

@DachengLi1
Collaborator

@hb-jw Thank you! We are undergoing the final merging check for this PR in our internal codebase, and it will be ready very soon (if everything goes well, by the middle of this week).

@hb-jw

hb-jw commented Sep 6, 2024

Hello, today is Friday. May I ask whether everything has gone well?

@DachengLi1
Collaborator

@hb-jw Hi there, sorry for the delay. We have worked out the version update. We are working on integrating with the vision needle-in-a-haystack evaluation before open-sourcing this PR.

@zade-twelvelabs

@DachengLi1 Thanks for the update - can you let us know a new expected date?

@DachengLi1
Collaborator

@zade-twelvelabs I will allocate more bandwidth to the task, and hopefully finish it by this Thursday. Thanks for your patience, and apologies for the delay!

@hb-jw

hb-jw commented Sep 10, 2024

> @zade-twelvelabs I will allocate more bandwidth to the task, and hopefully finish it by this Thursday. Thanks for your patience, and apologies for the delay!

OK! Thank you for your effort and for open-sourcing your work. I like the sequence parallel part of this project very much and check every day to see whether it has been open-sourced. Please reply to me when it is open-sourced! Thank you again!

@zade-twelvelabs

@DachengLi1 Echoing @hb-jw's comment - thanks for the prioritization :)

@hb-jw

hb-jw commented Sep 12, 2024

Thank you for your amazing work! It's already Thursday, and I've been looking forward to it for a long time. Could you please tell me when the sequence parallel code will be open-sourced?

@DachengLi1
Collaborator

Hi @hb-jw, sorry, we have an internal regression that leads to a small accuracy mismatch. If you are looking for a quick solution, we have an implementation here: https://github.com/NVlabs/VILA/tree/main/llava/eval/vision_niah_vila.

@zade-twelvelabs

zade-twelvelabs commented Sep 13, 2024

This is a non-generative example though, right? Can this be used for next-token generation?

@zade-twelvelabs

Hi @DachengLi1 :)

[image attachment]

@hb-jw

hb-jw commented Sep 28, 2024

Hello, it has been a long time, though it feels short because I have been waiting for sequence parallel inference. Do you still have plans to open-source the sequence parallel inference code? If so, when exactly will it be? I am eagerly looking forward to it, and thank you in advance for your reply.

@DachengLi1
Collaborator

Hi @hb-jw, sincere apologies for this. We are undergoing a significant refactoring of the internal repo to support more models, so this PR can hardly be merged as-is. We are working toward a deadline on Oct 2, and I will rewrite the PR accordingly, hopefully within one week after Oct 2.

@zade-twelvelabs

Hi - updates?
[image attachment]

@liuyijiang1994

+1

@hhaAndroid

+1

@hb-jw

hb-jw commented Nov 9, 2024

+1
