Multicore prompt_compress #204
base: main
Conversation
@SrdanProdanovic please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Contribution License Agreement: This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”), …
Hi @SrdanProdanovic, this looks cool! Do you have some benchmark numbers? Did you actually manage to speed up the compression process? I see that you parallelized the calculation of the context probabilities when compressing multiple contexts. I have not used LLMLingua-2 with multiple contexts yet, but I'd still be interested in how much faster you got it to work. I have experimented myself with trying to implement multiprocessing in …
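For context, here is a minimal sketch of how per-context compression can be fanned out over a process pool. It assumes the `PromptCompressor` / `compress_prompt` interface described in the LLMLingua README; the model name, rate, and example contexts are illustrative, and this is not the code from this PR.

```python
# Sketch only: parallel per-context compression with a process pool.
# The model name, rate, and example contexts are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

from llmlingua import PromptCompressor

_compressor = None  # one PromptCompressor instance per worker process


def _init_worker():
    # Load the LLMLingua-2 model once per process, not once per task.
    global _compressor
    _compressor = PromptCompressor(
        model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
        use_llmlingua2=True,
    )


def _compress_one(context):
    # Compress a single context and return the compressed text.
    result = _compressor.compress_prompt(context, rate=0.33)
    return result["compressed_prompt"]


if __name__ == "__main__":
    contexts = ["<long context 1>", "<long context 2>", "<long context 3>"]
    with ProcessPoolExecutor(max_workers=3, initializer=_init_worker) as pool:
        compressed = list(pool.map(_compress_one, contexts))
    for text in compressed:
        print(text)
```

One caveat with this pattern is that each worker loads its own copy of the model, which multiplies memory use and startup cost; sharing a single model and parallelizing only the per-context probability computation, as this PR appears to do, avoids that duplication.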
What does this PR do?
Adds a parallel multicore version, a test that runs it, and monitoring scripts.
Fixes # (issue)
Before submitting
Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
Who can review?
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of who to tag.
Please tag fewer than 3 people.
LLMLingua/LongLLMLingua:
Documentation: @SiyunZhao