
Fix P-tuning for Llama based models #9297

Merged: 4 commits into NVIDIA:r2.0.0rc0 on May 23, 2024

Conversation

apanteleev (Contributor)

What does this PR do?

Fixes inference with P-tuning.
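For context: at inference time, p-tuning prepends a set of learned virtual-token embeddings to the embedded input sequence, so anything that shifts the token layout (such as a missing BOS token) degrades generation. Below is a minimal illustrative sketch of that layout; the names (assemble_ptuned_embeddings, prompt_embeddings, embedding_table) are assumptions for illustration, not NeMo's actual API.

```python
import torch

def assemble_ptuned_embeddings(prompt_embeddings, input_ids, embedding_table):
    # prompt_embeddings: [num_virtual_tokens, hidden], learned via p-tuning (assumed shape)
    # input_ids: [batch, seq] token ids, expected to start with BOS for Llama-style models
    token_embeddings = embedding_table(input_ids)               # [batch, seq, hidden]
    batch_size = token_embeddings.shape[0]
    virtual = prompt_embeddings.unsqueeze(0).expand(batch_size, -1, -1)
    # Virtual tokens go in front of the real tokens to form the model input.
    return torch.cat([virtual, token_embeddings], dim=1)
```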

Collection: [Note which collection this PR will affect]

Changelog

  • Added the BOS token for Llama, Mistral and Mixtral.
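Llama, Mistral and Mixtral tokenizers define a BOS token that the model expects at the start of every sequence; omitting it misaligns the p-tuned prompt. A hedged sketch of the idea, assuming a NeMo-style tokenizer that exposes text_to_ids and bos_id (both assumptions here, not a confirmed API):

```python
def build_input_ids(text, tokenizer, add_bos=True):
    # Tokenize the prompt and prepend BOS when the tokenizer defines one.
    ids = tokenizer.text_to_ids(text)
    if add_bos and getattr(tokenizer, "bos_id", None) is not None:
        ids = [tokenizer.bos_id] + ids
    return ids
```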

PR Type:

  • Bugfix

apanteleev and others added 3 commits May 23, 2024 10:26

  • Added the BOS token for Llama, Mistral and Mixtral.
  • Don't load an existing TRT-LLM model before export to speed up the export process and avoid possible contamination from previous runs.
  • Apply isort and black reformatting

Signed-off-by: Alexey Panteleev <[email protected]>
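The rationale for the second commit above: exporting into a directory that still holds a previously built TRT-LLM engine can silently reuse stale artifacts. A minimal sketch of the idea, where prepare_export_dir is a hypothetical helper and not the actual NeMo export code:

```python
import shutil
from pathlib import Path

def prepare_export_dir(engine_dir: str) -> Path:
    # Start from a clean directory rather than loading a stale engine,
    # avoiding contamination from previous export runs.
    path = Path(engine_dir)
    if path.exists():
        shutil.rmtree(path)
    path.mkdir(parents=True)
    return path
```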
oyilmaz-nvidia (Collaborator) left a comment

LGTM, thanks!

github-actions bot removed the NLP label on May 23, 2024
oyilmaz-nvidia merged commit f073ed9 into NVIDIA:r2.0.0rc0 on May 23, 2024
10 checks passed
github-actions bot pushed a commit that referenced this pull request May 23, 2024
* Added the BOS token for Llama, Mistral and Mixtral.

Signed-off-by: Alexey Panteleev <[email protected]>

* Don't load an existing TRT-LLM model before export to speed up the export process and avoid possible contamination from previous runs.

Signed-off-by: Alexey Panteleev <[email protected]>

* Apply isort and black reformatting

Signed-off-by: apanteleev <[email protected]>

---------

Signed-off-by: Alexey Panteleev <[email protected]>
Signed-off-by: apanteleev <[email protected]>
Co-authored-by: apanteleev <[email protected]>
Co-authored-by: Onur Yilmaz <[email protected]>
pablo-garay pushed a commit that referenced this pull request May 30, 2024

* Fix P-tuning for Llama based models (#9297), with the same commit message as above.

* Fix the export test

Signed-off-by: Onur Yilmaz <[email protected]>
BoxiangW pushed a commit to BoxiangW/NeMo that referenced this pull request Jun 5, 2024, with the same commit message, additionally signed off by Boxiang Wang <[email protected]>.

janekl pushed a commit that referenced this pull request Jun 12, 2024, with the same commit message, additionally signed off by Jan Lasek <[email protected]>.

rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request Jun 25, 2024, with the same commit message.