forked from vllm-project/vllm
Commit
[issue templates] add some issue templates (vllm-project#3412)
1 parent c17ca8e · commit dfc7740
Showing 11 changed files with 1,005 additions and 0 deletions.
@@ -0,0 +1,22 @@
name: 📚 Documentation
description: Report an issue related to https://docs.vllm.ai/
title: "[Doc]: "
labels: ["doc"]

body:
- type: textarea
  attributes:
    label: 📚 The doc issue
    description: >
      A clear and concise description of what content in https://docs.vllm.ai/ is an issue.
  validations:
    required: true
- type: textarea
  attributes:
    label: Suggest a potential alternative/fix
    description: >
      Tell us how we could improve the documentation in this regard.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
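All of the templates added by this commit use GitHub's issue forms schema. As a rough orientation (not part of the commit), the annotated sketch below shows what the recurring keys do; the template name, label, and placeholder text are hypothetical examples.

```yaml
# Illustrative sketch of the issue forms layout these templates follow.
# The name, label, and placeholder below are hypothetical.
name: 🧪 Example report          # shown in the template chooser
description: One-line summary shown under the template name.
title: "[Example]: "             # default title of issues created from this template
labels: ["example"]              # labels applied automatically on submission

body:
- type: markdown                 # static text rendered in the form, not in the created issue
  attributes:
    value: >
      Instructions shown to the reporter.
- type: textarea                 # free-form field copied into the issue body
  attributes:
    label: What happened?
    placeholder: Describe the problem here.
  validations:
    required: true               # the form cannot be submitted while this is empty
```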
@@ -0,0 +1,39 @@
name: 🛠️ Installation
description: Report an issue here when you hit errors during installation.
title: "[Installation]: "
labels: ["installation"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: How you are installing vllm
    description: |
      Paste the full command you are trying to execute.
    value: |
      ```sh
      pip install -vvv vllm
      ```
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,37 @@
name: 💻 Usage
description: Raise an issue here if you don't know how to use vllm.
title: "[Usage]: "
labels: ["usage"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: How would you like to use vllm
    description: |
      A detailed description of how you want to use vllm.
    value: |
      I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,81 @@
name: 🐛 Bug report
description: Raise an issue here if you find a bug.
title: "[Bug]: "
labels: ["bug"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: 🐛 Describe the bug
    description: |
      Please provide a clear and concise description of what the bug is.

      If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:

      ```python
      from vllm import LLM, SamplingParams

      prompts = [
          "Hello, my name is",
          "The president of the United States is",
          "The capital of France is",
          "The future of AI is",
      ]
      sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
      llm = LLM(model="facebook/opt-125m")
      outputs = llm.generate(prompts, sampling_params)

      # Print the outputs.
      for output in outputs:
          prompt = output.prompt
          generated_text = output.outputs[0].text
          print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
      ```

      If the code is too long (hopefully, it isn't), feel free to put it in a public gist and link it in the issue: https://gist.github.com.

      Please also paste or describe the results you observe instead of the expected results. If you observe an error, please paste the error message including the **full** traceback of the exception. It may be helpful to wrap error messages in triple-backtick blocks (```` ``` ````).
    placeholder: |
      A clear and concise description of what the bug is.

      ```python
      # Sample code to reproduce the problem
      ```

      ```
      The error message you got, with the full traceback.
      ```
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      ⚠️ Please separate bugs in the `transformers` implementation or usage from bugs in `vllm`. If you think anything is wrong with the model's output:

      - Try the `transformers` counterpart first. If the error appears there, please go to [their issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc).

      - If the error only appears in vllm, please provide the detailed script you use to run `transformers` and `vllm`, and highlight the difference and what you expect.

      Thanks for contributing 🎉!
@@ -0,0 +1,31 @@
name: 🚀 Feature request
description: Submit a proposal/request for a new vllm feature
title: "[Feature]: "
labels: ["feature"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: 🚀 The feature, motivation and pitch
    description: >
      A clear and concise description of the feature proposal. Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link it here too.
  validations:
    required: true
- type: textarea
  attributes:
    label: Alternatives
    description: >
      A description of any alternative solutions or features you've considered, if any.
- type: textarea
  attributes:
    label: Additional context
    description: >
      Add any other context or screenshots about the feature request.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,33 @@
name: 🤗 Support request for a new model from huggingface
description: Submit a proposal/request for a new model from huggingface
title: "[New Model]: "
labels: ["new model"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).

      #### We also highly recommend you read https://docs.vllm.ai/en/latest/models/adding_model.html first to understand how to add a new model.
- type: textarea
  attributes:
    label: The model to consider.
    description: >
      A huggingface URL pointing to the model, e.g. https://huggingface.co/openai-community/gpt2 .
  validations:
    required: true
- type: textarea
  attributes:
    label: The closest model vllm already supports.
    description: >
      Here is the list of models already supported by vllm: https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models . Which model is the most similar to the one you want to add support for?
- type: textarea
  attributes:
    label: What's your difficulty in supporting the model you want?
    description: >
      For example, does it need any new operators or a new architecture?
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,51 @@
name: ⚡ Discussion on the performance of vllm
description: Submit a proposal/discussion about the performance of vllm
title: "[Performance]: "
labels: ["performance"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Proposal to improve performance
    description: >
      How do you plan to improve vllm's performance?
  validations:
    required: false
- type: textarea
  attributes:
    label: Report of performance regression
    description: >
      Please provide a detailed description of the performance comparison to confirm the regression. You may want to run the benchmark script at https://github.com/vllm-project/vllm/tree/main/benchmarks .
  validations:
    required: false
- type: textarea
  attributes:
    label: Misc discussion on performance
    description: >
      Anything about performance.
  validations:
    required: false
- type: textarea
  attributes:
    label: Your current environment (if you think it is necessary)
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: false
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,21 @@
name: 🎲 Misc/random discussions that do not fit into the above categories.
description: Submit a discussion as you like. Note that developers are heavily overloaded and we mainly rely on community users to answer these issues.
title: "[Misc]: "
labels: ["misc"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Anything you want to discuss about vllm.
    description: >
      Anything you want to discuss about vllm.
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1 @@
blank_issues_enabled: false
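Setting `blank_issues_enabled: false` hides GitHub's "Open a blank issue" option, so reporters must pick one of the templates above. As a rough sketch (not part of this commit), the same config file could also route general questions elsewhere via `contact_links`; the link below is a hypothetical example.

```yaml
# Sketch of an extended config.yml, assuming GitHub's standard
# template-chooser options; the contact link below is hypothetical.
blank_issues_enabled: false      # hide the "Open a blank issue" option
contact_links:
  - name: vLLM community discussions
    url: https://github.com/vllm-project/vllm/discussions
    about: Ask general questions here instead of opening an issue.
```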
@@ -0,0 +1 @@
collect_env.py