
[Feature Request] Stop inferencing button #48

Open
FreedomCoder-dev opened this issue Dec 29, 2024 · 0 comments

Comments

@FreedomCoder-dev

Reference Issues

No response

Summary

A stop button would let the user halt generation once a sufficient response has already been produced, or when the model starts misbehaving, so the LLM does not keep consuming additional tokens and time.

Basic Example

  1. Irrelevant or Off-Topic Responses
    Use Case: If the model starts generating content that is irrelevant or strays from the intended topic, the user can stop the generation to avoid wasting time or resources.
    Example: You ask for a summary of a scientific paper, but the model starts discussing unrelated theories.
  2. Excessive Length
    Use Case: When the model's response becomes excessively long and verbose, the user can stop it to get a more concise answer.
    Example: You request a brief explanation of a concept, but the model begins to write a lengthy essay.
  3. Repetitive Content
    Use Case: When the model begins to repeat itself or generate redundant information, the user can stop it rather than wait out further repetition.
    Example: You ask for a list of benefits of exercise, but the model keeps repeating the same points.
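The issue doesn't propose an implementation, but here is a minimal sketch of the underlying mechanism, assuming a streaming backend that decodes one token at a time. All names here (`generate_stream`, `on_stop_clicked`, `model.generate_next_token`) are hypothetical stand-ins, not this project's API:

```python
import threading

# Hypothetical sketch: a stop flag checked between tokens during
# streaming generation. `model.generate_next_token` stands in for
# whatever per-token decoding call the backend actually exposes.
stop_requested = threading.Event()

def on_stop_clicked():
    """Handler for the proposed Stop button: signal the generation loop."""
    stop_requested.set()

def generate_stream(model, prompt, max_tokens=512):
    """Yield tokens one at a time, aborting as soon as a stop is requested."""
    stop_requested.clear()
    context = prompt
    for _ in range(max_tokens):
        if stop_requested.is_set():
            break  # user pressed Stop: decode no further tokens
        token = model.generate_next_token(context)  # hypothetical API
        if token is None:  # end-of-sequence
            break
        context += token
        yield token
```

Checking the flag between tokens, rather than killing the worker thread, lets the backend stop cleanly at a token boundary, keep whatever partial response was already streamed, and free compute immediately, which covers all three use cases above.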

Drawbacks

No response

Unresolved questions

No response
