fix: token speed should not be calculated based on state updates #4159

Merged: 2 commits merged into dev from fix/correct-token-speed-calculating on Nov 29, 2024

Conversation

@louis-jan (Contributor) commented Nov 29, 2024

Describe Your Changes

This PR addresses an issue where message rendering could affect the reported token speed, which is not intended. Token speed should be calculated from the tokens received, not influenced by component rendering.

Ensure that rendering complex message content does not reduce the reported token speed (currently it does).

Token speed is now calculated on message receive, assuming each event yields one token.

The formula for calculating token speed is given by:

$$ \text{Token Speed} = \frac{\text{Total Tokens}}{\text{Duration}} $$

Where:

  • Total Tokens is the total number of tokens received so far.
  • Duration is the time taken in seconds to receive those tokens.
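
As a minimal illustration of this calculation (a sketch, not the project's actual code; the `SpeedSample` type and `updateSpeed` helper are hypothetical), assuming each streamed event yields exactly one token:

```ts
// Hypothetical sketch: accumulate the token count and derive tokens/second
// from the elapsed time since the first token arrived.
type SpeedSample = {
  startTimestamp: number // when the first token arrived (ms since epoch)
  tokenCount: number     // tokens received so far
  tokenSpeed: number     // tokens per second
}

function updateSpeed(prev: SpeedSample | undefined, now = Date.now()): SpeedSample {
  if (!prev) {
    // First token: no elapsed time yet, so the speed is not meaningful.
    return { startTimestamp: now, tokenCount: 1, tokenSpeed: 0 }
  }
  const tokenCount = prev.tokenCount + 1
  const elapsedSeconds = (now - prev.startTimestamp) / 1000
  const tokenSpeed = elapsedSeconds > 0 ? tokenCount / elapsedSeconds : prev.tokenSpeed
  return { startTimestamp: prev.startTimestamp, tokenCount, tokenSpeed }
}
```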

The screenshot below demonstrates a case where the conversation is very long but the token speed is not reduced (previously it dropped to ~3, where it should be roughly 8x that).
[Screenshot: CleanShot 2024-11-29 at 09 41 06]

Changes made

  1. New State Management:

    • Added a new tokenSpeedAtom in ChatMessage.atom.ts to store token processing speed details using Jotai (see the sketch after this list).
  2. ModelHandler.tsx:

    • Imported tokenSpeedAtom.
    • Added functionality to calculate and update the token speed in the ModelHandler component. This involves using setTokenSpeed to track the speed at which tokens are processed for each message.
  3. useSendChatMessage.ts:

    • Refactored to remove queuedMessageAtom, replacing it by setting tokenSpeedAtom to undefined when a new message is sent, which resets the state for new calculations (see the wiring sketch after the summary below).
  4. index.tsx (SimpleTextMessage Component):

    • Simplified the logic by using useAtomValue to get the current token speed from tokenSpeedAtom.
    • Adjusted the UI to display the token speed only if the current message matches and tokenSpeed is above zero, using the new data structure.
  5. Types:

    • Added token.d.ts defining the TokenSpeed type, which includes fields for message, tokenSpeed, tokenCount, and lastTimestamp, used to manage token speed calculations.
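
A minimal sketch of how items 1, 2, and 5 could fit together, assuming the TokenSpeed field names listed above; the atom declaration, the `onTokenEvent` helper, and the running-average update are illustrative assumptions rather than the exact diff.

```ts
// token.d.ts — TokenSpeed shape as described above (exact declaration may differ)
type TokenSpeed = {
  message: string       // id of the message whose tokens are being counted
  tokenSpeed: number    // current estimate, tokens per second
  tokenCount: number    // tokens received so far for this message
  lastTimestamp: number // arrival time of the most recent token (ms)
}

// ChatMessage.atom.ts — a Jotai atom holding the latest token-speed state
import { atom } from 'jotai'
export const tokenSpeedAtom = atom<TokenSpeed | undefined>(undefined)

// ModelHandler — one possible per-token update using only the fields above:
// keep a running average of the instantaneous tokens-per-second rate.
const onTokenEvent = (
  messageId: string,
  setTokenSpeed: (update: (prev: TokenSpeed | undefined) => TokenSpeed) => void
) =>
  setTokenSpeed((prev) => {
    const now = Date.now()
    if (!prev || prev.message !== messageId) {
      // First token of a new message: nothing to average yet.
      return { message: messageId, tokenSpeed: 0, tokenCount: 1, lastTimestamp: now }
    }
    const intervalSec = Math.max((now - prev.lastTimestamp) / 1000, 1e-3)
    const instantSpeed = 1 / intervalSec // one token per event
    const tokenCount = prev.tokenCount + 1
    const tokenSpeed = (prev.tokenSpeed * prev.tokenCount + instantSpeed) / tokenCount
    return { message: messageId, tokenSpeed, tokenCount, lastTimestamp: now }
  })
```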

Overall, this change adds the ability to calculate, store, and display the speed at which tokens are processed for chat messages.
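
Below is a hedged wiring sketch for items 3 and 4; the hook shape, the TokenSpeedIndicator component, and the messageId prop are illustrative assumptions, and only tokenSpeedAtom and the reset-to-undefined behaviour come from the description above.

```tsx
import React from 'react'
import { useAtomValue, useSetAtom } from 'jotai'
import { tokenSpeedAtom } from './ChatMessage.atom' // path assumed for the sketch

// useSendChatMessage.ts — reset the speed state whenever a new message is sent,
// taking over the role previously played by queuedMessageAtom.
function useSendChatMessage() {
  const setTokenSpeed = useSetAtom(tokenSpeedAtom)
  return async (text: string) => {
    setTokenSpeed(undefined) // start fresh for the next calculation
    // ...build and send `text` as before...
  }
}

// SimpleTextMessage — read the atom and show the speed only when the current
// message matches and the computed speed is above zero.
function TokenSpeedIndicator({ messageId }: { messageId: string }) {
  const tokenSpeed = useAtomValue(tokenSpeedAtom)
  if (!tokenSpeed || tokenSpeed.message !== messageId || tokenSpeed.tokenSpeed <= 0) {
    return null
  }
  return <span>{tokenSpeed.tokenSpeed.toFixed(2)} tokens/s</span>
}
```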

@github-actions bot added the type: bug (Something isn't working) label on Nov 29, 2024

github-actions bot commented Nov 29, 2024

@louis-jan requested review from a team and namchuai on November 29, 2024 04:42
@hiento09 (Collaborator) left a comment


LGTM

@louis-jan merged commit 0f834a6 into dev on Nov 29, 2024
9 checks passed
@louis-jan deleted the fix/correct-token-speed-calculating branch on November 29, 2024 04:52
@github-actions bot added this to the v0.5.10 milestone on Nov 29, 2024