feat: add prompt caching to improve response times #134

Closed · wants to merge 2 commits

Conversation

vgcman16

This PR adds caching for enhanced prompts to improve response times and reduce API calls.

### Changes

- Added a new prompt-cache store with a 24-hour TTL for cached prompts (sketched below)
- Modified the enhancer API to check the cache before making LLM calls
- Added visual indicators in the UI to show when responses come from the cache:
  - Clock icon for cached responses
  - "From cache" text instead of "Prompt enhanced"
  - Cache status header in the API response
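A minimal sketch of what such a prompt-cache store might look like, assuming nanostores' `map` store; the names (`promptCache`, `getCachedPrompt`, `setCachedPrompt`) and the entry shape are illustrative, not the PR's actual code:

```ts
import { map } from 'nanostores';

interface CacheEntry {
  enhancedPrompt: string;
  timestamp: number; // when the entry was written
}

const CACHE_TTL_MS = 24 * 60 * 60 * 1000; // 24-hour TTL

// In-memory cache keyed by the original prompt text.
export const promptCache = map<Record<string, CacheEntry | undefined>>({});

export function getCachedPrompt(prompt: string): string | undefined {
  const entry = promptCache.get()[prompt];

  if (!entry) {
    return undefined;
  }

  if (Date.now() - entry.timestamp > CACHE_TTL_MS) {
    // Expired: drop the entry and treat it as a miss.
    promptCache.setKey(prompt, undefined);
    return undefined;
  }

  return entry.enhancedPrompt;
}

export function setCachedPrompt(prompt: string, enhancedPrompt: string): void {
  promptCache.setKey(prompt, { enhancedPrompt, timestamp: Date.now() });
}
```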
### Technical Details

- Cache entries expire after 24 hours
- The cache is stored in memory using nanostores
- Cache status is communicated via the `x-from-cache` response header (see the sketch after this list)
- The UI updates dynamically based on cache status
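A rough sketch of the cache-check-before-LLM flow and the `x-from-cache` header in the enhancer endpoint; the handler signature and the `enhancePromptWithLLM` stub are assumptions standing in for the existing LLM call:

```ts
import { getCachedPrompt, setCachedPrompt } from './prompt-cache';

// Placeholder for the existing LLM enhancement call (assumption, not real code).
async function enhancePromptWithLLM(_prompt: string): Promise<string> {
  throw new Error('stand-in for the real LLM call');
}

export async function action({ request }: { request: Request }): Promise<Response> {
  const { message } = (await request.json()) as { message: string };

  // Serve a fresh cache entry without touching the LLM.
  const cached = getCachedPrompt(message);
  if (cached !== undefined) {
    return new Response(cached, { headers: { 'x-from-cache': 'true' } });
  }

  // Cache miss: enhance via the LLM and store the result for later requests.
  const enhanced = await enhancePromptWithLLM(message);
  setCachedPrompt(message, enhanced);

  return new Response(enhanced, { headers: { 'x-from-cache': 'false' } });
}
```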
### Testing

- Create a new prompt: shows the stars icon, makes an LLM call
- Repeat the same prompt: shows the clock icon, returns the cached result (indicator sketched below)
- Wait 24 hours: the cache expires, a fresh LLM call is made
- Modify the prompt: makes a new LLM call, caches the new result
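The stars/clock switch referenced in these steps could be driven by the header roughly as follows; the component name, prop, and icon class names are illustrative assumptions:

```tsx
interface EnhanceStatusProps {
  // True when the enhancer response carried `x-from-cache: true`.
  fromCache: boolean;
}

export function EnhanceStatus({ fromCache }: EnhanceStatusProps) {
  return (
    <div className="flex items-center gap-1 text-sm">
      {/* Clock icon for cached responses, stars icon for fresh enhancements. */}
      <span className={fromCache ? 'i-ph:clock' : 'i-ph:stars'} />
      <span>{fromCache ? 'From cache' : 'Prompt enhanced'}</span>
    </div>
  );
}
```

On the client, `fromCache` would be derived from `response.headers.get('x-from-cache') === 'true'` on the enhancer response.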
### Performance Impact

- Reduces API calls for frequently used prompts
- Improves response times for cached prompts
- Minimal memory footprint thanks to the 24-hour TTL

@thecodacus (Collaborator)

Tried this PR, but I am getting a local storage access issue. I am running it from a GitHub Codespace.

[screenshot of the error attached]

@vgcman16 closed this by deleting the head repository on Nov 18, 2024.