
Remove TTL header from cache key #1048

Merged: 2 commits into tidbyt:main on Apr 24, 2024

Conversation

dinosaursrarr
Contributor

See [discussion on Discord](https://discord.com/channels/928484660785336380/928485908842426389/1231791730513412136). Multiple times, app developers have wanted to cache a piece of content until a particular time, rather than for a particular period. That requires setting `ttl_seconds` to a variable amount, calculated as the time remaining.
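
The calculation is just "seconds until the target time". A tiny sketch in Go for illustration only (in a real app this would live in Starlark, and the target here, the top of the next hour, is an arbitrary example):

```go
package main

import (
	"fmt"
	"time"
)

// ttlUntil returns the number of seconds to cache a response so that the
// entry expires at the given target time.
func ttlUntil(target time.Time) int {
	remaining := int(time.Until(target).Seconds())
	if remaining < 1 {
		return 1 // never ask for a zero or negative TTL
	}
	return remaining
}

func main() {
	nextHour := time.Now().Truncate(time.Hour).Add(time.Hour)
	fmt.Printf("ttl_seconds = %d (expires at %s)\n", ttlUntil(nextHour), nextHour.Format(time.RFC3339))
}
```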

Currently, the cache client doesn't support that. It generates the cache key from the entire request, which includes the TTL seconds in a header. This change removes that header before generating the key. That seems sensible: the key should describe the content and how it would be fetched, while the TTL is about what to do with the response.
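
In rough terms, the change amounts to deleting the TTL header from a copy of the request before hashing it. A minimal sketch, not the actual pixlet implementation, assuming a hypothetical `X-Cache-TTL-Seconds` header and a SHA-256 key (the real client's header name and hashing scheme may differ):

```go
package cachekey

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
	"net/http/httputil"
)

// ttlHeader is a placeholder name; the real cache client's header may differ.
const ttlHeader = "X-Cache-TTL-Seconds"

// CacheKey hashes the outgoing request with the TTL header removed, so that
// changing ttl_seconds no longer changes the key.
func CacheKey(req *http.Request) (string, error) {
	// Clone so the caller's request (and its TTL header) is left untouched.
	clone := req.Clone(req.Context())
	clone.Header.Del(ttlHeader)

	// Serialize the request (method, URL, headers; body omitted in this sketch).
	raw, err := httputil.DumpRequestOut(clone, false)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}
```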

As a transitional step, this writes to the new TTL-less keys but continues to look up keys that include TTLs. Once all the old cache entries have expired, we can stop looking up the old keys.
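
This migration step was dropped later in the review (see below), but as described it would be a dual-read, single-write path. Continuing the sketch above (same package and imports), with an illustrative `Cache` interface and a `legacyCacheKey` that reproduces the old scheme by hashing the request with the TTL header still present:

```go
// Cache is an illustrative interface, not the real pixlet cache type.
type Cache interface {
	Get(key string) ([]byte, bool)
	Set(key string, value []byte, ttlSeconds int)
}

// legacyCacheKey hashes the request without stripping the TTL header.
func legacyCacheKey(req *http.Request) (string, error) {
	raw, err := httputil.DumpRequestOut(req, false)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

// cachedFetch writes only under the new TTL-less key, but falls back to the
// old TTL-bearing key on reads until those entries have expired.
func cachedFetch(c Cache, req *http.Request, ttlSeconds int, fetch func(*http.Request) ([]byte, error)) ([]byte, error) {
	newKey, err := CacheKey(req)
	if err != nil {
		return nil, err
	}
	if body, ok := c.Get(newKey); ok {
		return body, nil
	}
	if oldKey, err := legacyCacheKey(req); err == nil {
		if body, ok := c.Get(oldKey); ok {
			return body, nil
		}
	}
	body, err := fetch(req)
	if err != nil {
		return nil, err
	}
	c.Set(newKey, body, ttlSeconds)
	return body, nil
}
```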

@matslina
Contributor

I believe the main motivation for including TTL in cache key was to protect against apps accidentally using a giant TTL, which would then leave the developer unable to have a fix (with lower TTL) take effect until the (giant) TTL has expired. With TTL in cache key, changing the TTL is equivalent to evicting the old record, from the app's perspective.

But I think we can do this. I don't recall ever seeing anyone accidentally using too large a TTL, but plenty of people have wanted to dynamically change TTL without read-through.

@dinosaursrarr
Contributor Author

dinosaursrarr commented Apr 24, 2024

Seems like you could also introduce a special header on the request to overwrite any existing cache entries, in case the giant TTL risk became a real problem.

@matslina
Contributor

@dinosaursrarr I don't think we need the migration steps in here; would you mind updating the PR? The actual production cache code lives elsewhere, so it won't matter, and even if it did, I think I'd prefer just getting the change out, verifying the world didn't break, and calling it a day. =)

@matslina
Contributor

> Seems like you could also introduce a special header on the request to overwrite any existing cache entries, in case the giant TTL risk became a real problem.

Yeah that's an option if it's ever needed.

@dinosaursrarr force-pushed the cachekey branch 2 times, most recently from 0aa2145 to 147b036 on April 24, 2024 at 19:49
The Tidbyt team say they're happy to jump to the new world, even if it means a one-off spike in cache misses.
@dinosaursrarr
Contributor Author

@matslina PR updated as requested

@matslina
Contributor

Many thanks!

@matslina merged commit 452de9a into tidbyt:main on Apr 24, 2024
7 checks passed