
feat: Add support for external cache storage #412

Open · wants to merge 21 commits into base: v9

Conversation


@nalexn nalexn commented Oct 25, 2024

Requirements

  • I have added test coverage for new or changed functionality
  • I have followed the repository's pull request submission guidelines
  • I have validated my changes against all supported platform versions

Related issues

Provide links to any issues in this repository or elsewhere relating to this pull request: #316
Also provides an alternative to #408 when using LDInMemoryCache

Describe the solution you've provided

This PR adds a new property to LDConfig that allows replacing the default cache storage (UserDefaults) with either LDInMemoryCache, LDFileCache, or a custom cache the user implements by conforming to the KeyedValueCaching protocol.

var config = LDConfig(mobileKey: ..., autoEnvAttributes: .disabled)
config.cacheFactory = LDFileCache.factory(encryptionKey: ...)
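For illustration, a custom cache might look like the following. This is a minimal sketch: the `KeyedValueCaching` members below paraphrase the SDK's protocol and may not match its exact signatures, and `MyThreadSafeCache` is a hypothetical name.

```swift
import Foundation

// Paraphrased sketch of the SDK's KeyedValueCaching protocol; the real
// protocol in ios-client-sdk may declare different or additional members.
protocol KeyedValueCaching {
    func set(_ value: Data, forKey key: String)
    func data(forKey key: String) -> Data?
    func removeObject(forKey key: String)
    func keys() -> [String]
}

// A minimal thread-safe in-memory cache, similar in spirit to LDInMemoryCache.
final class MyThreadSafeCache: KeyedValueCaching {
    private var storage: [String: Data] = [:]
    private let lock = NSLock()

    func set(_ value: Data, forKey key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }

    func data(forKey key: String) -> Data? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }

    func removeObject(forKey key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = nil
    }

    func keys() -> [String] {
        lock.lock()
        defer { lock.unlock() }
        return Array(storage.keys)
    }
}
```

A config could then plug it in with something like `config.cacheFactory = { _, _ in MyThreadSafeCache() }`, assuming the two-argument CacheFactory closure (cache key plus logger) introduced by this PR.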

Describe alternatives you've considered

Provide a clear and concise description of any alternative solutions or features you've considered.

Additional context

  • 100% backward compatible: the inner workings of the library don't change unless LDConfig is modified
  • Both LDInMemoryCache and LDFileCache are thread-safe and fully covered with tests
  • LDFileCache supports optional encryption of the file with the AES algorithm
  • LDFileCache supports migration from UserDefaults, so there is no flag flicker when updating
  • Writing to the file happens on a background queue and is throttled

Alexey Naumov and others added 20 commits October 23, 2024 17:47
* feat: [UXP-3578] Allow changing the default flag cache storage

* feat: [UXP-3578] Add LDInMemoryCache

* feat: [UXP-3578] Add KeyedValueCaching tests

* feat: [UXP-3578] Add foundation for LDFileCache

* feat: [UXP-3578] Add debounce

* refactor: [UXP-3578] Cache factory name and parameters

* feat: [UXP-3578] Add file IO

* feat: [UXP-3578] Add tests for file IO

* refactor: [UXP-3578] Use JSONDecoder

* chore: [UXP-3578] Remove unused method

* refactor: [UXP-3578] Reorder factory params

* feat: [UXP-3578] Tweak the tests

* feat: [UXP-3578] Support file encryption

* chore: [UXP-3578] Add files to project

* fix: [UXP-3578] Make cacheKey optional

* feat: [UXP-3578] Add cache storage migration

* feat: [UXP-3578] Add cache storage migration test

* feat: [UXP-3578] Use sha256hex as file name

* fix: [UXP-3578] FileQueue not retained
@nalexn nalexn requested a review from a team as a code owner October 25, 2024 18:23
@tanderson-ld
Contributor

Hi @nalexn, thank you for your contribution. We on the SDK team will take a look early next week and provide our thoughts and feedback.

I did take a brief look through just now. My initial reaction is your code is thoughtful, so thank you for that. Some tasks popped into my head that I want to document to come back to during review:

  • During review, examine guarantees around data making it to persistence.
  • Discuss migration pros/cons.

@nalexn nalexn changed the title Adds support for external cache storage feat: Add support for external cache storage Oct 26, 2024
Comment on lines +759 to +760
cacheConverter.migrateStorage(serviceFactory: serviceFactory, keysToMigrate: keys, from: LDConfig.Defaults.cacheFactory)
cacheConverter.convertCacheData(serviceFactory: serviceFactory, keysToConvert: keys, maxCachedContexts: config.maxCachedContexts)
Member

With caching implementations being as wildly different as they are, it seems like convertCacheData should be implemented alongside the cache implementation.

So I think the cache protocol probably needs a method to retrieve an optional converter. If provided, then we pass in the list of mobile keys and the max cached contexts option. From there, the implementation is cache specific.

We should also treat migration and conversion as the same concept. If someone wants to migrate between two completely different storage technologies, they can write a custom cacher to handle that. Seems like it would be straightforward enough.
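A rough sketch of that idea follows; all of these names are illustrative, not the SDK's actual API.

```swift
import Foundation

// Illustrative sketch of the suggestion above: each cache implementation
// exposes an optional converter, and the SDK invokes it when present.
// None of these identifiers come from the SDK itself.
protocol CacheConverting {
    func convert(mobileKeys: [String], maxCachedContexts: Int)
}

protocol ConvertibleCache {
    // nil means this cache has nothing to migrate or convert
    var converter: CacheConverting? { get }
}

// A brand-new storage technology with no legacy data would simply return nil.
struct FreshCache: ConvertibleCache {
    var converter: CacheConverting? { nil }
}

// The SDK-side hook: run conversion only when the cache provides a converter.
func runConversionIfNeeded(cache: ConvertibleCache, mobileKeys: [String], maxCachedContexts: Int) -> Bool {
    guard let converter = cache.converter else { return false }
    converter.convert(mobileKeys: mobileKeys, maxCachedContexts: maxCachedContexts)
    return true
}
```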

@@ -57,18 +57,24 @@ protocol FeatureFlagCaching {
func saveCachedData(_ storedItems: StoredItems, cacheKey: String, contextHash: String, lastUpdated: Date, etag: String?)
}

extension MobileKey {
func cacheKey() -> String {
Member

What do you think about having the cache define its own key? I don't see any reason why completely different implementations need to use the same format as this one.

Author

I didn't consider this important enough to add (trying to keep the PR small), but it certainly can be done. Ideally, a wider refactoring is needed here. The parameter name cacheKey is confusing because it means different entities in different contexts, while always being a String. It cost me a couple of hours of painful debugging to realize that:

  • makeKeyedValueCache(cacheKey: String?)
  • func getCachedData(cacheKey: String, ...) and func saveCachedData(_ storedItems: StoredItems, cacheKey: String, ...)

use cacheKey values that look different and are generated differently. The first one usually looks like com.launchdarkly.client...., and the second one is context.fullyQualifiedHashedKey()

Member

That's fair. It may be something we add ourselves after this lands, then.

@tanderson-ld that is something you and I can decide on then if we want this now, later, or not at all.

import Foundation
import OSLog

public final class LDFileCache: KeyedValueCaching {
Member

I'm hesitant to include this in the SDK. While this implementation seems sufficient for now, over time it becomes another point of divergence we need to maintain and test. This is doubly true for an implementation that deals with encryption in any sort of way.

In general, I think we should avoid assuming that encryption burden for our customers. This would be a good candidate for a standalone library that integrates with the LD SDK, if you wanted to make your implementation available to the public.

Contributor

An additional consideration with the debouncer is that there will be a change in when data becomes durable, where that happens, and the probability of it happening.

With UserDefaults, it is my understanding that the API guarantees eventual durability except in the case of OS/phone battery failure. The issue you linked to in the PR description where flicker is seen on relaunch is a result of the OS not sharing the view of in memory caching of UserDefaults across processes. Definitely an issue, but not trivial to address without blocking logic somewhere.

With the debouncer, there is a window of time where if two updates come in barely before the application is killed, the second update will affect flag evaluations for a moment, but then may not make it to persistence. This could have the same effect of "flicker" where the next app launch initializes with flag data from "the past", similar to the UserDefaults case.

Based on @keelerm84's comment above, if the debouncer goes with this file, this behavior would be in your control.
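The window being described can be seen in a minimal debouncer sketch (not the PR's implementation): each new call cancels the previously scheduled write, so a write pending when the process terminates never executes.

```swift
import Foundation

// Minimal debouncer sketch (not the PR's code). Each call cancels the
// previously scheduled work item, so only the last write within the
// interval is persisted -- and if the process dies inside that interval,
// the pending write is lost entirely.
final class Debouncer {
    private let queue: DispatchQueue
    private let lock = NSLock()
    private var pending: DispatchWorkItem?

    init(queue: DispatchQueue = DispatchQueue(label: "debounce")) {
        self.queue = queue
    }

    func debounce(interval: DispatchTimeInterval, action: @escaping () -> Void) {
        lock.lock()
        defer { lock.unlock() }
        pending?.cancel()                    // drop the earlier, superseded write
        let item = DispatchWorkItem(block: action)
        pending = item
        queue.asyncAfter(deadline: .now() + interval, execute: item)
    }
}
```

Two rapid calls result in only the second action running; the first is silently dropped, which is exactly the durability trade-off discussed above.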

@@ -0,0 +1,52 @@
import Foundation

public final class LDInMemoryCache: KeyedValueCaching {
Member

I'm okay with keeping the in memory cache implementation since the maintenance burden is greatly reduced. Also this would immediately prove useful for unit testing.

dictionaryRepresentation().keys.map { String($0) }
}
public extension LDConfig {
typealias CacheFactory = (_ cacheKey: String?, _ logger: OSLog) -> KeyedValueCaching
Member

Is there any reason to make the cache key optional? It forces a decision on what a default key would be in your cache implementations which I would prefer to avoid.

Author

This was dictated by the convertCacheData function, which is migrating UserDefaults with key = nil

Member

I think that's actually in line with my later comment about this conversion being something associated with a specific cache implementation. Our existing implementation could take its cache key, but when doing the conversion it can ignore that key, since it knows it previously didn't use one.

Also, at this point, I think we could argue that our cache converter could probably be cleaned up some. I don't think we have to support a v5 -> v9 migration path, so we could probably remove those steps entirely. What do you think, @tanderson-ld?

Contributor

@tanderson-ld tanderson-ld Oct 29, 2024

I agree. Dropping support for converting v5 to remove the optionality of the cacheKey is a good tradeoff.

I think these are the pertinent lines.

        // Remove V5 cache data
        let standardDefaults = serviceFactory.makeKeyedValueCache(cacheKey: nil)
        standardDefaults.removeObject(forKey: "com.launchdarkly.dataManager.userEnvironments")

return { cacheKey, _ in
instancesLock.lock()
defer { instancesLock.unlock() }
let cacheKey = cacheKey ?? "default"
Member

As mentioned above, if the cache key isn't nillable, then we don't have to hard code this "default" which I would really rather avoid.


extension DispatchQueue {

func debouncer() -> Debouncer {
Member

I can see this method being misused. It would seem like I could call debouncer on any queue from two different areas, and it would handle debouncing both of those requests. But that's not the case since it would be two instances with different locks.

Author

Well, that's a debouncer, not a debounce method. If you don't retain it, the work item won't be executed at all. Obviously it can be misused, just like the entirety of GCD. Happy to see a suggestion for an alternative approach. Later I found a throttle in the codebase; maybe something could reuse it.

Member

Well, that's a debouncer, not a debounce method.

Yeah, I see that. I guess it's the fact that it's an extension on the queue that makes me think it would be unique across usages. DispatchQueue.main.debouncer().debounce(...) seems like a convenient usage that I'd probably be tempted to attempt.

We don't have to worry about it.
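The concern above can be demonstrated in a couple of lines: because the extension vends a fresh instance on every call, two call sites on the same queue get independent debouncers. The names below are hypothetical stand-ins for the PR's types.

```swift
import Foundation

// Sketch of the pitfall: `debouncerSketch()` returns a new instance per
// call, so debouncing state is NOT shared across call sites on the same
// queue -- each instance carries its own pending work item and lock.
final class DebouncerSketch {
    private var pending: DispatchWorkItem?   // per-instance state
}

extension DispatchQueue {
    func debouncerSketch() -> DebouncerSketch { DebouncerSketch() }
}

let queue = DispatchQueue(label: "work")
let first = queue.debouncerSketch()
let second = queue.debouncerSketch()
// Distinct instances: actions scheduled through `first` and `second`
// are debounced separately, not against each other.
assert(first !== second)
```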

}

func debounce(interval: DispatchTimeInterval, action: @escaping () -> Void) {
lock.lock(); defer { lock.unlock() }
Member

Suggested change
lock.lock(); defer { lock.unlock() }
lock.lock()
defer { lock.unlock() }

@@ -17,6 +27,48 @@ class Util {
}
return Data(digest)
}

class func encrypt(_ data: Data, encryptionKey: String, cacheKey: String) throws -> Data {
Member

As mentioned above, this isn't something we are ready to bring into the SDK. This, along with the file cache, can be tucked into that same external library.

@@ -253,6 +253,11 @@ public struct LDConfig {
/// The default logger for the SDK. Can be overridden to provide customization.
static let logger: OSLog = OSLog(subsystem: "com.launchdarkly", category: "ios-client-sdk")

/// The default cache for feature flags is UserDefaults
static let cacheFactory: CacheFactory = { cacheKey, _ in
UserDefaults(suiteName: cacheKey)!
Member

I realize we were previously forcing the unwrap here and you just moved this bit of code.

Now that we have an in memory cache that can't fail, what about having this fall back to that if this instantiation fails? Then we don't have to worry about the force unwrap anymore.
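A sketch of that fallback might look like the following; `KeyedValueCaching` and `InMemoryStub` are stand-ins here so the snippet is self-contained, not the SDK's real types.

```swift
import Foundation

// Sketch of the suggested fallback: if UserDefaults(suiteName:) returns
// nil (it does for invalid suite names such as the global domain), use an
// in-memory cache instead of force-unwrapping.
protocol KeyedValueCaching {}
extension UserDefaults: KeyedValueCaching {}
final class InMemoryStub: KeyedValueCaching {}   // stand-in for LDInMemoryCache

let cacheFactory: (String?) -> KeyedValueCaching = { cacheKey in
    if let defaults = UserDefaults(suiteName: cacheKey) {
        return defaults
    }
    // An in-memory cache can never fail to instantiate, so no crash path remains.
    return InMemoryStub()
}
```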

}
return "com.launchdarkly.client.\(cacheKey)"
}
}
Contributor

Is this extension organizational or is there another reason for adding it?

Author

Code reuse


@nalexn
Author

nalexn commented Nov 5, 2024

All the suggested improvements and refactorings totally make sense, but my team is limited on the time we can dedicate to solving our peculiar problem, so we'll likely just use our branch for the time being. The PR can be closed or updated at your discretion.

@keelerm84
Member

All the suggested improvements and refactorings totally make sense, but my team is limited on the time we can dedicate to solving our peculiar problem, so we'll likely just use our branch for the time being. The PR can be closed or updated at your discretion.

Thank you for letting me know. I'll take a look at folding in those changes for you.
