feat: add grpc sync flag store #84
base: main
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##             main      #84      +/-   ##
==========================================
+ Coverage   90.55%   93.21%   +2.65%
==========================================
  Files           8       15       +7
  Lines         180      560     +380
==========================================
+ Hits          163      522     +359
- Misses         17       38      +21
```

Flags with carried forward coverage won't be shown.
Awesome work @colebaileygit! I left a few comments and questions merely to help push this forward, without being exhaustive. I know this is still in draft.
```yaml
context: test-harness
dockerfile: sync/Dockerfile.unstable
ports:
  - 9091:9090
```
Let's make sure this file is excluded from the published package. We can do so from the `pyproject.toml`.
Good point. Checking this now, it also includes `tests`, `test-harness`, etc. Should we simplify to only include `src`?
```toml
[tool.hatch.build.targets.sdist]
include = [
    "src",
]
```
I'd have to check the generated build to confirm. If you can, please make this change in a separate PR.
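(For reference, one way to check: build the sdist and list its contents. This assumes hatch is the build backend, as the `[tool.hatch...]` section above suggests.)

```sh
# Build the source distribution, then list what actually lands in it.
hatch build -t sdist
tar -tzf dist/*.tar.gz
```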
```python
from openfeature.provider.provider import AbstractProvider

from ....config import Config
from ....proto.flagd.sync.v1 import sync_pb2, sync_pb2_grpc
```
For more than one level up, an absolute import is much easier to read and to update if necessary.
I was getting some mypy errors when using absolute imports; I need to check how to fix those:

```
providers/openfeature-provider-flagd/src/openfeature/contrib/provider/flagd/resolvers/process/connector/grpc_watcher.py:10: error: Cannot find implementation or library stub for module named "openfeature.contrib.provider.flagd.proto.flagd.sync.v1"  [import-not-found]
    from openfeature.contrib.provider.flagd.proto.flagd.sync.v1 import (
    ^
```
I couldn't figure this out in a simple way. It seems related to running pre-commit at the root of a "mono-repo", but I don't see any clear resources on how to structure it better. Maybe this is a follow-up issue?
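(One possible direction, untested here and with paths that are assumptions about this repo's layout: mypy's namespace-package options are often the fix for src-layout mono-repos.)

```toml
# Hypothetical sketch for providers/openfeature-provider-flagd/pyproject.toml
[tool.mypy]
mypy_path = "src"
namespace_packages = true
explicit_package_bases = true
```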
This approach may work as long as the number of packages remains manageable.
Overall looks good 🙌 Nice work! I left a few comments; check them when you can.
```diff
@@ -42,6 +44,14 @@ def __init__(  # noqa: PLR0913
             env_or_default("FLAGD_TLS", False, cast=str_to_bool) if tls is None else tls
         )
         self.timeout = 5 if timeout is None else timeout
+        self.retry_backoff_seconds: float = (
+            float(env_or_default("FLAGD_RETRY_BACKOFF_SECONDS", 2.0))
```
Should we do `cast=float` here for consistency?
Yes, but then mypy complains about the types. After fighting with mypy a bit, I realised that pulling the cast outside of the function is the cleanest way to get clean typing. I would instead suggest refactoring to remove this helper function everywhere and just use the Python standard library directly. This abstraction doesn't add much, if you ask me:
```python
self.retry_backoff_seconds: float = (
    env_or_default("FLAGD_RETRY_BACKOFF_SECONDS", 2.0, cast=float)
    if retry_backoff_seconds is None
    else retry_backoff_seconds
)
```

vs

```python
self.retry_backoff_seconds: float = (
    float(os.environ.get("FLAGD_RETRY_BACKOFF_SECONDS", 2.0))
    if retry_backoff_seconds is None
    else retry_backoff_seconds
)
```
```diff
@@ -2,7 +2,7 @@
 """Client and server classes corresponding to protobuf-defined services."""
 import grpc
 
-from flagd.evaluation.v1 import evaluation_pb2 as flagd_dot_evaluation_dot_v1_dot_evaluation__pb2
+from . import evaluation_pb2 as flagd_dot_evaluation_dot_v1_dot_evaluation__pb2
```
Are these changes automated after gRPC code generation?
Nope, I actually didn't notice that script. When I try to run it, I get this file-not-found error. Does it need to be fixed, and maybe documented in the contributing guide?

```
Failure: could not read file schemas/protobuf/buf.gen.python.yaml: open schemas/protobuf/buf.gen.python.yaml: no such file or directory
```
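(For context, a buf Python generation template typically looks something like the sketch below. The actual contents and output path for this repo are assumptions.)

```yaml
# schemas/protobuf/buf.gen.python.yaml (hypothetical contents)
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/python
    out: .
  - plugin: buf.build/grpc/python
    out: .
```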
```python
def initialize(self, evaluation_context: EvaluationContext) -> None:
    self.active = True
    self.thread = threading.Thread(target=self.refresh_file, daemon=True)
```
Why is daemon set to true in this thread?
Let's add a name to all child threads for easier debugging. This could be `FlagdFileWatcherWorkerThread`.
Basically this:

> By setting them as daemon threads, you can let them run and forget about them, and when your program quits, any daemon threads are killed automatically.

https://stackoverflow.com/questions/190010/daemon-threads-explanation
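(Combining both suggestions, a minimal sketch of a named daemon worker thread; the target function here is a stand-in for the real polling loop.)

```python
import threading
import time

def refresh_file() -> None:
    while True:
        time.sleep(5)  # stand-in for the real file-polling work

thread = threading.Thread(
    target=refresh_file,
    name="FlagdFileWatcherWorkerThread",  # shows up in debuggers and logs
    daemon=True,  # interpreter exit won't block on this thread
)
thread.start()
```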
```python
transposed = re.sub(
    rf"{{\s*\"\$ref\":\s*\"{name}\"\s*}}", json.dumps(rule), transposed
)
flags = json.loads(transposed)
```
We should include unit tests for this parsing.
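(For example, a test along these lines could pin down the `$ref` substitution behaviour. The helper below re-implements the snippet above for illustration; the real code lives in the flags parsing module.)

```python
import json
import re

def inline_refs(flags_json: str, shared_rules: dict) -> dict:
    # Mirror of the snippet under review: inline each {"$ref": name} in place.
    transposed = flags_json
    for name, rule in shared_rules.items():
        transposed = re.sub(
            rf"{{\s*\"\$ref\":\s*\"{name}\"\s*}}", json.dumps(rule), transposed
        )
    return json.loads(transposed)

def test_ref_is_inlined():
    flags = '{"targeting": { "$ref": "emailRule" }}'
    rules = {"emailRule": {"in": [{"var": "email"}, ["@example.com"]]}}
    assert inline_refs(flags, rules) == {
        "targeting": {"in": [{"var": "email"}, ["@example.com"]]}
    }
```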
```python
    flag = cls(key=key, **data)
    return flag
except Exception as err:
    raise ParseError from err
```
Please add unit tests for this method as well.
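(A sketch of what that could look like; the class name `Flag` and its parsing entry point are assumptions to adjust to the actual API.)

```python
import pytest

# Assumed imports; the actual module path may differ:
# from openfeature.contrib.provider.flagd.resolvers.process.flags import Flag, ParseError

def test_malformed_flag_raises_parse_error():
    # Hypothetical call; the real method is the one wrapping cls(key=key, **data).
    with pytest.raises(ParseError):
        Flag.from_dict(key="my-flag", data={"unexpected_field": True})
```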
```python
    raise ProviderNotReadyError from err

def shutdown(self) -> None:
    self.active = False
```
Shutting down and then initializing again can leave more than one thread running here, because `active` might not be checked until after the sleep expires.
We have the same issue in both connector implementations. The current thread-handling logic is a bit fragile; would it make sense to build a cancellable scheduled task manager and concentrate our efforts on making its implementation robust? I'm happy to discuss how to approach it.
Sounds sensible. Would this be a simple refactor/enhancement we can add directly to this PR? It might be easier for me to review and test a first reference implementation if you can help provide that. I'm definitely not an expert in Python thread handling.
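(To make the discussion concrete, one possible shape for such a manager, sketched with a per-worker `threading.Event` so shutdown wakes a sleeping thread immediately. Names and structure are assumptions, not a final design.)

```python
import threading
from typing import Callable

class CancellableWorker:
    """Runs task() every `interval` seconds until stop() is called."""

    def __init__(self, task: Callable[[], None], interval: float, name: str) -> None:
        self._task = task
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, name=name, daemon=True)

    def _run(self) -> None:
        # Event.wait returns True once set, so this exits promptly on stop().
        while not self._stop.wait(self._interval):
            self._task()

    def start(self) -> None:
        self._thread.start()

    def stop(self) -> None:
        self._stop.set()
        self._thread.join()
```

Each initialize would construct a fresh `CancellableWorker`, so a stale thread from a previous run can never observe a re-set flag.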
Hey @colebaileygit, could you please resolve all the conversations that have been addressed? It would be great to merge this soon.
Sorry for disappearing; I had a European-style vacation and am getting back into things now. I'll take some time to revisit the PR this week.
(Force-pushed from 41ec424 to 217dcf7.)
(Force-pushed from 4b34fad to e9cd2ed.)
No worries. I hope you had a relaxing vacation; a European-style vacation sure sounds nice 😄 At any rate, can you resolve all the addressed threads and provide a short summary of what's remaining? It looks like it's close, and I would love to help get it across the finish line.
@beeme1mr I've closed some threads where I provided a best-effort fix; feel free to reopen them @federicobond if you think they are important! Main points still open:

Otherwise, I'd say the remaining topics can be follow-up tasks that don't need to block this PR:
## This PR

## Related Issues

Fixes #83

## Notes

## Follow-up Tasks

## How to test