Enable Dynamic Container Configuration #76
Comments
Looking for some examples online of how to wire up the CDK side of this, it seems a number of folks have had success with relatively small amounts of CPU/Memory for the AppConfig sidecar:
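Something along these lines seems to be the common pattern, assuming an existing Fargate task definition; the CPU/memory numbers and the agent image tag below are illustrative, not values we've validated:

```ts
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const taskDefinition: ecs.FargateTaskDefinition; // assumed existing task definition

// Attach the AppConfig agent as a small sidecar; the capture container would
// then fetch its config from the agent's localhost HTTP endpoint.
taskDefinition.addContainer('AppConfigAgent', {
  image: ecs.ContainerImage.fromRegistry('public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x'),
  cpu: 64,                                  // small slice of the task's CPU budget (illustrative)
  memoryLimitMiB: 128,                      // small slice of the task's memory budget (illustrative)
  essential: false,                         // don't take the whole task down if the sidecar dies
  portMappings: [{ containerPort: 2772 }],  // the agent's default local port
});
```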
Annoyingly, the sidecar resources will come out of our Capture Node resource budget.
AWS AppConfig's docs are very unhelpful [1]. I'm having a hard time understanding precisely how new changes are rolled out and how that will interact with the standard Docker configure-on-boot-and-bounce-to-reconfigure paradigm [2]. It's unclear whether AppConfig works via a push or a pull mechanism. I'd assume it would be a pull mechanism, but there's also a DeploymentStrategy construct which indicates a managed roll-out across your fleet [3]. How do you ensure a rollout of 20% of the fleet every 10 minutes unless you're doing a push? But you don't register the AppConfig sidecar container with AWS AppConfig during setup [4]...
[1] https://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html
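For what it's worth, the DeploymentStrategy knobs look like the sketch below; my (unverified) read is that they only control what fraction of polling clients get served the new version per interval, i.e. it's still a pull:

```ts
import * as cdk from 'aws-cdk-lib';
import * as appconfig from 'aws-cdk-lib/aws-appconfig';

declare const stack: cdk.Stack; // assumed existing stack

// Hypothetical linear rollout: a 20% growth factor over a 50-minute deployment
// works out to one 20% step roughly every 10 minutes.
new appconfig.CfnDeploymentStrategy(stack, 'ConfigRolloutStrategy', {
  name: 'linear-20-percent',
  deploymentDurationInMinutes: 50,
  growthType: 'LINEAR',
  growthFactor: 20,
  finalBakeTimeInMinutes: 0,
  replicateTo: 'NONE',
});
```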
After looking through several implementations in CDK, the docs, AWS blog posts, and Stack Overflow threads (including one answered by a member of the service team [1]), I still don't know how this service works. Do I have to do a Deployment? If so, when? How does it work? And so on. Fortunately, I've come to the conclusion that we don't actually need AWS AppConfig if we're willing to bounce our containers. With AppConfig, we'd store our configuration in Parameter Store, undefined magic would occur, we'd bounce our containers, and they'd pull the new configuration. We can just cut the magic out and save ourselves the pain/confusion. Will update the task title/description accordingly.
[1] https://stackoverflow.com/questions/73689103/aws-appconfig-onboarding-doubts
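Roughly what the simpler route looks like on the CDK side; the parameter name, config contents, and construct IDs below are placeholders:

```ts
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ssm from 'aws-cdk-lib/aws-ssm';

declare const stack: cdk.Stack;                           // assumed existing stack
declare const taskDefinition: ecs.FargateTaskDefinition;  // assumed existing task definition

// Keep the capture config in Parameter Store; the container reads it on boot.
const captureConfig = new ssm.StringParameter(stack, 'CaptureConfigParam', {
  parameterName: '/capture/config',                       // placeholder name
  stringValue: JSON.stringify({ example: 'value' }),      // placeholder contents
});

// Let the task role read the parameter, and tell the container where to look.
captureConfig.grantRead(taskDefinition.taskRole);
taskDefinition.defaultContainer?.addEnvironment('CONFIG_PARAM_NAME', captureConfig.parameterName);
```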
Discussed w/ @awick and we agreed it's fine to go the simpler route for now. Some considerations:
Work steps:
Actually, VIEWER_PASS_ARN and VIEWER_USER aren't in the viewer config file, so I'll tackle those during the OIDC task (#77).
PR approved and merged; AC met. Resolving.
Description
UPDATED DESCRIPTION:
As part of enabling OIDC auth (#75), we want to enable changes to capture/viewer configuration without requiring a CloudFormation Stack Update. To accomplish this, we will implement a dynamic container configuration. That is, our Capture and Viewer node containers will pull their configuration on boot rather than have the configuration embedded in their image. This will allow us to update the configuration by bouncing the containers and avoid a CloudFormation Stack Update.
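For illustration, the boot-time pull inside the container could look something like the sketch below; the env var name and config shape are assumptions, not something we've settled on:

```ts
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

// Fetch the config from Parameter Store at startup. CONFIG_PARAM_NAME is a
// placeholder env var pointing at the parameter that holds the config.
async function loadConfig(): Promise<Record<string, unknown>> {
  const client = new SSMClient({});
  const result = await client.send(new GetParameterCommand({
    Name: process.env.CONFIG_PARAM_NAME,
    WithDecryption: true,
  }));
  return JSON.parse(result.Parameter?.Value ?? '{}');
}

// Bouncing the containers (e.g. forcing a new ECS deployment) re-runs this
// and picks up whatever is currently stored in Parameter Store.
loadConfig().then((config) => {
  console.log('Loaded configuration keys:', Object.keys(config));
});
```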
OBSOLETE DESCRIPTION:
As part of enabling OIDC auth (#75), we want to enable changes to capture/viewer configuration without requiring a CloudFormation Stack Update. To accomplish this, we will use AWS AppConfig to distribute the configuration rather than baking it into the containers via ENV variables. AppConfig will live in a sidecar container and present the configuration over a curl-able localhost port. We'll pull the config on container startup, similar to how we currently pull the config from ENV. The configuration that AppConfig pulls needs to live somewhere, so we'll store it in AWS Parameter Store (which is tightly integrated with AppConfig). There are no L2 CDK constructs for this, so we'll have to use the L1s.
See: https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-integration-containers-agent.html
See: https://github.com/aws/aws-cdk/tree/main/packages/aws-cdk-lib/aws-appconfig
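For posterity, the L1 wiring referenced above would have looked roughly like this; all names/IDs are placeholders, and the retrieval role AppConfig needs to read the parameter is omitted:

```ts
import * as cdk from 'aws-cdk-lib';
import * as appconfig from 'aws-cdk-lib/aws-appconfig';

declare const stack: cdk.Stack; // assumed existing stack

const app = new appconfig.CfnApplication(stack, 'ConfigApp', { name: 'capture-config' });

new appconfig.CfnEnvironment(stack, 'ConfigEnv', {
  applicationId: app.ref,
  name: 'prod',
});

// Point the configuration profile at the Parameter Store entry holding the
// config (exact locationUri format per the AppConfig docs).
new appconfig.CfnConfigurationProfile(stack, 'ConfigProfile', {
  applicationId: app.ref,
  name: 'capture',
  locationUri: 'ssm-parameter:///capture/config',
});
```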
Acceptance Criteria