Add configwatcher replacement #351
Conversation
// sync our internal superuser first
internalSuperuser, password, mechanism := getInternalUser()
// the internal user should only ever be created once, so don't
// update its password ever.
w.syncUser(ctx, internalSuperuser, password, mechanism, false)
This is not true, as syncUsers is called from two places: syncInitial and watchFilesystem.
But the last argument "recreate" does the trick!
NIT: Maybe the comment could be adjusted to indicate that syncUser will not recreate the user, since the last flag is set to false.
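To make the recreate semantics concrete, here is a minimal sketch of how a syncUser with a trailing recreate flag could behave; the userStore interface and its methods are hypothetical stand-ins for the admin API calls, not the PR's actual helpers:

package configwatcher

import (
    "context"

    "github.com/go-logr/logr"
)

// userStore abstracts the admin API calls; hypothetical, for illustration.
type userStore interface {
    UserExists(ctx context.Context, user string) (bool, error)
    CreateOrUpdateUser(ctx context.Context, user, password, mechanism string) error
}

type watcher struct {
    log   logr.Logger
    store userStore
}

// syncUser creates the user, or updates it only when recreate is true.
// With recreate=false an existing user (e.g. the internal superuser,
// created exactly once) is left untouched, including its password.
func (w *watcher) syncUser(ctx context.Context, user, password, mechanism string, recreate bool) {
    exists, err := w.store.UserExists(ctx, user)
    if err != nil {
        w.log.Error(err, "could not look up user", "user", user)
        return
    }
    if exists && !recreate {
        return // never overwrite the internal user's password
    }
    if err := w.store.CreateOrUpdateUser(ctx, user, password, mechanism); err != nil {
        w.log.Error(err, "could not sync user", "user", user)
    }
}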
if _, err := w.adminClient.PatchClusterConfig(ctx, map[string]any{
    "superusers": users,
}, []string{}); err != nil {
    w.log.Error(err, "could not set superusers")
NIT: Maybe

-    w.log.Error(err, "could not set superusers")
+    w.log.Error(err, "could not set superusers: %v", users)
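One caveat on the suggestion: if w.log is a logr.Logger, Error takes a message plus key/value pairs rather than printf verbs, so attaching the users would look more like this sketch of the same snippet:

if _, err := w.adminClient.PatchClusterConfig(ctx, map[string]any{
    "superusers": users,
}, []string{}); err != nil {
    // logr attaches structured key/value pairs; printf verbs in the
    // message string would not be expanded.
    w.log.Error(err, "could not set superusers", "superusers", users)
}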
// here we don't return as that'd crash the broker, instead
// just log the error and move on after some sleep time.
w.log.Error(err, "watcher returned an error")
time.Sleep(5 * time.Second)
Should the watcher be re-instantiated?
I don't believe that it needs to be re-initialized, based on the fsnotify usage docs. It's a bit hard to test with the in-memory filesystem, but do you think it's worth writing a test that just dumps some files to disk in a temporary directory and uses them instead of the in-memory afero FS?
I don't think it's worth implementing a test for that. I'm just worried that if the watcher crashes and we don't re-instantiate it, we might miss SASL user updates.
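For illustration, a sketch of the log-and-retry loop using fsnotify directly (onChange is a hypothetical callback); per the usage docs the same Watcher instance keeps delivering events after an error surfaces on its Errors channel, so the loop logs, sleeps, and keeps reading rather than returning:

package configwatcher

import (
    "time"

    "github.com/fsnotify/fsnotify"
    "github.com/go-logr/logr"
)

func watchLoop(w *fsnotify.Watcher, log logr.Logger, onChange func()) {
    for {
        select {
        case event, ok := <-w.Events:
            if !ok {
                return // the watcher was closed
            }
            if event.Has(fsnotify.Write) {
                onChange()
            }
        case err, ok := <-w.Errors:
            if !ok {
                return
            }
            // Don't return (that would crash the broker); log, back
            // off, and keep reusing the same watcher instance.
            log.Error(err, "watcher returned an error")
            time.Sleep(5 * time.Second)
        }
    }
}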
// No auth is easy, only test on a cluster with auth on admin API.
container, err := redpanda.Run(
    ctx,
    "redpandadata/redpanda:v24.2.4",
NIT:

-    "redpandadata/redpanda:v24.2.4",
+    "redpandadata/redpanda:v23.3.1",
require.NoError(t, afero.WriteFile(fs, "/var/lib/redpanda.yaml", []byte(redpandaYaml), 0o644))
require.NoError(t, afero.WriteFile(fs, "/etc/secret/users/users.txt", []byte(strings.Join(users, "\n")), 0o644))

ctx, cancel := context.WithCancel(ctx)
NIT: Could this WithCancel context be moved up to around line 34 or 35?
LGTM
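If the on-disk variant discussed earlier ever becomes worthwhile, a sketch of it could root an afero BasePathFs at t.TempDir() so fsnotify watches real files instead of the in-memory FS; the fixture contents here are made up:

package configwatcher

import (
    "strings"
    "testing"

    "github.com/spf13/afero"
    "github.com/stretchr/testify/require"
)

func TestWatcherOnDisk(t *testing.T) {
    // Made-up fixtures mirroring the in-memory test above.
    redpandaYaml := "redpanda: {}"
    users := []string{"alice:password:SCRAM-SHA-256"}

    // Root an afero FS at a real temp dir so fsnotify sees actual files.
    fs := afero.NewBasePathFs(afero.NewOsFs(), t.TempDir())
    require.NoError(t, afero.WriteFile(fs, "/redpanda.yaml", []byte(redpandaYaml), 0o644))
    require.NoError(t, afero.WriteFile(fs, "/users.txt", []byte(strings.Join(users, "\n")), 0o644))

    // ... start the watcher against fs and assert it picks up edits.
}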
Flaky operator v2 test. Retry.
Force-pushed from 204ef3b to 4f8f253.
This is meant to replace the entirety of our configwatcher script from our Helm chart and move all of the ad-hoc bash/secret mounts into a unified Go entrypoint in our operator.