Support remote clusters and arbitrary types
This commit adds the infrastructure for watching and querying arbitrarily-typed pipeline targets, in remote clusters as well as in the local cluster.

The basic shape is this: for each target that needs to be examined, the reconciler uses `watchTargetAndGetReader(..., target)`. This procedure encapsulates the detail of making sure there's a cache for the target's cluster and type, and supplies the `client.Reader` needed for fetching the target object.

A `cache.Cache` is kept for each {cluster, type}. `cache.Cache` is the smallest piece of machinery that can be torn down, because the next layer down, `Informer` objects, can't be removed once created. This matters for being able to stop watching targets when they are no longer targets.

Target object updates will come from all the caches, which come and (in principle) go; but the handler must be statically installed in `SetupWithManager()`. So targets are looked up in an index to find the corresponding pipeline (if there is one), and that pipeline is put into a `source.Channel`. The channel source multiplexes the dynamic event handlers into a static pipeline requeue handler.

NB:

* I've put the remote-cluster test in its own `Test*` wrapper, because it needs to start another testenv to act as the remote cluster.
* Supporting arbitrary types means using `unstructured.Unstructured` when querying for target objects, which complicates checking their status. Since the caches are per-type, in theory there could be code for querying known types (HelmRelease and Kustomization), with `Unstructured` as a fallback. So long as the object passed to `watchTargetAndGetReader(...)` is the same one used with `client.Get(...)`, it should all work.
* A cache per {cluster, type} is not the only possible scheme. The watching could be more precise -- meaning fewer spurious events, and narrower permissions needed -- by having a cache per {cluster, namespace, type}, with the trade-off being more goroutines to manage, among other overheads. I've chosen the chunkier scheme based on an informed guess that it'll be more efficient for low numbers of clusters and targets.
10 changed files with 2,320 additions and 120 deletions.