Implement reconciliation for ManagedCluster and create ManagedClusterView #219
Conversation
controllers/manager.go
Outdated
@@ -140,6 +142,15 @@ func (o *ManagerOptions) runManager() {
        os.Exit(1)
    }

    if err = (&ManagedClusterReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
We don't need to pass the Scheme here; we can get it from the client using Client.Scheme().
What would be the difference between the two approaches?
We could probably remove the Scheme from the ManagedClusterReconciler; we aren't using it anywhere.
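A minimal sketch of the alternative discussed above: drop the reconciler's Scheme field and call Client.Scheme() wherever a scheme is needed. The struct layout and the Reconcile body here are illustrative assumptions, not the PR's actual code.

package controllers

import (
    "context"

    clusterv1 "open-cluster-management.io/api/cluster/v1"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// ManagedClusterReconciler without a separate Scheme field; the
// controller-runtime client already carries one.
type ManagedClusterReconciler struct {
    Client client.Client
}

func (r *ManagedClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Wherever a scheme is required (for example when building owner
    // references), reuse the client's scheme instead of a field populated
    // from mgr.GetScheme().
    _ = r.Client.Scheme()

    var mc clusterv1.ManagedCluster
    if err := r.Client.Get(ctx, req.NamespacedName, &mc); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    return ctrl.Result{}, nil
}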
/test integration-test
Force-pushed from 924779d to 6bfaed3
Force-pushed from 409a44a to 35359fa
controllers/mirrorpeer_controller.go
Outdated
@@ -71,6 +71,7 @@ const spokeClusterRoleBindingName = "spoke-clusterrole-bindings"
//+kubebuilder:rbac:groups=addon.open-cluster-management.io,resources=managedclusteraddons/status,verbs=*
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;create;update;watch
//+kubebuilder:rbac:groups=console.openshift.io,resources=consoleplugins,verbs=get;list;create;update;watch
// +kubebuilder:rbac:groups=view.open-cluster-management.io,resources=managedclusterviews,verbs=get;list;watch;create;update
Just a question: shouldn't this be added to the managedcluster_controller file instead of here?
kubebuilder will add the required permissions for the pod to the role manifest automatically. All controllers run in the same container.
Yeah, but I assumed it was better practice to add RBAC markers on top of the respective reconciler rather than spreading them across multiple unrelated files...
I agree. It is good practice to have RBACs close to the controllers that require them. We haven't followed that well enough in this repo, but I think we should start doing it. The end result is the same, but this makes it easier to explain why a particular RBAC was added.
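As a small illustration of that convention (the file name is hypothetical; the marker text is copied from the diff above), the marker would live in the ManagedCluster controller's own file. controller-gen aggregates markers from every file into config/rbac/role.yaml, so the generated role is identical either way; co-locating them only documents which controller needs which permission.

// controllers/managedcluster_controller.go (placement sketch)
package controllers

// RBAC needed by this controller to manage ManagedClusterViews.
//+kubebuilder:rbac:groups=view.open-cluster-management.io,resources=managedclusterviews,verbs=get;list;watch;create;update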
Force-pushed from 35359fa to 0a9f688
Force-pushed from 0a9f688 to 1d772d8
Force-pushed from 1d772d8 to df4b7a4
Force-pushed from 2988e81 to 428f7f2
DeleteFunc: func(e event.DeleteEvent) bool {
    return false
},
GenericFunc: func(e event.GenericEvent) bool {
    return false
},
Do we need to explicitly specify the Delete and Generic funcs? Won't they default to false?
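For context, a minimal sketch of how controller-runtime's predicate.Funcs behaves when a handler is left nil: nil handlers allow the event through (they return true), so the explicit false returns matter if the intent is to filter Delete and Generic events. The Create/Update handlers below are placeholders, not the PR's actual predicate logic.

package controllers

import (
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

// managedClusterPredicate filters ManagedCluster events. predicate.Funcs
// returns true for any handler that is nil, so Delete and Generic events
// would pass through unless explicitly set to return false.
var managedClusterPredicate = predicate.Funcs{
    CreateFunc: func(e event.CreateEvent) bool {
        return true // placeholder; the real predicate has its own checks
    },
    UpdateFunc: func(e event.UpdateEvent) bool {
        return true // placeholder
    },
    DeleteFunc: func(e event.DeleteEvent) bool {
        return false // drop delete events
    },
    GenericFunc: func(e event.GenericEvent) bool {
        return false // drop generic events
    },
}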
ownerExists := false
for _, ownerRef := range configMap.OwnerReferences {
    if ownerRef.UID == managedClusterView.UID {
        ownerExists = true
        break
    }
}

if !ownerExists {
    ownerRef := *metav1.NewControllerRef(&managedClusterView, viewv1beta1.GroupVersion.WithKind("ManagedClusterView"))
    logger.Info("OwnerRef added", "UID", string(ownerRef.UID))
    configMap.OwnerReferences = append(configMap.OwnerReferences, ownerRef)
}
return nil
Maybe we can replace it with controllerutil to make it simpler?
I don't have access to the scheme, which is required for the SetOwnerReference function.
Can we use c.Scheme()?
I used it, but it was not getting set somehow. The tests were failing. I will look into this later.
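For reference, a sketch of the controllerutil alternative being discussed, taking the scheme from the client. The helper name is hypothetical and the ManagedClusterView import path is assumed from the module added in this PR; controllerutil.SetOwnerReference is idempotent, so the manual loop over OwnerReferences would not be needed.

package controllers

import (
    viewv1beta1 "github.com/stolostron/multicloud-operators-foundation/pkg/apis/view/v1beta1"
    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureOwnerRef adds the ManagedClusterView as an owner of the ConfigMap,
// reusing the scheme carried by the controller-runtime client.
func ensureOwnerRef(c client.Client, mcv *viewv1beta1.ManagedClusterView, cm *corev1.ConfigMap) error {
    return controllerutil.SetOwnerReference(mcv, cm, c.Scheme())
}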
        Name:      ClientInfoConfigMapName,
        Namespace: operatorNamespace,
    },
    Data: configMapData,
We don't need to set the data here; we can do it inside the CreateOrUpdate mutate function directly.
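A minimal sketch of that suggestion: only the identifying metadata is set on the object up front, and the desired Data is applied inside the mutate callback passed to controllerutil.CreateOrUpdate, so the same code covers both the create and the update path. The helper name and parameters are illustrative, not the PR's final code.

package controllers

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// createOrUpdateClientInfoConfigMap writes the client-info ConfigMap; the
// desired Data is set inside the mutate func rather than on the object
// literal, which is what CreateOrUpdate expects.
func createOrUpdateClientInfoConfigMap(ctx context.Context, c client.Client, name, namespace string, data map[string]string) error {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
    }
    _, err := controllerutil.CreateOrUpdate(ctx, c, cm, func() error {
        cm.Data = data // desired state applied in the mutate callback
        return nil
    })
    return err
}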
if strings.HasSuffix(key, ".config.yaml") {
    var config StorageClusterConfig
    err := json.Unmarshal([]byte(value), &config)
    if err != nil {
        return fmt.Errorf("failed to unmarshal config data for key %s: %v", key, err)
    }

    providerInfo := config
    providerInfo.Clients = nil
    providerInfo.ProviderClusterName = managedClusterView.Namespace

    providerInfoJSON, err := json.Marshal(providerInfo)
    if err != nil {
        return fmt.Errorf("failed to marshal provider info: %v", err)
    }

    for _, client := range config.Clients {
        reverseLookup[client] = string(providerInfoJSON)
    }
}
this should change once we have changed the struct
Force-pushed from 428f7f2 to e45de1f
    NamespacedName          types.NamespacedName `yaml:"namespacedName"`
    StorageProviderEndpoint string               `yaml:"storageProviderEndpoint"`
    CephClusterFSID         string               `yaml:"cephClusterFSID"`
}
This is a temporary addition until the API is exported from the OCS operator.
Force-pushed from e45de1f to 150f216
    for _, client := range odfInfo.Clients {
        clientInfo := ClientInfo{
            ClusterID:    client.ClusterID,
            Name:         client.Name,
            ProviderInfo: providerInfo,
        }
        clientInfoMap[client.Name] = clientInfo
    }
}

configMapData := make(map[string]string)
for clientName, clientInfo := range clientInfoMap {
    clientInfoJSON, err := json.Marshal(clientInfo)
    if err != nil {
        return fmt.Errorf("failed to marshal client info: %v", err)
    }
    configMapData[clientName] = string(clientInfoJSON)
}
The key we use in the configMap for reverse lookup is the storageClient name, and two different managed clusters can have the same storageClient name. What do you think about using ClientName/ManagedClusterName (of the client cluster) as the key for the configMap?
We can get the managedClusterName of the client cluster here by referencing the OpenShift Cluster Version UUID provided in odfInfo.clients.clusterId.
This makes sense. Added a func to fetch the ManagedCluster via the clusterId. Added another field called clientManagedClusterName to ClientInfo. The key is changed from client.Name to clientManagedClusterName/client.Name.
Force-pushed from 50cb833 to d5f5de3
Force-pushed from d5f5de3 to c49600c
/test unit-test
/test unit-test
/test integration-test
Squash the first 2 commits and last 2 commits. Everything else looks good to me.
Force-pushed from b9dbed4 to 7e269a5
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: umangachapagain, vbnrh.
…lusterView

- Added reconciliation logic for `ManagedCluster`.
- Added tests for `ManagedClusterReconciler` reconciliation logic and to ensure ManagedClusterViews function.
- Reconciliation for ManagedCluster creates a ManagedClusterView which pulls the ‘odf-info’ configmap onto the hub.
- Updated RBAC rules in `config/rbac/role.yaml` to include permissions for ManagedClusterView resources (ManagedCluster RBAC was already there).
- Plumbing to make the controllers work as described above.
- Updated go.mod and go.sum to include `github.com/stolostron/multicloud-operators-foundation`.
- Fixes to functions and tests, and a new predicate.

Signed-off-by: vbadrina <[email protected]>
Force-pushed from 7e269a5 to ef0af41
- Added initialization for ManagedClusterViewReconciler in manager.go to set up the ManagedClusterView controller.
- Creates or updates the configMap odf-client-info, which maps each client to its provider cluster.
- Created comprehensive unit tests to cover the creation and update scenarios of the ConfigMap.

Signed-off-by: vbadrina <[email protected]>
Force-pushed from ef0af41 to 3ed8293
/lgtm
Merged commit 7cfca25 into red-hat-storage:main
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" | ||
) | ||
|
||
const MCVLabelKey = "multicluster.odf.openshift.io/cluster" |
I don't think this is used anywhere...
Yeah, we replaced it with the createdBy label for MCO; I will remove it later.
}

return ctrl.NewControllerManagedBy(mgr).
    For(&clusterv1.ManagedCluster{}, builder.WithPredicates(managedClusterPredicate, predicate.ResourceVersionChangedPredicate{})).
    Owns(&viewv1beta1.ManagedClusterView{}).
    Owns(&corev1.ConfigMap{}).
Is there any need to reconcile this controller on the owned ConfigMaps? It's not like we are managing cleanup of the MCV or anything here; was there some other reason?
It is done to ensure that the ManagedCluster reconcile is triggered whenever there are events for the ConfigMap, so that the desired ConfigMap is always present.
The ConfigMap is created by the MCV controller, not by the MC controller, right? My question was: if an event occurs on that ConfigMap, why do we need to reconcile the MC controller (which creates the MCV)? Which edge case am I missing?
The ConfigMap cannot be owned by the MCV, as they will be in different namespaces. It can be owned by a cluster-scoped resource like the ManagedCluster, which will propagate events to eventually reconcile the ConfigMap to the desired state.
    for _, client := range odfInfo.Clients {
        managedCluster, err := utils.GetManagedClusterById(ctx, c, client.ClusterID)
        if err != nil {
            return err
        }
        clientInfo := ClientInfo{
            ClusterID:                client.ClusterID,
            Name:                     client.Name,
            ProviderInfo:             providerInfo,
            ClientManagedClusterName: managedCluster.Name,
        }
        clientInfoMap[fmt.Sprintf("%s/%s", managedCluster.Name, client.Name)] = clientInfo
    }
}
What will happen in the multiple-StorageCluster scenario (1 internal + 1 external on the same OCP cluster, no Provider-Client)?
- Won't fmt.Sprintf("%s/%s", managedCluster.Name, client.Name) give the exact same key in both cases?
- Assuming there will always be one client for internal (somehow), what happens to the external mode cluster? This ConfigMap won't store info about the external cluster, right?
Let me know if this ConfigMap is only for the Provider-Client use case and not for other deployments. It makes sense if so, but I am not sure why to make it so specific.
@SanjalKatiyar For now, the UI may need to consume the MCVs created by MCO directly.
Thanks for the confirmation (that was my guess too). Keeping the key as ... but anyway, this is just a question. My main concern has already been answered. Console will rely on MCV instead of the ConfigMap.
This won't function as a reverse lookup for client information, as we may need to look through all keys. Your concern is valid though; we will work on the design to cover all scenarios.