This issue tries to summarize all the label-selector-related issues. They all point to the same problem: how can we organize K8s resources while we're separating them into workloads and traits?
There are mainly two mechanisms in K8s to build relationships between objects (both sketched below):

- Object Reference
- Label Selector
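To make the distinction concrete, here is a minimal sketch of both mechanisms using ordinary K8s fields (an HPA's `scaleTargetRef` and a Service's `selector`; the object names are illustrative):

```yaml
# Object Reference: one object points at another by apiVersion/kind/name,
# e.g. a HorizontalPodAutoscaler targeting a Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-component-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-component
  minReplicas: 1
  maxReplicas: 3
---
# Label Selector: one object matches others by their labels,
# e.g. a Service selecting pods that carry `app: my-component`.
apiVersion: v1
kind: Service
metadata:
  name: my-component-svc
spec:
  selector:
    app: my-component
  ports:
    - port: 80
```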
The first conclusion we are all on the same page about is: OAM should, without doubt, help users hide these concepts and manage these objects properly. The problem is how we can hide them.
Hiding Object Reference by declaring `workloadRefPath` in TraitDefinition
Several months ago, we proposed a design for how a trait can interact with a workload: we add a `workloadRefPath` (the object reference of the workload) to the trait, so traits can connect to the workload from their side. Soon after, we came up with a similar way to let scopes connect to workloads. We assume similar object-reference mechanisms can be used if new demands arise in the future.
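For reference, a minimal sketch of what this looks like in a TraitDefinition (field names follow the v1alpha2 proposal; the trait name here is just an example):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: manualscalertraits.core.oam.dev
spec:
  definitionRef:
    name: manualscalertraits.core.oam.dev
  # Field path inside the trait CR where the OAM runtime injects the
  # workload's object reference (apiVersion/kind/name), so the trait
  # controller can locate the workload it is attached to.
  workloadRefPath: spec.workloadRef
```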
How to handle Label Selector relationships?
Solution 1: Rely on the workload to propagate OAM-specified labels to the pod template
We previously proposed automatically adding default labels (#174) to workloads, but these labels won't work (#184) if we can't propagate them from the workload metadata to the podTemplate. That means we would have to assume every workload propagates labels to its pods, which is obviously not true, as illustrated below.
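To illustrate why this fails, here is a sketch with a plain Deployment (the OAM label key below is just illustrative): Kubernetes never copies labels from the workload's top-level metadata into the pod template, so pods only carry what the template itself declares.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-component
  labels:
    app.oam.dev/component: my-component   # set on workload metadata only...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-component
  template:
    metadata:
      labels:
        app: my-component                 # ...pods only get labels declared here
    spec:
      containers:
        - name: main
          image: nginx
```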
Solution 2: Rely on the workload having a pod template spec and hack into it to add labels
Alternatively, we could hack into the workload to detect the podTemplate and patch labels onto it; that's why we tried to add `podspecable` to WorkloadDefinition (oam-dev/spec#392) to declare whether the workload has a PodTemplate in its spec (see the sketch below). This proposal also has drawbacks, since we make too many assumptions about the workload spec.
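A rough sketch of this idea on WorkloadDefinition (the field name comes from oam-dev/spec#392, but its exact shape was still under discussion, so treat this as hypothetical):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: deployments.apps
spec:
  definitionRef:
    name: deployments.apps
  # Hypothetical flag: declares that this workload embeds a PodTemplateSpec,
  # so the runtime may patch labels into spec.template.metadata.labels.
  podspecable: true
```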
Solution 3: Declare `uniquePodLabel` in WorkloadDefinition as an OAM-specified label that will always exist on the pod
After digging deeper and discussing a lot offline, we found that if our final target workload resource is a pod, we can always assume the pod has unique labels:
For pods created by a K8s Deployment, the pod-template-hash label is automatically added.
apiVersion: v1
kind: Pod
metadata:
  labels:
    pod-template-hash: 748f857667
For pods created by a K8s DaemonSet or StatefulSet, the controller-revision-hash label is automatically added.
apiVersion: v1
kind: Pod
metadata:
  labels:
    controller-revision-hash: b8d6c88f6
So we can declare `uniquePodLabel` in WorkloadDefinition as the OAM-specified label(s) that will always exist on the pod. A trait can then automatically fill its correlation fields by reading the `uniquePodLabel`, for example when generating a Service, splitting traffic, and so on; see the sketch below.
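A minimal sketch of how this could look (assuming `uniquePodLabel` is a list of label keys; the exact field shape was not finalized here), together with a Service that a trait could generate from the discovered label value:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: deployments.apps
spec:
  definitionRef:
    name: deployments.apps
  # Hypothetical field: label keys that are guaranteed to exist (with a
  # unique value) on every pod created by this workload type.
  uniquePodLabel:
    - pod-template-hash
---
# A trait generating a Service can then fill the selector from the value
# it finds on the workload's pods, instead of propagating new labels.
apiVersion: v1
kind: Service
metadata:
  name: my-component-svc
spec:
  selector:
    pod-template-hash: 748f857667   # value discovered from the running pods
  ports:
    - port: 80
      targetPort: 8080
```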
Conclusion
So the conclusion comes easily: declaring `uniquePodLabel` in WorkloadDefinition seems to be the best option.
Fixes #136 #181: we can automatically generate the label selector by using `uniquePodLabel` for the Service or other resources in a trait.
Fixes #184 #174: we no longer need to propagate labels to the pod template, and the OAM-related info has already been added in PR #189.
\cc @artursouza @resouer @hongchaodeng @ryanzhang-oss @zzxwill