This repository has been archived by the owner on Jun 8, 2022. It is now read-only.

[Feature] Proper defaulting labels for metadata of workload #174

Open
resouer opened this issue Aug 12, 2020 · 5 comments

Comments

@resouer
Contributor

resouer commented Aug 12, 2020

Is your feature request related to a problem? Please describe.

As discussed in #136, the OAM runtime should automatically generate labels for the Workload (ref: the K8s recommended labels) so a Trait can select the workload by leveraging these default labels if it doesn't want to define things like app: nginx.

I'd propose several auto labels below:

component.oam.dev/name: <component's metadata.name>
component.oam.dev/revision: <revision number of the component>

How to use these labels:

apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  components:
    - componentName: frontend
      traits:
        - apiVersion: v1
          kind: Service
          spec:
            selector:
              component.oam.dev/name: frontend
              component.oam.dev/revision: 1 # add this if this trait wants to select the workload at a specific revision
            ports:
              - protocol: TCP
                port: 80
                targetPort: 9376

Note that this proposal relies on: https://github.com/crossplane/oam-kubernetes-runtime/pull/175/files

@ryanzhang-oss
Collaborator

workload.oam.dev/name: <same with workload's metadata.name>

Why do we need this when the name is already in the meta?

workload.oam.dev/revision: <revision number>

What revision is this? It looks like it's not the component revision number.

@resouer
Contributor Author

resouer commented Aug 13, 2020

@ryanzhang-oss These labels all carry the component's info, not the workload's; I've updated the label keys and the example.

@wonderflow
Member

This is an important issue: with these labels, a trait can find the underlying pods easily.

It's helpful for at least the two cases below:

  • an ingress/service/traffic trait can easily route its traffic to pods using the information in the OAM AppConfig/Component.
  • a log/metrics trait can easily find which pods it needs to gather logs or metrics from via the OAM AppConfig.

What's more, any trait can rely on the K8s label selector mechanism to find the underlying resources through the abstraction layer.

Besides these two labels:

component.oam.dev/name: <component's metadata.name>
component.oam.dev/revision: <revision number of the component>

I propose we add one more to indicate which AppConfig instance it is:

appconfig.oam.dev/name: <appconfig's metadata.name>
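
With all three labels, the workload generated from the example above might carry metadata like this (a sketch only; the exact label keys are still under discussion in this thread):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
  name: frontend
  labels:
    component.oam.dev/name: frontend
    component.oam.dev/revision: "1"
    appconfig.oam.dev/name: my-app-deployment
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```

A trait (or an operator) could then select everything belonging to one AppConfig instance with an ordinary label selector, e.g. `kubectl get pods -l appconfig.oam.dev/name=my-app-deployment`.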

@resouer
Contributor Author

resouer commented Aug 19, 2020

Not sure about the key; how about app.oam.dev/name: <appconfig's metadata.name> instead?

Agreed on the other points.

@zzxwill
Member

zzxwill commented Aug 26, 2020

Note that this proposal relies on: https://github.com/crossplane/oam-kubernetes-runtime/pull/175/files

For ContainerizedWorkload, the labels of the workload can be propagated to its deployments and pods, but for other, non-core workloads, like a plain Deployment, the labels cannot be automatically generated for its pods (see #184 for details).

So we need to find a way to propagate all workloads' labels to the pod template.
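
The propagation step could be sketched as a plain label-map merge, assuming the runtime treats labels as string maps; the helper name `propagateOAMLabels` and the prefix list are illustrative, not the actual runtime API:

```go
package main

import (
	"fmt"
	"strings"
)

// oamLabelPrefixes lists the OAM label domains to propagate. The keys follow
// this issue's proposal; the set is an assumption for illustration.
var oamLabelPrefixes = []string{"component.oam.dev/", "appconfig.oam.dev/"}

// propagateOAMLabels copies OAM-generated labels from a workload's labels into
// its pod template's labels, without overwriting values the user already set.
func propagateOAMLabels(workloadLabels, podTemplateLabels map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range podTemplateLabels {
		out[k] = v
	}
	for k, v := range workloadLabels {
		for _, p := range oamLabelPrefixes {
			if strings.HasPrefix(k, p) {
				if _, exists := out[k]; !exists {
					out[k] = v
				}
			}
		}
	}
	return out
}

func main() {
	workload := map[string]string{
		"component.oam.dev/name":     "frontend",
		"component.oam.dev/revision": "1",
		"unrelated":                  "x", // not an OAM label, so not propagated
	}
	pod := map[string]string{"app": "nginx"} // user-set label, kept as-is
	fmt.Println(propagateOAMLabels(workload, pod))
}
```

In a real controller this merge would run on the workload's pod template (e.g. `spec.template.metadata.labels`) before the object is applied, so that selectors on the OAM labels match the pods as well as the workload.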
