explore router statefulset #258

Open
qrkourier opened this issue Oct 10, 2024 · 3 comments

@qrkourier (Member)

No description provided.

@dariuszSki (Contributor) commented Nov 7, 2024

Issue trying to solve:
Pass a list of tokens to a set that can have more than one replica, so that one unique token is available in the deployment to register each router separately.
Why a StatefulSet: it maintains a stable pod identity, i.e. each pod's name is derived from metadata.name plus an ordinal suffix (-0, -1, etc.). The thesis was to create an env var that varies with the pod name and pins the JWT enrollment token to it, e.g.:

env:
  - name: M_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: ZITI_ENROLL_TOKEN
    valueFrom:
      secretKeyRef:
        name: ziti-router-identities
        key: $(M_NAME)

Unfortunately, a variable can only be referenced after the pod is deployed, and Kubernetes does not expand variable references inside secretKeyRef at all, so this can't be used to dynamically set the reference to a token secret. But if the entrypoint script were altered so that, instead of expecting ZITI_ENROLL_TOKEN to be injected, it retrieved the token from the secret store itself using the pod name as the key, that might work. Maybe?
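A minimal sketch of that entrypoint-side lookup with client-go, assuming the pod's service account is allowed to get the ziti-router-identities secret and that the pod name and namespace are injected via the downward API (names here are illustrative, not the chart's actual values):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config uses the pod's mounted service account token.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Stable pod name (e.g. ziti-router-0) and namespace from the downward API.
	podName := os.Getenv("M_NAME")
	namespace := os.Getenv("POD_NAMESPACE")

	secret, err := clientset.CoreV1().Secrets(namespace).Get(
		context.Background(), "ziti-router-identities", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Each key in the secret is a pod name; the value is that pod's token.
	token, ok := secret.Data[podName]
	if !ok {
		panic(fmt.Sprintf("no enrollment token for pod %s", podName))
	}
	fmt.Println(string(token))
}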

Another way, perhaps, is to use the Operator pattern: read (or even create) the token directly on the Ziti Controller at run time and patch the StatefulSet with a new token env var every time a new container comes up. I have explored and continue to explore this avenue, but it is tricky. Most likely one would need to create a ConfigMap/Secret and patch that instead; this option is still to be verified though. Some notes on this:
The SetupWithManager reconciler would look something like the snippet below. It needs to watch the pods, since the new CR would not be their owner. Here is the code for my PoC: https://github.com/dariuszSki/ziti-operator

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	zitiv1alpha1 "github.com/dariuszSki/ziti-operator/api/v1alpha1" // assumed import path for the PoC's API types
)

func (r *ZitiRouterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&zitiv1alpha1.ZitiRouter{}). // Watch the primary resource (the CR)
		Owns(&appsv1.StatefulSet{}).     // Watch the secondary resource (StatefulSet)
		Owns(&corev1.ConfigMap{}).       // Watch the secondary resource (ConfigMap)
		Watches(&corev1.Pod{}, &handler.EnqueueRequestForObject{}). // pods are not owned by the CR
		Complete(r)
}

Last thought on this: if we are going down the road of the Operator pattern, then perhaps it makes more sense to manage more than one single-replica Deployment, with the token env var created at the time the Deployment details are rendered, not at run time. Options in the new CR for users to choose from could be, for example (see the sketch after this list):

  • tproxy mode: the operator would spin up as many single-replica Deployments as there are nodes, i.e. a DaemonSet-like layout.
  • proxy/host modes: the operator would spin up only two copies for redundancy.
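A hypothetical render-time sketch of that idea using the client-go API types; the function name, image, and paths are illustrative, not the PoC's actual code:

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// renderRouterDeployment builds one single-replica Deployment per router,
// baking the enrollment token into the pod spec before it is applied, so
// nothing has to be patched at run time.
func renderRouterDeployment(name, namespace, token string) *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"app": name}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "ziti-router",
						Image:   "openziti/ziti-router:latest", // image tag is an assumption
						Command: []string{"/entrypoint.bash"},
						Args:    []string{"run", "/etc/ziti/config/ziti-router.yaml"},
						// Token is resolved while rendering the spec, not at run time.
						Env: []corev1.EnvVar{{Name: "ZITI_ENROLL_TOKEN", Value: token}},
					}},
				},
			},
		},
	}
}

In tproxy mode the operator would call this once per node; in proxy/host modes, twice, with a distinct pre-created token each time.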

@dariuszSki (Contributor) commented Nov 7, 2024

Update on run-time token retrieval and replacement from a ConfigMap: it works much better than updating env vars, and would definitely be the way to go if we decide on this approach. Code snippets:
New CR:

apiVersion: ziti.dariuszski.dev/v1alpha1
kind: ZitiRouter
metadata:
  labels:
    app.kubernetes.io/name: ziti-operator
    app.kubernetes.io/managed-by: kustomize
  name: zitirouter-sample
  namespace: ziti
spec:
  zitiMgmtApi: {fqdn}:443
  zitiRouterEnrollmentToken: 
    ziti-router-0: eyJhb...
    ziti-router-1: eyJhb...
  routerStatefulsetNamePrefix: ziti-router
  routerReplicas: 2
  debug: "1"

ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ziti-router-config
  namespace: ziti
data:
  ziti-router-token: eyJhb...
  ziti-router.yaml: |
    v: 3
    identity:
        cert: /etc/ziti/config/ziti-router.cert
        server_cert: /etc/ziti/config/ziti-router.server.chain.cert
        key: /etc/ziti/config/ziti-router.key
        ca: /etc/ziti/config/ziti-router.cas
    ctrl:
      ...

StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ziti-router
  namespace: ziti
  ownerReferences:
  - apiVersion: ziti.dariuszski.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ZitiRouter
    name: zitirouter-sample
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Delete
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ziti-router
  serviceName: ""
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ziti-router
    spec:
      containers:
      - args:
        - run
        - /etc/ziti/config/ziti-router.yaml
        command:
        - /entrypoint.bash
        env:
        - name: ZITI_ENROLL_TOKEN
          valueFrom:
            configMapKeyRef:
              key: ziti-router-token
              name: ziti-router-config
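
For completeness, the reconciler step that keeps that ConfigMap in sync could look roughly like this hedged sketch. It assumes a Spec.ZitiRouterEnrollmentToken map keyed by pod name, as in the sample CR above, and the controller-runtime client embedded in the reconciler; it is not the PoC's actual code:

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"

	zitiv1alpha1 "github.com/dariuszSki/ziti-operator/api/v1alpha1" // assumed import path
)

// patchTokenConfigMap copies the per-pod enrollment token from the CR spec
// into the ConfigMap that the router pods read. Object and field names follow
// the sample manifests above; they are assumptions, not the PoC's actual code.
func (r *ZitiRouterReconciler) patchTokenConfigMap(ctx context.Context, router *zitiv1alpha1.ZitiRouter, podName string) error {
	var cm corev1.ConfigMap
	key := types.NamespacedName{Name: "ziti-router-config", Namespace: router.Namespace}
	if err := r.Get(ctx, key, &cm); err != nil {
		return err
	}
	token, ok := router.Spec.ZitiRouterEnrollmentToken[podName]
	if !ok {
		return fmt.Errorf("no enrollment token for pod %s", podName)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["ziti-router-token"] = token
	// The router container picks up the new value on its next start/enrollment.
	return r.Update(ctx, &cm)
}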

@qrkourier (Member, Author)

I favor leaning into the operator over trying to keep a multi-router deployment as similar as possible to today's single-router deployment.

The operator could run in the same cluster as the ziti controller and hold service accounts for the other clusters where it deploys routers, calling each cluster's kube API (probably via a ziti service). Alternatively, the operator could be installed in each cluster and hold a ziti admin credential (probably a cert), in which case it would provision controller(s), router(s), or both.
