updating to add secrets installation and more documentation on proxy … #17

Open · wants to merge 9 commits into base: master
37 changes: 37 additions & 0 deletions README.md
@@ -103,3 +103,40 @@ You can use these variables for cli and helm install. Use as **--set key=value**
* `apiUrl` - (default: `https://g.codefresh.io/api/k8s-monitor/events`) the agent uses this endpoint for all communication with k8s-monitor
* `port` - (default: `80`)
* `servicePort` - (default: `80`)

## Installing (proxy) using Helm
Using these variables you can install the agent on a management cluster, so that the agent does not run on the target cluster it monitors.

For example, you may have a management cluster (running the k8s-agent) that can reach the production cluster (the data source) over a private network route.

You will then need to switch your kube-context to the production cluster.
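For example, assuming the production context is named `production` (an illustrative name, substitute your own):

`$ kubectl config use-context production`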

Now create a service account using Kubernetes RBAC with the YAML manifest `k8s-agent-role-binding.yaml`:

`$ kubectl apply -f k8s-agent-role-binding.yaml`

This creates a minimal read-only ClusterRole and binds it to the `k8s-agent-codefresh` service account, which is used to connect to the production Kubernetes API.
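To confirm the objects were created (an optional sanity check):

`$ kubectl get sa k8s-agent-codefresh -n kube-system && kubectl get clusterrole,clusterrolebinding k8s-agent-codefresh-cluster-reader`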

Run the following commands to get the required data for the Helm Chart install.

`clusterUrl` - `$ export CURRENT_CONTEXT=$(kubectl config current-context) && export CURRENT_CLUSTER=$(kubectl config view -o go-template="{{\$curr_context := \"$CURRENT_CONTEXT\" }}{{range .contexts}}{{if eq .name \$curr_context}}{{.context.cluster}}{{end}}{{end}}") && echo $(kubectl config view -o go-template="{{\$cluster_context := \"$CURRENT_CLUSTER\"}}{{range .clusters}}{{if eq .name \$cluster_context}}{{.cluster.server}}{{end}}{{end}}")`

`clusterCA` - `$ echo $(kubectl get secret -n kube-system -o go-template='{{index .data "ca.crt" }}' $(kubectl get sa k8s-agent-codefresh -n kube-system -o go-template="{{range .secrets}}{{.name}}{{end}}"))`

`clusterToken` - `$ echo $(kubectl get secret -o go-template='{{index .data "token" }}' $(kubectl get sa k8s-agent-codefresh -n kube-system -o go-template="{{range .secrets}}{{.name}}{{end}}") -n kube-system | base64 -D)` (on Linux, use `base64 -d` in place of the macOS-style `base64 -D`)

Finally, switch your kube-context back to the management cluster and run the install with the additional `--set` parameters.

`$ helm upgrade <release_name> ./k8s-agent --install --force --reset-values --namespace=<namespace for agent install> --set apiToken=<codefresh api token> --set clusterId=<see above> --set clusterUrl=<kubernetes api url> --set clusterCA=<production ca cert> --set clusterToken=<kubernetes sa token>`
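For example, here is a sketch with illustrative values (the release name `k8s-agent`, namespace `codefresh`, cluster ID, API URL, and the environment variables holding the gathered values are assumptions, not values defined by this chart):

```bash
# Sketch only: substitute your own release name, namespace, cluster details and tokens.
# CODEFRESH_API_TOKEN, CLUSTER_CA and CLUSTER_TOKEN are assumed to hold the values
# gathered with the commands above.
helm upgrade k8s-agent ./k8s-agent --install --force --reset-values \
  --namespace=codefresh \
  --set apiToken="$CODEFRESH_API_TOKEN" \
  --set clusterId=my-production-cluster \
  --set clusterUrl=https://10.10.0.1:6443 \
  --set clusterCA="$CLUSTER_CA" \
  --set clusterToken="$CLUSTER_TOKEN"
```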

## Installing using K8s Secrets

Alternatively, you can set a flag in the Helm chart `values.yaml` to pull the cluster CA cert, service account token, and Codefresh API token from a Kubernetes Secret.

Edit `k8s-agent-secrets.yaml` with the required data, apply it to create the Secret, and set `useK8sSecrets: true`:

`$ kubectl apply -f k8s-agent-secrets.yaml -n <namespace>`
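Alternatively (a sketch, not part of the chart's documented flow), the same Secret can be created imperatively; the Secret name and key names must match those referenced by the deployment template:

```bash
# Sketch only: replace the placeholders with the values gathered in the proxy section above.
kubectl create secret generic k8s-agent-secrets -n <namespace> \
  --from-literal=ca_cert=<base64 encoded kubernetes ca cert> \
  --from-literal=cf_api_token=<codefresh api key> \
  --from-literal=sa_token=<encoded kubernetes sa token>
```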

Then install using the command below:

`$ helm upgrade <release name> ./k8s-agent --install --force --reset-values --namespace=<namespace> --set clusterId=<see above> --set clusterUrl=<kubernetes api url> --set useK8sSecrets=true`
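After the release is installed, you can confirm the agent pod started (a generic check; the pod name depends on your release name):

`$ kubectl get pods -n <namespace>`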
27 changes: 27 additions & 0 deletions k8s-agent-role-binding.yaml
@@ -0,0 +1,27 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-agent-codefresh
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-agent-codefresh-cluster-reader
rules:
  - apiGroups: ["", "apps", "extensions"] # "" indicates the core API group
    resources: ["*"]
    verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-agent-codefresh-cluster-reader
subjects:
  - kind: ServiceAccount
    namespace: kube-system
    name: k8s-agent-codefresh
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-agent-codefresh-cluster-reader
9 changes: 9 additions & 0 deletions k8s-agent-secrets.yaml
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: k8s-agent-secrets
type: Opaque
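# Note: stringData values are given as plain strings; Kubernetes base64-encodes them when the Secret is stored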
stringData:
  ca_cert: <base64 encoded kubernetes ca cert>
  cf_api_token: <codefresh api key>
  sa_token: <encoded kubernetes sa token>
18 changes: 18 additions & 0 deletions k8s-agent/templates/deployment.yaml
@@ -45,12 +45,30 @@ spec:
          value: {{ .Values.port | quote }}
        - name: CLUSTER_URL
          value: {{ .Values.clusterUrl | quote }}
{{- if .Values.useK8sSecrets }}
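{{- /* Read CLUSTER_TOKEN, CLUSTER_CA and API_TOKEN from the pre-created k8s-agent-secrets Secret */}}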
        - name: CLUSTER_TOKEN
          valueFrom:
            secretKeyRef:
              name: k8s-agent-secrets
              key: sa_token
        - name: CLUSTER_CA
          valueFrom:
            secretKeyRef:
              name: k8s-agent-secrets
              key: ca_cert
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: k8s-agent-secrets
              key: cf_api_token
{{- else }}
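{{- /* Fall back to credentials passed directly as chart values */}}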
        - name: CLUSTER_TOKEN
          value: {{ .Values.clusterToken | quote }}
        - name: CLUSTER_CA
          value: {{ .Values.clusterCA | quote }}
        - name: API_TOKEN
          value: {{ .Values.apiToken | quote }}
{{- end }}
        - name: CLUSTER_ID
          value: {{ .Values.clusterId | quote }}
        - name: API_URL
4 changes: 4 additions & 0 deletions k8s-agent/values.yaml
@@ -40,3 +40,7 @@ accountId: user
apiToken: ""

rbacEnabled: true

# Set to true to read the CA cert, SA token and Codefresh API token from the k8s-agent-secrets
# Secret instead of plain chart values, so they are not exposed in the Helm release.
# See k8s-agent-secrets.yaml and edit it to set the secret values.
useK8sSecrets: false