update README.md

pavars committed May 24, 2023
1 parent a042776 commit c4ecff8
Showing 5 changed files with 683 additions and 13 deletions.
60 changes: 47 additions & 13 deletions README.md
@@ -4,9 +4,11 @@

One of the following Kubernetes clusters must be installed and available locally; the commands can be run from WSL, Linux, or macOS.

* [Docker-Desktop](https://docs.docker.com/desktop/kubernetes/) Tested
* [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) Not tested
* [microk8s](https://microk8s.io/docs/getting-started) Not tested
* [minikube](https://minikube.sigs.k8s.io/docs/start/) Not tested

And
* [ArgoCD CLI](https://argo-cd.readthedocs.io/en/stable/getting_started/)
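
A quick sanity check of these prerequisites, assuming `kubectl` and the `argocd` CLI are already on the PATH (a sketch, not part of the original steps):

```bash
# Confirm that kubectl can reach the local cluster
kubectl cluster-info
kubectl get nodes

# Confirm that the ArgoCD CLI is installed
argocd version --client
```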

## Running
@@ -21,27 +23,46 @@
git clone https://github.com/pavars/masters.git && cd masters
# Install argocd (sometimes it has to be applied twice if the CRDs are not created in time)
kubectl apply -k argocd/overlays/global

# Add a Kubernetes annotation so that argocd is able to create the CRD resource
kubectl annotate crd prometheuses.monitoring.coreos.com argocd.argoproj.io/sync-options='Replace=true'

# Check the installation status (every pod should be READY 1/1)
kubectl get po -n argocd
# NAME READY STATUS RESTARTS AGE
# argocd-application-controller-0 1/1 Running 0 103s
# argocd-applicationset-controller-6d6fc9c56b-g6nr8 1/1 Running 0 103s
# argocd-dex-server-87568f444-w87rx 1/1 Running 0 103s
# argocd-notifications-controller-6f859d8d59-p4hnt 1/1 Running 0 103s
# argocd-redis-74d77964b-7slt5 1/1 Running 0 103s
# argocd-repo-server-f6d876c67-ptc5q 1/1 Running 0 103s
# argocd-server-7cd5b4d746-dfqpx 1/1 Running 0 103s


# Retrieve the default admin password
kubectl get secrets argocd-initial-admin-secret -o jsonpath='{.data.password}' -n argocd | base64 -d


# Enable local port forwarding into the Kubernetes environment to reach the dashboards and monitor the status
# Access the local ArgoCD UI from a web browser at https://127.0.0.1:8080 using the admin user
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
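
Before moving on, the Application resources created by the kustomize overlay can be inspected directly with kubectl; the exact application names depend on what the overlay defines, so treat this as a quick sanity check rather than part of the original steps:

```bash
# List the ArgoCD Application resources together with their sync and health status
kubectl get applications.argoproj.io -n argocd
```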

3. Synchronize the monitoring systems and the demo application

```bash
# If the ArgoCD CLI is installed, run the following commands to force the resource synchronization
argocd login localhost:8080
# WARNING: server is not configured with TLS. Proceed (y/n)? y
# Username: admin
# Password: <password from the previous commands>
# 'admin:login' logged in successfully
# Context 'localhost:8080' updated


# If the argocd CLI is not available, synchronize the kube-prometheus-stack-global application from the ArgoCD dashboard with the "Replace" option selected
argocd app sync main-app
argocd app sync loki-global
argocd app sync kube-prometheus-stack-global --replace --resource apiextensions.k8s.io:CustomResourceDefinition:prometheuses.monitoring.coreos.com
# If an error appears here, terminate the running operation from the command line:
# FATA[0000] rpc error: code = FailedPrecondition desc = another operation is already in progress
argocd app terminate-op kube-prometheus-stack-global

# Add the annotation to the prometheus CRD so that it keeps synchronizing
kubectl annotate crd prometheuses.monitoring.coreos.com argocd.argoproj.io/sync-options='Replace=true'
@@ -50,10 +71,23 @@
kubectl port-forward svc/grafana -n monitoring 8081:80

# Retrieve the Grafana admin password
kubectl get secrets grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d
```
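
Once the sync has finished, the monitoring stack itself can be checked the same way; the `monitoring` namespace matches the port-forward commands above, while the exact pod names depend on the chart versions (a sketch):

```bash
# Prometheus, Alertmanager, Grafana, and the operators should all reach READY
kubectl get po -n monitoring
```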

4. Enable local port forwarding into the Kubernetes environment to reach the dashboards

```bash
# Grafana
kubectl port-forward svc/grafana -n monitoring 8081:80

# Prometheus
kubectl port-forward svc/kube-prometheus-stack-prometheus -n monitoring 8082:9090

# Demo (the Podinfo application)
kubectl port-forward svc/demo-podinfo -n demo 8083:9898


```
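
With the port-forwards running, the endpoints can be smoke-tested from a second terminal. The local ports follow the commands above; the Podinfo port is assumed here to be remapped to 8083 so it does not clash with Prometheus:

```bash
# Grafana redirects to its login page
curl -I http://localhost:8081

# Prometheus readiness endpoint
curl http://localhost:8082/-/ready

# Podinfo demo application returns a JSON greeting
curl http://localhost:8083
```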

## Production environment

In the [production environment](cluster-state/production/), one of the ingress controllers (Nginx-ingress, Traefik, Istio, Linkerd, etc.) must be used. It acts as a reverse proxy and web server that serves TLS certificates and domain names, routes incoming traffic to the appropriate services, and provides authorization. The security configuration of the web server should be taken from the chosen tool's documentation and set up according to the requirements. The examples use the Nginx ingress controller; the configuration serves only as an example.
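
As an illustration only, one common way to install the Nginx ingress controller referenced above is through its Helm chart; the release name, namespace, and any hardening options are assumptions that have to be adapted to the actual requirements:

```bash
# Install ingress-nginx into its own namespace (a sketch, not a production-hardened setup)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```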
139 changes: 139 additions & 0 deletions cluster-state/production/monitoring/grafana.yaml
@@ -0,0 +1,139 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana-global
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://grafana.github.io/helm-charts
    chart: grafana
    targetRevision: "6.53.0"
    helm:
      values: |
        fullnameOverride: grafana
        deploymentStrategy:
          type: Recreate
        ingress:
          enabled: true
          annotations:
            kubernetes.io/ingress.class: nginx-public
          hosts:
            - grafana.example.com
          tls:
            - hosts:
                - grafana.example.com
              secretName: wildcard-example-cert
        # SSO using Google Authentication
        grafana.ini:
          auth.google:
            enabled: true
            client_id: $__file{/etc/secrets/auth_google/client_id}
            client_secret: $__file{/etc/secrets/auth_google/client_secret}
            scopes: https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
            auth_url: https://accounts.google.com/o/oauth2/auth
            token_url: https://accounts.google.com/o/oauth2/token
            allowed_domains: example.com
            hosted_domain: example.com
          auth:
            disable_login_form: true
          rendering:
            callback_url: https://grafana.example.com/
          log:
            filters: rendering:debug
          # The full public facing url you use in browser, used for redirects and emails
          server:
            root_url: https://grafana.example.com/
          users:
            auto_assign_org_role: Viewer
            viewers_can_edit: true
          # unified alerting section
          alerting:
            enabled: false
          unified_alerting:
            enabled: true
          dataproxy:
            logging: false
            timeout: 300
        # Kubernetes Secret resource that holds the Google OAuth credentials
        extraSecretMounts:
          - name: auth-google-secret-mount
            secretName: grafana-auth-google-secret
            defaultMode: 0440
            mountPath: /etc/secrets/auth_google
            readOnly: true
        sidecar:
          datasources:
            enabled: true
            defaultDatasourceEnabled: false
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: 1
            resource: configmap
            searchNamespace: ALL
            folder: /tmp/dashboards
            folderAnnotation: grafana_folder
            provider:
              foldersFromFilesStructure: true
        persistence:
          type: pvc
          enabled: true
          accessModes:
            - ReadWriteOnce
          size: 10Gi
        datasources:
          datasources.yaml:
            apiVersion: 1
            datasources:
              - name: "Loki"
                type: loki
                url: http://loki-gateway
              - name: Alertmanager
                type: alertmanager
                url: http://kube-prometheus-stack-alertmanager.monitoring.svc:9093
                basicAuth: false
                withCredentials: false
                jsonData:
                  implementation: prometheus
                access: proxy
        imageRenderer:
          enabled: true
          grafanaProtocol: https
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 50Mi
        serviceMonitor:
          enabled: true
          path: /metrics
          labels:
            prometheus: "main"
          interval: 1m
          scrapeTimeout: 30s
        resources:
          requests:
            cpu: "100m"
            memory: 1Gi
  destination:
    name: in-cluster
    namespace: monitoring
  syncPolicy:
    automated:
      selfHeal: true
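
The values above mount a Secret named grafana-auth-google-secret with client_id and client_secret files under /etc/secrets/auth_google, so a Secret of that name has to exist in the monitoring namespace. A sketch of how it could be created, with placeholder credentials:

```bash
# Create the Google OAuth secret that the chart mounts into the Grafana pod
kubectl create secret generic grafana-auth-google-secret \
  --namespace monitoring \
  --from-literal=client_id='<google-oauth-client-id>' \
  --from-literal=client_secret='<google-oauth-client-secret>'
```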