diff --git a/docs/cdev-vs-pulumi.md b/docs/cdev-vs-pulumi.md new file mode 100644 index 00000000..52f4282b --- /dev/null +++ b/docs/cdev-vs-pulumi.md @@ -0,0 +1,16 @@

# Cluster.dev vs. Pulumi and Crossplane

Pulumi and Crossplane are modern alternatives to Terraform.

These are great tools, and we admire alternative views on infrastructure management.

What makes Cluster.dev different is its purpose and limitations. Tools like Pulumi, Crossplane, and Terraform are aimed at managing clouds: creating new instances or clusters, cloud resources such as databases, and so on. Cluster.dev, in contrast, is designed to manage the whole infrastructure, including those tools as units. That means you can run Terraform, then run Pulumi, or Bash, or Ansible with variables received from Terraform, and then run Crossplane or something else. Cluster.dev was created to connect and manage all infrastructure tools.

With infrastructure tools, users are often locked into a single tool with its specific language or DSL. Cluster.dev, by contrast, allows a limitless number of units and workflow combinations between tools.

For now Cluster.dev primarily supports Terraform, mostly because we want to provide the best experience for the majority of users. Moreover, Terraform is a de facto industry standard and already has a lot of modules created by the community. To read more on the subject, please refer to the [Cluster.dev vs. Terraform](https://docs.cluster.dev/cdev-vs-terraform/) section.

If you or your company would like to use Pulumi or Crossplane with Cluster.dev, please feel free to contact us.

diff --git a/docs/cdev-vs-terraform.md b/docs/cdev-vs-terraform.md new file mode 100644 index 00000000..fc48556d --- /dev/null +++ b/docs/cdev-vs-terraform.md @@ -0,0 +1,32 @@

# Cluster.dev vs. Terraform

Terraform is a great and popular tool for creating infrastructures.
Founded more than five years ago, Terraform supports an impressive number of providers and resources.

Cluster.dev loves Terraform (and even supports export to plain Terraform code). Still, Terraform lacks a robust relation system, fast plans, automatic reconciliation, and configuration templates.

Cluster.dev, on the other hand, is management software that uses Terraform alongside other infrastructure tools as building blocks.

As a higher abstraction, Cluster.dev fixes all of the listed problems: it builds a single source of truth, and combines and orchestrates different infrastructure tools under the same roof.

Let's dig deeper into the problems that Cluster.dev solves.

## Internal relation

Terraform has pretty complex rendering logic, which affects the relations between its pieces. For example, you cannot define a provider for, let's say, k8s or Helm in the same codebase that creates a k8s cluster. This forces users to resort to internal hacks or employ a custom wrapper to maintain two different deploys, a problem we solved with Cluster.dev.

Another problem with internal relations concerns the huge execution plans that Terraform creates for massive projects. Users who tried to avoid this issue by using small Terraform repos faced the challenges of weak "remote state" relations and limited reconciliation: it was not possible to trigger a dependent module when the output of the module it relied upon changed.

Cluster.dev, on the contrary, allows you to trigger only the necessary parts, as it is a GitOps-first tool.

## Templating

The second limitation of Terraform is templating: Terraform doesn't support templating of the tf files it uses. This forces users into hacks that further tangle their Terraform files.
And while Cluster.dev uses templating, it allows you to include, let's say, a Jenkins Terraform module with custom inputs for the dev environment and exclude it for staging and production, all in the same codebase.

## Third Party

Terraform can execute Bash or Ansible. However, it provides few instruments to control where and how these external tools are run.

Cluster.dev, as a cloud native manager, gives all tools the same level of support and integration.

diff --git a/docs/cli-options.md b/docs/cli-options.md index 3d0f52da..c8bb9d92 100644 --- a/docs/cli-options.md +++ b/docs/cli-options.md @@ -22,7 +22,7 @@

* `--interactive` Use interactive mode for project generation.

-* `--list-templates` Show all available stack templates for project generator.
+* `--list-templates` Show all available templates for project generator.

## Destroy flags

diff --git a/docs/env-variables.md b/docs/env-variables.md new file mode 100644 index 00000000..43abe9fa --- /dev/null +++ b/docs/env-variables.md @@ -0,0 +1,3 @@

# Environment Variables

* `CDEV_TF_BINARY` Indicates which Terraform binary to use. Recommended for debugging during template development.

diff --git a/docs/examples-aws-eks.md b/docs/examples-aws-eks.md new file mode 100644 index 00000000..4c632120 --- /dev/null +++ b/docs/examples-aws-eks.md @@ -0,0 +1,137 @@

# AWS-EKS

This section provides information on how to create a new project on AWS with the [AWS-EKS](https://github.com/shalb/cdev-aws-eks) stack template.

## Prerequisites

1. Terraform version 0.13+.

2. AWS account.

3. [AWS CLI](#install-aws-client) installed.

4. kubectl installed.

5. [Cluster.dev client installed](https://docs.cluster.dev/get-started-install/).

### Authentication

Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:

!!! Info
    Please note that you have to use an IAM user with administrative permissions granted.
* **Environment variables**: provide your credentials via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, which represent your AWS Access Key and AWS Secret Key. You can also use the `AWS_DEFAULT_REGION` or `AWS_REGION` environment variable to set the region, if needed. Example usage:

    ```bash
    export AWS_ACCESS_KEY_ID="MYACCESSKEY"
    export AWS_SECRET_ACCESS_KEY="MYSECRETKEY"
    export AWS_DEFAULT_REGION="eu-central-1"
    ```

* **Shared Credentials File (recommended)**: set up an [AWS configuration file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) to specify your credentials.

    Credentials file `~/.aws/credentials` example:

    ```ini
    [cluster-dev]
    aws_access_key_id = MYACCESSKEY
    aws_secret_access_key = MYSECRETKEY
    ```

    Config file `~/.aws/config` example:

    ```ini
    [profile cluster-dev]
    region = eu-central-1
    ```

    Then export the `AWS_PROFILE` environment variable.

    ```bash
    export AWS_PROFILE=cluster-dev
    ```

### Install AWS client

If you don't have the AWS CLI installed, refer to the AWS CLI [official installation guide](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html), or use the commands from the example:

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws s3 ls
```

### Create S3 bucket

Cluster.dev uses an S3 bucket for storing states. Create the bucket with the command:

```bash
aws s3 mb s3://cdev-states
```

### DNS Zone

The AWS-EKS stack template example requires a Route 53 hosted zone. Options:

1. You already have a Route 53 hosted zone.

2. Create a new hosted zone using a [Route 53 documentation example](https://docs.aws.amazon.com/cli/latest/reference/route53/create-hosted-zone.html#examples).

3. Use the "cluster.dev" domain for zone delegation.

## Create project

1. 
Configure [access to AWS](#authentication) and export the required variables.

2. Create a project directory locally, cd into it, and execute the command:

    ```bash
    cdev project create https://github.com/shalb/cdev-aws-eks
    ```

    This will create a new empty project.

3. Edit the variables in the example's files, if necessary:

    * project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See [project configuration docs](https://docs.cluster.dev/structure-project/).

    * backend.yaml - configures the backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See [backend docs](https://docs.cluster.dev/structure-backend/).

    * infra.yaml - describes the stack configuration. See [stack docs](https://docs.cluster.dev/structure-stack/).

4. Run `cdev plan` to build the project. In the output you will see the infrastructure that is going to be created after running `cdev apply`.

    !!! note
        Prior to running `cdev apply`, make sure to look through the infra.yaml file and replace the commented fields with real values. If you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

5. Run `cdev apply`.

    !!! tip
        We highly recommend running `cdev apply` in debug mode so that you can see the Cluster.dev logging in the output: `cdev apply -l debug`

6. After `cdev apply` is successfully executed, the output will contain the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypted password that you have generated for infra.yaml.

7. The output will also display a command for getting the kubeconfig and connecting to your Kubernetes cluster.

8. 
Destroy the cluster and all created resources with the command `cdev destroy`.

## Resources

Resources to be created within the project:

* *(optional, if you use the cluster.dev domain)* Route53 zone **.cluster.dev**

* *(optional, if vpc_id is not set)* VPC for the EKS cluster

* EKS Kubernetes cluster with addons:

    * cert-manager

    * ingress-nginx

    * external-dns

    * argocd

* AWS IAM roles for EKS IRSA cert-manager and external-dns

diff --git a/docs/examples-develop-stack-template.md b/docs/examples-develop-stack-template.md new file mode 100644 index 00000000..79196389 --- /dev/null +++ b/docs/examples-develop-stack-template.md @@ -0,0 +1,112 @@

# Develop Stack Template

Cluster.dev gives you the freedom to modify existing templates or create your own. You can add inputs and outputs to already preset units, take the output of one unit and send it as an input to another, or write new units and add them to a template.

In our example we shall use the [tmpl-development](https://github.com/shalb/cluster.dev/tree/master/.cdev-metadata/generator) sample to create a project. Then we shall modify its stack template by adding new parameters to the units.

## Workflow steps

1. Create a project following the steps described in the [Create Own Project](https://docs.cluster.dev/get-started-create-project/) section.

2. To start working with the stack template, cd into the template directory and open the template.yaml file: ./template/template.yaml.

    Our sample stack template contains three units. Let's elaborate on each of them and see how we can modify them.

3. 
The `create-bucket` unit uses a remote [Terraform module](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest) to create an S3 bucket on AWS:

    ```yaml
    name: create-bucket
    type: terraform
    providers: *provider_aws
    source: terraform-aws-modules/s3-bucket/aws
    version: "2.9.0"
    inputs:
      bucket: {{ .variables.bucket_name }}
      force_destroy: true
    ```

    We can modify the unit by adding more parameters in [inputs](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest?tab=inputs). For example, let's add some tags using the [`insertYAML`](https://docs.cluster.dev/stack-templates-functions/) function:

    ```yaml
    name: create-bucket
    type: terraform
    providers: *provider_aws
    source: terraform-aws-modules/s3-bucket/aws
    version: "2.9.0"
    inputs:
      bucket: {{ .variables.bucket_name }}
      force_destroy: true
      tags: {{ insertYAML .variables.tags }}
    ```

    Now we can see the tags in infra.yaml:

    ```yaml
    name: cdev-tests-local
    template: ./template/
    kind: Stack
    backend: aws-backend
    variables:
      bucket_name: "tmpl-dev-test"
      region: {{ .project.variables.region }}
      organization: {{ .project.variables.organization }}
      name: "Developer"
      tags:
        tag1_name: "tag 1 value"
        tag2_name: "tag 2 value"
    ```

    To check the configuration, run the `cdev plan --tf-plan` command. In the output you can see that Terraform will create a bucket with the defined tags. Run `cdev apply -l debug` to have the configuration applied.

4. The `create-s3-object` unit uses a local Terraform module to get the bucket ID and save data inside the bucket. 
The Terraform module is stored in the s3-file directory (main.tf file):

    ```yaml
    name: create-s3-object
    type: terraform
    providers: *provider_aws
    source: ./s3-file/
    depends_on: this.create-bucket
    inputs:
      bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
      data: |
        The data that will be saved in the S3 bucket after being processed by the template engine.
        Organization: {{ .variables.organization }}
        Name: {{ .variables.name }}
    ```

    The unit sends two parameters. The *bucket_name* is retrieved from the `create-bucket` unit by means of the [`remoteState`](https://docs.cluster.dev/stack-templates-functions/) function. The *data* parameter uses templating to obtain the *Organization* and *Name* variables from infra.yaml.

    Let's add the *bucket_regional_domain_name* variable to the *data* input to obtain the region-specific domain name of the bucket:

    ```yaml
    name: create-s3-object
    type: terraform
    providers: *provider_aws
    source: ./s3-file/
    depends_on: this.create-bucket
    inputs:
      bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
      data: |
        The data that will be saved in the S3 bucket after being processed by the template engine.
        Organization: {{ .variables.organization }}
        Name: {{ .variables.name }}
        Bucket regional domain name: {{ remoteState "this.create-bucket.s3_bucket_bucket_regional_domain_name" }}
    ```

    Check the configuration by running the `cdev plan` command; apply it with `cdev apply -l debug`.

5. 
The `print_outputs` unit retrieves data from the two other units by means of the [`remoteState`](https://docs.cluster.dev/stack-templates-functions/) function: the *bucket_domain* variable from the `create-bucket` unit and *s3_file_info* from the `create-s3-object` unit:

    ```yaml
    name: print_outputs
    type: printer
    inputs:
      bucket_domain: {{ remoteState "this.create-bucket.s3_bucket_bucket_domain_name" }}
      s3_file_info: "To get file use: aws s3 cp {{ remoteState "this.create-s3-object.file_s3_url" }} ./my_file && cat my_file"
    ```

6. Having finished your work, run `cdev destroy` to eliminate the created resources.

diff --git a/docs/examples-do-k8s.md b/docs/examples-do-k8s.md new file mode 100644 index 00000000..8f841181 --- /dev/null +++ b/docs/examples-do-k8s.md @@ -0,0 +1,97 @@

# DO-K8s

This section provides information on how to create a new project on DigitalOcean with the [DO-k8s](https://github.com/shalb/cdev-do-k8s) stack template.

## Prerequisites

1. Terraform version 0.13+.

2. DigitalOcean account.

3. [doctl installed](https://docs.digitalocean.com/reference/doctl/how-to/install/).

4. [Cluster.dev client installed](https://docs.cluster.dev/get-started-install/).

### Authentication

Create an [access token](https://www.digitalocean.com/docs/apis-clis/api/create-personal-access-token/) for a user.

!!! Info
    Make sure to grant the user administrative permissions.

For details on using a DO Spaces bucket as a backend, see [here](https://www.digitalocean.com/community/questions/spaces-as-terraform-backend).

### DO access configuration

1. Install `doctl`. For more information, see [the official documentation](https://www.digitalocean.com/docs/apis-clis/doctl/how-to/install/).

    ```bash
    cd ~
    wget https://github.com/digitalocean/doctl/releases/download/v1.57.0/doctl-1.57.0-linux-amd64.tar.gz
    tar xf ~/doctl-1.57.0-linux-amd64.tar.gz
    sudo mv ~/doctl /usr/local/bin
    ```

2. 
Export your `DIGITALOCEAN_TOKEN`; for details, see [here](https://www.digitalocean.com/docs/apis-clis/api/create-personal-access-token/).

    ```bash
    export DIGITALOCEAN_TOKEN="MyDIGITALOCEANToken"
    ```

3. Export the `SPACES_ACCESS_KEY_ID` and `SPACES_SECRET_ACCESS_KEY` environment variables; for details, see [here](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key).

    ```bash
    export SPACES_ACCESS_KEY_ID="dSUGdbJqa6xwJ6Fo8qV2DSksdjh..."
    export SPACES_SECRET_ACCESS_KEY="TEaKjdj8DSaJl7EnOdsa..."
    ```

4. [Create a Spaces bucket](https://www.digitalocean.com/docs/spaces/quickstart/#create-a-space) for Terraform states in the chosen region (in the example we used the 'cdev-data' bucket name).

5. [Create a domain](https://www.digitalocean.com/docs/networking/dns/how-to/add-domains/) in the DigitalOcean domains service.

!!! Info
    In the project generated by default we used the 'k8s.cluster.dev' zone as an example. Please make sure to change it.

## Create project

1. Configure [access to DigitalOcean](#do-access-configuration) and export the required variables.

2. Create a project directory locally, cd into it, and execute the command:

    ```bash
    cdev project create https://github.com/shalb/cdev-do-k8s
    ```

    This will create a new empty project.

3. Edit the variables in the example's files, if necessary:

    * project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See [project configuration docs](https://docs.cluster.dev/structure-project/).

    * backend.yaml - configures the backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See [backend docs](https://docs.cluster.dev/structure-backend/).

    * infra.yaml - describes the stack configuration. See [stack docs](https://docs.cluster.dev/structure-stack/).

4. Run `cdev plan` to build the project. 
In the output you will see the infrastructure that is going to be created after running `cdev apply`.

    !!! note
        Prior to running `cdev apply`, make sure to look through the infra.yaml file and replace the commented fields with real values. If you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

5. Run `cdev apply`.

    !!! tip
        We highly recommend running `cdev apply` in debug mode so that you can see the Cluster.dev logging in the output: `cdev apply -l debug`

6. After `cdev apply` is successfully executed, the output will contain the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypted password that you have generated for infra.yaml.

7. The output will also display a command for getting the kubeconfig and connecting to your Kubernetes cluster.

8. Destroy the cluster and all created resources with the command `cdev destroy`.

## Resources

Resources to be created within the project:

* *(optional, if vpc_id is not set)* VPC for the Kubernetes cluster
* DO Kubernetes cluster with addons:
    * cert-manager
    * argocd

diff --git a/docs/examples-modify-aws-eks.md b/docs/examples-modify-aws-eks.md new file mode 100644 index 00000000..a558af95 --- /dev/null +++ b/docs/examples-modify-aws-eks.md @@ -0,0 +1,166 @@

# Modify AWS-EKS

Let's assume you want to make changes to the AWS-EKS stack template. In the example below we have customized the existing template by adding some features and removing functionality that we don't need.

## Workflow steps

1. Go to the GitHub page via the [AWS-EKS link](https://github.com/shalb/cdev-aws-eks) and download the stack template.

2. 
If you are not planning to use some of the preset addons, edit aws-eks.yaml to exclude them. In our case, these were cert-manager, cert-manager-issuer, ingress-nginx, argocd, and argocd_apps.

3. In order to dynamically retrieve the AWS account ID parameter, we have added a data block to our stack template:

    ```yaml
    - name: data
      type: terraform
      providers: *provider_aws
      depends_on: this.eks
      source: ./terraform-submodules/data/
    ```

    ```yaml
    {{ remoteState "this.data.account_id" }}
    ```

    The block is also used in the eks_auth ConfigMap and expands its functionality with groups of users:

    ```yaml
    apiVersion: v1
    data:
      mapAccounts: |
        []
      mapRoles: |
        - "groups":
          - "system:bootstrappers"
          - "system:nodes"
          "rolearn": "{{ remoteState "this.eks.worker_iam_role_arn" }}"
          "username": "system:node:{{ "{{EC2PrivateDNSName}}" }}"
        - "groups":
          - "system:masters"
          "rolearn": "arn:aws:iam::{{ remoteState "this.data.account_id" }}:role/OrganizationAccountAccessRole"
          "username": "general-role"
      mapUsers: |
        - "groups":
          - "system:masters"
          "userarn": "arn:aws:iam::{{ remoteState "this.data.account_id" }}:user/jenkins-eks"
          "username": "jenkins-eks"
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    ```

    The data block configuration in main.tf: ```data "aws_caller_identity" "current" {}```

    In output.tf:

    ```hcl
    output "account_id" {
      value = data.aws_caller_identity.current.account_id
    }
    ```

4. As we decided to use the Traefik Ingress controller instead of basic Nginx, we spun up two load balancers (an internet-facing ALB for public ingresses and an internal ALB for private ingresses) with the security groups necessary for their work, and described them in the albs unit. 
The unit configuration within the template:

    ```yaml
    {{- if .variables.ingressControllerEnabled }}
    - name: albs
      type: terraform
      providers: *provider_aws
      source: ./terraform-submodules/albs/
      inputs:
        main_domain: {{ .variables.alb_main_domain }}
        main_external_domain: {{ .variables.alb_main_external_domain }}
        main_vpc: {{ .variables.vpc_id }}
        acm_external_certificate_arn: {{ .variables.alb_acm_external_certificate_arn }}
        private_subnets: {{ insertYAML .variables.private_subnets }}
        public_subnets: {{ insertYAML .variables.public_subnets }}
        environment: {{ .name }}
    {{- end }}
    ```

5. We have also created a dedicated unit for testing Ingress through Route 53 records:

    ```hcl
    data "aws_route53_zone" "existing" {
      name         = var.domain
      private_zone = var.private_zone
    }

    module "records" {
      source  = "terraform-aws-modules/route53/aws//modules/records"
      version = "~> 2.0"

      zone_id      = data.aws_route53_zone.existing.zone_id
      private_zone = var.private_zone

      records = [
        {
          name = "test-ingress-eks"
          type = "A"
          alias = {
            name                   = var.private_lb_dns_name
            zone_id                = var.private_lb_zone_id
            evaluate_target_health = false
          }
        },
        {
          name = "test-ingress-2-eks"
          type = "A"
          alias = {
            name                   = var.private_lb_dns_name
            zone_id                = var.private_lb_zone_id
            evaluate_target_health = false
          }
        }
      ]
    }
    ```

    The unit configuration within the template:

    ```yaml
    {{- if .variables.ingressControllerRoute53Enabled }}
    - name: route53_records
      type: terraform
      providers: *provider_aws
      source: ./terraform-submodules/route53_records/
      inputs:
        private_zone: {{ .variables.private_zone }}
        domain: {{ .variables.domain }}
        private_lb_dns_name: {{ remoteState "this.albs.eks_ingress_lb_dns_name" }}
        public_lb_dns_name: {{ remoteState "this.albs.eks_public_lb_dns_name" }}
        private_lb_zone_id: {{ remoteState "this.albs.eks_ingress_lb_zone_id" }}
    {{- end }}
    ```

6. 
Also, to map service accounts to AWS IAM roles, we have created a separate template for IRSA. Example configuration for a cluster autoscaler:

    ```yaml
    kind: StackTemplate
    name: aws-eks
    units:
      {{- if .variables.cluster_autoscaler_irsa.enabled }}
      - name: iam_assumable_role_autoscaling_autoscaler
        type: terraform
        source: "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
        version: "~> 3.0"
        providers: *provider_aws
        inputs:
          role_name: "eks-autoscaling-autoscaler-{{ .variables.cluster_name }}"
          create_role: true
          role_policy_arns:
            - {{ remoteState "this.iam_policy_autoscaling_autoscaler.arn" }}
          oidc_fully_qualified_subjects: {{ insertYAML .variables.cluster_autoscaler_irsa.subjects }}
          provider_url: {{ .variables.provider_url }}
      - name: iam_policy_autoscaling_autoscaler
        type: terraform
        source: "terraform-aws-modules/iam/aws//modules/iam-policy"
        version: "~> 3.0"
        providers: *provider_aws
        inputs:
          name: AllowAutoScalingAccessforClusterAutoScaler-{{ .variables.cluster_name }}
          policy: {{ insertYAML .variables.cluster_autoscaler_irsa.policy }}
      {{- end }}
    ```

Cluster.dev enables you to create your own stack templates using ready-made samples as a basis. In our example we have modified the prepared AWS-EKS stack template by adding a customized data block and excluding some addons.

We have also changed the template's structure by placing the Examples directory into a separate repository, in order to decouple the abstract template from its implementation for concrete setups. This enabled us to use the template via Git and mark the template's versions with Git tags.

diff --git a/docs/generators-overview.md b/docs/generators-overview.md new file mode 100644 index 00000000..0ceb7aa7 --- /dev/null +++ b/docs/generators-overview.md @@ -0,0 +1,37 @@

# Overview

Generators are part of the Cluster.dev functionality.
They enable users to create parts of infrastructure just by filling in stack variables in script dialogues, with no infrastructure coding required. This simplifies the creation of new stacks for developers who may lack Ops skills, and can be useful for quick infrastructure deployment from ready-made parts (units).

Generators create a project from a preset profile: a set of data predefined as a project, with variables for the stack template. Each template may have a profile for the generator, which is stored in the .cdev-metadata/generator directory.

## How it works

The generator creates `backend.yaml`, `project.yaml`, and `infra.yaml` by populating the files with user-entered values. The stack variables to prompt for are listed in config.yaml under `options`:

```yaml
options:
  - name: name
    description: Project name
    regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,32}$"
    default: "demo-project"
  - name: organization
    description: Organization name
    regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,64}$"
    default: "my-organization"
  - name: region
    description: DigitalOcean region
    regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,32}$"
    default: "ams3"
  - name: domain
    description: DigitalOcean DNS zone domain name
    regex: "^[a-zA-Z0-9][a-zA-Z0-9-\\.]{1,61}[a-zA-Z0-9]\\.[a-zA-Z]{2,}$"
    default: "cluster.dev"
  - name: bucket_name
    description: DigitalOcean spaces bucket name for states
    regex: "^[a-zA-Z][a-zA-Z0-9\\-]{0,64}$"
    default: "cdev-state"
```

In `options` you can define default values and add other variables to the generator's list. The variables included by default are project name, organization name, region, domain, and bucket name.

In config.yaml you can also define a help message text.
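For illustration, after a user accepts the defaults above, the generated project config would look roughly like this. Treat it as a sketch: the backend name is an assumption, and the exact field set of real generator output may differ. The field layout follows the `project.yaml` structure shown in the Working Principles section of these docs:

```yaml
# Sketch of a generated project.yaml, assuming the default answers above
name: demo-project
kind: project
backend: do-backend        # assumed backend name, defined in backend.yaml
variables:
  organization: my-organization
  region: ams3
  domain: cluster.dev
  bucket_name: cdev-state
```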
diff --git a/docs/get-started-create-project.md b/docs/get-started-create-project.md new file mode 100644 index 00000000..bf8b6a12 --- /dev/null +++ b/docs/get-started-create-project.md @@ -0,0 +1,23 @@

# Create New Project

## Quick start

In our example we shall use the [tmpl-development](https://github.com/shalb/cluster.dev/tree/master/.cdev-metadata/generator) sample to create a new project on the AWS cloud.

1. Install the [Cluster.dev client](https://docs.cluster.dev/get-started-install/).

2. Create a project directory, cd into it, and generate a project with the command:

    ```cdev project create https://github.com/shalb/cluster.dev tmpl-development```

3. Export environment variables via an [AWS profile](https://docs.cluster.dev/examples-aws-eks/#authentication).

4. Run `cdev plan` to build the project and see the infrastructure that will be created.

5. Run `cdev apply` to deploy the stack.

## Workflow diagram

The diagram below describes the steps of creating a new project without generators.

![create new project diagram](./images/create-project-diagram.png)

diff --git a/docs/get-started-install-from-sources.md b/docs/get-started-install-from-sources.md new file mode 100644 index 00000000..a71a18e3 --- /dev/null +++ b/docs/get-started-install-from-sources.md @@ -0,0 +1,40 @@

# Install From Sources

## Download from release

Each stable version of Cluster.dev has a binary that can be downloaded and installed manually. The documentation is suitable for **v0.4.0 or higher** of the Cluster.dev client.

Installation example for Linux amd64:

1. Download your desired version from the [releases page](https://github.com/shalb/cluster.dev/releases).

2. Unpack it.

3. Find the Cluster.dev binary in the unpacked directory.

4. Move the binary to the bin folder (`/usr/local/bin/`).

## Building from source

Go version 1.16 or higher is required; see the [Golang installation instructions](https://golang.org/doc/install).
To build the Cluster.dev client from source:

1. Clone the Cluster.dev Git repo:

    ```bash
    git clone https://github.com/shalb/cluster.dev/
    ```

2. Build the binary:

    ```bash
    cd cluster.dev/ && make
    ```

3. Check the Cluster.dev binary and move it to the bin folder:

    ```bash
    ./bin/cdev --help
    mv ./bin/cdev /usr/local/bin/
    ```

diff --git a/docs/get-started-install.md b/docs/get-started-install.md new file mode 100644 index 00000000..864c90ea --- /dev/null +++ b/docs/get-started-install.md @@ -0,0 +1,14 @@

# Install From Script

!!! tip

    This is the easiest way to have the Cluster.dev client installed. For other options, please refer to the [Install From Sources](https://docs.cluster.dev/get-started-install-from-sources/) section.

Cluster.dev has an installer script that downloads the latest version of the Cluster.dev client and installs it locally.
Fetch the script and execute it locally with the command:

```bash
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
```

diff --git a/docs/get-started-prerequisites.md b/docs/get-started-prerequisites.md new file mode 100644 index 00000000..3479934e --- /dev/null +++ b/docs/get-started-prerequisites.md @@ -0,0 +1,27 @@

# Prerequisites

To start using Cluster.dev, please make sure that you meet the following preconditions.

Supported operating systems:

* Linux amd64

* Darwin amd64

Required software installed:

* Git console client

* Terraform

## Terraform

The Cluster.dev client uses the Terraform binary. The required Terraform version is 0.13 or higher. Refer to the [Terraform installation instructions](https://www.terraform.io/downloads.html) to install Terraform.

Terraform installation example for Linux amd64:

```bash
curl -O https://releases.hashicorp.com/terraform/0.14.7/terraform_0.14.7_linux_amd64.zip
unzip terraform_0.14.7_linux_amd64.zip
mv terraform /usr/local/bin/
```

diff --git a/docs/how-does-cdev-work.md b/docs/how-does-cdev-work.md new file mode 100644 index 00000000..5bac01e0 --- /dev/null +++ b/docs/how-does-cdev-work.md @@ -0,0 +1,97 @@

# Cluster.dev - Working Principles

With Cluster.dev you download a predefined stack template, set the variables, then render and deploy a whole stack.

Capabilities:

- Re-using all existing private and public Terraform modules and Helm charts.
- Applying changes to multiple infrastructures concurrently.
- Using the same global variables and secrets across different infrastructures, clouds, and technologies.
- Templating anything with Go template functions, even Terraform modules, in Helm-style templates.
- Creating and managing secrets with SOPS or cloud secret storages.
- Generating ready-to-use Terraform code.
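To give a taste of the templating capability, a stack template unit can mix plain values with Go template expressions. The functions used here, `insertYAML` and `remoteState`, are the ones shown in the stack template examples elsewhere in these docs; the unit and variable names are illustrative:

```yaml
# Illustrative unit inputs, rendered by the template engine before Terraform runs
inputs:
  bucket: {{ .variables.bucket_name }}            # plain stack variable from infra.yaml
  tags: {{ insertYAML .variables.tags }}          # inserts a whole YAML subtree
  domain: {{ remoteState "this.dns.zone_name" }}  # output of another unit (hypothetical)
```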
+
+## Basic diagram
+
+![cdev diagram](./images/cdev-base-diagram.png)
+
+## Variables
+
+Cluster.dev uses global and stack-specific variables.
+
+Global variables are defined on the project level. They can be common for several stacks reconciled within a project, and passed across them. Example of `project.yaml`:
+
+```yaml
+name: my_project
+kind: project
+backend: aws-backend
+variables:
+  organization: shalb
+  region: eu-central-1
+  state_bucket_name: cdev-states
+exports:
+  AWS_PROFILE: cluster-dev
+```
+
+From `project.yaml` the variable value is passed to `infra.yaml`, from where it is applied to a stack template.
+
+Global variables can be used in all configurations of stacks and backends within a given project. To refer to a global variable, use the `{{ .project.variables.KEY_NAME }}` syntax, where `KEY_NAME` stands for the variable name defined in `project.yaml` and will be replaced by its value. Example of global variables in `infra.yaml`:
+
+```yaml
+name: eks-demo
+template: https://github.com/shalb/cdev-aws-eks?ref=v0.2.0
+kind: Stack
+backend: aws-backend
+variables:
+  region: {{ .project.variables.region }}
+  organization: {{ .project.variables.organization }}
+  domain: cluster.dev
+  instance_type: "t3.medium"
+  eks_version: "1.20"
+```
+
+Stack-specific variables are defined in `infra.yaml` and relate to a concrete infrastructure. They can be used solely in stack templates that are bound to this stack.
+
+## How to use Cluster.dev
+
+Cluster.dev is quite a powerful framework that can be operated in several modes.
+
+### Deploy infrastructures from existing stack templates
+
+This mode, also known as **user mode**, gives you the ability to launch ready-to-use infrastructures from prepared stack templates by just adding your cloud credentials and setting variables (such as name, zones, number of instances, etc.).
+You don't need to know background tooling like Terraform or Helm, it's just as simple as downloading a sample and launching commands. 
Here are the steps:
+
+* Install the Cluster.dev binary
+* Choose and download a stack template
+* Set cloud credentials
+* Define variables for the stack template
+* Run Cluster.dev and get a cloud infrastructure
+
+### Create your own stack template
+
+In this mode you can create your own stack templates. Having your own template enables you to launch or copy environments (like dev/stage/prod) with the same template.
+You'll be able to develop and propagate changes together with your team members, just using Git.
+Operating Cluster.dev in the **developer mode** requires some prerequisites. The most important one is understanding Terraform and how to work with its modules. Knowledge of the `go-template` syntax or `Helm` is advisable but not mandatory.
+
+The easiest way to start is to download/clone a sample template project like [AWS-EKS](https://github.com/shalb/cdev-aws-eks)
+and launch an infrastructure from one of the examples.
+Then you can edit some required variables, and play around by changing values in the template itself.
+
+#### Workflow
+
+Let's assume you are starting a new infrastructure project. Let's see what your workflow would look like.
+
+1. Define what kind of infrastructure pattern you need to achieve.
+
+    a. What Terraform modules it would include (for example: VPC and subnet definitions, IAM roles).
+
+    b. Whether you need to apply any Bash scripts before and after the module, or inside as pre/post-hooks.
+
+    c. If you are using Kubernetes, check what controllers would be deployed and how (by Helm chart or K8s manifests).
+
+2. Check if there is any similar sample template that already exists.
+
+3. Clone the stack template locally.
+
+4. Apply it.
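
For orientation, a minimal stack template built from the unit types described in these docs might look roughly like this. This is an illustrative sketch only: the unit names, module source and inputs are hypothetical, not taken from a real template:

```yaml
# Hypothetical minimal template.yaml: one Terraform unit plus a printer unit.
units:
  - name: vpc                               # illustrative unit name
    type: terraform
    source: terraform-aws-modules/vpc/aws   # any public Terraform module would do
    inputs:
      name: {{ .variables.vpc_name }}       # value comes from the stack's infra.yaml
  - name: outputs
    type: printer
    inputs:
      vpc_id: {{ remoteState "this.vpc.vpc_id" }}   # output of the unit above
```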
+
diff --git a/docs/howto-tf-versions.md b/docs/howto-tf-versions.md
new file mode 100644
index 00000000..4240c309
--- /dev/null
+++ b/docs/howto-tf-versions.md
@@ -0,0 +1,31 @@
+# Use Different Terraform Versions
+
+By default, Cluster.dev runs the version of Terraform that is installed on the local machine. If you need to switch between versions, use a third-party utility such as [Terraform Switcher](https://github.com/warrensbox/terraform-switcher/).
+
+Example of `tfswitch` usage:
+
+```bash
+tfswitch 0.15.5
+
+cdev apply
+```
+This will tell Cluster.dev to use Terraform v0.15.5.
+
+Use the [`CDEV_TF_BINARY`](https://docs.cluster.dev/env-variables/) variable to indicate which Terraform binary to use.
+
+!!! Info
+    It is recommended to use the variable for debugging and template development only.
+
+    You can pin it in `project.yaml`:
+
+```yaml
+  name: dev
+  kind: Project
+  backend: aws-backend
+  variables:
+    organization: cluster-dev
+    region: eu-central-1
+    state_bucket_name: cluster-dev-gha-tests
+  exports:
+    CDEV_TF_BINARY: "terraform_14"
+```
diff --git a/docs/images/create-project-diagram.png b/docs/images/create-project-diagram.png
new file mode 100644
index 00000000..de76b2c5
Binary files /dev/null and b/docs/images/create-project-diagram.png differ
diff --git a/docs/index.md b/docs/index.md
index e000904a..cf2def21 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,35 +1,17 @@
 # Cluster.dev - Cloud Infrastructures' Management Tool
 
-## What is it?
-
 Cluster.dev is an open-source tool designed to manage cloud native infrastructures with simple declarative manifests - stack templates. It allows you to describe a whole infrastructure and deploy it with a single tool.
 
-The stack templates could be based on Terraform modules, Kubernetes manifests, Shell scripts, Helm charts, Kustomize and ArgoCD/Flux applications, OPA policies etc. 
Cluster.dev sticks those components together so that you could deploy, test and distribute a whole set of components with pinned versions. - -## Principle Diagram - -![cdev diagram](./images/cdev-base-diagram.png) +The stack templates could be based on Terraform modules, Kubernetes manifests, Shell scripts, Helm charts and ArgoCD/Flux applications, OPA policies, etc. Cluster.dev sticks those components together so that you could deploy, test and distribute a whole set of components with pinned versions. ## Quick Preview ![demo video cdev](./images/demo.gif) -## How does it work? - -With cluster.dev you create or download a predefined stack template, set the variables, then render and deploy a whole stack. - -Capabilities: - -- Re-using all existing Terraform private and public modules and Helm Charts. -- Applying parallel changes in multiple infrastructures concurrently. -- Using the same global variables and secrets across different infrastructures, clouds and technologies. -- Templating anything with Go-template function, even Terraform modules in Helm style templates. -- Create and manage secrets with SOPS or cloud secret storages. -- Generate a ready-to-use Terraform code. - ## Features - Based on DevOps and SRE best-practices. - Simple CI/CD integration. - GitOps cluster management and application delivery. - Automated provisioning of Kubernetes clusters in AWS, Azure, DO and GCE. + diff --git a/docs/stack-templates-functions.md b/docs/stack-templates-functions.md new file mode 100644 index 00000000..648fc9a0 --- /dev/null +++ b/docs/stack-templates-functions.md @@ -0,0 +1,65 @@ +# Functions + +1) [Base Go template language functions](https://golang.org/pkg/text/template/#hdr-Functions). + +2) [Sprig functions](https://masterminds.github.io/sprig/). + +3) Enhanced functions: all functions described above allow you to modify the text of a stack template. Apart from these, some special enhanced functions are available. They cannot be used everywhere. 
The functions are integrated with the program logic and with the yaml syntax:
+
+* `insertYAML` - passes a yaml block as a value of the target yaml template. **Argument**: data to pass, any value or reference to a block. **Allowed use**: only as a full yaml value, in unit `inputs`. Example:
+
+    Source yaml:
+
+    ```yaml
+    values:
+      node_groups:
+        - name: ng1
+          min_size: 1
+          max_size: 5
+        - name: ng2
+          max_size: 2
+          type: spot
+    ```
+
+    Target yaml template:
+
+    ```yaml
+    units:
+      - name: k3s
+        type: terraform
+        node_groups: {{ insertYAML .values.node_groups }}
+    ```
+
+    Rendered stack template:
+
+    ```yaml
+    units:
+      - name: k3s
+        type: terraform
+        node_groups:
+          - name: ng1
+            min_size: 1
+            max_size: 5
+          - name: ng2
+            max_size: 2
+            type: spot
+    ```
+
+* `remoteState` - is used for passing data across units and stacks, and can be used in pre/post hooks. **Argument**: string, path to the remote state consisting of 3 parts separated by a dot: `"stack_name.unit_name.output_name"`. Since the name of the stack is unknown inside the stack template, you can use "this" instead: `"this.unit_name.output_name"`. **Allowed use**:
+
+    * all unit types: in `inputs`;
+
+    * all unit types: in unit pre/post hooks;
+
+    * in Kubernetes modules: in Kubernetes manifests.
+
+* `cidrSubnet` - calculates a subnet address within a given IP network address prefix. Same as the [Terraform function](https://www.terraform.io/docs/language/functions/cidrsubnet.html). 
Example:
+
+    Source:
+    ```bash
+    {{ cidrSubnet "172.16.0.0/12" 4 2 }}
+    ```
+    Rendered:
+    ```bash
+    172.18.0.0/16
+    ```
diff --git a/docs/stack-templates-list.md b/docs/stack-templates-list.md
new file mode 100644
index 00000000..7666c21f
--- /dev/null
+++ b/docs/stack-templates-list.md
@@ -0,0 +1,9 @@
+# Stack Templates List
+
+Currently there are 3 types of stack templates available:
+
+  * [AWS-K3s](https://github.com/shalb/cdev-aws-k3s)
+  * [AWS-EKS](https://github.com/shalb/cdev-aws-eks)
+  * [DO-K8s](https://github.com/shalb/cdev-do-k8s)
+
+For more information on the templates, please refer to the [Examples](https://docs.cluster.dev/examples-aws-eks/) section.
diff --git a/docs/stack-templates-overview.md b/docs/stack-templates-overview.md
new file mode 100644
index 00000000..0b917301
--- /dev/null
+++ b/docs/stack-templates-overview.md
@@ -0,0 +1,17 @@
+# Overview
+
+A stack template is a yaml file that tells Cluster.dev which units to run and how. It is a core Cluster.dev resource that accounts for the tool's flexibility. Stack templates use the Go template language to let you customize and select the units you want to run.
+
+The stack template's config files are stored within the stack template directory, which can be located either locally or in a Git repo. Cluster.dev reads all `./*.yaml` files from the directory (non-recursively), renders the stack template with the project's data, parses the yaml and loads units - the most primitive elements of a stack template. For more details on units, please refer to the [Units](https://docs.cluster.dev/units-overview/) section.
+
+A stack template represents a yaml structure with an array of different invocation units. Common view:
+
+```yaml
+units:
+  - unit1
+  - unit2
+  - unit3
+  ...
+```
+
+Stack templates can utilize all kinds of Go templates and Sprig functions (similar to Helm). 
Along with that, they are enhanced with [functions](https://docs.cluster.dev/stack-templates-functions/) like `insertYAML` that can pass yaml blocks directly.
diff --git a/docs/structure-backend.md b/docs/structure-backend.md
new file mode 100644
index 00000000..316d1ac9
--- /dev/null
+++ b/docs/structure-backend.md
@@ -0,0 +1,54 @@
+# Backend
+
+Backend is an object that describes backend storage for Terraform and Cluster.dev states.
+
+File: searched for in `./*.yaml`. *At least one is required*.
+In the backends' configuration you can use any options of the appropriate Terraform backend. They will be converted as is.
+Currently 4 types of backends are supported:
+
+* `s3` AWS S3 backend:
+
+```yaml
+name: aws-backend
+kind: backend
+provider: s3
+spec:
+  bucket: cdev-states
+  region: {{ .project.variables.region }}
+```
+
+* `do` DigitalOcean Spaces backend:
+
+```yaml
+name: do-backend
+kind: backend
+provider: do
+spec:
+  bucket: cdev-states
+  region: {{ .project.variables.region }}
+  access_key: {{ env "SPACES_ACCESS_KEY_ID" }}
+  secret_key: {{ env "SPACES_SECRET_ACCESS_KEY" }}
+```
+
+* `azurerm` Microsoft Azure (azurerm) backend:
+
+```yaml
+name: azure-backend
+kind: backend
+provider: azurerm
+spec:
+  resource_group_name: "StorageAccount-ResourceGroup"
+  storage_account_name: "example"
+  container_name: "cdev-states"
+```
+
+* `gcs` Google Cloud backend:
+
+```yaml
+name: gcs-backend
+kind: backend
+provider: gcs
+spec:
+  bucket: cdev-states
+  prefix: pref
+```
diff --git a/docs/structure-overview.md b/docs/structure-overview.md
new file mode 100644
index 00000000..f17780b5
--- /dev/null
+++ b/docs/structure-overview.md
@@ -0,0 +1,12 @@
+# Overview
+
+Common project files:
+
+```bash
+project.yaml        # Contains global project variables that can be used in other configuration objects.
+<stack_name>.yaml   # Contains reference to a stack template, variables to render the stack template and backend for states.
+<backend_name>.yaml # Describes a backend storage for Terraform and Cluster.dev states. 
+<secret_name>.yaml  # Contains secrets, one per file.
+```
+
+Cluster.dev reads configuration from the current directory, i.e. all files by mask: `*.yaml`. It is allowed to place several yaml configuration objects in one file, separating them with "---". The exception is the `project.yaml` configuration file and files with secrets.
diff --git a/docs/structure-project.md b/docs/structure-project.md
new file mode 100644
index 00000000..83b4a13a
--- /dev/null
+++ b/docs/structure-project.md
@@ -0,0 +1,31 @@
+# Project
+
+Project is a storage for global variables related to all stacks. It is a high-level abstraction to store and reconcile different stacks, and pass values across them.
+
+File: `project.yaml`. *Required*.
+Represents a set of configuration options for the whole project. Contains global project variables that can be used in other configuration objects, such as backend or stack (except for `secrets`). Note that the `project.yaml` file is not rendered with the template and you cannot use template units in it.
+
+Example `project.yaml`:
+
+```yaml
+name: my_project
+kind: project
+backend: aws-backend
+variables:
+  organization: shalb
+  region: eu-central-1
+  state_bucket_name: cdev-states
+exports:
+  AWS_PROFILE: cluster-dev
+```
+
+* `name`: project name. *Required*.
+
+* `kind`: object kind. Must be set as `project`. *Required*.
+
+* `backend`: name of the backend that will be used to store the Cluster.dev state of the current project. *Optional*. If the backend is not specified, the state will be saved locally in the `./<project_name>.state` file. For now only S3 bucket backends are supported.
+
+* `variables`: a set of data in yaml format that can be referenced in other configuration objects. For the example above, the link to the organization name will look like this: `{{ .project.variables.organization }}`.
+
+* `exports`: list of environment variables that will be exported while working with the project. *Optional*. 
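
To make the `exports` semantics concrete: each key becomes an environment variable exported for the duration of the run, roughly as if you had exported it in a shell yourself before invoking cdev. A minimal sketch (the variable name below matches the example above; the behaviour description is an illustration, not cdev's implementation):

```shell
#!/bin/sh
# Rough shell equivalent of the `exports` block in project.yaml above:
# every key/value pair is exported into the process environment.
export AWS_PROFILE=cluster-dev

# Any command run afterwards (cdev, terraform, hooks) sees the variable:
echo "AWS_PROFILE=$AWS_PROFILE"
```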
diff --git a/docs/structure-secrets.md b/docs/structure-secrets.md
new file mode 100644
index 00000000..fdd8ba7a
--- /dev/null
+++ b/docs/structure-secrets.md
@@ -0,0 +1,53 @@
+# Secrets
+
+A secret is an object that contains sensitive data such as a password, a token, or a key. It is used to pass secret values to tools that don't properly support secret engines.
+
+There are two ways to use secrets:
+
+## SOPS secrets
+
+For **creating** and **editing** SOPS secrets, Cluster.dev uses the SOPS binary. But the SOPS binary is **not required** for decrypting and using SOPS secrets. As none of the Cluster.dev reconciliation processes (build, plan, apply) requires SOPS, you don't have to install it for pipelines.
+
+See the [SOPS installation instructions](https://github.com/mozilla/sops#download) in the official repo.
+
+Secrets are encrypted/decrypted with the [SOPS](https://github.com/mozilla/sops) utility that supports AWS KMS, GCP KMS, Azure Key Vault and PGP keys. How to use:
+
+1. Use the Cluster.dev console client to create a new secret from scratch:
+
+    ```bash
+    cdev secret create
+    ```
+
+2. Use the interactive menu to create a secret.
+
+3. Edit the secret and set the secret data in the `encrypted_data:` section.
+
+4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).
+
+## Amazon Secrets Manager
+
+The Cluster.dev client can use AWS Secrets Manager as a secret storage. How to use:
+
+1. Create a new secret in AWS Secrets Manager using the AWS CLI or web console. Both raw and JSON data formats are supported.
+
+2. Use the Cluster.dev console client to create a new secret from scratch:
+
+    ```bash
+    cdev secret create
+    ```
+
+3. Answer the questions. For `Name of secret in AWS Secrets manager` enter the name of the AWS secret created above.
+
+4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).
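
For orientation only, a generated secret file is a yaml object whose sensitive values live under the `encrypted_data:` section. The exact layout is produced by `cdev secret create` (and, for SOPS, the encrypted value format by SOPS itself), so the field names and values below are purely illustrative:

```yaml
# Purely illustrative sketch; run `cdev secret create` to get the real layout.
name: my-secret        # hypothetical secret name
kind: secret
encrypted_data:
  password: ENC[AES256_GCM,data:...,type:str]   # value encrypted by SOPS
```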
+
+To list and edit any secret, use the commands:
+
+```bash
+cdev secret ls
+```
+
+and
+
+```bash
+cdev secret edit secret_name
+```
diff --git a/docs/structure-stack.md b/docs/structure-stack.md
new file mode 100644
index 00000000..67e8f7bb
--- /dev/null
+++ b/docs/structure-stack.md
@@ -0,0 +1,52 @@
+# Stack
+
+A stack is a yaml file that tells Cluster.dev which template to use and what variables to apply to this template. Usually, users have multiple stacks that reflect their environments or tenants, and point to the same template with different variables.
+
+File: searched for in `./*.yaml`. *At least one is required*.
+A stack object (`kind: stack`) contains a reference to a stack template, variables to render the template, and a backend for states.
+
+Example `infra.yaml`:
+
+```yaml
+# Define stack itself
+name: k3s-infra
+template: "./templates/"
+kind: stack
+backend: aws-backend
+variables:
+  bucket: {{ .project.variables.state_bucket_name }} # Using project variables.
+  region: {{ .project.variables.region }}
+  organization: {{ .project.variables.organization }}
+  domain: cluster.dev
+  instance_type: "t3.medium"
+  vpc_id: "vpc-5ecf1234"
+```
+
+* `name`: stack name. *Required*.
+
+* `kind`: object kind. `stack`. *Required*.
+
+* `backend`: name of the backend that will be used to store the states of this stack. *Required*.
+
+* `variables`: data set for the stack template rendering.
+
+* `template`: it's either a path to a local directory containing the stack template's configuration files, or a remote Git repository as the stack template source. For more details on stack templates, please refer to the [Stack Template](https://docs.cluster.dev/stack-templates-overview/) section. A local path must begin with either `/` for an absolute path, or `./` or `../` for a relative path. For a Git source, use this format: `<GIT_URL>//<PATH_TO_TEMPLATE_DIR>?ref=<BRANCH_OR_TAG_NAME>`:
+    * `<GIT_URL>` - *required*. Standard Git repo url. See details on the [official Git page](https://git-scm.com/docs/git-clone#_git_urls). 
+    * `<PATH_TO_TEMPLATE_DIR>` - *optional*. Use it if the stack template's configuration is not in the repo root.
+    * `<BRANCH_OR_TAG_NAME>` - *optional*. Git branch or tag.
+
+## Examples
+
+```yaml
+template: /path/to/dir # absolute local path
+template: ./template/ # relative local path
+template: ../../template/ # relative local path
+template: https://github.com/shalb/cdev-k8s # https Git url
+template: https://github.com/shalb/cdev-k8s//some/dir/ # subdirectory
+template: https://github.com/shalb/cdev-k8s//some/dir/?ref=branch-name # branch
+template: https://github.com/shalb/cdev-k8s?ref=v1.1.1 # tag
+template: git@github.com:shalb/cdev-k8s.git # ssh Git url
+template: git@github.com:shalb/cdev-k8s.git//some/dir/ # subdirectory
+template: git@github.com:shalb/cdev-k8s.git//some/dir/?ref=branch-name # branch
+template: git@github.com:shalb/cdev-k8s.git?ref=v1.1.1 # tag
+```
diff --git a/docs/template-development-guide.md b/docs/template-development-guide.md
new file mode 100644
index 00000000..203ade1a
--- /dev/null
+++ b/docs/template-development-guide.md
@@ -0,0 +1,120 @@
+# Stack Template Development Guide
+
+Cluster.dev uses generators to help you develop stack templates. Generators provide you with scripted dialogues, where you can populate stack values in an interactive mode.
+
+In our example we shall use the [tmpl-development](https://github.com/shalb/cluster.dev/tree/master/.cdev-metadata/generator) generator to create a project. Then we shall modify its stack template as described below.
+
+## Workflow steps
+
+1. Install the [cluster.dev client](https://docs.cluster.dev/getting-started/#cdev-install).
+
+2. Create a project directory, cd into it and generate the project with the command:
+
+    ```cdev project create https://github.com/shalb/cluster.dev tmpl-development```
+
+3. Export environment variables via an [AWS profile](https://docs.cluster.dev/aws-cloud-provider/#authentication).
+
+4. Run `cdev plan` to build the project and see the infrastructure that will be created.
+
+5. 
To start working with the stack template, cd into the template directory and open the template.yaml file: `./template/template.yaml`.

+    Our sample stack template contains 3 units. Now, let's elaborate on each of them and see how we can modify them.
+
+6. The `create-bucket` unit uses a remote [Terraform module](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest) to create an S3 bucket on AWS:
+
+    ```yaml
+    name: create-bucket
+    type: terraform
+    providers: *provider_aws
+    source: terraform-aws-modules/s3-bucket/aws
+    version: "2.9.0"
+    inputs:
+      bucket: {{ .variables.bucket_name }}
+      force_destroy: true
+    ```
+
+    We can modify the unit by adding more parameters in [inputs](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest?tab=inputs). For example, let's add some tags using the [`insertYAML`](https://docs.cluster.dev/stack-template-development/#functions) function:
+
+    ```yaml
+    name: create-bucket
+    type: terraform
+    providers: *provider_aws
+    source: terraform-aws-modules/s3-bucket/aws
+    version: "2.9.0"
+    inputs:
+      bucket: {{ .variables.bucket_name }}
+      force_destroy: true
+      tags: {{ insertYAML .variables.tags }}
+    ```
+
+    Now we can see the tags in infra.yaml:
+
+    ```yaml
+    name: cdev-tests-local
+    template: ./template/
+    kind: Stack
+    backend: aws-backend
+    variables:
+      bucket_name: "tmpl-dev-test"
+      region: {{ .project.variables.region }}
+      organization: {{ .project.variables.organization }}
+      name: "Developer"
+      tags:
+        tag1_name: "tag 1 value"
+        tag2_name: "tag 2 value"
+    ```
+
+    To check the configuration, run the `cdev plan --tf-plan` command. In the output you can see that Terraform will create a bucket with the defined tags. Run `cdev apply -l debug` to have the configuration applied.
+
+7. The `create-s3-object` unit uses a local Terraform module to get the bucket ID and save data inside the bucket. 
The Terraform module is stored in the `s3-file` directory (`main.tf` file):
+
+    ```yaml
+    name: create-s3-object
+    type: terraform
+    providers: *provider_aws
+    source: ./s3-file/
+    depends_on: this.create-bucket
+    inputs:
+      bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
+      data: |
+        The data that will be saved in the S3 bucket after being processed by the template engine.
+        Organization: {{ .variables.organization }}
+        Name: {{ .variables.name }}
+    ```
+
+    The unit passes 2 parameters. The *bucket_name* is retrieved from the `create-bucket` unit by means of the [`remoteState`](https://docs.cluster.dev/stack-template-development/#functions) function. The *data* parameter uses templating to obtain the *Organization* and *Name* variables from infra.yaml.
+
+    Let's add the *bucket_regional_domain_name* variable to the *data* input to obtain the region-specific domain name of the bucket:
+
+    ```yaml
+    name: create-s3-object
+    type: terraform
+    providers: *provider_aws
+    source: ./s3-file/
+    depends_on: this.create-bucket
+    inputs:
+      bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
+      data: |
+        The data that will be saved in the s3 bucket after being processed by the template engine.
+        Organization: {{ .variables.organization }}
+        Name: {{ .variables.name }}
+        Bucket regional domain name: {{ remoteState "this.create-bucket.s3_bucket_bucket_regional_domain_name" }}
+    ```
+
+    Check the configuration by running the `cdev plan` command; apply it with `cdev apply -l debug`.
+
+8. 
The `print_outputs` unit retrieves data from two other units by means of [`remoteState`](https://docs.cluster.dev/stack-template-development/#functions) function: *bucket_domain* variable from `create-bucket` unit and *s3_file_info* from `create-s3-object` unit: + + ```yaml + name: print_outputs + type: printer + inputs: + bucket_domain: {{ remoteState "this.create-bucket.s3_bucket_bucket_domain_name" }} + s3_file_info: "To get file use: aws s3 cp {{ remoteState "this.create-s3-object.file_s3_url" }} ./my_file && cat my_file" + ``` + +9. Having finished your work, run `cdev destroy` to eliminate the created resources. + +Cluster.dev gives you freedom to modify existing templates or create your own using generators. You can add inputs and outputs to already preset units, take the output of one unit and send it as an input for another, or write new units and add them to a template. In our example we used a sample project and had its stack template modified by adding new parameters to the units. + + diff --git a/docs/units-helm.md b/docs/units-helm.md new file mode 100644 index 00000000..6a316d70 --- /dev/null +++ b/docs/units-helm.md @@ -0,0 +1,66 @@ +# Helm Unit + +Describes [Terraform Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest/docs) invocation. + +Example: + +```yaml +units: + - name: argocd + type: helm + source: + repository: "https://argoproj.github.io/argo-helm" + chart: "argo-cd" + version: "2.11.0" + pre_hook: + command: *getKubeconfig + on_destroy: true + kubeconfig: ./kubeconfig_{{ .name }} + depends_on: this.cert-manager-issuer + additional_options: + namespace: "argocd" + create_namespace: true + values: + - file: ./argo/values.yaml + apply_template: true + inputs: + global.image.tag: v1.8.3 +``` + +In addition to common options the following are available: + +* `source` - *map*, *required*. This block describes Helm chart source. 
+ +* `chart`, `repository`, `version` - correspond to options with the same name from helm_release resource. See [chart](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#chart), [repository](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#repository) and [version](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#version). + +* `kubeconfig` - *string*, *required*. Path to the kubeconfig file which is relative to the directory where the unit was executed. +* `provider_version` - *string*, *optional*. Version of terraform helm provider to use. Default - latest. See [terraform helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) + +* `additional_options` - *map of any*, *optional*. Corresponds to [Terraform helm_release resource options](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#argument-reference). Will be passed as is. + +* `values` - *array*, *optional*. List of values files in raw yaml to be passed to Helm. Values will be merged, in order, as Helm does with multiple -f options. + + * `file` - *string*, *required*. Path to the values file. + + * `apply_template` - *bool*, *optional*. Defines whether a template should be applied to the values file. By default is set to `true`. + +* `inputs` - *map of any*, *optional*. A map that represents [Terraform helm_release sets](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#set). This block allows to use functions `remoteState` and `insertYAML`. 
For example:
+
+```yaml
+inputs:
+  global.image.tag: v1.8.3
+  service.type: LoadBalancer
+```
+
+Corresponds to:
+
+```hcl
+  set {
+    name  = "global.image.tag"
+    value = "v1.8.3"
+  }
+  set {
+    name  = "service.type"
+    value = "LoadBalancer"
+  }
+```
diff --git a/docs/units-kubernetes.md b/docs/units-kubernetes.md
new file mode 100644
index 00000000..75ac7054
--- /dev/null
+++ b/docs/units-kubernetes.md
@@ -0,0 +1,20 @@
+# Kubernetes Unit
+
+Describes a [Terraform kubernetes-alpha provider](https://github.com/hashicorp/terraform-provider-kubernetes-alpha) invocation.
+
+Example:
+
+```yaml
+units:
+  - name: argocd_apps
+    type: kubernetes
+    provider_version: "0.2.1"
+    source: ./argocd-apps/app1.yaml
+    kubeconfig: ../kubeconfig
+    depends_on: this.argocd
+```
+
+* `source` - *string*, *required*. Path to a Kubernetes manifest that will be converted into a representation of the kubernetes-alpha provider. **The source file will be rendered with the stack template and also allows using the functions `remoteState` and `insertYAML`**.
+
+* `kubeconfig` - *string*, *required*. Path to the kubeconfig file, relative to the directory where the unit was executed.
+* `provider_version` - *string*, *optional*. Version of the Terraform kubernetes-alpha provider to use. Default - latest. See [terraform kubernetes-alpha provider](https://registry.terraform.io/providers/hashicorp/kubernetes-alpha/latest).
diff --git a/docs/units-overview.md b/docs/units-overview.md
new file mode 100644
index 00000000..1f3b0dc5
--- /dev/null
+++ b/docs/units-overview.md
@@ -0,0 +1,44 @@
+# Overview
+
+Units are the building blocks that stack templates are made of. A unit can be anything: a Terraform module, a Helm chart you want to install, or a Bash script that you want to run. Units can be remote or stored in the same repo with other Cluster.dev code. Units may contain references to other files that are required for their work. These files should be located inside the current directory (within the stack template's context). 
As some of the files will also be rendered with the project's data, you can use Go templates in them.
+
+All units described below have a common format and common fields. Base example:
+
+```yaml
+  - name: k3s
+    type: terraform
+    depends_on:
+      - this.unit1_name
+      - this.unit2_name
+#   depends_on: this.unit1_name # it is allowed to use a string for a single dependency, or a list for multiple dependencies
+    pre_hook:
+      command: "echo pre_hook"
+      # script: "./scripts/hook.sh"
+      on_apply: true
+      on_destroy: false
+      on_plan: false
+    post_hook:
+      # command: "echo post_hook"
+      script: "./scripts/hook.sh"
+      on_apply: true
+      on_destroy: false
+      on_plan: false
+```
+
+* `name` - unit name. *Required*.
+
+* `type` - unit type. One of: `terraform`, `helm`, `kubernetes`, `printer`.
+
+* `depends_on` - *string* or *list of strings*. One or multiple unit dependencies in the format "stack_name.unit_name". Since the name of the stack is unknown inside the stack template, you can use "this" instead: `"this.unit_name"`.
+
+* `pre_hook` and `post_hook` blocks: describe the shell commands to be executed before and after the unit, respectively. The commands will be executed in the same context as the actions of the unit. Environment variables are common to the shell commands, the pre_hook and post_hook scripts, and the unit execution. You can export a variable in the pre_hook and it will be available in the post_hook or in the unit.
+
+    * `command` - *string*. Shell command in text format. Will be executed as `bash -c "command"`. Can be used if the "script" option is not used. One of `command` or `script` is required.
+
+    * `script` - *string*. Path to a shell script file, relative to the template directory. Can be used if the "command" option is not used. One of `command` or `script` is required.
+
+    * `on_apply` - *bool*, *optional*. Enables/disables the hook during unit apply. **Default: "true"**.
+
+    * `on_destroy` - *bool*, *optional*. Enables/disables the hook during unit destroy. **Default: "false"**. 
+
+    * `on_plan` - *bool*, *optional*. Enables/disables the hook during unit plan. **Default: "false"**.
diff --git a/docs/units-printer.md b/docs/units-printer.md
new file mode 100644
index 00000000..d2f69733
--- /dev/null
+++ b/docs/units-printer.md
@@ -0,0 +1,16 @@
+# Printer Unit
+
+This unit is mainly used to see the outputs of other units in the console logs.
+
+Example:
+
+```yaml
+units:
+  - name: print_outputs
+    type: printer
+    inputs:
+      cluster_name: {{ .name }}
+      worker_iam_role_arn: {{ remoteState "this.eks.worker_iam_role_arn" }}
+```
+
+* `inputs` - *any*, *required* - a map that represents data to be printed in the log. The block **allows using the functions `remoteState` and `insertYAML`**.
diff --git a/docs/units-shell.md b/docs/units-shell.md
new file mode 100644
index 00000000..ed566e89
--- /dev/null
+++ b/docs/units-shell.md
@@ -0,0 +1,106 @@
+# Shell Unit
+
+Example:
+
+```yaml
+units:
+  - name: my-tf-code
+    type: shell
+    env:
+      AWS_PROFILE: {{ .variables.aws_profile }}
+      TF_VAR_region: {{ .project.region }}
+    create_files:
+      - file: ./terraform.tfvars
+        content: |
+{{- range $key, $value := .variables.tfvars }}
+          $key = "$value"
+{{- end}}
+      - file: ./my_text_file.txt
+        mode: 0644
+        content: "some text"
+      - file: ./my_text_file2.txt
+        content: "some text 2"
+    work_dir: ~/env/prod/
+    apply:
+      commands:
+        - terraform apply -var-file terraform.tfvars {{ range $key, $value := .variables.vars_list }} -var="$key=$value"{{ end }}
+    plan:
+      commands:
+        - terraform plan
+    destroy:
+      commands:
+        - terraform destroy
+        - rm -rf ./.terraform
+    outputs: # how to get outputs
+      type: json # one of: json, regexp, separator
+      regexp_key: "regexp"
+      regexp_value: "regexp"
+      separator: "="
+      command: terraform output -json
+```
+
+## Options
+
+* `env` - *map*, *optional*. The list of environment variables that will be exported before executing the commands of this unit. 
The variables defined in the shell unit take priority over the variables defined in the project (the `exports` option) and will override them. + +* `work_dir` - *string*, *required*. The working directory within which the code of the unit will be executed. + +* `apply` - *optional*, *map*. Describes the commands to be executed when running `cdev apply`. + + * `init` - *optional*. Describes the commands to be executed prior to running `cdev apply`. + + * `commands` - *list of strings*, *required*. The list of commands to be executed when running `cdev apply`. + +* `plan` - *optional*, *map*. Describes the commands to be executed when running `cdev plan`. + + * `init` - *optional*. Describes the commands to be executed prior to running `cdev plan`. + + * `commands` - *list of strings*, *required*. The list of commands to be executed when running `cdev plan`. + +* `destroy` - *optional*, *map*. Describes the commands to be executed when running `cdev destroy`. + + * `init` - *optional*. Describes the commands to be executed prior to running `cdev destroy`. + + * `commands` - *list of strings*, *required*. The list of commands to be executed when running `cdev destroy`. + +* `outputs` - *optional*, *map*. Describes how to get outputs from a command. + + * `type` - *string*, *required*. The format in which to parse the output. One of: `json`, `regexp`, `separator`. The remaining options differ depending on the type specified. + + * `json` - if the `type` is defined as `json`, the output will be parsed as key-value JSON. With this type no other options are required. + + * `regexp` - if the `type` is defined as `regexp`, this introduces an additional required option `regexp` - a regular expression that defines how to parse each line of the unit output.
Example: + + ```yaml + outputs: # how to get outputs + type: regexp + regexp: "^(.*)=(.*)$" + command: | + echo -e "key1=val1\nkey2=val2" + ``` + + * `separator` - if the `type` is defined as `separator`, this introduces an additional option `separator` (*string*) - a symbol that splits each line of the output into two parts: the key and the value. + + ```yaml + outputs: # how to get outputs + type: separator + separator: "=" + command: | + echo -e "key1=val1\nkey2=val2" + ``` + + * `command` - *string*, *optional*. The command to take the outputs from. Is used regardless of the `type` option. If the command is not defined, cdev takes the outputs from the `apply` command. + +* `create_files` - *list of files*, *optional*. The list of files to be created before the unit is executed. The files are saved in the state, so changing them will trigger the unit. + +* `pre_hook` and `post_hook` blocks: describe the shell commands to be executed before and after the unit, respectively. The commands will be executed in the same context as the actions of the unit. Environment variables are common to the shell commands, the pre_hook and post_hook scripts, and the unit execution. You can export a variable in the pre_hook and it will be available in the post_hook or in the unit. + + * `command` - *string*. Shell command in text format. Will be executed as `bash -c "command"`. Can be used if the "script" option is not used. One of `command` or `script` is required. + + * `script` - *string*. Path to a shell script file, relative to the stack template directory. Can be used if the "command" option is not used. One of `command` or `script` is required. + + * `on_apply` - *bool*, *optional*. Specifies whether to run the hook when the unit is applied. **Default: "true"**. + + * `on_destroy` - *bool*, *optional*. Specifies whether to run the hook when the unit is destroyed. **Default: "false"**. + + * `on_plan` - *bool*, *optional*. Specifies whether to run the hook when the unit plan is executed. **Default: "false"**.
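To illustrate the shared hook context described above, here is a minimal sketch of a shell unit (the unit name, paths, and the `BUILD_STAMP` variable are illustrative, not part of the reference):

```yaml
units:
  - name: stamp-demo            # hypothetical unit for illustration
    kind: shell
    work_dir: ./scripts
    pre_hook:
      # BUILD_STAMP is exported here and stays visible to the unit
      # commands and to the post_hook, per the shared-context rule.
      command: "export BUILD_STAMP=$(date +%s)"
      on_apply: true
    apply:
      commands:
        - echo "applying with stamp $BUILD_STAMP"
    post_hook:
      command: "echo \"finished run $BUILD_STAMP\""
      on_apply: true
```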
diff --git a/docs/units-terraform.md b/docs/units-terraform.md new file mode 100644 index 00000000..2e3502cb --- /dev/null +++ b/docs/units-terraform.md @@ -0,0 +1,25 @@ +# Terraform Unit + +Describes the direct invocation of Terraform modules. + +Example: + +```yaml +units: + - name: vpc + type: terraform + version: "2.77.0" + source: terraform-aws-modules/vpc/aws + inputs: + name: {{ .name }} + azs: {{ insertYAML .variables.azs }} + vpc_id: {{ .variables.vpc_id }} +``` + +In addition to the common options, the following are available: + +* `source` - *string*, *required*. Terraform module [source](https://www.terraform.io/docs/language/modules/syntax.html#source). **Local folders are not allowed in `source`!** + +* `version` - *string*, *optional*. Module [version](https://www.terraform.io/docs/language/modules/syntax.html#version). + +* `inputs` - *map of any*, *required*. A map that corresponds to the [input variables](https://www.terraform.io/docs/language/values/variables.html) defined by the module. This block allows the use of the functions `remoteState` and `insertYAML`.
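As a sketch of how a Terraform unit's outputs can be consumed elsewhere in the same stack template, a printer unit could reference the `vpc` unit via `remoteState` (the `vpc_id` output name is an assumption that depends on the module used):

```yaml
units:
  - name: vpc
    type: terraform
    version: "2.77.0"
    source: terraform-aws-modules/vpc/aws
    inputs:
      name: {{ .name }}
  - name: print_vpc
    type: printer
    inputs:
      # "this" resolves to the current stack; vpc_id is an output
      # (assumed name) exposed by the vpc unit's module.
      vpc_id: {{ remoteState "this.vpc.vpc_id" }}
```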
diff --git a/mkdocs.yml b/mkdocs.yml index d95896d5..ce30f902 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -6,25 +6,46 @@ site_url: 'https://docs.cluster.dev' nav: - - Home: - - Welcome: index.md - - How to Use Cdev: how-to-use.md - - Documentation Structure: documentation-structure.md - - User Mode: - - Getting Started: getting-started.md - - Deploying to AWS: aws-cloud-provider.md - - Deploying to DigitalOcean: digital-ocean-cloud-provider.md - - Deploying to GCE: google-cloud-provider.md - - Deploying to Azure: azure-cloud-provider.md - - Cdev Install Reference: cdev-installation-reference.md - - Developer Mode: - - Stack Template Development: stack-template-development.md - - Workflow: template-development-workflow.md + - Introduction: + - Overview: + - What Is Cluster.dev?: index.md + - How Does It Work?: how-does-cdev-work.md + - Getting Started: + - Prerequisites: get-started-prerequisites.md + - Install: get-started-install.md + - Install From Sources: get-started-install-from-sources.md + - Create New Project: get-started-create-project.md + - Examples: + - AWS-EKS: examples-aws-eks.md + - DO-K8s: examples-do-k8s.md + - Modify AWS-EKS: examples-modify-aws-eks.md + - Develop Stack Template: examples-develop-stack-template.md + - Cluster.dev vs. Others: + - Cluster.dev vs. Terraform: cdev-vs-terraform.md + - Cluster.dev vs. 
Pulumi & Crossplane: cdev-vs-pulumi.md - Reference: - - Project Configuration: project-configuration.md + - Structure: + - Overview: structure-overview.md + - Project: structure-project.md + - Stack: structure-stack.md + - Backend: structure-backend.md + - Secrets: structure-secrets.md + - Stack Templates: + - Overview: stack-templates-overview.md + - Functions: stack-templates-functions.md + - Stack Templates List: stack-templates-list.md + - Units: + - Overview: units-overview.md + - Terraform: units-terraform.md + - Helm: units-helm.md + - Kubernetes: units-kubernetes.md + - Printer: units-printer.md + - Shell: units-shell.md - CLI Reference: - CLI Commands: cli-commands.md - CLI Options: cli-options.md + - Environment Variables: env-variables.md + - How-to Articles: + - Use Different Terraform Versions: howto-tf-versions.md markdown_extensions: @@ -65,7 +86,7 @@ theme: logo: '/images/cluster-dev-logo-site.png' favicon: 'images/favicon.png' google_analytics: - - UA-157259461-1 + - G-KK8Z11MM4P - auto extra: social: