Merge branch 'master' into shell-unit
romanprog authored Nov 14, 2021
2 parents fb3b1f5 + 0759b32 commit ee33ee1
Showing 33 changed files with 1,562 additions and 37 deletions.
16 changes: 16 additions & 0 deletions docs/cdev-vs-pulumi.md
@@ -0,0 +1,16 @@
# Cluster.dev vs. Pulumi and Crossplane

Pulumi and Crossplane are modern alternatives to Terraform.

These are great tools and we admire alternative views on infrastructure management.

What makes Cluster.dev different is its purpose and scope.
Tools like Pulumi, Crossplane, and Terraform are aimed at managing clouds: creating new instances or clusters, cloud resources like databases, and so on.
Cluster.dev, in contrast, is designed to manage the whole infrastructure and to use those tools as units. That means you can run Terraform, then run Pulumi, Bash, or Ansible with variables received from Terraform, and then run Crossplane or something else. Cluster.dev is created to connect and manage all infrastructure tools.

Infrastructure tools often restrict users to a single tool with its specific language or DSL, whereas Cluster.dev allows a virtually limitless number of units and workflow combinations between tools.
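
To make the idea more concrete, here is a minimal, hypothetical stack-template fragment written in the unit syntax that appears in the examples later in this commit (a `terraform` unit plus a `printer` unit, connected through `remoteState`). The unit names, module, and output used here are illustrative only; the same pattern extends to other unit types.

```yaml
# Hypothetical sketch: unit names, module and outputs are illustrative.
units:
  - name: network                 # first tool: a Terraform module
    type: terraform
    source: terraform-aws-modules/vpc/aws
    inputs:
      name: {{ .variables.vpc_name }}
  - name: show-network            # next unit consumes the Terraform output
    type: printer
    depends_on: this.network
    inputs:
      vpc_id: {{ remoteState "this.network.vpc_id" }}
```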

For now, Cluster.dev has first-class support for Terraform only, mostly because we want to provide the best experience for the majority of users. Moreover, Terraform is a de facto industry standard and already has a lot of modules created by the community.
To read more on the subject, please refer to the [Cluster.dev vs. Terraform](https://docs.cluster.dev/cdev-vs-terraform/) section.

If you or your company would like to use Pulumi or Crossplane with Cluster.dev, please feel free to contact us.
32 changes: 32 additions & 0 deletions docs/cdev-vs-terraform.md
@@ -0,0 +1,32 @@
# Cluster.dev vs. Terraform

Terraform is a great and popular tool for creating infrastructures. Founded more than five years ago, Terraform supports an impressive number of providers and resources.

Cluster.dev loves Terraform (and even supports export to plain Terraform code). Still, Terraform lacks a robust relation system, fast plans, automatic reconciliation, and configuration templates.

Cluster.dev, on the other hand, is management software that uses Terraform alongside other infrastructure tools as building blocks.

As a higher-level abstraction, Cluster.dev fixes all of the listed problems: it builds a single source of truth, and it combines and orchestrates different infrastructure tools under the same roof.

Let's dig more into the problems that Cluster.dev solves.

## Internal relation

Terraform has fairly complex rendering logic, which affects the relations between its pieces. For example, you cannot define a provider for, let's say, Kubernetes or Helm in the same codebase that creates the Kubernetes cluster itself. This forces users to resort to internal hacks or to employ a custom wrapper that splits the work into two separate deploys; that is a problem we solved with Cluster.dev.

Another problem with internal relations concerns the huge execution plans that Terraform creates for massive projects. Users who tried to avoid this issue by splitting their code into small Terraform repos faced weak "remote state" relations and limited reconciliation: it was not possible to trigger a dependent module when the output of the module it relied upon changed.

Cluster.dev, on the contrary, is a GitOps-first tool and allows you to trigger only the necessary parts.
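
As a sketch of what such a relation looks like in Cluster.dev (using the `depends_on` and `remoteState` constructs shown in the examples later in this commit; the unit names, sources, and output are illustrative), the consuming unit declares exactly which outputs it relies on, so only the affected parts need to be triggered when those outputs change:

```yaml
# Illustrative sketch: the addons unit depends on an output of the cluster unit.
- name: cluster
  type: terraform
  source: ./eks/
- name: addons
  type: terraform
  source: ./addons/
  depends_on: this.cluster
  inputs:
    cluster_name: {{ remoteState "this.cluster.cluster_name" }}
```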

## Templating

The second limitation of Terraform is templating: Terraform doesn't support templating of the .tf files that it uses. This forces users into hacks that further tangle their Terraform files.
Cluster.dev, on the other hand, uses templating: it lets you include, let's say, a Jenkins Terraform module with custom inputs for the dev environment and exclude it for staging and production, all in the same codebase.
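
A hedged sketch of the idea, assuming standard Go-template conditionals are available in stack templates (the template examples later in this commit already use Go-template expressions such as `{{ .variables.bucket_name }}` and `insertYAML`); the unit name, source, and variables are made up for the illustration:

```yaml
# Hypothetical: include the Jenkins unit only for stacks that enable it.
{{- if .variables.jenkins_enabled }}
- name: jenkins
  type: terraform
  source: ./jenkins/
  inputs:
    instance_type: {{ .variables.jenkins_instance_type }}
{{- end }}
```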

## Third Party

Terraform allows executing Bash or Ansible. However, it offers few instruments to control where and how these external tools are run.

Cluster.dev, as a cloud-native manager, provides all tools with the same level of support and integration.


2 changes: 1 addition & 1 deletion docs/cli-options.md
@@ -22,7 +22,7 @@

* `--interactive` Use interactive mode for project generation.

* `--list-templates` Show all available stack templates for project generator.
* `--list-templates` Show all available templates for project generator.

## Destroy flags

3 changes: 3 additions & 0 deletions docs/env-variables.md
@@ -0,0 +1,3 @@
# Environment Variables

* `CDEV_TF_BINARY` Indicates which Terraform binary to use. Recommended for debugging during template development.
137 changes: 137 additions & 0 deletions docs/examples-aws-eks.md
@@ -0,0 +1,137 @@
# AWS-EKS

This section provides information on how to create a new project on AWS with the [AWS-EKS](https://github.com/shalb/cdev-aws-eks) stack template.

## Prerequisites

1. Terraform version 0.13+.

2. AWS account.

3. [AWS CLI](#install-aws-client) installed.

4. kubectl installed.

5. [Cluster.dev client installed](https://docs.cluster.dev/get-started-install/).

### Authentication

Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:

!!! Info
    Please note that you have to use an IAM user with administrative permissions granted.

* **Environment variables**: provide your credentials via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, which represent your AWS Access Key and AWS Secret Key. You can also use the `AWS_DEFAULT_REGION` or `AWS_REGION` environment variable to set the region, if needed. Example usage:

```bash
export AWS_ACCESS_KEY_ID="MYACCESSKEY"
export AWS_SECRET_ACCESS_KEY="MYSECRETKEY"
export AWS_DEFAULT_REGION="eu-central-1"
```

* **Shared Credentials File (recommended)**: set up an [AWS configuration file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) to specify your credentials.

Credentials file `~/.aws/credentials` example:

```bash
[cluster-dev]
aws_access_key_id = MYACCESSKEY
aws_secret_access_key = MYSECRETKEY
```

Config: `~/.aws/config` example:

```bash
[profile cluster-dev]
region = eu-central-1
```

Then export the `AWS_PROFILE` environment variable:

```bash
export AWS_PROFILE=cluster-dev
```

### Install AWS client

If you don't have the AWS CLI installed, refer to AWS CLI [official installation guide](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html), or use commands from the example:
```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
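# check that the CLI works (lists your S3 buckets)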
aws s3 ls
```
### Create S3 bucket
Cluster.dev uses an S3 bucket for storing states. Create the bucket with the command:
```bash
aws s3 mb s3://cdev-states
```
### DNS Zone
In the AWS-EKS stack template example you need to define a Route 53 hosted zone. Use one of the following options:
1. You already have a Route 53 hosted zone.
2. Create a new hosted zone using a [Route 53 documentation example](https://docs.aws.amazon.com/cli/latest/reference/route53/create-hosted-zone.html#examples).
3. Use the "cluster.dev" domain for zone delegation.
## Create project
1. Configure [access to AWS](#authentication) and export required variables.
2. Create a project directory locally, cd into it, and execute the command:
```bash
cdev project create https://github.com/shalb/cdev-aws-eks
```
This will create a new empty project.
3. Edit variables in the example's files, if necessary:

* project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See [project configuration docs](https://docs.cluster.dev/structure-project/).

* backend.yaml - configures the backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See [backend docs](https://docs.cluster.dev/structure-backend/).

* infra.yaml - describes the stack configuration. See [stack docs](https://docs.cluster.dev/structure-stack/). A hedged sketch of all three files follows this list.
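
As a rough, hedged sketch only (field names and values below are illustrative and do not come from the generated example; treat the generated files and the linked docs as the source of truth), the three files look roughly like this:

```yaml
# project.yaml (illustrative)
name: my-project
kind: Project
variables:
  organization: my-org
  region: eu-central-1
  state_bucket_name: cdev-states
---
# backend.yaml (illustrative)
name: aws-backend
kind: Backend
provider: s3
spec:
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}
---
# infra.yaml (illustrative)
name: cluster
template: https://github.com/shalb/cdev-aws-eks
kind: Stack
backend: aws-backend
variables:
  region: {{ .project.variables.region }}
  organization: {{ .project.variables.organization }}
  domain: cluster.dev
```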

4. Run `cdev plan` to build the project. In the output you will see the infrastructure that is going to be created after running `cdev apply`.

!!! note
    Prior to running `cdev apply`, make sure to look through the infra.yaml file and replace the commented fields with real values. If you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.
5. Run `cdev apply`
!!! tip
    We highly recommend running `cdev apply` in debug mode so that you can see Cluster.dev logging in the output: `cdev apply -l debug`
6. After `cdev apply` is successfully executed, the output will contain the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and whether the stack template has been deployed correctly. To sign in, use the "admin" login and the password whose bcrypt hash you have set in infra.yaml.
7. The output will also contain a command for retrieving the kubeconfig and connecting to your Kubernetes cluster.
8. Destroy the cluster and all created resources with the command `cdev destroy`.
## Resources
Resources to be created within the project:
* *(optional, if you use cluster.dev domain)* Route53 zone **<cluster-name>.cluster.dev**
* *(optional, if vpc_id is not set)* VPC for EKS cluster
* EKS Kubernetes cluster with addons:
    * cert-manager
    * ingress-nginx
    * external-dns
    * argocd
* AWS IAM roles for EKS IRSA cert-manager and external-dns
112 changes: 112 additions & 0 deletions docs/examples-develop-stack-template.md
@@ -0,0 +1,112 @@
# Develop Stack Template

Cluster.dev gives you the freedom to modify existing templates or create your own. You can add inputs and outputs to the preset units, take the output of one unit and pass it as an input to another, or write new units and add them to a template.

In our example we shall use the [tmpl-development](https://github.com/shalb/cluster.dev/tree/master/.cdev-metadata/generator) sample to create a project. Then we shall modify its stack template by adding new parameters to the units.

## Workflow steps

1. Create a project following the steps described in [Create Own Project](https://docs.cluster.dev/get-started-create-project/) section.

2. To start working with the stack template, cd into the template directory and open the template.yaml file: ./template/template.yaml.

Our sample stack template contains three units. Now, let's elaborate on each of them and see how we can modify them.

3. The `create-bucket` unit uses a remote [Terraform module](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest) to create an S3 bucket on AWS:

```yaml
name: create-bucket
type: terraform
providers: *provider_aws
source: terraform-aws-modules/s3-bucket/aws
version: "2.9.0"
inputs:
  bucket: {{ .variables.bucket_name }}
  force_destroy: true
```
We can modify the unit by adding more parameters in [inputs](https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest?tab=inputs). For example, let's add some tags using the [`insertYAML`](https://docs.cluster.dev/stack-templates-functions/) function:

```yaml
name: create-bucket
type: terraform
providers: *provider_aws
source: terraform-aws-modules/s3-bucket/aws
version: "2.9.0"
inputs:
  bucket: {{ .variables.bucket_name }}
  force_destroy: true
  tags: {{ insertYAML .variables.tags }}
```

Now we can see the tags in infra.yaml:

```yaml
name: cdev-tests-local
template: ./template/
kind: Stack
backend: aws-backend
variables:
bucket_name: "tmpl-dev-test"
region: {{ .project.variables.region }}
organization: {{ .project.variables.organization }}
name: "Developer"
tags:
tag1_name: "tag 1 value"
tag2_name: "tag 2 value"
```

To check the configuration, run the `cdev plan --tf-plan` command. In the output you can see that Terraform will create a bucket with the defined tags. Run `cdev apply -l debug` to have the configuration applied.

4. The `create-s3-object` unit uses a local Terraform module to get the bucket ID and save data inside the bucket. The Terraform module is stored in the s3-file directory, in the main.tf file:

```yaml
name: create-s3-object
type: terraform
providers: *provider_aws
source: ./s3-file/
depends_on: this.create-bucket
inputs:
  bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
  data: |
    The data that will be saved in the S3 bucket after being processed by the template engine.
    Organization: {{ .variables.organization }}
    Name: {{ .variables.name }}
```

The unit passes two parameters. The *bucket_name* is retrieved from the `create-bucket` unit by means of the [`remoteState`](https://docs.cluster.dev/stack-templates-functions/) function. The *data* parameter uses templating to obtain the *Organization* and *Name* variables from infra.yaml.

Let's add the *bucket_regional_domain_name* variable to the *data* input to obtain the region-specific domain name of the bucket:

```yaml
name: create-s3-object
type: terraform
providers: *provider_aws
source: ./s3-file/
depends_on: this.create-bucket
inputs:
  bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
  data: |
    The data that will be saved in the s3 bucket after being processed by the template engine.
    Organization: {{ .variables.organization }}
    Name: {{ .variables.name }}
    Bucket regional domain name: {{ remoteState "this.create-bucket.s3_bucket_bucket_regional_domain_name" }}
```

Check the configuration by running the `cdev plan` command; apply it with `cdev apply -l debug`.

5. The `print_outputs` unit retrieves data from the two other units by means of the [`remoteState`](https://docs.cluster.dev/stack-templates-functions/) function: the *bucket_domain* variable from the `create-bucket` unit and *s3_file_info* from the `create-s3-object` unit:

```yaml
name: print_outputs
type: printer
inputs:
  bucket_domain: {{ remoteState "this.create-bucket.s3_bucket_bucket_domain_name" }}
  s3_file_info: "To get file use: aws s3 cp {{ remoteState "this.create-s3-object.file_s3_url" }} ./my_file && cat my_file"
```

6. Having finished your work, run `cdev destroy` to eliminate the created resources.



