
Add Stack Dependencies V2 docs
Signed-off-by: peterdeme <[email protected]>
peterdeme committed Sep 27, 2023
1 parent 06d68b6 commit ea21733
Showing 5 changed files with 97 additions and 4 deletions.
2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -60,6 +60,8 @@ You can also manually trigger the tests at any time by running:
pre-commit
```

> Tip: one of our pre-commit checks is `oxipng`, which optimizes PNG images. If you don't want to use `pre-commit` locally, you can optimize your PNG images with `docker run --rm -it -v "$(pwd)":/workdir -w /workdir videah/oxipng docs/assets/screenshots/filename.png --opt=4 --preserve --strip=safe`.
## Self-Hosted

Our self-hosted docs are built using a tool called [Mike](https://github.com/jimporter/mike). Mike allows us to snapshot multiple versions of the documentation in a separate Git branch. In our case this branch is called `self-hosted-releases`.
98 changes: 94 additions & 4 deletions docs/concepts/stack/stack-dependencies.md
@@ -22,6 +22,90 @@ Stack dependencies can be defined in the `Dependencies` tab of the stack.
!!! info
In order to create a dependency between two stacks, you need at least **reader** permission on one stack (the dependency) and **admin** permission on the other (the dependee). See [Spaces Access Control](../spaces/access-control.md#roles) for more information.

### Defining references between stacks

You can refer to the outputs of other stacks: your stack will only be triggered if the referenced output has been created or changed.

![](../../assets/screenshots/Screenshot_Stack_Dependencies_add_ref.png)

You can either choose an existing output value or add one that doesn't exist yet but will be created by the stack. On the receiving end, you need to choose an environment variable (`Input name`) to store the output value in.

![](../../assets/screenshots/Screenshot_Stack_Dependencies_added_input.png)
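On the producing side, such an output might look like the following Terraform sketch (the resource and names are illustrative):

```terraform
# Hypothetical output exposed by the upstream stack
output "vpc_id" {
  description = "ID of the VPC created by this stack"
  value       = aws_vpc.main.id
}
```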

!!! tip
If you use Terraform, make sure to use [`TF_VAR_`](https://developer.hashicorp.com/terraform/language/values/variables#environment-variables){: rel="nofollow"} prefix for environment variable names.
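For instance, if you set the input name to `TF_VAR_vpc_id` (a hypothetical name), Terraform picks it up as the value of a variable declared like this:

```terraform
# Receives its value from the TF_VAR_vpc_id environment variable
# injected by the stack dependency reference
variable "vpc_id" {
  type = string
}
```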

#### Enabling sensitive outputs for references

A stack output can be sensitive or non-sensitive. For example, in Terraform [you can mark an output](https://developer.hashicorp.com/terraform/language/values/outputs#sensitive-suppressing-values-in-cli-output){: rel="nofollow"} as `sensitive = true`. Sensitive outputs are masked in the Spacelift UI and in the logs.
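A sensitive Terraform output might look like this sketch (the `random_password` resource is illustrative):

```terraform
output "db_password" {
  value     = random_password.db.result
  sensitive = true # masked in the Spacelift UI and in the logs
}
```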

Spacelift uploads sensitive outputs to the server; this is enabled by default on our public worker pool.

On [private worker pools](../../concepts/worker-pools.md), however, it needs to be enabled **explicitly** by adding the `SPACELIFT_SENSITIVE_OUTPUT_UPLOAD_ENABLED=true` [environment variable](../../concepts/worker-pools.md#configuration-options) to the worker. This is a requirement if you wish to use sensitive outputs for stack dependencies.
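As a sketch, on a private worker you would make sure the launcher process inherits the variable before it starts:

```shell
# Illustrative: set the flag in the environment the launcher inherits
export SPACELIFT_SENSITIVE_OUTPUT_UPLOAD_ENABLED=true
```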

#### Stack dependency reference limitations

When a stack has an upstream dependency with a reference, it relies on the referenced outputs already existing.

```mermaid
graph TD;
Storage --> |TF_VAR_AWS_S3_BUCKET_ARN|storageColor(StorageService);
style storageColor fill:#51cbad
```

If you trigger `StorageService` in the above scenario, you need to make sure that `Storage` has already produced `TF_VAR_AWS_S3_BUCKET_ARN`. Otherwise, you'll get the following error:

```plain
job assignment failed: the following inputs are missing: Storage.TF_VAR_AWS_S3_BUCKET_ARN => TF_VAR_AWS_S3_BUCKET_ARN
```

!!! note
We enabled output uploading to our backend on August 21, 2023. This means that if you have a stack that produced an output before that date, you'll need to rerun it to make the output available for references.

We upload outputs during the [Apply phase](../run/tracked.md#applying). If you stumble upon the error above, you'll need to make sure that the stack producing the output had a tracked run **with an Apply phase**.

You can do this by adding a dummy output to the stack and removing it afterwards:

```terraform
output "dummy" {
value = "dummy"
}
```

#### Vendor limitations

[Ansible](../../vendors/ansible/) and [Kubernetes](../../vendors/kubernetes/) do not have the concept of outputs, so you cannot reference their outputs. They _can_ be on the receiving end, though:

```mermaid
graph TD;
A[Terraform Stack] --> |VARIABLE_1|B[Kubernetes Stack];
A --> |VARIABLE_2|C[Ansible Stack];
```

#### Scenario 1

```mermaid
graph TD;
Infrastructure --> |TF_VAR_VPC_ID|Database;
Database --> |TF_VAR_CONNECTION_STRING|PaymentService;
```

If your `Infrastructure` stack has a `VPC_ID` output, you can set it as an input to your `Database` stack (e.g. `TF_VAR_VPC_ID`). When the `Infrastructure` stack finishes running, the `Database` stack will be triggered and the `TF_VAR_VPC_ID` environment variable will be set to the value of the `VPC_ID` output of the `Infrastructure` stack.

If one or more references are defined, the stack will only be triggered if a referenced output has been created or changed. If the referenced outputs remain the same, the downstream stack will be skipped.

#### Scenario 2

```mermaid
graph TD;
Infrastructure --> |TF_VAR_VPC_ID|Database;
Database --> |TF_VAR_CONNECTION_STRING|PaymentService;
Database --> CartService;
```

You can also mix references and referenceless dependencies. In the above case, `CartService` will be triggered whenever `Database` finishes running, regardless of the `TF_VAR_CONNECTION_STRING` output.

## Dependencies overview

In the `Dependencies` tab of the stack, there is a button called `Dependencies graph` to view the full dependency graph of the stack.
@@ -30,7 +114,7 @@ In the `Dependencies` tab of the stack, there is a button called `Dependencies g

## How it works

Stack dependencies are directed acyclic graphs ([DAGs](https://wikipedia.org/wiki/Directed_acyclic_graph)). This means that a stack
Stack dependencies are directed acyclic graphs ([DAGs](https://wikipedia.org/wiki/Directed_acyclic_graph){: rel="nofollow"}). This means that a stack
can depend on multiple stacks, and a stack can be depended on by multiple stacks, but there cannot be loops:
you will receive an error if you try to add a stack to a dependency graph that would create a cycle.
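For example, attempting to add the dotted edge below (a hypothetical setup) would be rejected, because it closes a loop:

```mermaid
graph TD;
    A-->B;
    B-->C;
    C-. would create a cycle .->A;
```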

@@ -73,7 +157,7 @@ graph TD;
baseInfraColor(BaseInfra)-->databaseColor(Database);
baseInfraColor(BaseInfra)-->networkColor(Network);
baseInfraColor(BaseInfra)-->storageColor(Storage);
databaseColor(Database)-->paymentSvcColor(PaymentService);
databaseColor(Database)-->|TF_VAR_CONNECTION_STRING|paymentSvcColor(PaymentService);
networkColor(Network)-->paymentSvcColor(PaymentService);
databaseColor(Database)-->cartSvcColor(CartService);
networkColor(Network)-->cartSvcColor(CartService);
@@ -90,9 +174,11 @@ If `BaseInfra` receives a push event, it will start running immediately and queu
_all_ of the stacks below. The order of the runs: `BaseInfra`, then `Database` & `Network` & `Storage` in parallel,
finally `PaymentService` & `CartService` in parallel.

Note: since `PaymentService` and `CartService` does not depend on `Storage`, they will not
Since `PaymentService` and `CartService` do not depend on `Storage`, they will not
wait until it finishes running.

Note: `PaymentService` references `Database` via `TF_VAR_CONNECTION_STRING`. But since it also depends on `Network` without a reference, it will run regardless of the `TF_VAR_CONNECTION_STRING` output. If the `Database` stack does not have the corresponding output, the `TF_VAR_CONNECTION_STRING` environment variable will not be injected into the run.

### Scenario 3

```mermaid
@@ -154,22 +240,26 @@ graph TD;
networkColor(Network)-->paymentSvcColor(PaymentService);
databaseColor(Database)-->cartSvcColor(CartService);
networkColor(Network)-->cartSvcColor(CartService);
storageColor(Storage)-->|TF_VAR_AWS_S3_BUCKET_ARN|storageSvcColor(StorageService);
style baseInfraColor fill:#51cbad
style networkColor fill:#51abcb
style paymentSvcColor fill:#51abcb
style cartSvcColor fill:#51abcb
style storageColor fill:#51abcb
style databaseColor fill:#51cbad
style storageSvcColor fill:#ecd309
```

If `BaseInfra` and `Database` live in a monorepo and a push event affects both of them, this scenario isn't any different from [Scenario 2](#scenario-2) and [Scenario 4](#scenario-4). The order from top to bottom is still the same: `BaseInfra` first, then `Database` & `Network` & `Storage` in parallel, finally `PaymentService` & `CartService` in parallel.

As for `Storage` and `StorageService`: suppose the S3 bucket resource of `Storage` already exists. The bucket ARN therefore didn't change, so `StorageService` will be skipped.

## Trigger policies

Stack dependencies are a simpler alternative to [Trigger policies](../policy/trigger-policy.md) and cover most use cases. If your use case does not fit stack dependencies, consider using a trigger policy.

There is no connection between the two features, and **the two shouldn't be combined** to avoid confusion.
There is no connection between the two features, and **the two shouldn't be combined** to avoid confusion or even infinite loops in the dependency graph.

## Stack deletion

1 change: 1 addition & 0 deletions docs/concepts/worker-pools.md
@@ -369,6 +369,7 @@ A number of configuration variables is available to customize how your launcher
- `SPACELIFT_DOCKER_CONFIG_DIR` - if set, the value of this variable will point to the directory containing Docker configuration, which includes credentials for private Docker registries. Private workers can populate this directory for example by executing `docker login` before the launcher process is started;
- `SPACELIFT_MASK_ENVS` - comma-delimited list of whitelisted environment variables that are passed to the workers but should never appear in the logs;
- `SPACELIFT_SENSITIVE_OUTPUT_UPLOAD_ENABLED` - if set to `true`, the launcher will upload sensitive run outputs to the Spacelift backend. This is a requirement if you want to use sensitive outputs for [stack dependencies](./stack/stack-dependencies.md);
- `SPACELIFT_WORKER_NETWORK` - network ID/name to connect the launched worker containers, defaults to `bridge`;
- `SPACELIFT_WORKER_EXTRA_MOUNTS` - additional files or directories to be mounted to the launched worker docker containers during **either read or write runs**, as a comma-separated list of mounts in the form of `/host/path:/container/path`;
- `SPACELIFT_WORKER_WO_EXTRA_MOUNTS` - Additional directories to be mounted to the worker docker container during **write only runs**, as a comma separated list of mounts in the form of `/host/path:/container/path`;
