sudoblark.terraform.modularised-demo

An example Terraform setup using modularised components to fulfill a use-case - repo managed by sudoblark.terraform.github

Table of Contents
  1. About The Project
  2. Getting Started
  3. Architecture
  4. Usage

About The Project

This repo is simply a demo of how a modularised Terraform setup may be utilised in a micro-repo fashion, i.e. one repo per business case.

Its counterpart may be considered to be sudoblark.terraform.github, an example mono-repo which manages all aspects of a single SaaS product in one place.

For now, the repo is intended to be used in workshops/conferences to demonstrate a data-structure driven approach to Terraform.
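
As a flavour of what "data-structure driven" means here, below is a minimal, hypothetical sketch (the bucket names and settings are illustrative, not this repo's actual configuration): resources are declared once, and a plain data structure determines how many are created and how each is configured.

# Hypothetical example of data-structure driven Terraform: the locals
# block is the single source of truth, and resources are stamped out
# from it via for_each.
locals {
  buckets = {
    raw       = { versioned = true }
    processed = { versioned = false }
  }
}

resource "aws_s3_bucket" "this" {
  for_each = local.buckets

  bucket = "dev-demo-${each.key}"
}

resource "aws_s3_bucket_versioning" "this" {
  # Only create versioning configuration for buckets that ask for it.
  for_each = { for name, cfg in local.buckets : name => cfg if cfg.versioned }

  bucket = aws_s3_bucket.this[each.key].id

  versioning_configuration {
    status = "Enabled"
  }
}

Adding a third bucket then becomes a one-line change to the data structure rather than a new block of resource code.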

(back to top)

Built With

Infrastructure

Application code

(back to top)

Getting Started

Below we outline how to interact with both the infrastructure and application code bases.

The repo structure is relatively simple:

  • application is the top level for application code. Subfolders should be structured so that Python apps follow their respective best practices, and so that we have a single source of truth for state machine JSON etc.
  • infrastructure contains both:
    • example-account folders, one per account, which instantiate modules (see the sketch after this list)
    • a modules folder acting as a top level for re-usable Terraform modules
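
A minimal sketch of how an example-account folder might instantiate one of those modules; the module name, variables and values here are assumptions for illustration, not the repo's actual interface:

# infrastructure/example-account/main.tf (hypothetical shape only)
module "s3_files" {
  source = "../modules/s3_files"

  bucket_name = "dev-demo-raw"

  # The data structure drives what the module creates.
  files = {
    "dogs/landing/readme.txt" = "./files/readme.txt"
  }
}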

This repo is intended for demonstration purposes when delivering conference talks, but is also made public so that conference attendees may explore it in their own time as well.

Prerequisites

Note: the instructions below are for macOS only; alterations may be required to get this working on other operating systems.

  • tfenv
git clone https://github.com/tfutils/tfenv.git ~/.tfenv
echo 'export PATH="$HOME/.tfenv/bin:$PATH"' >> ~/.bash_profile
  • awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
  • Virtual environment with pre-commit installed
python3 -m venv venv
source venv/bin/activate
pip install pre-commit
  • Poetry
pip install -U pip setuptools
pip install poetry

Pre-commit hooks

Various pre-commit hooks are in place in order to ensure consistency across the codebases.

You may run these yourself as follows:

source venv/bin/activate
pip install pre-commit
pre-commit run --all-files

(back to top)

Architecture

The sections below provide architecture diagrams and explanations to better understand what this demo repo does, both in terms of infrastructure and application workflows.

Infrastructure

architecture-beta
    group demo(cloud)[AWS Account]

    service unzipLambda(server)[Unzip lambda] in demo
    service rawBucket(database)[raw bucket] in demo
    service processedBucket(database)[processed bucket] in demo
    service bucketNotification(disk)[Bucket notification] in demo

    rawBucket:R -- L:bucketNotification
    bucketNotification:R -- L:unzipLambda
    unzipLambda:R -- L:processedBucket
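
In Terraform, the raw-bucket-to-lambda wiring shown above could look roughly like the sketch below; the resource names (aws_s3_bucket.raw, aws_lambda_function.unzip) are stand-ins for whatever the modules actually create:

# Hypothetical wiring: fire the unzip lambda when a ZIP lands in the
# dogs/landing/ prefix of the raw bucket.
resource "aws_s3_bucket_notification" "raw" {
  bucket = aws_s3_bucket.raw.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.unzip.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "dogs/landing/"
    filter_suffix       = ".zip"
  }
}

# S3 also needs permission to invoke the lambda.
resource "aws_lambda_permission" "allow_raw_bucket" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.unzip.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.raw.arn
}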

Note: the s3_files module, as you can see, is not part of the ETL flow above. It is instead included because it is one of the simplest data-driven modules you can have, making it a simple but useful example in a conference/workshop setting.
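
As a flavour of why it makes a good teaching example, the interior of such a module might look like the sketch below; the files/bucket_name interface is an assumption, not the module's actual contract:

# Hypothetical module interior: one variable holds the data structure,
# one resource block turns each entry into an S3 object.
variable "bucket_name" {
  type        = string
  description = "Bucket to upload files into."
}

variable "files" {
  type        = map(string)
  description = "Map of S3 object key => local file path."
}

resource "aws_s3_object" "this" {
  for_each = var.files

  bucket = var.bucket_name
  key    = each.key
  source = each.value
  etag   = filemd5(each.value)
}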

Workflows

---
title: Demo workflow
---
flowchart TD
    start((.ZIP uploaded to s3://raw/dogs/landing prefix))
    bucketNotification(Bucket notification)

    subgraph UnzipLambda
        lambdaStart(For each file in .ZIP)
        startPartition(Grabs year/month/day from filename)
        uploads(Uploads to s3://processed/dogs/daily/<partition>)

    end

    sns[\Send failure notification to topic/]
    endNode((Processed complete))

    start -- triggers --> bucketNotification
    bucketNotification -- triggers --> UnzipLambda

    lambdaStart --> startPartition
    startPartition --> uploads
    uploads -- If all succeed --> endNode
    uploads -- If fail --> sns
    sns --> endNode
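
The "send failure notification to topic" step could be wired up in several ways; one hypothetical approach in Terraform, assuming the lambda is invoked asynchronously and does not publish from its own code, is an on-failure destination (resource names are again stand-ins):

# Hypothetical failure wiring: route async invocation failures to SNS.
resource "aws_sns_topic" "unzip_failures" {
  name = "unzip-lambda-failures"
}

resource "aws_lambda_function_event_invoke_config" "unzip" {
  function_name = aws_lambda_function.unzip.function_name

  destination_config {
    on_failure {
      destination = aws_sns_topic.unzip_failures.arn
    }
  }
}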

(back to top)

Usage

Below we outline the intended use-cases for the repository.

Note: this section assumes you've installed the prerequisites above.

Deploying Terraform

The main.tf file in example-account is left deliberately blank, so that the setup may be instantiated in any AWS account for demonstration or learning purposes.
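
For example, to deploy into your own account you might populate main.tf along these lines (the region choice and module blocks are assumptions, not prescribed by the repo):

# Hypothetical minimal main.tf for example-account.
provider "aws" {
  region = "eu-west-2" # assumption: any region works for the demo
}

# ...followed by module blocks such as the s3_files sketch shown
# in the Getting Started section.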

Simply:

  1. Navigate to the instantiation folder:
cd infrastructure/example-account
  2. Ensure your shell is authenticated to an appropriate profile for AWS:
export AWS_DEFAULT_PROFILE=<PROFILE-NAME>
  3. ZIP the lambda (note: in a production environment this would usually be done via CI/CD). From the repository root:
cd application/unzip-lambda/unzip_lambda
zip -r lambda.zip lambda_function.py
mkdir src
mv lambda.zip src
  4. Back in infrastructure/example-account, init, plan and then apply:
terraform init
terraform plan
terraform apply
  5. Tear down when no longer required:
terraform destroy

Processing dummy files

Files under application/dummy_uploads contain the contents of the ZIP file that our unzip lambda unzips from the raw bucket into the processed bucket.

File names are in the format YYYYmmdd.csv. Each file corresponds to dogs viewed that day, with rows in the format:

dog_name, breed, location

For example, we may have a file named 20241008.csv with a single row:

Cerberus, Molossus, Hades

Thus indicating that on the 8th of October 2024, we spotted Cerberus doing a valiant job guarding the gates of Hades.

To upload these dummy files after the solution has been deployed into AWS:

  1. First, ZIP the files:
cd application/dummy_uploads
zip -r uploads.zip .
  2. Then, from the repository root, upload the ZIP to the dev-raw bucket at the dogs/landing/ prefix (removing any previous upload first):
aws s3 rm s3://dev-demo-raw/dogs/landing/uploads.zip
aws s3 cp ./application/dummy_uploads/uploads.zip s3://dev-demo-raw/dogs/landing/
  3. This should then trigger an S3 bucket notification which runs the lambda.
  4. The lambda, in turn, should unzip the files and re-upload them into the dev-processed bucket under the dogs/daily root, with the prefix determined by date, i.e. for 20241008.csv we'd expect an upload at dogs/daily/_year=2024/_month=10/_day=08/viewings

(back to top)
