This project contains source code and supporting files for a serverless application that you can deploy with the SAM CLI. It is set up as a Python project using the Poetry package manager.
It includes:
- Lambda Powertools for operational best practices
- EditorConfig
- Poetry
- pre-commit
- isort
- flake8
- mypy
- cfn-lint
- Docker Compose
- SQLAlchemy
- Alembic
- pytest
The code was built to work with MySQL, but a MongoDB container is included as part of the docker-compose file, so you can use it if needed.
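As a rough orientation, the Docker Compose file might look something like the sketch below. This is a hypothetical fragment for illustration only: the service names, image tags, ports, credentials, and volume name are assumptions, not the project's actual values.

```yaml
# Hypothetical sketch of docker-compose.yml; all names and values here
# are assumptions for illustration.
version: "3.8"
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # assumption
      MYSQL_DATABASE: app            # assumption
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql    # the database is mounted as a volume
  mongodb:
    image: mongo:6                   # optional, included if you prefer MongoDB
    ports:
      - "27017:27017"
volumes:
  mysql-data:
```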
Make sure you have the following installed before you proceed:
- Python 3
- Docker and Docker Compose

Configure a Poetry environment (see https://python-poetry.org/docs/basic-usage/):
```shell
# create the virtualenv and enter it
poetry shell
# install dependencies
poetry install
# install the git hook scripts
pre-commit install
# start the db with Docker Compose
docker-compose up -d
# run all db migrations
alembic upgrade head
```
To check the MySQL logs, run:

```shell
docker-compose logs mysql
```
Running the tests requires the MySQL container to be up:

```shell
docker-compose up -d
```
Next, inside the Poetry shell:

```shell
# run integration tests
pytest runtime/tests/integration
# run unit tests
pytest runtime/tests/unit
```
Remember to use the `-s` flag if you want pytest not to capture stdout.
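For orientation, a unit test under `runtime/tests/unit` might look like the minimal sketch below. Both the helper and the test are hypothetical examples, not actual code from this repository:

```python
# Hypothetical example of a pytest-style unit test; the helper function
# stands in for real application code and is an assumption for illustration.

def build_response(status_code: int, body: str) -> dict:
    """Toy helper standing in for a real Lambda response builder."""
    return {"statusCode": status_code, "body": body}


def test_build_response_returns_expected_shape():
    response = build_response(200, "ok")
    assert response["statusCode"] == 200
    assert response["body"] == "ok"
```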
During local development, your database is mounted as a volume inside the container (configured in `docker-compose.yml`). Every time you change your models, create a "revision" and "upgrade" your database with that revision; this is what updates the tables in your database. Otherwise, your application will raise errors.
- If you created a new model in `./runtime/src/db_models/`, make sure to import it in `./runtime/src/db/base.py`; that Python module (`base.py`) imports all the models and is used by Alembic.
- After changing a model (for example, adding a column), create a revision, e.g.:

  ```shell
  alembic revision --autogenerate -m "Add column last_name to User model"
  ```

- Commit the files generated in the alembic directory to the git repository.
- After creating the revision, run the migration against the database (this is what actually changes the database):

  ```shell
  alembic upgrade head
  ```
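To make the effect of the upgrade step concrete, the sketch below shows what the autogenerated revision for "add column last_name" effectively runs against the database. It uses the stdlib `sqlite3` module instead of MySQL so it is self-contained; the table and column names merely mirror the example revision message above:

```python
# Illustration of what an autogenerated Alembic revision boils down to at
# the SQL level. Uses stdlib sqlite3 instead of MySQL so the sketch runs
# standalone; table/column names are taken from the example revision message.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, first_name TEXT)")

# The migration's upgrade() essentially issues an ALTER TABLE like this:
conn.execute("ALTER TABLE user ADD COLUMN last_name TEXT")

# Inspect the resulting schema (column name is index 1 in table_info rows)
columns = [row[1] for row in conn.execute("PRAGMA table_info(user)")]
print(columns)  # ['id', 'first_name', 'last_name']
conn.close()
```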
If you don't want to use migrations at all, uncomment the line in `./runtime/src/db/init_db.py` with:

```python
Base.metadata.create_all(bind=engine)
```
If you don't want to start with the default models and want to remove or modify them from the beginning, without keeping any previous revision, you can delete the revision files (the `.py` Python files) under `./db_migrations/versions/` and then create a first migration as described above.
If you are using an RDS database and want the same table schema locally, you can generate the Alembic migration code with the Python script `./db_migrations/generate_table_migration_from_rds_table.py`. You need to change the server, user, password and database name, and specify all the tables whose metadata you want to inspect. More info in: Print Python Code to Generate Particular Database Tables.
As an example, the `fake` table with data is created in MySQL as part of the migrations.
```shell
git add .
# git commit will trigger the pre-commit hooks
git commit -m "[ADDED/FIXED/CHANGED] - What you did"
git push ...
```
Use the Bitbucket Pipelines CI (continuous integration) system to deploy automatically.
The pipeline needs two deployment environments:
- Test
- Production
With the following variables set:
- DEPLOY_ENV: dev/prod
- AWS_ACCESS_KEY_ID: for the AWS CLI
- AWS_SECRET_ACCESS_KEY: for the AWS CLI
- AWS_REGION: for the AWS CLI
- SAM_S3_BUCKET: S3 bucket for SAM deployments
For the DEV deploy, if you want to run DB migrations as part of your CI/CD, you also need:
- MYSQL_SERVER: URL of the DEV DB
- MYSQL_DB: database name
- MYSQL_USER: username for alembic
- MYSQL_PASSWORD: password for alembic
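A pipeline using these environments and variables might be structured roughly as in the fragment below. This is a hypothetical sketch of a `bitbucket-pipelines.yml`: the branch names, step layout, and exact SAM flags are assumptions, not the project's actual configuration.

```yaml
# Hypothetical sketch of bitbucket-pipelines.yml; branch names, steps and
# command flags are assumptions for illustration.
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to Test
          deployment: Test
          script:
            - sam build
            - sam deploy --s3-bucket $SAM_S3_BUCKET --region $AWS_REGION --no-confirm-changeset
            - alembic upgrade head   # optional DB migrations, DEV only
    master:
      - step:
          name: Deploy to Production
          deployment: Production
          script:
            - sam build
            - sam deploy --s3-bucket $SAM_S3_BUCKET --region $AWS_REGION --no-confirm-changeset
```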
Secret parameters are stored in AWS SSM.
The example CloudFormation deployment includes the following parameter in SSM:
- MysqlServer
Please change it to the server that you need, so the deployment doesn't fail.
Tracing
The Tracer utility patches known libraries and traces the execution of this sample code, including the response and exceptions, as tracing metadata. You can visualize the traces in AWS X-Ray.
Logger
The Logger utility creates an opinionated application logger with structured logging as the output, dynamically samples 10% of your logs in DEBUG mode for concurrent invocations, logs incoming events as your function is invoked, and injects key information from the Lambda context object into your logger. You can visualize the logs in Amazon CloudWatch Logs.
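To show what "structured logging as the output" means in practice, the sketch below approximates that kind of JSON log line using only the standard library. The field names and service name are assumptions; the real Powertools Logger output may use different keys:

```python
# Stdlib-only approximation of the structured JSON log lines the Powertools
# Logger emits; exact field names in Powertools may differ (assumption).
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "service": "payment",      # assumption: example service name
            "function_name": "app",    # Powertools injects this from the Lambda context
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Collecting payment")  # emits one JSON object per log line
```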
Metrics
The Metrics utility captures the cold start metric of your Lambda invocation and can add custom metrics to help you understand your application KPIs. You can visualize the metrics in Amazon CloudWatch.
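Under the hood, the Metrics utility serializes metrics into CloudWatch's Embedded Metric Format (EMF), a JSON blob written to stdout. The stdlib-only sketch below builds such a payload by hand; the namespace, dimension, and metric names are example assumptions, not values from this project:

```python
# Sketch of a CloudWatch Embedded Metric Format (EMF) payload, the format
# the Metrics utility emits; namespace and dimension values are assumptions.
import json
import time


def build_emf_payload(metric_name: str, value: float) -> str:
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "CloudWatchMetrics": [{
                "Namespace": "ServerlessApp",      # assumption: example namespace
                "Dimensions": [["service"]],
                "Metrics": [{"Name": metric_name, "Unit": "Count"}],
            }],
        },
        "service": "payment",                      # assumption: example dimension value
        metric_name: value,                        # the metric value itself
    })


print(build_emf_payload("ColdStart", 1))
```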