This repository contains tooling to deploy autonomous service agent(s) on Amazon Web Services (AWS) using Terraform. The repository is designed to facilitate the deployment process using GitHub Actions. You can also clone the repository locally on your machine and execute the deployment commands manually.
After the deployment process finishes, the agent will be running in a `screen` session within an AWS EC2 instance in the default AWS Region `us-east-2`.[^1]
> **Note**
> This repository contains default configuration parameters for the Autonomous Keeper Service (AKS) in the `./config/service_vars.env` file. To deploy this particular service, you will also need to set these variables:
>
> - `SERVICE_REPO_URL`: https://github.com/valory-xyz/agent-academy-2
> - `SERVICE_ID`: `valory/keep3r_bot_goerli:0.1.0` (for Görli testnet) or `valory/keep3r_bot:0.1.0` (for Ethereum mainnet)
> - (Optional) `SERVICE_REPO_TAG`: `v0.3.0`
>
> See the complete instructions below.
- **Set up your AWS account.** Sign in to the AWS Management Console and configure the following parameters.

  - In case you don't have one, you need to create an IAM user with an access key. Within the AWS Management Console, create a new user (*IAM/Users*), and create an access key for that user (*Security credentials/Access keys*). Note down the AWS Access Key ID and AWS Secret Access Key.
  - You also need to create an S3 bucket to store the Terraform state for the service.[^2] Create the bucket in region `us-east-2` (see the example command after this list). You must follow the AWS guidelines for naming your bucket. Note down the bucket name.
- **Prepare an SSH key pair.** This key pair will be used to access the deployed AWS EC2 instance where the service will be running. You can generate the key pair yourself, e.g.,

  ```bash
  ssh-keygen -t rsa -b 2048 -f id_rsa
  ```

  Make sure to use one of the key types supported by AWS EC2. You can also use the AWS Management Console to create a key pair. Store both the public and private key securely.
- **Prepare the service repository data.**

  - Note down the service repository URL, the public ID of the service, and the release tag corresponding to the version of the service you want to deploy. If you don't define the release tag, the script will deploy the latest available release.
  - You also need to prepare the required service configuration parameters (`service_vars.env`) and the agent keys file (`keys.json`). See the details below.
  - Ensure that the GitHub repository of the service is publicly accessible. If it is a private repository, your GitHub user has to be authorized to access it, and you need to create a GitHub personal access token with `repo` permissions enabled.
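If you prefer the command line to the AWS Management Console, the S3 bucket for the Terraform state mentioned above can be created with the AWS CLI. This is a minimal sketch; the bucket name `my-tfstate-bucket` is a placeholder and must be globally unique:

```bash
# Create the S3 bucket for the Terraform state in us-east-2.
# Outside us-east-1, the region must also be passed as a LocationConstraint.
aws s3api create-bucket \
  --bucket my-tfstate-bucket \
  --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2

# Optionally enable versioning so that previous state files can be recovered.
aws s3api put-bucket-versioning \
  --bucket my-tfstate-bucket \
  --versioning-configuration Status=Enabled
```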
The repository is prepared to deploy the service using GitHub Actions. This is the easiest way to deploy your service.
1. **Fork this repository.**

2. **Configure cloud credentials and service parameters.** First, define the following secrets and variables in the forked repository:
    > **Warning**
    > Please make sure to avoid any leading or trailing white spaces or newlines when defining secrets and variables.

    | Name | Type | Description |
    |------|------|-------------|
    | `AWS_ACCESS_KEY_ID` | Secret | AWS Access Key ID (20-character alphanumeric string). |
    | `AWS_SECRET_ACCESS_KEY` | Secret | AWS Secret Access Key (40-character alphanumeric string). |
    | `OPERATOR_SSH_PRIVATE_KEY` | Secret | SSH private key to access the deployed AWS EC2 instance. It must include the opening and closing lines. Example:<br/>`-----BEGIN OPENSSH PRIVATE KEY-----`<br/>`<ssh_private_key>`<br/>`-----END OPENSSH PRIVATE KEY-----` |
    | `GH_TOKEN` (Optional) | Secret | GitHub access token. This is only required if the service repository is private. Example: `ghp_000000000000000000000000000000000000` |
    | `TFSTATE_S3_BUCKET` | Variable | AWS S3 bucket name to store the Terraform state. |
    | `SERVICE_REPO_URL` | Variable | Service repository URL. Example: `https://github.com/valory-xyz/agent-academy-2` |
    | `SERVICE_ID` | Variable | Public ID of the service. Example: `valory/keep3r_bot_goerli:0.1.0` |
    | `SERVICE_REPO_TAG` (Optional) | Variable | Release tag corresponding to the version of the service you want to deploy. If not defined, the script will deploy the latest release. Example: `v0.3.0` |
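    If you prefer the command line to the repository settings page, the same secrets and variables can be defined with the GitHub CLI. The following is a sketch, assuming `gh` is authenticated and `<owner>/<forked_repo>` stands for your fork:

    ```bash
    # Define repository secrets (gh prompts for the value or reads it from stdin).
    gh secret set AWS_ACCESS_KEY_ID --repo <owner>/<forked_repo>
    gh secret set AWS_SECRET_ACCESS_KEY --repo <owner>/<forked_repo>
    gh secret set OPERATOR_SSH_PRIVATE_KEY --repo <owner>/<forked_repo> < id_rsa

    # Define repository variables.
    gh variable set TFSTATE_S3_BUCKET --repo <owner>/<forked_repo> --body "<tfstate_s3_bucket>"
    gh variable set SERVICE_REPO_URL --repo <owner>/<forked_repo> --body "https://github.com/valory-xyz/agent-academy-2"
    gh variable set SERVICE_ID --repo <owner>/<forked_repo> --body "valory/keep3r_bot_goerli:0.1.0"
    ```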
    Next, populate, commit, and push the files with the service configuration parameters:

    - `./config/keys.json`: it must contain the address(es) and key(s) for the agents to be deployed. Example (for a single agent):

      ```json
      [
        {
          "address": "0x1c883D4D6a429ef5ea12Fa70a1d67D6f6013b279",
          "private_key": "0x0123456789abcdef0123456789abcdef...0"
        }
      ]
      ```

      > **Warning**
      > If you don't want to expose the `keys.json` file in the repository, you can define the GitHub secret `KEYS_JSON` with the contents of the file.

    - `./config/service_vars.env`: it must contain the service-specific variables. You must check the service you are deploying to know which variables you need to define. Example:

      ```bash
      SERVICE_VAR1=value1
      SERVICE_VAR2=value2
      SERVICE_SECRET1=secret_value1
      SERVICE_SECRET2=secret_value2
      ```

      > **Warning**
      > If you don't want to expose secret/confidential variables in the repository, you can assign them a blank value (or a placeholder value)[^3] in the file `service_vars.env`, and override their values by defining GitHub secrets matching the corresponding variables' names.

      > **Warning**
      > For security reasons, even if present in the file `service_vars.env`, the following variables will not be overridden with GitHub secrets or variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `OPERATOR_SSH_PRIVATE_KEY`, `GH_TOKEN`, `KEYS_JSON`, `TFSTATE_S3_BUCKET`.
3. **Deploy the infrastructure and the service to AWS.** Run the "Deploy service" workflow using the GitHub interface. This workflow will create the necessary resources on AWS. Once the workflow has finished, the generated AWS EC2 instance will take over the process of deploying the agent for the service.

    You should see the following output in the Summary step of the workflow:

    ```
    Summary:
    - Service repository URL: <SERVICE_REPO_URL>
    - Service repository tag: <SERVICE_REPO_TAG>
    - Service ID: <SERVICE_ID>
    - AWS EC2 instance public IP: <AWS_EC2_PUBLIC_IP>
    - AWS EC2 instance ID: <AWS_EC2_ID>

    Please wait until the AWS EC2 instance finishes completing the service deployment.
    ```
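    The workflow can also be triggered from the command line with the GitHub CLI. This is a sketch; the workflow file name `deploy_service.yml` is an assumption, so list the workflows of your fork first and adapt the name:

    ```bash
    # List the workflows available in your fork and trigger the "Deploy service" one.
    gh workflow list --repo <owner>/<forked_repo>
    gh workflow run deploy_service.yml --repo <owner>/<forked_repo>

    # Follow the most recent run until it completes.
    gh run watch --repo <owner>/<forked_repo>
    ```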
4. **Interact with the AWS EC2 instance.** You can connect to the AWS EC2 instance as the user `ubuntu` using the SSH private key specified:

    ```bash
    ssh -i /path/to/private_key ubuntu@<AWS_EC2_PUBLIC_IP>
    ```

    Track the progress of the service deployment by checking the log file:

    ```bash
    cat ~/deploy_service.log
    ```

    Once the service agent is up and running, you can attach to its `screen` session:

    ```bash
    screen -r service_screen_session
    ```

    Use `Ctrl+A D` to detach from the session. Alternatively, you can also follow the Docker logs:

    ```bash
    docker logs abci0 --follow   # For the agent
    docker logs node0 --follow   # For the Tendermint node
    ```
5. **Destroy the infrastructure.** Run the "Destroy infrastructure" workflow using the GitHub interface. This will destroy the resources created on AWS. Alternatively, you can also remove the resources using the AWS Management Console.
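    As with the deployment, this workflow can be triggered with the GitHub CLI. The workflow file name below is an assumption; check the `.github/workflows/` directory of your fork for the actual name:

    ```bash
    gh workflow run destroy_infrastructure.yml --repo <owner>/<forked_repo>
    ```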
You can also execute the deployment steps manually on your local machine. You must ensure that you have the required software installed; the steps below use, at a minimum, the AWS CLI and Terraform.
Follow these steps:
1. **Clone this repository to your local machine.**
2. **Configure cloud credentials and service parameters.**

    1. Set up the AWS CLI. Configure your local machine to work with your AWS credentials (AWS Access Key ID and AWS Secret Access Key):

        ```bash
        aws configure
        ```

        You can check if it has been properly configured by examining the files:

        ```bash
        cat ~/.aws/config
        cat ~/.aws/credentials
        ```

    2. Define the following environment variables. See the description of the variables in Step 2 in the section above.

        ```bash
        export TF_VAR_operator_ssh_pub_key_path=<path/to/your/ssh_public_key>
        export TFSTATE_S3_BUCKET=<tfstate_s3_bucket>
        export SERVICE_REPO_URL=https://github.com/<owner>/<repo>
        export SERVICE_ID=<author>/<service_name>:<version>

        # Optional variables
        export SERVICE_REPO_TAG=<version_tag>
        export GH_TOKEN=ghp_000000000000000000000000000000000000
        ```

    3. Populate the files with the service configuration parameters. See the description of the files in Step 2 in the section above:

        - `./config/keys.json`
        - `./config/service_vars.env`
3. **Deploy the infrastructure and the service to AWS.**

    1. Deploy the infrastructure:

        ```bash
        cd ./infra/aws/docker-compose/
        terraform init -backend-config="bucket=$TFSTATE_S3_BUCKET"
        terraform plan
        terraform apply
        ```

        You should see the following output once the command finishes:

        ```
        Apply complete! Resources: 10 added, 0 changed, 0 destroyed.

        Outputs:

        instance_id = <AWS_EC2_ID>
        instance_public_ip = <AWS_EC2_PUBLIC_IP>
        ```

        The instance will automatically install the Open Autonomy framework, together with a number of dependencies. You should wait until the AWS EC2 instance is ready (typically, less than 5 minutes). The command below will wait until it is ready:

        ```bash
        aws ec2 wait instance-status-ok --instance-ids <AWS_EC2_ID>
        ```
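        If you need to recover the instance's public IP later (for example, for the `scp` and `ssh` commands below), you can query it with the AWS CLI; a small sketch:

        ```bash
        # Query the public IP of the instance created by Terraform,
        # substituting the instance ID printed in the Terraform outputs.
        aws ec2 describe-instances \
          --instance-ids <AWS_EC2_ID> \
          --query 'Reservations[0].Instances[0].PublicIpAddress' \
          --output text
        ```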
    2. Generate the service deployment script:

        ```bash
        # Position on the root of the repository
        cd ../../..
        ./scripts/generate_service_deployment_script.sh
        ```

        The script will generate the file `deploy_service.sh`, which contains the necessary commands to deploy the service in the AWS EC2 instance.

    3. Deploy the agent to the AWS EC2 instance:

        ```bash
        scp ./deploy_service.sh ubuntu@<AWS_EC2_PUBLIC_IP>:~
        ssh ubuntu@<AWS_EC2_PUBLIC_IP> 'nohup ~/deploy_service.sh > deploy_service.log 2>&1 &'
        ```

        You might need to indicate the SSH private key path using the `-i` option on the `scp` and `ssh` commands, as in the example below.
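        For instance, assuming the key pair generated earlier is stored at `~/.ssh/id_rsa` (a placeholder path):

        ```bash
        # Authenticate with the key pair prepared earlier.
        scp -i ~/.ssh/id_rsa ./deploy_service.sh ubuntu@<AWS_EC2_PUBLIC_IP>:~
        ssh -i ~/.ssh/id_rsa ubuntu@<AWS_EC2_PUBLIC_IP> 'nohup ~/deploy_service.sh > deploy_service.log 2>&1 &'
        ```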
4. **Interact with the AWS EC2 instance.** See the *Interact with the AWS EC2 instance* step in the section above.

5. **Destroy the infrastructure.**

    ```bash
    cd ./infra/aws/docker-compose/
    terraform destroy
    ```

    This will destroy the resources created on AWS. Alternatively, you can also remove the resources using the AWS Management Console.
[^1]: If you wish to deploy on another AWS Region, you need to modify the Terraform variable `deployment_region` in the file `./infra/aws/docker_compose/variables.tf`. You also need to provide a valid Amazon Machine Image (AMI) ID for that region (resource `aws_instance` in the file `./infra/aws/docker_compose/main.tf`), otherwise the deployment process will fail on `terraform apply`.

[^2]: For simplicity, the Terraform scripts in this repository do not implement state locking. Therefore, it is important to ensure that the scripts are not executed concurrently by different users in order to prevent potential issues. You might consider implementing state locking for the S3 backend using a DynamoDB table.
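    A minimal sketch of what this could look like, assuming a lock table named `terraform-locks` (a placeholder) and the AWS CLI configured as described above:

    ```bash
    # Create a DynamoDB table for Terraform state locking
    # (the S3 backend expects a string primary key named LockID).
    aws dynamodb create-table \
      --table-name terraform-locks \
      --attribute-definitions AttributeName=LockID,AttributeType=S \
      --key-schema AttributeName=LockID,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST \
      --region us-east-2

    # Point the S3 backend at the table when initialising Terraform.
    terraform init \
      -backend-config="bucket=$TFSTATE_S3_BUCKET" \
      -backend-config="dynamodb_table=terraform-locks"
    ```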
[^3]: The deployment process will override any service-specific variable defined in `./config/service_vars.env` with any secret or variable defined in the GitHub repository that matches the variable name. It is important to note that a variable (overridden or not) will be exported to the AWS EC2 instance running the service agent only if it is declared in the `./config/service_vars.env` file.