Merge pull request #14 from aws-samples/code-quality
Added contribution guide, updated readme and added pre-commit tool
frbrkoala authored Nov 14, 2023
2 parents 239e24f + e68545b commit 50dcf5a
Showing 71 changed files with 207 additions and 145 deletions.
2 changes: 1 addition & 1 deletion .gitallowed
@@ -1,3 +1,3 @@
account: '\d{12}'
account: "\d{12}"
account=\d{12}
account=\d{12}
12 changes: 8 additions & 4 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -12,13 +12,17 @@ Consult the [CONTRIBUTING](https://github.com/aws-samples/aws-blockchain-node-ru
### More

- [ ] Yes, I have tested the PR using my local account setup (Provide any test evidence report under Additional Notes)
- [ ] Mandatory for new node blueprints. Yes, I have added a example to support my blueprint PR
- [ ] Mandatory for new node blueprints. Yes, I have updated the `website/docs` or `website/blog` section for this feature
- [ ] Yes, I ran `pre-commit run -a` with this PR. Link for installing [pre-commit](https://pre-commit.com/) locally
- [ ] Mandatory for new node blueprints. Yes, I have added usage example to the `README.md` file in my blueprint folder
- [ ] Mandatory for new node blueprints. Yes, I have implemented automated tests for all stacks in my blueprint and they pass
- [ ] Mandatory for new node blueprints. Yes, I have added a reference to my `README.md` file to `website/docs` section for this feature
- [ ] Yes, I have set up and ran all [pre-merge quality control tools](./docs/pre-merge-tools.md) on my local machine and they don't show warnings.

### For Moderators

- [ ] E2E Test successfully complete before merge?
- [ ] The tests for all current blueprints successfully complete before merge?
- [ ] Mandatory for new node blueprints. All [pre-merge quality control tools](./docs/pre-merge-tools.md) and `cdk-nag` tools don't show warnings?
- [ ] Mandatory for new node blueprints. The deployment test works on blank AWS account according to instructions in the `README.md` before merge?
- [ ] Mandatory for new node blueprints. The website builds without errors?

### Additional Notes

2 changes: 1 addition & 1 deletion .gitignore
@@ -33,4 +33,4 @@ single-node-deploy*.json
ha-nodes-deploy*.json

*.OLD
.env
.env
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -10,3 +10,4 @@ repos:
- id: detect-private-key
- id: detect-aws-credentials
args: ['--allow-missing-credentials']
- id: forbid-submodules
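For context, after this commit the `.pre-commit-config.yaml` would look roughly like the sketch below. Only the hook ids and the `--allow-missing-credentials` argument are visible in the diff; the `repo` URL and `rev` pin are assumptions based on the standard pre-commit-hooks project.

```yaml
# Sketch only: the repo URL and rev are assumed, not shown in the diff above.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0  # hypothetical pin
    hooks:
      - id: detect-private-key
      - id: detect-aws-credentials
        args: ['--allow-missing-credentials']
      - id: forbid-submodules  # hook added by this commit
```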
9 changes: 5 additions & 4 deletions CONTRIBUTING.md
@@ -32,10 +32,11 @@ To send us a pull request, please:
1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. If you contribute a new CDK app, make sure you use [cdk-nag](https://aws.amazon.com/blogs/devops/manage-application-security-and-compliance-with-the-aws-cloud-development-kit-and-cdk-nag/) in your applicaiton.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
4. If you contribute a new CDK app, make sure you use [cdk-nag](https://aws.amazon.com/blogs/devops/manage-application-security-and-compliance-with-the-aws-cloud-development-kit-and-cdk-nag/) in your application.
4. Set up and run [pre-merge quality control tools](./docs/pre-merge-tools.md) on your local machine.
5. Commit to your fork using clear commit messages.
6. Send us a pull request, answering any default questions in the pull request interface.
7. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
1 change: 0 additions & 1 deletion LICENSE
@@ -14,4 +14,3 @@ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

24 changes: 22 additions & 2 deletions README.md
@@ -1,5 +1,25 @@
# AWS Blockchain Node Runners

This repository contains sample [AWS Cloud Development Kit (CDK)](https://aws.amazon.com/cdk/) applications that help to set up and run self-service blockchain nodes for different protocols on AWS.
This repository contains sample [AWS Cloud Development Kit (CDK)](https://aws.amazon.com/cdk/) applications (Node Runner blueprints) to deploy on AWS self-service blockchain nodes for various protocols. For more information see [Introducing AWS Blockchain Node Runners](https://aws-samples.github.io/aws-blockchain-node-runners/docs/intro).

- [Setup instructions for Ethereum nodes on AWS](./lib/ethereum/README.md)
### Documentation
For deployment instructions see [AWS Blockchain Node Runners Blueprints](https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/intro)

### Contributing
See [CONTRIBUTING](./CONTRIBUTING.md) for more information.

### Directory structure

- `docs` - General documentation applicable to all Node Runner blueprints (CDK applications within the `./lib` directory)
- `lib` - The place for all Node Runner blueprints and shared re-usable [CDK constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html)
- `lib/constructs` - [CDK constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html) used in Node Runner blueprints
- `lib/your-chain` - Node Runner blueprint for a specific chain
- `lib/your-chain/doc` - Documentation specific to the Node Runner blueprint
- `lib/your-chain/lib` - Place for CDK stacks and other blueprint assets
- `lib/your-chain/sample-configs` - Place for sample configurations to deploy Node Runner blueprint to your environment
- `lib/your-chain/test` - Place for unit tests to verify the Node Runner blueprint creates all necessary infrastructure
- `website` - Content for the project web site built with [Docusaurus](https://docusaurus.io/)
- `website/docs` - Place for the new blueprint deployment instructions. (If you are adding a new blueprint, use one of the existing examples to refer to the `README.md` file within your Node Runner blueprint directory inside `lib`).

### License
This repository uses MIT License. See more in [LICENSE](./LICENSE)
43 changes: 43 additions & 0 deletions docs/pre-merge-tools.md
@@ -0,0 +1,43 @@
# Pre-merge tools

We need your help to achieve better code quality and make sure the blueprints stay secure. Before merging your new commit, please set up and run the following tools on your development machine.

1. [git-secrets](https://github.com/awslabs/git-secrets)

```bash
# Install (Mac OS)
npm run install-git-secrets-mac

# Install on other platforms: https://github.com/awslabs/git-secrets#installing-git-secrets

# Setup
npm run setup-git-secrets

# Scan history
npm run scan-history-git-secrets

# Scan repository
npm run scan-repo-git-secrets
```

2. [semgrep](https://github.com/semgrep/semgrep)

```bash
# Install (Mac OS)
npm run install-semgrep-mac

# Install on other platforms: https://github.com/semgrep/semgrep#option-2-getting-started-from-the-cli

# Scan
npm run scan-semgrep
```

3. [pre-commit](https://pre-commit.com)

```bash
# Install (Mac OS)
npm run install-pre-commit-mac

# Run
npm run run-pre-commit
```
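The `npm run` commands above imply corresponding entries in the repository's `package.json`. A plausible sketch is below; the script names come from this document, but every underlying command is an assumption and the actual definitions may differ.

```json
{
  "scripts": {
    "install-git-secrets-mac": "brew install git-secrets",
    "setup-git-secrets": "git secrets --register-aws",
    "scan-history-git-secrets": "git secrets --scan-history",
    "scan-repo-git-secrets": "git secrets --scan -r .",
    "install-semgrep-mac": "brew install semgrep",
    "scan-semgrep": "semgrep scan --config auto",
    "install-pre-commit-mac": "brew install pre-commit",
    "run-pre-commit": "pre-commit run -a"
  }
}
```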
2 changes: 1 addition & 1 deletion lib/constructs/constants.ts
@@ -1,4 +1,4 @@
export const InstanceStoreageDeviceVolumeType = "instance-store";
export const NoneValue = "none";
export const VolumeDeviceNames = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi", "/dev/sdj", "/dev/sdk"]
export const GibibytesToBytesConversionCoefficient = 1073741824;
export const GibibytesToBytesConversionCoefficient = 1073741824;
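As a quick sanity check of the constants above, the sketch below shows how the conversion coefficient and device-name list might be used. `gibToBytes` is a hypothetical helper for illustration only, not part of the repository.

```typescript
// Values copied from lib/constructs/constants.ts above.
const GibibytesToBytesConversionCoefficient = 1073741824; // 2^30 bytes per GiB
const VolumeDeviceNames = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi", "/dev/sdj", "/dev/sdk"];

// Hypothetical helper: convert an EBS volume size in GiB to bytes.
function gibToBytes(sizeInGib: number): number {
  return sizeInGib * GibibytesToBytesConversionCoefficient;
}

// Six device names are available, matching the 6-data-volume limit enforced in single-node.ts.
console.log(gibToBytes(30), VolumeDeviceNames.length); // 32212254720 6
```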
6 changes: 3 additions & 3 deletions lib/constructs/ha-rpc-nodes-with-alb.ts
@@ -35,7 +35,7 @@ export class HANodesConstruct extends cdkContructs.Construct {

const availabilityZones = cdk.Stack.of(this).availabilityZones;

const {
const {
instanceType,
dataVolumes,
rootDataVolumeDeviceName,
@@ -202,10 +202,10 @@ export class HANodesConstruct extends cdkContructs.Construct {
{
id: "AwsSolutions-EC29",
reason: "Its Ok to terminate this instance as long as we have the data in the snapshot",

},
],
true
);
}
}
}
18 changes: 9 additions & 9 deletions lib/constructs/single-node.ts
@@ -25,7 +25,7 @@ export class SingleNodeConstruct extends cdkContructs.Construct {
constructor(scope: cdkContructs.Construct, id: string, props: SingleNodeConstructCustomProps) {
super(scope, id);

const {
const {
instanceName,
instanceType,
dataVolumes,
@@ -37,7 +37,7 @@
availabilityZone,
vpcSubnets,
} = props;

const singleNode = new ec2.Instance(this, "single-node", {
instanceName: instanceName,
instanceType: instanceType,
@@ -64,10 +64,10 @@ });
});

this.instance = singleNode;

// Processing data volumes
let dataVolumeIDs: string[] = [constants.NoneValue];

dataVolumes.forEach((dataVolume, arrayIndex) => {
const dataVolumeIndex = arrayIndex +1;
if (dataVolumeIndex > 6){
@@ -83,18 +83,18 @@ export class SingleNodeConstruct extends cdkContructs.Construct {
throughput: dataVolume.throughput,
removalPolicy: cdk.RemovalPolicy.DESTROY,
});

new ec2.CfnVolumeAttachment(this, `data-volume${dataVolumeIndex}-attachment`, {
// Device naming according to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
device: constants.VolumeDeviceNames[arrayIndex],
instanceId: singleNode.instanceId,
volumeId: newDataVolume.volumeId,
});

dataVolumeIDs[arrayIndex] = newDataVolume.volumeId;
}
})

// Getting logical ID of the instance to send ready signal later once the instance is initialized
const singleNodeCfn = singleNode.node.defaultChild as ec2.CfnInstance;
this.nodeCFLogicalId = singleNodeCfn.logicalId;
@@ -117,10 +117,10 @@ export class SingleNodeConstruct extends cdkContructs.Construct {
{
id: "AwsSolutions-EC29",
reason: "Its Ok to terminate this instance as long as we have the data in the snapshot",

},
],
true
);
}
}
}
6 changes: 3 additions & 3 deletions lib/constructs/snapshots-bucket.ts
@@ -12,7 +12,7 @@ export class SnapshotsS3BucketConstruct extends cdkContructs.Construct {

constructor(scope: cdkContructs.Construct, id: string, props: SnapshotsS3BucketConstructProps) {
super(scope, id);
const {
const {
bucketName
} = props;

@@ -28,9 +28,9 @@
encryption: s3.BucketEncryption.S3_MANAGED,
enforceSSL: true,
});

this.bucketName = snapshotsBucket.bucketName;
this.bucketArn = snapshotsBucket.bucketArn;
this.arnForObjects = snapshotsBucket.arnForObjects;
}
}
}
8 changes: 4 additions & 4 deletions lib/ethereum/README.md
@@ -129,7 +129,7 @@ Note: the snapshot backup process will automatically run every day at midnight ti
```bash
export ETH_RPC_ABL_URL=$(cat rpc-node-deploy.json | jq -r '..|.alburl? | select(. != null)')
echo $ETH_RPC_ABL_URL

# We query token balance of Beacon deposit contract: https://etherscan.io/address/0x00000000219ab540356cbb839cbe05303d7705fa
curl http://$ETH_RPC_ABL_URL:8545 -X POST -H "Content-Type: application/json" \
--data '{"method":"eth_getBalance","params":["0x00000000219ab540356cBB839Cbe05303d7705Fa", "latest"],"id":1,"jsonrpc":"2.0"}'
@@ -151,7 +151,7 @@ The result should be like this (the actual balance might change):
</body>
```

**NOTE:** By default and for security reasons the load balancer is available only from within the default VPC in the region where it is deployed. It is not available from the Internet and is not open for external connections. Before opening it up please make sure you protect your RPC APIs.
**NOTE:** By default and for security reasons the load balancer is available only from within the default VPC in the region where it is deployed. It is not available from the Internet and is not open for external connections. Before opening it up please make sure you protect your RPC APIs.

### Clearing up and undeploying everything

@@ -161,10 +161,10 @@ The result should be like this (the actual balance might change):
# Setting the AWS account id and region in case local .env file is lost
export AWS_ACCOUNT_ID=<your_target_AWS_account_id>
export AWS_REGION=<your_target_AWS_region>

pwd
# Make sure you are in aws-blockchain-node-runners/lib/ethereum

# Undeploy RPC Nodes
cdk destroy eth-rpc-nodes

2 changes: 1 addition & 1 deletion lib/ethereum/cdk.json
@@ -41,4 +41,4 @@
"@aws-cdk/aws-route53-patters:useCertificate": true,
"@aws-cdk/customresources:installLatestAwsSdkDefault": false
}
}
}
2 changes: 1 addition & 1 deletion lib/ethereum/doc/assets/Well_Architected.md
@@ -23,4 +23,4 @@ This is the Well-Architected checklist for Ethereum nodes implementation of the
| | Storage selection | How is storage solution selected? | Storage solution is selected based on best price-performance, i.e. gp3 Amazon EBS volumes with optimal IOPS and throughput. |
| | Architecture selection | How is the best performance architecture selected? | s5cmd tool has been chosen for Amazon S3 uploads/downloads because it gives better price-performance compared to Amazon EBS snapshots (including Fast Snapshot Restore, which can be expensive). |
| Operational excellence | Workload health | How is health of workload determined? | Health of workload is determined via AWS Application Load Balancer Target Group Health Checks, on port 8545. |
| Sustainability | Hardware & services | Select most efficient hardware for your workload | This solution uses AWS Graviton-based Amazon EC2 instances which offer the best performance per watt of energy use in Amazon EC2. |
| Sustainability | Hardware & services | Select most efficient hardware for your workload | This solution uses AWS Graviton-based Amazon EC2 instances which offer the best performance per watt of energy use in Amazon EC2. |
2 changes: 1 addition & 1 deletion lib/ethereum/lib/assets/copy-data-from-s3.sh
@@ -14,4 +14,4 @@ echo "$(($SECONDS / 60)) minutes and $(($SECONDS % 60)) seconds elapsed." && \
su ethereum && \
/usr/local/bin/docker-compose -f /home/ethereum/docker-compose.yml up -d && \
aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE --instance-id $INSTANCE_ID --lifecycle-hook-name "$LIFECYCLE_HOOK_NAME" --auto-scaling-group-name "$AUTOSCALING_GROUP_NAME" --region $REGION || \
aws autoscaling complete-lifecycle-action --lifecycle-action-result ABANDON --instance-id $INSTANCE_ID --lifecycle-hook-name "$LIFECYCLE_HOOK_NAME" --auto-scaling-group-name "$AUTOSCALING_GROUP_NAME" --region $REGION
aws autoscaling complete-lifecycle-action --lifecycle-action-result ABANDON --instance-id $INSTANCE_ID --lifecycle-hook-name "$LIFECYCLE_HOOK_NAME" --auto-scaling-group-name "$AUTOSCALING_GROUP_NAME" --region $REGION
2 changes: 1 addition & 1 deletion lib/ethereum/lib/assets/copy-data-to-s3.sh
@@ -8,4 +8,4 @@ s5cmd --log error sync /data $SNAPSHOT_S3_PATH/
echo "Sync finished at " $(date)
sudo touch /data/snapshotted
sudo su ethereum
/usr/local/bin/docker-compose -f /home/ethereum/docker-compose.yml up -d
/usr/local/bin/docker-compose -f /home/ethereum/docker-compose.yml up -d
2 changes: 1 addition & 1 deletion lib/ethereum/lib/assets/cw-agent.json
@@ -73,4 +73,4 @@
}
}
}
}
}
2 changes: 1 addition & 1 deletion lib/ethereum/lib/assets/node-cw-dashboard.ts
@@ -270,4 +270,4 @@ export const SyncNodeCWDashboardJSON = {
}
}
]
}
}
6 changes: 3 additions & 3 deletions lib/ethereum/lib/assets/sync-checker/syncchecker-besu-teku.sh
@@ -49,14 +49,14 @@ aws cloudwatch put-metric-data --metric-name elc_sync_block --namespace CWAgent
aws cloudwatch put-metric-data --metric-name elc_blocks_behind --namespace CWAgent --value $EXECUTION_CLIENT_BLOCKS_BEHIND --timestamp $TIMESTAMP --dimensions InstanceId=$INSTANCE_ID --region $REGION

# If the node is a sync node, check if the snapshot is already taken. If the snapshot is not taken, then take it and restart the node.
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [ ! -f "/data/snapshotted" ]; then
if [ "$EXECUTION_CLIENT_SYNC_STATS" == "false" ] && [ "$CONSENSUS_CLIENT_IS_SYNCING" == "false" ] && [ "$CONSENSUS_CLIENT_IS_OPTIMISTIC" == "false" ]; then
sudo /opt/copy-data-to-s3.sh

# Take a snapshot once a day at midnight
(sudo crontab -u root -l; echo '0 0 * * * /opt/copy-data-to-s3.sh' ) | sudo crontab -u root -
sudo crontab -l
fi
fi
fi
fi
fi
@@ -49,14 +49,14 @@ aws cloudwatch put-metric-data --metric-name elc_sync_block --namespace CWAgent
aws cloudwatch put-metric-data --metric-name elc_blocks_behind --namespace CWAgent --value $EXECUTION_CLIENT_BLOCKS_BEHIND --timestamp $TIMESTAMP --dimensions InstanceId=$INSTANCE_ID --region $REGION

# If the node is a sync node, check if the snapshot is already taken. If the snapshot is not taken, then take it and restart the node.
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [ ! -f "/data/snapshotted" ]; then
if [ "$EXECUTION_CLIENT_SYNC_STATS" == "false" ] && [ "$CONSENSUS_CLIENT_IS_SYNCING" == "false" ] && [ "$CONSENSUS_CLIENT_IS_OPTIMISTIC" == "false" ]; then
sudo /opt/copy-data-to-s3.sh

# Take a snapshot once a day at midnight
(sudo crontab -u root -l; echo '0 0 * * * /opt/copy-data-to-s3.sh' ) | sudo crontab -u root -
sudo crontab -l
fi
fi
fi
fi
fi
@@ -49,14 +49,14 @@ aws cloudwatch put-metric-data --metric-name elc_sync_block --namespace CWAgent
aws cloudwatch put-metric-data --metric-name elc_blocks_behind --namespace CWAgent --value $EXECUTION_CLIENT_BLOCKS_BEHIND --timestamp $TIMESTAMP --dimensions InstanceId=$INSTANCE_ID --region $REGION

# If the node is a sync node, check if the snapshot is already taken. If the snapshot is not taken, then take it and restart the node.
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [[ "$NODE_ROLE" == "sync-node" ]]; then
if [ ! -f "/data/snapshotted" ]; then
if [ "$EXECUTION_CLIENT_SYNC_STATS" == "false" ] && [ "$CONSENSUS_CLIENT_IS_SYNCING" == "false" ] && [ "$CONSENSUS_CLIENT_IS_OPTIMISTIC" == "false" ]; then
sudo /opt/copy-data-to-s3.sh

# Take a snapshot once a day at midnight
(sudo crontab -u root -l; echo '0 0 * * * /opt/copy-data-to-s3.sh' ) | sudo crontab -u root -
sudo crontab -l
fi
fi
fi
fi
fi
