This page walks you through the steps required to replicate the cluster used for benchmarking.
- Terraform must be installed on the host from which you intend to run this benchmark setup
- The AWS CLI must be installed on the same host and configured to access your AWS account
- You must have an AWS account with the permissions necessary to create AWS instances (see the Terraform documentation for details)
- The AWS account must have a default VPC already configured
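The prerequisites above can be sanity-checked from the shell before starting. This is a minimal sketch; the credential check is left commented out because it contacts AWS:

```shell
#!/usr/bin/env bash
# Quick prerequisite check before running the benchmark setup.
status=""
for tool in terraform aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status $tool=found"
  else
    status="$status $tool=missing"
  fi
done
echo "prerequisite check:$status"

# To verify the configured AWS credentials, uncomment:
# aws sts get-caller-identity
```

If either tool reports `missing`, install it before continuing.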
The `setup.sh` script under the `terraform` directory takes care of provisioning the necessary AWS instances:

```shell
cd terraform
bash setup.sh
```
Internally, the script performs the following operations:
- Creates a temporary RSA key pair
- Runs the Terraform initialization steps (`terraform init`)
- Runs the apply command to instruct Terraform to execute the actions described in the `.tf` files (`terraform apply ...`)
- Collects the instance details once the AWS instances are provisioned
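The steps above can be sketched roughly as follows. This is a simplified sketch, not the actual `setup.sh`: the key file name is an assumption, and the Terraform steps are guarded so the sketch is safe to run outside a real setup:

```shell
#!/usr/bin/env bash
set -euo pipefail

# 1. Create a temporary RSA key pair (the file name here is an assumption)
ssh-keygen -t rsa -b 2048 -f ./benchmark-temp-key -N "" -q

# 2-3. Initialize Terraform and apply the plan described in the .tf files.
#      Guarded so this sketch only runs Terraform in a real setup directory.
if command -v terraform >/dev/null 2>&1 && [ -f benchmark.tfvars ]; then
  terraform init
  terraform apply -var-file=benchmark.tfvars
fi

# 4. After provisioning, the instance details would be collected
#    (e.g. via terraform output) into awsInstanceDetails.txt.
```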
The `terraform/setup.sh` script has been written to be self-reliant. It will prompt the user only once:
- Enter `yes` at the prompt to approve the plan and create the instances:

```
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
```
The following details of the provisioned instances are collected by the `terraform/setup.sh` script using Terraform:
- instance names (solr-node-x/zoo-node-x/solrj-client-x)
- instance private IPs
- instance public IPs
The private and public IP addresses are automatically allotted to each instance at creation time. They are used by the setup and benchmark run scripts to perform operations such as `ssh` and `scp`.
The `/etc/hosts` file on all the provisioned AWS instances is updated with the following entries:
```
<solr-node-1_private_ip_address> solr-node-1
<solr-node-2_private_ip_address> solr-node-2
<solr-node-3_private_ip_address> solr-node-3
<solr-node-4_private_ip_address> solr-node-4
<zoo-node-1_private_ip_address> zoo-node-1
<zoo-node-2_private_ip_address> zoo-node-2
<zoo-node-3_private_ip_address> zoo-node-3
<solrj-client-1_private_ip_address> solrj-client-1
```
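The update amounts to appending one name-to-IP line per instance. A minimal sketch, using a local stand-in file and placeholder IPs rather than the real `/etc/hosts` (which requires root and happens on each instance):

```shell
#!/usr/bin/env bash
# Append hypothetical name/IP mappings to a local stand-in for /etc/hosts.
hosts_file=./hosts.example
: > "$hosts_file"   # start from an empty stand-in file

# Placeholder IPs; the real script uses the private IPs Terraform reports.
cat >> "$hosts_file" <<'EOF'
10.0.0.11 solr-node-1
10.0.0.21 zoo-node-1
10.0.0.31 solrj-client-1
EOF

# With these entries in /etc/hosts, `ssh solr-node-1` resolves by name.
grep solr-node-1 "$hosts_file"
```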
This allows the instances to communicate with each other using the instance names (solr-node-x/zoo-node-x/solrj-client-x) instead of the private IP addresses.
The `terraform/setup.sh` script collects and stores the instance details in a file named `awsInstanceDetails.txt` under the base directory (the directory that contains the `terraform` folder). The details are stored in the following format:
```
solr-node-1 <solr-node-1_private_ip_address> <solr-node-1_public_ip_address>
solr-node-2 <solr-node-2_private_ip_address> <solr-node-2_public_ip_address>
solr-node-3 <solr-node-3_private_ip_address> <solr-node-3_public_ip_address>
solr-node-4 <solr-node-4_private_ip_address> <solr-node-4_public_ip_address>
zoo-node-1 <zoo-node-1_private_ip_address> <zoo-node-1_public_ip_address>
zoo-node-2 <zoo-node-2_private_ip_address> <zoo-node-2_public_ip_address>
zoo-node-3 <zoo-node-3_private_ip_address> <zoo-node-3_public_ip_address>
solrj-client-1 <solrj-client-1_private_ip_address> <solrj-client-1_public_ip_address>
```
The setup and benchmark run scripts read `awsInstanceDetails.txt` to perform operations such as `ssh` and `scp`.
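A sketch of how a run script might consume the file, using placeholder data in the same three-column format (name, private IP, public IP). The file contents below are illustrative only; the real file is produced by `terraform/setup.sh`:

```shell
#!/usr/bin/env bash
# Placeholder file in the documented format (real values come from setup.sh).
cat > awsInstanceDetails.txt <<'EOF'
solr-node-1 10.0.1.11 54.0.0.11
zoo-node-1 10.0.1.21 54.0.0.21
EOF

# Read one instance per line; a real script would then run e.g.
#   ssh -i <key> <user>@"$public_ip" <command>
#   scp -i <key> <file> <user>@"$public_ip":<path>
while read -r name private_ip public_ip; do
  echo "$name: private=$private_ip public=$public_ip"
done < awsInstanceDetails.txt
```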
DO NOT FORGET TO DESTROY ALL THE INSTANCES. Run the following command:

```shell
terraform destroy -var-file=benchmark.tfvars
```
NOTE: The `benchmark.tfvars` file is created under the `terraform` directory when the `terraform/setup.sh` script is run.
Terraform will prompt the user to approve destroying the instances. Enter `yes`:

```
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
```