Backups Strategy
- Log in to the AWS console and search for 'S3'
- Open either openmrs-backups (automated backups) or openmrs-manual-backup (manual backups)
- Identify the file you want and download it
- Profit!
- On AWS console, create a new access key for your user. Go to IAM Users -> {Your User} -> Security Credentials -> Create access key. Download the csv file, and keep it safe!
- On AWS console, go to S3 -> 'openmrs-backups' and find the file you want to download. Note that 'order by' only sorts the current folder listing; use 'Search' if you need to find more files
- Install the AWS CLI on the machine where you want to download the files
pip install awscli
- Run 'aws configure' on that machine. Add the access key created before, and set region 'us-west-2'.
- Run the AWS CLI to download the files from the S3 bucket. For example:
aws s3 cp s3://openmrs-backups/ako/ldap_config-2019-02-20_00-00-01.tar.gz . && aws s3 cp s3://openmrs-backups/ako/ldap_database-2019-02-20_00-00-01.tar.gz .
- After the download, deactivate the access key from the AWS console. Only activate an access key when there's a need to transfer something, and deactivate it afterwards.
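Put together, a download session might look like the sketch below; the user name and access key id are placeholders, not real values:

```bash
# Configure the CLI with the access key created above
aws configure    # prompts for key id, secret, region (us-west-2) and output format

# Download the backup files into the current directory
aws s3 cp s3://openmrs-backups/ako/ldap_config-2019-02-20_00-00-01.tar.gz .
aws s3 cp s3://openmrs-backups/ako/ldap_database-2019-02-20_00-00-01.tar.gz .

# Deactivate the access key when done (user name and key id are placeholders)
aws iam update-access-key --user-name myuser --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive
```

The last step assumes the key has IAM permissions to manage itself; deactivating from the console, as described above, works just as well.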
If using the backups docker image, stop the containers (docker-compose down -v), copy the backups to /opt/backups and run:
docker-compose run --rm backup bash restore.sh 2017-09-27_00-00-01
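For instance, restoring the LDAP backups from the download example above could look like this (file locations assumed):

```bash
# Stop the containers and drop their volumes before restoring
docker-compose down -v
# Put the downloaded archives where restore.sh will look for them
cp ldap_config-2019-02-20_00-00-01.tar.gz ldap_database-2019-02-20_00-00-01.tar.gz /opt/backups/
# Restore the snapshot by its timestamp
docker-compose run --rm backup bash restore.sh 2019-02-20_00-00-01
```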
If they are SQL dumps for MySQL, ensure the database is empty and run:
gunzip -c <file>.sql.gz | mysql -u root <database>
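For example, with a hypothetical dump named openmrs-2019-02-20.sql.gz and a target database openmrs:

```bash
# The target database must exist but contain no tables
mysql -u root -e 'SHOW TABLES' openmrs    # should print nothing
gunzip -c openmrs-2019-02-20.sql.gz | mysql -u root openmrs
```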
For all manually uploaded backups, use S3 bucket openmrs-manual-backup.
- On AWS console, create a new access key for your user. Go to IAM Users -> {Your User} -> Security Credentials -> Create access key. Download the csv file, and keep it safe!
- On AWS console, go to S3 -> 'openmrs-manual-backup'. Verify there's a folder for the product whose backups you are uploading; otherwise, create one now.
- Install the AWS CLI on the machine with the backups
pip install awscli
- Run 'aws configure' on the machine containing the file(s) to be backed up. Add the access key created before, and set region 'us-west-2'.
- Run the AWS CLI to upload the files to the S3 bucket. For example, to upload a file to the folder nexus:
aws s3 cp backup-2016-09-03.tgz s3://openmrs-manual-backup/nexus/backup-2016-09-03.tgz
- After the uploads, deactivate the access key from the AWS console. Only activate an access key when there's a need to upload something, and deactivate it afterwards.
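If you want to double-check the destination before and after the upload, aws s3 ls can help; the verification steps below are just a suggestion, not part of the documented procedure:

```bash
aws s3 ls s3://openmrs-manual-backup/            # list the product folders
aws s3 cp backup-2016-09-03.tgz s3://openmrs-manual-backup/nexus/backup-2016-09-03.tgz
aws s3 ls s3://openmrs-manual-backup/nexus/      # confirm the file landed
```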
Make sure the terraform stack has either 'has_backup=true' or the module "backup-user". When applying the stack, you should receive the AWS backup credentials for that server.
You can now go to ansible:
- Add the machine to the 'backup' group
- Add the AWS credentials to the host vars (make sure they are encrypted in vars)
- Deploy the cron tasks or other relevant tasks to generate files in /opt/backups
- Add the following variable to the host:
backup_tag: 'configured'
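As a hedged sketch of the encryption step, assuming ansible-vault is used for secrets and with a made-up variable name and host file:

```bash
# Encrypt the secret key as a vaulted string and append it to the host vars
# (variable name and file path are examples, not the real layout)
ansible-vault encrypt_string 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' \
  --name 'backup_aws_secret_key' >> host_vars/myserver.yml
```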
*** The exception is talk/discourse, which is configured to upload its backups straight to S3, to the bucket openmrs-talk-backups.
Applications are configured to generate daily backup tar/zip files and store them in the /opt/backups folder; make sure the user backup-s3 can read and write those files.
Every day at 4am UTC, a cron task run by user backup-s3 will upload all files in /opt/backups to AWS S3 (s3://openmrs-backups/<hostname>), server-side encrypted with AWS KMS.
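In AWS CLI terms, the nightly upload is roughly equivalent to the snippet below; this is only an approximation, as the real script and its KMS key configuration live in ansible and may differ:

```bash
# Upload everything under /opt/backups to this host's folder, SSE-KMS encrypted
aws s3 cp /opt/backups/ "s3://openmrs-backups/$(hostname)/" --recursive --sse aws:kms
```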
S3 is configured to archive to Glacier after 30 days, and delete after 6 months (Glacier is more expensive to retrieve).
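With sufficient permissions, the current lifecycle rules can be inspected read-only:

```bash
aws s3api get-bucket-lifecycle-configuration --bucket openmrs-backups
```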
Files are deleted from the filesystem after a successful upload. Cron task logs can be found in /home/backup-s3/backup.logs.
AWS credentials are unique per server, and should not be shared. Each server's user only has permission to write files under its hostname folder in S3.
In Datadog, you can group machines by their backup condition:
- non-applicable (no state to have a backup)
- bootstrapped (scripts to upload to S3 in place, applications not yet configured to generate tar files)
- configured (backups working as expected)
Read this before updating this wiki.