
Ansible Agnostic Deployer

Prerequisites

There are several prerequisites for using this repository. Scripted and detailed setup instructions are available in the Preparing Your Workstation document. [estimated effort 5-10 minutes]

  • Software required on provisioning workstation:

    • Python version 2.7.x (3.x untested and may not work)

    • Python Boto version 2.41 or greater

    • Git (any version will do)

    • Ansible version 2.1.2 or greater

  • AWS

    • awscli bundle (tested with version 1.11.32)

    • Credentials and Policies:

      • AWS user account with credentials to provision resources

      • A route53 public hosted zone is required for the scripts to create the various DNS entries for the resources they create. The "HostedZoneId" will need to be provided in the variable file.

      • An EC2 SSH keypair should be created in advance, and you should save the key file to your system. (Command-line instructions can be found in the Preparing Your Workstation document.)
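
As a quick sanity check, you can confirm the workstation software from a shell. This is a sketch only; exact invocations may vary by platform and package manager:

python --version                                    # expect 2.7.x
python -c 'import boto; print(boto.__version__)'    # expect 2.41 or greater
git --version
ansible --version                                   # expect 2.1.2 or greater
aws --version                                       # awscli bundle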

Standard Configurations

  • Several "Standard Configurations" are included in this repository.

  • A "Standard Configurations" or "Config" are a predefined deployment examples that can be used or copied and modified by anyone.

  • A "Config" will include all the files, templates, pre and post playbooks that a deployment example requires to be deployed.

  • "Config" specific Variable files will be included in the "Config" directory as well.

Note
Until we implement Ansible Vault, each "Config" has two vars files: _vars and _secret_vars. The example_secret_vars file shows the format for what to put in your CONFIGNAME_secret_vars file.
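
For illustration only: a _secret_vars file is plain YAML key/value pairs. The variable names below are hypothetical; the authoritative list of keys is in the example_secret_vars file of your config:

# Hypothetical sketch: copy the real keys from example_secret_vars
cat > CONFIGNAME_secret_vars <<'EOF'
# AWS credentials used for provisioning (names illustrative)
aws_access_key_id: YOUR_ACCESS_KEY
aws_secret_access_key: YOUR_SECRET_KEY
EOF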

Running the Ansible Playbooks

Once you have installed your prerequisites and configured all settings and files, simply run Ansible like so:

ansible-playbook -i 127.0.0.1, ansible/main.yml -e "env_type=config-name" -e "aws_region=ap-southeast-2" -e "guid=youruniqueidentifier"
Note
Be sure to replace the guid value with a sensible, unique prefix of your choosing.

For "opentlc-shared" standard config, check out the README file

Cleanup (Reference Only)

Note
S3 Buckets are now part of a CloudFormation stack and are properly deleted before the stack in the destroy playbooks.
  • S3 Bucket

    • (Reference Only) An S3 bucket is used to back the Docker registry. AWS will not let you delete a non-empty S3 bucket, so you must empty it manually. The aws CLI makes this easy:

      aws s3 rm s3://bucket-name --recursive
    • Your bucket is named {{ env_type }}-{{ guid }}. So, in the case of a bu-workshop environment where you provided the guid of "Atlanta", your S3 bucket is called bu-workshop-atlanta.

  • CloudFormation Template

    • If the destroy_env.yml playbook failed, just go into your AWS account, find the deployed stack in the CloudFormation section of the region where you provisioned, and delete it. (A combined cleanup sketch follows this list.)

  • SSH config

    • This Ansible playbook creates an SSH config for the environment you are provisioning. It is created in the ansible/workdir directory and is then used by Ansible to access the environment.
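
Pulling the reference-only steps together, a manual cleanup could look like this. The bucket name comes from the example above; the stack name is hypothetical, so check the CloudFormation console for the real one:

# Empty the registry-backing bucket first; AWS refuses to delete non-empty buckets
aws s3 rm s3://bu-workshop-atlanta --recursive
# If destroy_env.yml failed, delete the stack by hand (stack name is illustrative)
aws cloudformation delete-stack --region ap-southeast-2 --stack-name bu-workshop-atlanta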

Troubleshooting

Information will be added here as problems are solved. So far the process is pretty vanilla, but quite slow: expect at least 40 minutes for a full OpenShift deployment. Some configs are faster.

Use stable tags

Configs are tested on a regular basis. Once a config works, a release (tag) is created for it. You can list all tags by running git tag -l.

Make sure you are using a stable tag for the config you want to provision. For example, if you are provisioning ocp-workshop, use a tag like ocp-workshop-prod-1.8. This is done by simply running:

git checkout ocp-workshop-prod-1.8
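
To see which stable tags exist for a given config, you can filter the tag list; the naming pattern here is taken from the example above:

git tag -l 'ocp-workshop-*'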

EC2 instability

On occasion, EC2 can be unstable. This manifests in various ways:

  • The autoscaling group for the nodes takes an extremely long time to deploy, or never completes deploying.

  • Individual EC2 instances may have terrible performance, which can result in nodes that seem to be "hung" despite being reachable via SSH.

There is not much that can be done in this circumstance besides starting over (in a different region).

Re-Running

While Ansible is idempotent and supports being re-run, there are some known issues with doing so. Specifically:

  • You should skip the tag nfs_tasks with the --skip-tags option if you re-run the playbook after the NFS server has been provisioned and configured. That part of the playbook is not safe to re-run and will fail.
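
For example, re-running the provisioning command from above with the NFS tasks skipped (use the same extra variables as your original run):

ansible-playbook -i 127.0.0.1, ansible/main.yml -e "env_type=config-name" -e "aws_region=ap-southeast-2" -e "guid=youruniqueidentifier" --skip-tags=nfs_tasks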

FAQ

  • Is this a replacement for the openshift-ansible playbook? Why?

No! First, this repository is a set of playbooks and roles; it is not only about OpenShift and AWS. A run is organized in several steps: pre_infra, infra, post_infra, pre_software, software, post_software. If you choose a config that installs OpenShift, it will actually use the openshift-ansible playbook (also known as byo/config.yml) during the software step.
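
If those steps are exposed as Ansible tags (an assumption; this README does not confirm the tag names), a run could be limited to the software phases like so:

# Assumes step names double as tags; verify against the playbooks before relying on this
ansible-playbook -i 127.0.0.1, ansible/main.yml -e "env_type=config-name" -e "aws_region=ap-southeast-2" -e "guid=youruniqueidentifier" --tags=pre_software,software,post_software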
