
How to deploy fabric8-analytics services on OpenShift

Install required tools

Use your preferred package manager to install aws-cli, psql, origin-clients, jq and pwgen.

If you are running Fedora, the following command will do the trick:

$ sudo dnf install awscli pwgen postgresql origin-clients jq

Mac users will also need to install gawk from brew.

If you are running macOS, the following commands will do the trick:

$ brew install awscli
$ brew install postgres
$ brew install openshift-cli
$ brew install pwgen
$ brew install jq
$ brew install gawk

For Red Hat employees: please refer to Requesting AWS Access.

Configure fabric8-analytics services

The deploy.sh script expects to find its configuration in the env.sh file. The easiest way to create the configuration file is to copy env-template.sh and modify it.

$ cd openshift
$ cp env-template.sh env.sh
$ vim env.sh

Edit env.sh to add the required credentials and tokens:

  • Log in to your dev cluster and, from the top-right dropdown menu, click "Copy Login Command"
  • Execute the copied login command in your shell
  • Get AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the AWS Console
  • Generate a new password and set RDS_PASSWORD
  • Get GITHUB_API_TOKENS from https://github.com/settings/tokens
  • Get LIBRARIES_IO_TOKEN from https://libraries.io/account
  • For GITHUB_OAUTH_CONSUMER_KEY and GITHUB_OAUTH_CONSUMER_SECRET, refer to the comments in env.sh

For other keys and values, refer to the comments in env.sh.

For Red Hatters: if your kerberos_id and GitHub username differ, set OC_PROJECT="[your_kerberos_id]-fabric8-analytics". A sketch of the finished file follows.
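
A minimal sketch of the env.sh entries listed above, with placeholder values. The variable names come from this document; whether your env-template.sh uses export statements (assumed here) may differ:

# Sketch of env.sh -- replace every placeholder value.
export AWS_ACCESS_KEY_ID="your-access-key-id"           # from the AWS Console
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"   # from the AWS Console
export RDS_PASSWORD="$(pwgen 32 1)"                     # generate a fresh password
export GITHUB_API_TOKENS="your-github-token"            # https://github.com/settings/tokens
export LIBRARIES_IO_TOKEN="your-libraries-io-token"     # https://libraries.io/account
export GITHUB_OAUTH_CONSUMER_KEY="your-consumer-key"
export GITHUB_OAUTH_CONSUMER_SECRET="your-consumer-secret"
# Only needed when your kerberos id and GitHub username differ:
export OC_PROJECT="your_kerberos_id-fabric8-analytics"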

Deploy fabric8-analytics services

Just run the deploy script and enjoy!

$ ./deploy.sh

If you have run the script before and a $OC_PROJECT project therefore already exists, the script purges it to start from scratch. If you also want to purge previously allocated AWS resources (RDS database, SQS queues, S3 buckets, DynamoDB tables), use

$ ./deploy.sh --purge-aws-resources

Once you know that you no longer need the fabric8-analytics deployment, you can run

$ ./cleanup.sh

to remove the OpenShift project and all allocated AWS resources.

FAQs:

  1. In the dev-cluster console, the bayesian-data-importer service is down.

cause: Some data in your DynamoDB tables is corrupted. resolution: Remove your tables (and only your tables, i.e. those prefixed with {YOUR_KERBEROS_ID}_*) from AWS DynamoDB. Redeploy bayesian-gremlin-http to recreate the tables, then redeploy bayesian-data-importer.
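
A hedged sketch of that cleanup from the command line, assuming your aws-cli credentials are already configured; substitute your own kerberos id in the prefix:

$ aws dynamodb list-tables --output json \
    | jq -r '.TableNames[] | select(startswith("YOUR_KERBEROS_ID_"))' \
    | while read -r table; do aws dynamodb delete-table --table-name "$table"; done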

  2. After dev-cluster deployment, some pods fail with a 'Timeout' issue.

cause: This happens when some of the keys in the 'aws' and 'aws-dynamodb' secrets are missing. resolution: Log into the dev-cluster console, go to Your project --> Application --> Secret, and manually add the keys below to the 'aws' secret:

dynamodb-access-key-id:
dynamodb-secret-access-key:
sqs-access-key-id:
sqs-secret-access-key:

Note: For the values, reuse the id and key values already present in the other keys of the same secret.

Manually add the key below to the 'aws-dynamodb' secret: aws_region: dXMtZWFzdC0x

Note: The value for this key is the BASE64-encoded AWS region; in the above case it is 'us-east-1'. It can be generated as below (-n suppresses the trailing newline, which would otherwise change the encoding):

$ echo -n "us-east-1" | base64

  3. Running $ ./deploy.sh (or any other script) from the terminal throws an INVALID TOKEN error.

cause: If the dev-cluster session times out or you log in again, a new CLI login token is generated. resolution: Always make sure the OC_TOKEN value in your env.sh matches the latest value from the dev-cluster console.
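
After re-running the copied login command, you can print the token for the active session and paste it into env.sh:

$ oc whoami -t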

  4. Unable to create the dev-cluster project / RDS database with the correct prefixes.

cause: This can happen when your kerberos id and GitHub id are not the same (THIS IS RELEVANT ONLY WHEN GITHUB AND KERBEROS IDS ARE DIFFERENT). In env.sh we set USER_ID to the kerberos id, but this value gets overwritten by the git user id during $ ./deploy.sh execution. resolution: Hardcode OC_PROJECT and RDS_INSTANCE_NAME, replacing the ${USER_ID} fields with fixed values. Example: in env.sh, set OC_PROJECT={YOUR_KERBEROS_ID}-fabric8-analytics and RDS_INSTANCE_NAME={YOUR_KERBEROS_ID}-bayesiandb

Note: Jump to the dev-cluster console --> Monitor page to view failure messages and logs.

Test not-yet-merged changes

Build in CI

Assume you have opened a PR in one of the fabric8-analytics repositories. Once tests are green, CentosCI will build your image and comment on the PR:

Your image is available in the registry: docker pull registry.devshift.net/fabric8-analytics/worker-scaler:SNAPSHOT-PR-25

To update your dev deployment to use the above-mentioned image, you can use one of the following ways:

  • oc edit from the command line (see the sketch after this list)
  • the editor in the web interface: Applications -> Deployments -> select deployment -> Actions -> Edit YAML
  • edit deploy.sh: add "-p IMAGE_TAG=SNAPSHOT-PR-25" (with the correct tag) to the corresponding oc_process_apply call at the end of the file and (re-)run ./deploy.sh.
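
For the first option, a hedged sketch using oc set image as an alternative to editing the YAML by hand; the deployment config and container names used here (bayesian-worker-scaler, worker-scaler) are assumptions, so adjust them to match your deployment:

$ oc set image dc/bayesian-worker-scaler \
    worker-scaler=registry.devshift.net/fabric8-analytics/worker-scaler:SNAPSHOT-PR-25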

Build in OpenShift

Update configure_os_builds.sh: the remotes value should contain your GitHub account name. The local variable templates defines all the repositories that will be cloned and built using OpenShift Docker builds.

Update deployments to use imagestreams

After a successful build of all the required images, you need to update all deployments to use the newly built image streams.
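
A hedged sketch of pointing one deployment at a newly built image stream with oc set triggers; the deployment config, container, and image stream names here are assumptions, so adjust them to match your build:

$ oc set triggers dc/bayesian-worker-scaler --containers=worker-scaler \
    --from-image=worker-scaler:latest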

E2E test

Configure OSIO token

If you want to run the E2E tests, you will need to configure the RECOMMENDER_API_TOKEN variable in your env.sh file. You can get the token on your openshift.io profile page after clicking the "Update Profile" button.

Run E2E tests against your deployment

First clone the E2E tests repository (git clone git@github.com:fabric8-analytics/fabric8-analytics-common.git), if you haven't done so already.

Then prepare your environment (you'll need your API token for this, see the previous section):

source env.sh

And finally run the tests in the same terminal window:

cd fabric8-analytics-common/integration-tests/
./runtest.sh

Dockerized deployment scripts

There are also a Dockerfile and a Makefile for running these scripts in a Docker container, so you can avoid installing the required tools. Just prepare your env.sh and run

  • make deploy to (re-)deploy to OpenShift
  • make clean-deploy to purge the fabric8-analytics project from OpenShift along with the allocated AWS resources and (re-)deploy
  • make clean to remove the fabric8-analytics project from OpenShift along with the allocated AWS resources