Merge pull request #92 from RADAR-CNS/release1.0.0
Release-1.0.0
nivemaham authored Jan 19, 2018
2 parents 66b78cc + 77ec23c commit 6cd353e
Showing 52 changed files with 1,251 additions and 421 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
Original file line number Diff line number Diff line change
Expand Up @@ -36,7 +36,7 @@ script:
- sudo docker volume create certs
- sudo docker volume create certs-data
- cp etc/radar.yml.template etc/radar.yml
- cp etc/nginx.conf.template etc/nginx.conf
- cp etc/webserver/nginx.conf.template etc/webserver/nginx.conf
- cp etc/sink-hdfs.properties.template etc/sink-hdfs.properties
- cp etc/sink-mongo.properties.template etc/sink-mongo.properties
- sudo $HOME/bin/docker-compose up -d --build && sleep 15 && [ -z "$($HOME/bin/docker-compose ps | tail -n +3 | grep " Exit ")" ]
Expand Down
110 changes: 8 additions & 102 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -65,113 +65,19 @@ In addition to Confluent Kafka platform components, RADAR-CNS platform offers
* RADAR-Monitor - Status monitors,
* [RADAR-HotStorage](https://github.com/RADAR-CNS/RADAR-HotStorage) via MongoDB,
* [RADAR-REST API](https://github.com/RADAR-CNS/RADAR-RestApi),
* a Hadoop cluster, and
* an email server.

* A Hadoop cluster, and
* An email server.
* Management Portal - A web portal to manage patient monitoring studies.
* RADAR-Gateway - A validating gateway that admits only valid, authentic data into the platform.
* Catalog server - A service that shares the source types configured in the platform.

To run the RADAR-CNS stack in a single-node setup:

1. Navigate to `radar-hadoop-cp-stack`:
1. Navigate to `radar-cp-hadoop-stack`:

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
```
2. Configure monitor settings in `radar.yml`:

```yaml
battery_monitor:
level: CRITICAL
email_address:
- [email protected]
- [email protected]
email_host: smtp
email_port: 25
email_user: [email protected]
topics:
- android_empatica_e4_battery_level
disconnect_monitor:
# timeout in milliseconds -> 5 minutes
timeout: 300000
email_address:
- [email protected]
- [email protected]
email_host: smtp
email_port: 25
email_user: [email protected]
# temperature readings are sent very regularly, but
# not too often.
topics:
- android_empatica_e4_temperature
```
3. Create `smtp.env` and configure your email settings following `smtp.env.template`. Configure alternative mail providers like Amazon SES or Gmail by using the parameters of the [`namshi/smtp` Docker image](https://hub.docker.com/r/namshi/smtp/).
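A hypothetical `smtp.env` might look as follows; the variable names follow the conventions of the [`namshi/smtp` Docker image](https://hub.docker.com/r/namshi/smtp/), so check `smtp.env.template` for the exact set supported by this stack, and treat every value below as a placeholder:

```ini
# Illustrative smtp.env for a generic SMTP smarthost (placeholder values)
SMARTHOST_ADDRESS=smtp.example.org
SMARTHOST_PORT=587
SMARTHOST_USER=radar@example.org
SMARTHOST_PASSWORD=change-me
MAILNAME=radar.example.org
```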
4. (Optional) Modify `flush.size` and the HDFS directory for cold storage in `sink-hdfs.properties`

```ini
flush.size=
topics.dir=/path/to/data
```
Note: to use a different `flush.size` for different topics, you can create multiple property configurations for a single connector. To do that:

4.1 Create multiple property files that set a different `flush.size` for the given topics.
Examples: [sink-hdfs-high.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/sink-hdfs-high.properties), [sink-hdfs-low.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/sink-hdfs-low.properties)

4.2 Add a `CONNECTOR_PROPERTY_FILE_PREFIX: <prefix-value>` environment variable to the `radar-hdfs-connector` service in the `docker-compose` file.

4.3 Mount the created property files in the `radar-hdfs-connector` service in `docker-compose`, with file names that match the prefix set in `CONNECTOR_PROPERTY_FILE_PREFIX`

```yaml
radar-hdfs-connector:
image: radarcns/radar-hdfs-connector-auto:0.2
restart: on-failure
volumes:
- ./sink-hdfs-high.properties:/etc/kafka-connect/sink-hdfs-high.properties
- ./sink-hdfs-low.properties:/etc/kafka-connect/sink-hdfs-low.properties
environment:
CONNECT_BOOTSTRAP_SERVERS: PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
CONNECTOR_PROPERTY_FILE_PREFIX: "sink-hdfs"
```
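For instance, a high-rate property file could look like the sketch below; the topic name, `flush.size` value, and connector class are illustrative only (the linked example files above are authoritative):

```ini
# Hypothetical sink-hdfs-high.properties: high-rate topics, larger batches
name=radar-hdfs-sink-high
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=android_empatica_e4_acceleration
flush.size=80000
```

A matching `sink-hdfs-low.properties` would list the low-rate topics with a smaller `flush.size`.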

5. Configure Hot Storage settings in `.env` file

```ini
HOTSTORAGE_USERNAME=mongodb-user
HOTSTORAGE_PASSWORD=XXXXXXXX
HOTSTORAGE_NAME=mongodb-database
```
6. To install the stack, run

```shell
sudo ./install-radar-stack.sh
cd RADAR-Docker/dcompose-stack/radar-cp-hadoop-stack/
```

To stop the RADAR-CNS stack on a single-node setup, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./stop-radar-stack.sh
```
To reboot the RADAR-CNS stack on a single-node setup, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./reboot-radar-stack.sh
```
To start the RADAR-CNS stack on a single-node setup after installation, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./start-radar-stack.sh
```

#### cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers.

To view current resource performance, if running locally, try [http://localhost:8181](http://localhost:8181). This will bring up the built-in web UI. Clicking `/docker` under `Subcontainers` opens a new window listing all of the Docker containers individually.

#### Portainer

Portainer provides a simple interactive UI for Docker management. If running locally, try [http://localhost:8182](http://localhost:8182) for Portainer's UI. To set up Portainer, follow this [link](https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/).
2. Follow the README instructions there for correct configuration.

### Logging

Expand Down
133 changes: 126 additions & 7 deletions dcompose-stack/radar-cp-hadoop-stack/README.md
Original file line number Diff line number Diff line change
@@ -1,17 +1,44 @@
# RADAR-CNS with a HDFS connector
# RADAR platform

This docker-compose stack contains the full operational RADAR platform. Once configured, it is meant to run on a single server with at least 16 GB memory and 4 CPU cores. It is tested on Ubuntu 16.04 and on macOS 11.1 with Docker 17.06.

## Configuration

1. First move `etc/env.template` file to `./.env` and check and modify all its variables. To have a valid HTTPS connection for a public host, set `SELF_SIGNED_CERT=no`. You need to provide a public valid DNS name as `SERVER_NAME` for SSL certificate to work. IP addresses will not work.
1. First copy `etc/env.template` file to `./.env` and check and modify all its variables.
1.1. To have a valid HTTPS connection for a public host, set `SELF_SIGNED_CERT=no`. You need to provide a public valid DNS name as `SERVER_NAME` for SSL certificate to work. IP addresses will not work.

1.2. Set `MANAGEMENTPORTAL_FRONTEND_CLIENT_SECRET` to a secret to be used by the Management Portal frontend.

1.3. If you want to enable auto import of source types from the catalog server set the variable `MANAGEMENTPORTAL_CATALOGUE_SERVER_ENABLE_AUTO_IMPORT` to `true`.
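Taken together, steps 1.1-1.3 yield a `./.env` along these lines; only the variables named above are shown, and every value is a placeholder:

```ini
# Illustrative ./.env excerpt - replace all values with your own
SELF_SIGNED_CERT=no
SERVER_NAME=radar.example.org
MANAGEMENTPORTAL_FRONTEND_CLIENT_SECRET=change-me-to-a-random-secret
MANAGEMENTPORTAL_CATALOGUE_SERVER_ENABLE_AUTO_IMPORT=true
```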

2. Copy `etc/smtp.env.template` to `etc/smtp.env` and configure your email settings. Configure alternative mail providers like Amazon SES or Gmail by using the parameters of the [`namshi/smtp` Docker image](https://hub.docker.com/r/namshi/smtp/).

3. Copy `etc/redcap-integration/radar.yml.template` to `etc/redcap-integration/radar.yml` and modify it to configure the properties of the REDCap instance and the Management Portal. For reference on configuring this file, see the README at <https://github.com/RADAR-CNS/RADAR-RedcapIntegration#configuration>. In the REDCap portal under Project Setup, define the Data Trigger as `https://<YOUR_HOST_URL>/redcapint/trigger`

4. Copy `etc/managementportal/config/oauth_client_details.csv.template` to `etc/managementportal/config/oauth_client_details.csv` and change OAuth client credentials for production MP. The OAuth client for the frontend will be loaded automatically and does not need to be listed in this file. This file will be read at each startup. The current implementation overwrites existing clients with the same client ID, so be aware of this if you have made changes to a client listed in this file using the Management Portal frontend. This behaviour might change in the future.

5. Finally, copy `etc/radar.yml.template` to `etc/radar.yml` and edit it, especially concerning the monitor email address configuration.

6. (Optional) To use a different `flush.size` for different topics, you can create multiple property configurations for a single connector. To do that:

2. Modify `etc/smtp.env.template` to set an SMTP host to send emails with, and move it to `etc/smtp.env`. The configuration settings are passed to a [namshi/smtp](https://hub.docker.com/r/namshi/smtp/) Docker container, which supports regular SMTP and Gmail, among others.
6.1 Create multiple property files that set a different `flush.size` for the given topics.
Examples: [sink-hdfs-high.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/etc/sink-hdfs-high.properties), [sink-hdfs-low.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/etc/sink-hdfs-low.properties)

3. Modify the `etc/redcap-integration/radar.yml.template` to configure the properties of Redcap instance and the management portal, and move it to `etc/redcap-integration/radar.yml`. For reference on configuration of this file look at the Readme file here - https://github.com/RADAR-CNS/RADAR-RedcapIntegration#configuration
In the REDcap portal under Project Setup, define the Data Trigger as `https://<YOUR_HOST_URL>/redcapint/trigger`
6.2 Add `CONNECTOR_PROPERTY_FILE_PREFIX: <prefix-value>` environment variable to `radar-hdfs-connector` service in `docker-compose` file.

4. Move `etc/managementportal/changelogs/config/liquibase/oauth_client_details.csv.template` to `etc/managementportal/changelogs/config/liquibase/oauth_client_details.csv` and change OAuth client credentials for production MP. (Except ManagementPortalapp)
6.3 Mount the created property files in the `radar-hdfs-connector` service in `docker-compose`, with file names that match the prefix set in `CONNECTOR_PROPERTY_FILE_PREFIX`

5. Finally, move `etc/radar.yml.template` to `etc/radar.yml` and edit it, especially concerning the monitor email address configuration.
```yaml
radar-hdfs-connector:
image: radarcns/radar-hdfs-connector-auto:0.2
restart: on-failure
volumes:
- ./sink-hdfs-high.properties:/etc/kafka-connect/sink-hdfs-high.properties
- ./sink-hdfs-low.properties:/etc/kafka-connect/sink-hdfs-low.properties
environment:
CONNECT_BOOTSTRAP_SERVERS: PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
CONNECTOR_PROPERTY_FILE_PREFIX: "sink-hdfs"
```

## Usage

Expand All @@ -21,6 +48,50 @@ Run
```
to start all the RADAR services. Use `(start|stop|reboot)-radar-stack.sh` to start, stop, or reboot it. Note: whenever `.env` or `docker-compose.yml` is modified, this script needs to be called again. To start a reduced set of containers, call `install-radar-stack.sh` with the intended containers as arguments.

To enable a `systemd` service to control the platform, run
```shell
./install-systemd-wrappers.sh
```
After that command, the RADAR platform should be controlled via `systemctl`.
```shell
# query the latest status and logs
sudo systemctl status radar-docker

# Stop radar-docker
sudo systemctl stop radar-docker

# Restart all containers
sudo systemctl reload radar-docker

# Start radar-docker
sudo systemctl start radar-docker

# Full radar-docker system logs
sudo journalctl -u radar-docker
```
The control scripts in this directory should preferably not be used if `systemctl` is used. To remove `systemctl` integration, run
```
sudo systemctl disable radar-docker
sudo systemctl disable radar-output
sudo systemctl disable radar-check-health
sudo systemctl disable radar-renew-certificate
```

To clear all data from the platform, run
```
sudo systemctl stop radar-docker
./docker-prune.sh
sudo systemctl start radar-docker
```

## Data extraction

If systemd integration is enabled, HDFS data will be extracted to the `./output` directory every hour. Extraction can also be triggered directly by running
```
sudo systemctl start radar-output.service
```
Otherwise, the following manual commands can be invoked.

Raw data can be extracted from this setup by running:

```shell
Expand All @@ -35,4 +106,52 @@ CSV-structured data can be retrieved from HDFS by running
```
This will put all CSV files in the destination directory, with subdirectory structure `PatientId/SensorType/Date_Hour.csv`.
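As an illustration, a freshly extracted tree might look like the mock layout below; the patient ID, sensor type, and timestamp are made up:

```shell
# Build and list a mock output tree with the documented layout
# PatientId/SensorType/Date_Hour.csv - all names are illustrative.
mkdir -p output/patient01/android_empatica_e4_temperature
touch output/patient01/android_empatica_e4_temperature/20180119_14.csv
find output -name '*.csv'
```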

## Certificate

If systemd integration is enabled, the SSL certificate will be renewed daily. Renewal can also be triggered directly by running
```
sudo systemctl start radar-renew-certificate.service
```
Otherwise, the following manual commands can be invoked.
If `SELF_SIGNED_CERT=no` in `./.env`, be sure to run `./renew_ssl_certificate.sh` daily to ensure that your certificate does not expire.


### cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers.

To view current resource performance, if running locally, try <http://localhost:8080>. This will bring up the built-in web UI. Clicking `/docker` under `Subcontainers` opens a new window listing all of the Docker containers individually.

### Portainer

Portainer provides a simple interactive UI for Docker management. If running locally, try <http://localhost/portainer/> for Portainer's UI. To set up Portainer, follow this [link](https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/).

### Kafka Manager

The [kafka-manager](https://github.com/yahoo/kafka-manager) is an interactive web-based tool for managing Apache Kafka. Kafka Manager has been integrated into the stack and is accessible at <http://localhost/kafkamanager/>

### Check Health
Each container in the stack monitors its own health and reports it as healthy or unhealthy. A script called `check-health.sh` checks this output and sends an email to the maintainer if a container is unhealthy.

First check that the `MAINTAINER_EMAIL` in the .env file is correct.

Then make sure that the SMTP server is configured properly and running.

If systemd integration is enabled, the `check-health.sh` script will check the health of the containers every five minutes. It can also be triggered directly by running
```
sudo systemctl start radar-check-health.service
```
Otherwise, the following manual commands can be invoked.

Then add a cron job to run the `check-health.sh` script periodically:
1. Edit the crontab file for the current user by typing `$ crontab -e`
2. Add your job and time interval. For example, add the following to check health every 5 minutes:

```*/5 * * * * /home/ubuntu/RADAR-Docker/dcompose-stack/radar-cp-hadoop-stack/check-health.sh```

You can check the cron logs by typing `$ grep CRON /var/log/syslog`.
The script also needs to run from its own directory, so add the following to the top of `check-health.sh`:
```sh
cd "$( dirname "${BASH_SOURCE[0]}" )"
```
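The effect of that line can be checked with a minimal stand-in script (the `/tmp` path is illustrative): wherever it is called from, it always operates relative to its own location.

```shell
# A stand-in script that, like check-health.sh, first changes to its own
# directory so relative paths (docker-compose.yml, .env) resolve correctly.
cat > /tmp/selfdir.sh <<'EOF'
#!/bin/bash
cd "$( dirname "${BASH_SOURCE[0]}" )"
pwd
EOF
chmod +x /tmp/selfdir.sh
cd / && /tmp/selfdir.sh   # prints /tmp regardless of the caller's directory
```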

76 changes: 76 additions & 0 deletions dcompose-stack/radar-cp-hadoop-stack/check-health.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,76 @@
#!/bin/bash
# Check whether services are healthy. If not, restart them and notify the maintainer.

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/util.sh"
. .env

function hipchat_notify() {
# Send notification via HipChat, if configured.
if [ "$HEALTHCHECK_HIPCHAT_NOTIFY" == "yes" ] ; then
if [ -z "$HEALTHCHECK_HIPCHAT_ROOM_ID" ] ; then
echo "Error: HipChat notifications are enabled, but \$HEALTHCHECK_HIPCHAT_ROOM_ID is undefined. Unable to send HipChat notification."
exit 1
fi

if [ -z "$HEALTHCHECK_HIPCHAT_TOKEN" ] ; then
echo "Error: HipChat notifications are enabled, but \$HEALTHCHECK_HIPCHAT_TOKEN is undefined. Unable to send HipChat notification."
exit 1
fi

color=$1
body=$2
curl -X POST -H "Content-Type: application/json" --header "Authorization: Bearer $HEALTHCHECK_HIPCHAT_TOKEN" \
-d "{\"color\": \"$color\", \"message_format\": \"text\", \"message\": \"$body\" }" \
https://api.hipchat.com/v2/room/$HEALTHCHECK_HIPCHAT_ROOM_ID/notification
fi
}

unhealthy=()

# get all human-readable service names
# see last line of loop
while read service; do
# check if a container was started for the service
container=$(sudo-linux docker-compose ps -q $service)
if [ -z "${container}" ]; then
# no container means no running service
continue
fi
health=$(sudo-linux docker inspect --format '{{.State.Health.Status}}' $container 2>/dev/null || echo "null")
if [ "$health" = "unhealthy" ]; then
echo "Service $service is unhealthy. Restarting."
unhealthy+=("${service}")
sudo-linux docker-compose restart ${service}
fi
done <<< "$(sudo-linux docker-compose config --services)"

if [ "${#unhealthy[@]}" -eq 0 ]; then
if [ -f .unhealthy ]; then
rm -f .unhealthy
hipchat_notify green "All services are healthy again"
fi
echo "All services are healthy"
else
echo "Services ${unhealthy[*]} were unhealthy and have been restarted."

# Send notification to MAINTAINER
# start up the mail container if not already started
sudo-linux docker-compose up -d smtp
# save the container, so that we can use exec to send an email later
container=$(sudo-linux docker-compose ps -q smtp)
SAVEIFS=$IFS
IFS=,
display_services="[${unhealthy[*]}]"
IFS=$SAVEIFS
display_host="${SERVER_NAME} ($(hostname -f), $(curl -s http://ipecho.net/plain))"
body="Services on $display_host are unhealthy. Services $display_services have been restarted. Please log in for further information."
echo "$body" | sudo-linux docker exec -i ${container} mail -aFrom:$FROM_EMAIL "-s[RADAR] Services on ${SERVER_NAME} unhealthy" $MAINTAINER_EMAIL
echo "Sent notification to $MAINTAINER_EMAIL"

echo "${unhealthy[@]}" > .unhealthy

hipchat_notify red "$body"

exit 1
fi
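One detail worth noting in the script above: the notification body joins the `unhealthy` array with commas by temporarily swapping `IFS`. A standalone sketch of that technique, with made-up service names:

```shell
# Join a bash array with commas via a temporary IFS, as check-health.sh does
unhealthy=(kafka-1 radar-hdfs-connector)
SAVEIFS=$IFS
IFS=,
display_services="[${unhealthy[*]}]"   # "${arr[*]}" joins on the first IFS char
IFS=$SAVEIFS
echo "$display_services"   # prints [kafka-1,radar-hdfs-connector]
```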
