Merge pull request #51 from RADAR-CNS/dev
Bring master up current dev
yatharthranjan authored Oct 13, 2017
2 parents 7bfa75c + 44fa028 commit 66b78cc
Showing 82 changed files with 2,993 additions and 67 deletions.
51 changes: 51 additions & 0 deletions .travis.yml
@@ -0,0 +1,51 @@
language: generic
sudo: required
services:
- docker
env:
  DOCKER_COMPOSE_VERSION: 1.11.2

before_install:
- docker --version
- mkdir -p "$HOME/bin";
- export PATH="$HOME/bin:$PATH";
- curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > "$HOME/bin/docker-compose";
- chmod +x "$HOME/bin/docker-compose";
- sudo $HOME/bin/docker-compose --version
script:
# Standard stack
- cd dcompose-stack/radar-cp-stack
- sudo $HOME/bin/docker-compose up -d --build && sleep 15 && [ -z "$(sudo $HOME/bin/docker-compose ps | tail -n +3 | grep " Exit ")" ]
- sudo $HOME/bin/docker-compose down

# With kerberos support
# NOT SUPPORTED: kerberos image cannot be found
#- cd ../radar-cp-sasl-stack
#- sudo $HOME/bin/docker-compose up -d --build && sleep 15 && [ -z "$($HOME/bin/docker-compose ps | tail -n +3 | grep " Exit ")" ]
#- sudo $HOME/bin/docker-compose down

# With email and HDFS support
- cd ../radar-cp-hadoop-stack
- sudo docker network create hadoop
- export SERVER_NAME=localhost
- export HDFS_DATA_DIR_1=$PWD/hdfs-data1
- export HDFS_DATA_DIR_2=$PWD/hdfs-data2
- export HDFS_NAME_DIR_1=$PWD/hdfs-name1
- export HDFS_NAME_DIR_2=$PWD/hdfs-name2
- echo -e "SMARTHOST_ADDRESS=mail.example.com\nSMARTHOST_PORT=587\[email protected]\nSMARTHOST_PASSWORD=XXXXXXXX" > etc/smtp.env
- sudo docker volume create certs
- sudo docker volume create certs-data
- cp etc/radar.yml.template etc/radar.yml
- cp etc/nginx.conf.template etc/nginx.conf
- cp etc/sink-hdfs.properties.template etc/sink-hdfs.properties
- cp etc/sink-mongo.properties.template etc/sink-mongo.properties
- sudo $HOME/bin/docker-compose up -d --build && sleep 15 && [ -z "$($HOME/bin/docker-compose ps | tail -n +3 | grep " Exit ")" ]
- sudo $HOME/bin/docker-compose down
- sudo docker network rm hadoop

# With Docker Swarm support
# NOT SUPPORTED: docker swarm and docker beta features are not available in Travis
#- cd ../radar-cp-swarm-stack
#- sudo docker network create --attachable hadoop
#- sudo $HOME/bin/docker-compose up -d --build && sleep 15 && [ -z "$($HOME/bin/docker-compose ps | tail -n +3 | grep " Exit ")" ]
#- sudo $HOME/bin/docker-compose down
196 changes: 184 additions & 12 deletions README.md
@@ -1,23 +1,195 @@
# RADAR-Docker

# Overview
The dockerized RADAR stack for deploying the RADAR-CNS platform. Component repositories can be found in the [RADAR-CNS DockerHub organization](https://hub.docker.com/u/radarcns/dashboard/).

## Installation instructions
To install the RADAR-CNS stack, do the following:

1. Install [Docker Engine](https://docs.docker.com/engine/installation/).
2. Install `docker-compose` using the [installation guide](https://docs.docker.com/compose/install/) or by following our [wiki](https://github.com/RADAR-CNS/RADAR-Docker/wiki/How-to-set-up-docker-on-ubuntu#install-docker-compose).
3. Verify the Docker installation by running the following on the command line:

   ```shell
   docker --version
   docker-compose --version
   ```

   This should show Docker version 1.12 or later and docker-compose version 1.9.0 or later.
4. Install [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) for your platform. For Ubuntu:

   ```shell
   sudo apt-get install git
   ```

5. Clone the [RADAR-Docker](https://github.com/RADAR-CNS/RADAR-Docker) repository from GitHub:

   ```shell
   git clone https://github.com/RADAR-CNS/RADAR-Docker.git
   ```

6. Install the required component stack following the instructions below.

## Usage

RADAR-Docker currently offers two component stacks to run.

1. A docker-compose stack for components from the [Confluent Kafka Platform](http://docs.confluent.io/3.1.0/) community edition.
2. A docker-compose stack for components from the RADAR-CNS platform.

> **Note**: on macOS, remove `sudo` from all `docker` and `docker-compose` commands in the usage instructions below.

### Confluent Kafka platform
The Confluent Kafka Platform provides the basic components for streaming: ZooKeeper, the Kafka brokers, the Schema Registry and the REST Proxy.

Run this stack in a single-node setup on the command-line:

```shell
cd RADAR-Docker/dcompose-stack/radar-cp-stack/
sudo docker-compose up -d
```

To stop this stack, run:

```shell
sudo docker-compose down
```
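To verify that all services actually came up after `docker-compose up`, this commit's Travis configuration uses a small shell idiom; a minimal sketch of the same check:

```shell
# List services, skip docker-compose's two header lines, and look for any
# container in the "Exit" state; report success only if none have exited.
if [ -z "$(sudo docker-compose ps | tail -n +3 | grep " Exit ")" ]; then
  echo "all services running"
else
  echo "some services exited" >&2
fi
```

The `tail -n +3` drops the table header that `docker-compose ps` prints, so `grep` only sees actual service rows.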

### RADAR-CNS platform

In addition to Confluent Kafka platform components, RADAR-CNS platform offers

* RADAR-HDFS-Connector - Cold storage of selected streams in Hadoop data storage,
* RADAR-MongoDB-Connector - Hot storage of selected streams in MongoDB,
* [RADAR-Dashboard](https://github.com/RADAR-CNS/RADAR-Dashboard),
* RADAR-Streams - real-time aggregated streams,
* RADAR-Monitor - Status monitors,
* [RADAR-HotStorage](https://github.com/RADAR-CNS/RADAR-HotStorage) via MongoDB,
* [RADAR-REST API](https://github.com/RADAR-CNS/RADAR-RestApi),
* a Hadoop cluster, and
* an email server.

To run the RADAR-CNS stack in a single-node setup:

1. Navigate to `radar-hadoop-cp-stack`:

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
```
2. Configure monitor settings in `radar.yml`:

```yaml
battery_monitor:
  level: CRITICAL
  email_address:
    - [email protected]
    - [email protected]
  email_host: smtp
  email_port: 25
  email_user: [email protected]
  topics:
    - android_empatica_e4_battery_level
disconnect_monitor:
  # timeout in milliseconds -> 5 minutes
  timeout: 300000
  email_address:
    - [email protected]
    - [email protected]
  email_host: smtp
  email_port: 25
  email_user: [email protected]
  # temperature readings are sent very regularly, but
  # not too often.
  topics:
    - android_empatica_e4_temperature
```
3. Create `smtp.env` and configure your email settings following `smtp.env.template`. Configure alternative mail providers like Amazon SES or Gmail by using the parameters of the [`namshi/smtp` Docker image](https://hub.docker.com/r/namshi/smtp/).
4. (Optional) Modify `flush.size` and the HDFS directory for cold storage in `sink-hdfs.properties`:

```ini
flush.size=
topics.dir=/path/to/data
```
Note: To use a different `flush.size` for different topics, you can create multiple property configurations for a single connector. To do that:

4.1 Create multiple property files that set a different `flush.size` for the given topics. Examples: [sink-hdfs-high.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/sink-hdfs-high.properties), [sink-hdfs-low.properties](https://github.com/RADAR-CNS/RADAR-Docker/blob/dev/dcompose-stack/radar-cp-hadoop-stack/sink-hdfs-low.properties)

4.2 Add a `CONNECTOR_PROPERTY_FILE_PREFIX: <prefix-value>` environment variable to the `radar-hdfs-connector` service in the `docker-compose` file.

4.3 Mount the created property files into the `radar-hdfs-connector` service in `docker-compose`, with file names that match the prefix given in `CONNECTOR_PROPERTY_FILE_PREFIX`:

```yaml
radar-hdfs-connector:
  image: radarcns/radar-hdfs-connector-auto:0.2
  restart: on-failure
  volumes:
    - ./sink-hdfs-high.properties:/etc/kafka-connect/sink-hdfs-high.properties
    - ./sink-hdfs-low.properties:/etc/kafka-connect/sink-hdfs-low.properties
  environment:
    CONNECT_BOOTSTRAP_SERVERS: PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
    CONNECTOR_PROPERTY_FILE_PREFIX: "sink-hdfs"
```

5. Configure the Hot Storage settings in the `.env` file:

```ini
HOTSTORAGE_USERNAME=mongodb-user
HOTSTORAGE_PASSWORD=XXXXXXXX
HOTSTORAGE_NAME=mongodb-database
```
6. Install the stack:

```shell
sudo ./install-radar-stack.sh
```

To stop RADAR-CNS stack on a single node setup, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./stop-radar-stack.sh
```
To reboot RADAR-CNS stack on a single node setup, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./reboot-radar-stack.sh
```
To start RADAR-CNS stack on a single node setup after installing, run

```shell
cd RADAR-Docker/dcompose-stack/radar-hadoop-cp-stack/
sudo ./start-radar-stack.sh
```

#### cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers.

To view current resource performance, if running locally, try [http://localhost:8181](http://localhost:8181). This will bring up the built-in web UI. Clicking on `/docker` under `Subcontainers` takes you to a new window with all of the Docker containers listed individually.

#### Portainer

Portainer provides simple interactive UI-based Docker management. If running locally, try [http://localhost:8182](http://localhost:8182) for Portainer's UI. To set up Portainer, follow this [link](https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/).

### Logging

Set up logging by going to the `dcompose-stack/logging` directory and following the README there.

## Work in progress

The following two stacks will not work with only Docker and docker-compose. For the Kerberos stack, the Kerberos image is not public. For the multi-host setup, docker-swarm and Docker beta versions are also needed.

### Kerberized stack

In this setup, Kerberos is used to secure the connections between the Kafka brokers, Zookeeper and the Kafka REST API. Unfortunately, the Kerberos container from Confluent is not publicly available, so an alternative has to be found here.

```shell
cd wip/radar-cp-sasl-stack/
docker-compose up
```

*Still a work in progress.*

### Multi-host setup

In the end, we aim to deploy the platform in a multi-host environment. We are currently aiming for a deployment with Docker Swarm. This setup uses features that are not yet released in the stable Docker Engine. Once they are, this stack may become the main Docker stack. See the `wip/radar-swarm-cp-stack/` directory for more information.
1 change: 1 addition & 0 deletions dcompose-stack/logging/.gitignore
@@ -0,0 +1 @@
*.env
24 changes: 24 additions & 0 deletions dcompose-stack/logging/README.md
@@ -0,0 +1,24 @@
# Docker logging with Graylog2

This directory sets up a Graylog2 instance that Docker can stream log data to.

## Usage

Set up this container by moving `graylog.env.template` to `graylog.env` and editing it. See instructions inside the `graylog.env.template` on how to set each variable.
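The template asks for a password "pepper" (`GRAYLOG_PASSWORD_SECRET`) and a SHA-256 hash of the admin password (`GRAYLOG_ROOT_PASSWORD_SHA2`). Both can be generated on the command line; a sketch, assuming `openssl` is available (use `shasum -a 256` on macOS in place of `sha256sum`):

```shell
# Random pepper: 96 hex characters, well above the 16-character minimum
openssl rand -hex 48

# SHA-256 hash of the chosen admin password
printf '%s' "mypassword" | sha256sum | awk '{print $1}'
```

Paste the two printed values into `graylog.env`.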

Start the logging container with
```shell
sudo docker-compose up -d
```
On macOS, omit `sudo` in the command above.

Then go to the [Graylog dashboard](http://localhost:9000). Log in with your chosen password, and navigate to `System -> Inputs`. Choose `GELF UDP` as a source and click `Launch new input`. Set the option to allow Global logs, and name the input `RADAR-Docker`. Now your Graylog instance is ready to collect data from docker on the host it is running on, using the GELF driver with URL `udp://localhost:12201` (replace `localhost` with the hostname where the Graylog is running, if needed).

Now, other docker containers can be configured to use the `gelf` log driver. In a docker-compose file, add the following lines to a service to let it use Graylog:
```yaml
logging:
  driver: gelf
  options:
    gelf-address: udp://localhost:12201
```
Now all Docker logs of that service will be forwarded to Graylog.
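The same driver can also be attached to a single container from the plain `docker` CLI; a sketch, assuming the Graylog input above is listening on `localhost:12201`:

```shell
# Send this container's stdout/stderr to Graylog via the GELF UDP input
sudo docker run --rm \
  --log-driver gelf \
  --log-opt gelf-address=udp://localhost:12201 \
  alpine echo "hello graylog"
```

Since GELF is configured over UDP here, `docker run` succeeds even if nothing is listening; check the Graylog search page to confirm messages actually arrive.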
47 changes: 47 additions & 0 deletions dcompose-stack/logging/docker-compose.yml
@@ -0,0 +1,47 @@
---
version: '3'

networks:
  graylog:
    driver: bridge

volumes:
  mongo: {}
  elasticsearch: {}
  graylog: {}

services:

  mongo:
    image: mongo:3.4.3
    networks:
      - graylog
    volumes:
      - mongo:/data/db

  elasticsearch:
    image: elasticsearch:2.4.4-alpine
    command: elasticsearch -Des.cluster.name="graylog"
    networks:
      - graylog
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data

  graylog:
    image: graylog2/server:2.2.3-1
    networks:
      - graylog
    depends_on:
      - mongo
      - elasticsearch
    links:
      - mongo
      - elasticsearch
    env_file:
      - ./graylog.env
    ports:
      - "9000:9000"
      - "12201/udp:12201/udp"
    volumes:
      - graylog:/usr/share/graylog/data/journal

10 changes: 10 additions & 0 deletions dcompose-stack/logging/graylog.env.template
@@ -0,0 +1,10 @@
# Set a secret pepper that the passwords will be hashed with
# Minimum length is 16 characters
GRAYLOG_PASSWORD_SECRET=

# Set a password for the admin user. Obtain the SHA2 of the
# password by running echo -n "mypassword" | shasum -a 256
GRAYLOG_ROOT_PASSWORD_SHA2=

# Web address for Graylog to run on
GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
4 changes: 4 additions & 0 deletions dcompose-stack/radar-cp-hadoop-stack/.gitignore
@@ -0,0 +1,4 @@
/.env
/etc/smtp.env
/radar.yml
/output/
38 changes: 38 additions & 0 deletions dcompose-stack/radar-cp-hadoop-stack/README.md
@@ -0,0 +1,38 @@
# RADAR-CNS with an HDFS connector

## Configuration

1. First move the `etc/env.template` file to `./.env`, then check and modify all its variables. To have a valid HTTPS connection for a public host, set `SELF_SIGNED_CERT=no`. You need to provide a valid public DNS name as `SERVER_NAME` for the SSL certificate to work; IP addresses will not work.

2. Modify `etc/smtp.env.template` to set an SMTP host to send emails with, and move it to `etc/smtp.env`. The configuration settings are passed to a [namshi/smtp](https://hub.docker.com/r/namshi/smtp/) Docker container, which supports, among others, regular SMTP and GMail.

3. Modify `etc/redcap-integration/radar.yml.template` to configure the properties of the REDCap instance and the Management Portal, and move it to `etc/redcap-integration/radar.yml`. For reference on the configuration of this file, see the README at https://github.com/RADAR-CNS/RADAR-RedcapIntegration#configuration. In the REDCap portal, under Project Setup, define the Data Trigger as `https://<YOUR_HOST_URL>/redcapint/trigger`.

4. Move `etc/managementportal/changelogs/config/liquibase/oauth_client_details.csv.template` to `etc/managementportal/changelogs/config/liquibase/oauth_client_details.csv` and change the OAuth client credentials for a production Management Portal (except for `ManagementPortalapp`).

5. Finally, move `etc/radar.yml.template` to `etc/radar.yml` and edit it, especially concerning the monitor email address configuration.

## Usage

Run
```shell
./install-radar-stack.sh
```
to start all the RADAR services. Use the `(start|stop|reboot)-radar-stack.sh` to start, stop or reboot it. Note: whenever `.env` or `docker-compose.yml` are modified, this script needs to be called again. To start a reduced set of containers, call `install-radar-stack.sh` with the intended containers as arguments.

Raw data can be extracted from this setup by running:

```shell
./hdfs_extract.sh <hdfs file> <destination directory>
```
This command will not overwrite data in the destination directory.

CSV-structured data can be extracted from HDFS by running:

```shell
./hdfs_restructure.sh /topicAndroidNew <destination directory>
```
This will put all CSV files in the destination directory, with subdirectory structure `PatientId/SensorType/Date_Hour.csv`.

If `SELF_SIGNED_CERT=no` in `./.env`, be sure to run `./renew_ssl_certificate.sh` daily to ensure that your certificate does not expire.
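One way to schedule that daily run is a root cron entry; a sketch, where the repository path `/opt/RADAR-Docker` is an assumption to adjust for your installation:

```shell
# Append a 03:00 nightly renewal job to root's crontab
# (the repository path below is illustrative)
job='0 3 * * * cd /opt/RADAR-Docker/dcompose-stack/radar-cp-hadoop-stack && ./renew_ssl_certificate.sh'
{ sudo crontab -l 2>/dev/null; echo "$job"; } | sudo crontab -
```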