Deployment
To deploy to cjworkbench.org, you will need to get comfortable with the wonderful world of Docker!
We build our images on Docker Hub, triggered by commits to the main repo. The Dockerfiles which define the images are here.
For rapid iteration, the build is split into two images: cjworkbench-reqs starts with a base Python image and installs all of the modules in requirements.txt. (Dockerfile) This can take quite a long time, maybe 30 minutes, as some of the modules (like pandas) require compilation. For this reason this image does not build automatically; you must trigger it whenever you change requirements.txt or requirements-dev.txt.
The second image is the main cjworkbench image, which clones the repo, installs npm modules, and creates a default admin user. (Dockerfile) It's set up to rebuild on Docker Hub whenever anything is committed to the main repo.
We use Docker Compose to tie together the `cjworkbench` container with a postgres container and configure a few things. The `cjworkbench-docker` repository gets cloned to the server and contains the `docker-compose.yml` file that specifies how everything is stitched together. It also defines the Docker volumes that preserve files from two places: the database and the `importedmodules` directory. All other files are erased when the Docker image is updated.
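For orientation, a compose file along these lines would express that layout. This is only a sketch: the service names, volume names, and container paths are assumptions, and the real file lives in `cjworkbench-docker`.

```yaml
# Illustrative sketch only -- see cjworkbench-docker for the real docker-compose.yml.
version: "3"

services:
  database:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data      # database files survive image updates

  web:
    image: jonathanstray/cjworkbench
    container_name: cjw-web
    ports:
      - "8000:8000"
    depends_on:
      - database
    # environment and secrets come from the .env file described below
    volumes:
      - importedmodules:/app/importedmodules # imported modules survive image updates

volumes:
  dbdata:
  importedmodules:
```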
To set up a server, first install Docker Compose. Then clone `cjworkbench-docker`:

```
git clone https://github.com/cjworkbench/cjworkbench-docker
```
To bring up the server, first you will need an `.env` file in your home directory. This contains settings for all sorts of environment variables, including secret keys of various types. It must never be committed to a repo. Then, to start the server, change to the `cjworkbench-docker` directory and run

```
./update
```
This pulls all necessary containers (including the database) and starts the server processes. The same command can be used to update to the latest container versions. In that case, it will stop the server and restart it after downloading new images (which takes 30 seconds or so).
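The exact variable names the `.env` file must define come from the project's settings; the following is a purely hypothetical illustration of the usual KEY=value dotenv format, not the real list of variables:

```
# Hypothetical example only -- the real variable names come from cjworkbench's settings.
SECRET_KEY=some-long-random-string
DATABASE_PASSWORD=another-secret
EMAIL_API_KEY=key-for-the-outgoing-mail-provider
```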
You can see the running containers with `docker ps`. The server is started inside the `cjw-web` container, via the `start-prod.sh` script, and runs on port 8000 by default.
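To confirm everything came up, you can filter for that container and hit the port directly. These are plain Docker and curl commands run on the server itself; the container name comes from above:

```
docker ps --filter name=cjw-web    # the web container should be listed as Up
curl -I http://localhost:8000/     # should return an HTTP response from Workbench
```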
If you are on the Workbench team, you will need the `cjworkbench` private SSH key. Once this is copied to your `~/.ssh` directory, you can log into staging and update to the latest version like this:
```
ssh -i ~/.ssh/cjworkbench [email protected]
cd cjworkbench-docker
./update
```

and production like this:

```
ssh -i ~/.ssh/cjworkbench [email protected]
cd cjworkbench-docker
./update-production
```
You can run Django manage commands inside the container. The first time you bring the server up, you will need to manually create an admin user:

```
docker exec -i -t cjw-web python manage.py createsuperuser
```
You can also run arbitrary Python code within the server's Django environment by doing something like

```
docker exec -i -t cjw-web python manage.py shell -c "from django.contrib.auth.models import User; User.objects.get(email='[email protected]').delete()"
```
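Any other standard Django management command works the same way; for example (these are stock Django commands, shown purely as an illustration):

```
docker exec -i -t cjw-web python manage.py showmigrations   # list migration status
docker exec -i -t cjw-web python manage.py migrate          # apply pending migrations
```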
You can get a shell on the server container with the `servershell` script in the `cjworkbench-docker` directory.

You can view the server logs continuously with the `taillogs` script in the same directory.
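These scripts presumably wrap ordinary Docker commands, so if you are not in that directory, something like the following (using the container name from above) does the same job:

```
docker exec -it cjw-web /bin/bash   # interactive shell inside the web container
docker logs -f cjw-web              # follow the server logs
```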
You can also run the Workbench image on its own, outside the Docker Compose setup. First pull down the image. You'll need to repeat this whenever it needs an update.

```
docker pull jonathanstray/cjworkbench
```
Then start the container. You can do this interactively (the container goes away with ^C):

```
docker run -it --rm -p 8000:8000 jonathanstray/cjworkbench
```
Remove the `--rm` flag if you want the container and its contents to persist, e.g. all users and created workflows. Or you can run it as a background process:

```
docker run -d -p 8000:8000 jonathanstray/cjworkbench
```
Either way, since this is debug mode, all data is stored in `sqlite.db` as usual.
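If you want to be able to stop and later resume the same persistent container, it can help to give it a name; the name used here is just an example:

```
docker run -d --name cjworkbench-dev -p 8000:8000 jonathanstray/cjworkbench
docker stop cjworkbench-dev    # stop, keeping users and workflows inside the container
docker start cjworkbench-dev   # pick up where you left off
```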