Pretalx Chart

Amit Aronovitch edited this page Mar 26, 2019 · 5 revisions

Details about k8s installation of the Pretalx server

To get results quickly, we tried to reuse existing resources. In particular, we used the pretalx-docker github repo, which contains both a Dockerfile and a docker-compose file. We started with an installation whose structure is similar to what the docker-compose file creates, but will probably modify it to become more suitable for a k8s deployment.

The pretalx-docker installation

The docker image is quite heavy. One reason for it is that it was designed to run several processes (including nginx and supervisord) in the same container. The docker-compose file also runs (in addition to the pretalx container) a redis server and a mysql server using their own images (connecting to the pretalx container via docker-compose network). Each of the 3 containers has its own data volume, which is bound to a subdirectory of the source tree (for the pretalx container, it also bind-mounts conf/pretalx.cfg to /etc/pretalx/pretalx.cfg, so it can be edited easily).

The pretalx container uses its data directory for django media, data and logs. The entrypoint command of the container fires supervisord which takes care of starting the other processes.

  • An nginx server serves port 80. '/' is forwarded to a unix domain socket (on which the gunicorn process listens). It also serves /media/ directly from the django media dir, and /static/ from the static.dist directory in the source tree.
  • gunicorn (with 2 x nproc workers) serves the pretalx wsgi app on the unix domain socket. It is configured to use the mysql server for the model, and redis for celery. It also uses db 0 on the redis server for caching.
  • A celery worker uses db 1 and db 2 on the redis server for the broker and the result backend.
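The layout described above can be sketched as a simplified docker-compose fragment. This is an illustration of the structure, not a copy of the actual pretalx-docker file; image and path names are assumptions:

```yaml
# Illustrative sketch of the pretalx-docker layout (names are assumptions)
version: "3"
services:
  pretalx:
    image: pretalx-standalone        # entrypoint runs supervisord -> nginx + gunicorn + celery
    ports:
      - "80:80"
    volumes:
      - ./data/pretalx:/data                          # django media, data and logs
      - ./conf/pretalx.cfg:/etc/pretalx/pretalx.cfg   # editable config
    depends_on:
      - redis
      - db
  redis:
    image: redis
    volumes:
      - ./data/redis:/data
  db:
    image: mysql
    volumes:
      - ./data/mysql:/var/lib/mysql
```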

The Helm Chart

Currently our docker image is a minor fork of pretalx-docker (the main difference is that our image uses release tags of our own pretalx fork rather than upstream releases). The design of the chart aims to be similar to the compose file of pretalx-docker, so:

  1. You should read the previous section to better understand this one.
  2. At some point, when the design changes, we will have to create our own Dockerfiles from scratch (but they will be smaller and simpler).

I chose to assume that we will use a managed SQL server, so instead of including an internal deployment (as the docker-compose file does), the chart expects connection parameters in the values settings.
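The connection parameters for the managed SQL server could be supplied in the values file along these lines. The key names here are illustrative, not necessarily the ones the chart actually uses:

```yaml
# Hypothetical values fragment for an external (managed) MySQL server
database:
  host: mysql.example.com
  port: 3306
  name: pretalx
  user: pretalx
  # the password is intentionally NOT set here -- it is passed
  # via a k8s secret and an environment variable
```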

However, the chart does provide a redis server (using the stable/redis chart).
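A dependency on the stable/redis chart would typically be declared in the chart's requirements.yaml (Helm 2 style); the version below is a placeholder:

```yaml
# requirements.yaml (Helm 2 style); version is a placeholder
dependencies:
  - name: redis
    version: "x.y.z"
    repository: "https://kubernetes-charts.storage.googleapis.com/"
```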

Instead of using the supervisord daemon to start 3 processes in a single container (as the docker-compose setup does), we define 3 containers running in a single pod and let k8s schedule them. (Note that I did try to mimic the compose settings at first, but this did not work: processes started by supervisord lose the environment variables, which I use for passing the secrets, so they could not connect to redis. Note also that nginx talks to gunicorn via a unix domain socket, which forces the two to run in the same pod. The celery worker could presumably run in another pod, but it must use the same image as the API server.)
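The three-containers-in-one-pod arrangement can be sketched as follows. A shared emptyDir volume carries the unix domain socket between nginx and gunicorn; container names, commands, and paths are assumptions for illustration, not copied from the chart:

```yaml
# Sketch of the pod's container layout (names, commands and paths are illustrative)
spec:
  volumes:
    - name: gunicorn-socket
      emptyDir: {}          # shared scratch space holding the unix domain socket
  containers:
    - name: nginx
      image: our-pretalx-image
      ports:
        - containerPort: 80
      volumeMounts:
        - name: gunicorn-socket
          mountPath: /run/gunicorn
    - name: gunicorn
      image: our-pretalx-image
      command: ["gunicorn", "pretalx.wsgi", "--bind", "unix:/run/gunicorn/app.sock"]
      volumeMounts:
        - name: gunicorn-socket
          mountPath: /run/gunicorn
    - name: celery
      image: our-pretalx-image   # must be the same image as the API server
```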

The chart builds a pretalx.cfg file based on the values settings, but the passwords for the db and email are not taken from there; they are passed from k8s secrets via environment variables. Note that the redis URLs specified in the pretalx.cfg file are also overridden by environment variables: the only way I know of to configure a redis password in pretalx is to embed it in the URL.
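Passing the secrets as environment variables might look like this in the deployment template. The secret name, key names, and environment variable names below are hypothetical; only the `secretKeyRef` mechanism and the `redis://:password@host` URL form are standard:

```yaml
# Hypothetical env section: passwords come from a k8s secret, and the redis
# password is embedded in the URL (the only way pretalx accepts it)
env:
  - name: PRETALX_DB_PASS
    valueFrom:
      secretKeyRef:
        name: pretalx-secrets
        key: db-password
  - name: REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: pretalx-secrets
        key: redis-password
  # k8s expands $(REDIS_PASSWORD) because it is defined above in the same env list
  - name: PRETALX_CELERY_BROKER
    value: "redis://:$(REDIS_PASSWORD)@pretalx-redis:6379/1"
```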

Both pretalx and redis use a PVC for their state storage. The default settings do not specify a StorageClass, but one can be configured via the persistence and redis.persistence sections of the values, respectively.
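A values fragment enabling an explicit StorageClass might look like this; the key names mirror common chart conventions and should be verified against the chart's actual values.yaml:

```yaml
# Illustrative persistence settings (no storageClass is set by default)
persistence:
  enabled: true
  storageClass: standard
  size: 5Gi
redis:
  persistence:
    enabled: true
    storageClass: standard
    size: 1Gi
```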