Running in containers with podman

mdavidsaver edited this page Nov 20, 2022 · 17 revisions

Note

See more recent instructions.

This page describes one recipe for running the ChannelFinder services as containers using podman, which could be adapted to docker without much difficulty. The recipe was tested with ChannelFinderService 4.0.0 and podman 3.0.1 on Debian 11, circa May 2022.

apt-get install podman
# for the 'jar' utility
apt-get install default-jdk-headless

Networking

While the main ChannelFinderService uses HTTP/TCP, the recsync service uses UDP broadcast/unicast to announce its existence to EPICS IOCs.

One way to achieve this is to attach the containers to a Linux software bridge interface (br0). This approach uses rootful containers, and the pod will appear on the network with a second, distinct (static) IP address.

cat <<EOF > /etc/cni/net.d/epics.conflist
{
   "cniVersion": "0.4.0",
   "name": "epics",
   "plugins": [
      {
        "type": "bridge",
        "bridge": "br0",
        "runtimeConfig": {"mac": "11:22:33:44:55:66"},
        "ipam": {
            "type": "static",
            "addresses": [
                {
                    "address": "10.0.2.100/24",
                    "gateway": "10.0.2.1"
                }
            ],
            "routes": [
                {"dst": "0.0.0.0/0"}
            ],
            "dns": {
                "nameservers" : ["10.0.2.3"]
            }
        }
      }
   ]
}
EOF
podman network inspect epics

cf. Documentation for the "bridge" and "static" plugins.

NOTE: this configuration does not include any firewall.

TODO: the MAC address setting may as well be omitted: it is parsed, but seems to be ignored by containernetworking-plugins == 0.9.0-1+b6. As a result, DHCP cannot assign a stable IP address.
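This recipe assumes the host bridge br0 already exists. As a sketch only (the physical interface name enp1s0 and the host address are placeholders; adjust to your network), on Debian 11 with ifupdown and the bridge-utils package it could be declared in /etc/network/interfaces.d/br0:

```
auto br0
iface br0 inet static
    bridge_ports enp1s0
    address 10.0.2.2/24
    gateway 10.0.2.1
```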

Pod

We want the constituent services to be able to communicate directly with each other. In the language of podman, this can be achieved by placing them into a "pod" (aka a shared Linux network namespace).

podman pod create --net epics --name cf

ElasticSearch

First we must select a location for the persistent elasticsearch database. Let's use /var/lib/elastic.

The elastic daemon runs as UID 1000 (inside its container), so give the host directory matching ownership:

install -d -o1000 -g1000 /var/lib/elastic

This guide gives only the bare minimum steps required. See the elasticsearch docker install guide for full details on configuring/optimizing a host.

podman create --name elastic \
 --pod cf \
 --volume /var/lib/elastic:/usr/share/elasticsearch/data \
 --env "discovery.type=single-node" \
 docker.elastic.co/elasticsearch/elasticsearch:6.4.3
podman start elastic

Verification:

$ wget -qO - http://10.0.2.100:9200/
{
  "name" : "j-rHCfq",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "rJ6VIJftQGqmWd-5W0mAtw",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

ChannelFinderService

For the main channelfinder service, we first construct a container image.

mkdir ~/ChannelFinderService
cd ~/ChannelFinderService
wget https://github.com/ChannelFinder/ChannelFinderService/releases/download/service-cf-4.0.0/ChannelFinder-4.0.0.jar
jar xf ChannelFinder-4.0.0.jar BOOT-INF/classes/application.properties BOOT-INF/classes/cf.ldif
mv BOOT-INF/classes/application.properties .
mv BOOT-INF/classes/cf.ldif .

Now edit application.properties and, if necessary, cf.ldif. E.g. for a standalone demo, ensure that the following are set:

security.require-ssl=false
ldap.enabled = false
embedded_ldap.enabled = true
demo_auth.enabled = false
spring.ldap.embedded.ldif=file:///usr/local/share/cf.ldif

Now write the Containerfile (aka. Dockerfile)

cat <<EOF > Containerfile
FROM docker.io/library/eclipse-temurin:11-jre
MAINTAINER $USER

ARG VERSION=4.0.0
ENV VERSION=\${VERSION}

COPY ChannelFinder-\${VERSION}.jar application.properties cf.ldif /usr/local/share/

EXPOSE 8080 8443

USER nobody:nogroup

ENTRYPOINT exec java \
 -jar /usr/local/share/ChannelFinder-\${VERSION}.jar \
 --spring.config.location=classpath:,file:///usr/local/share/application.properties
EOF

Build an image, then create and run a container based on this image:

podman build --build-arg VERSION=4.0.0 -t channelfinder:4.0.0 .
podman create --name channelfinder \
 --pod cf \
 localhost/channelfinder:4.0.0
podman start channelfinder

Verification

$ wget -qO - http://10.0.2.100:8080/ChannelFinder
{
  "name" : "ChannelFinder Service",
  "version" : "4.0.0x",
  "elastic" : {
    "status" : "Connected",
    "clusterName" : "docker-cluster",
    "clusterUuid" : "rJ6VIJftQGqmWd-5W0mAtw",
    "version" : "6.4.3"
  }
}
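Channels can then be created and queried through the REST API. The following Python sketch only constructs and prints such a request rather than sending it; the endpoint path and JSON shape are assumed from the ChannelFinder v4 web service API, and test:chan is a made-up channel name. Verify both against the service documentation before use.

```python
# Sketch of a channel-creation request against the ChannelFinder REST API.
# The /resources/channels/{name} path and the JSON body shape are
# assumptions based on the v4 web service API; 'test:chan' is hypothetical.
import base64
import json

base = 'http://10.0.2.100:8080/ChannelFinder'
name = 'test:chan'
url = f'{base}/resources/channels/{name}'

# Demo credential from cf.ldif (change this in production)
auth = 'Basic ' + base64.b64encode(b'admin:1234').decode()

body = json.dumps({'name': name, 'owner': 'admin',
                   'properties': [], 'tags': []})

print('PUT', url)
print('Authorization:', auth)
print(body)
```

Sending this as an HTTP PUT (e.g. with curl or python-requests) should create the channel; a subsequent GET of the same URL returns it.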

recsync/recceiver

The final piece to this puzzle is the recceiver daemon.

mkdir ~/recsync
cd ~/recsync
cat <<EOF > Containerfile
FROM docker.io/library/python:3.9
MAINTAINER $USER

RUN pip install --no-cache-dir \
 Twisted~=20.3 \
 git+https://github.com/ChannelFinder/pyCFClient.git \
 git+https://github.com/ChannelFinder/recsync#subdirectory=server

RUN python -c 'from twisted.plugin import IPlugin, getPlugins; list(getPlugins(IPlugin))'

COPY recceiver.conf channelfinderapi.conf entry-point.sh /etc/

USER nobody:nogroup

ENTRYPOINT exec /etc/entry-point.sh
EOF
cat <<EOF > entry-point.sh
#!/bin/sh
set -e

rm -f /tmp/recceiver.pid

exec twistd -n --reactor=poll --pidfile=/tmp/recceiver.pid recceiver -f /etc/recceiver.conf
EOF
chmod +x entry-point.sh
cat <<EOF > recceiver.conf
[recceiver]

loglevel = DEBUG

addrlist = 10.0.2.255:5049

procs = cf
EOF
cat <<EOF > channelfinderapi.conf
[DEFAULT]
BaseURL=http://localhost:8080/ChannelFinder
username=admin
password=1234
EOF

NOTE: this rather insecure default credential is defined in the cf.ldif file mentioned above. Sites are encouraged to use a real LDAP server, or at least change this credential. (cf. man slappasswd)
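To change the credential, the userPassword value in cf.ldif can be replaced with a salted hash. A minimal Python sketch of the {SSHA} scheme (the format slappasswd emits by default; prefer slappasswd itself, or a stronger scheme, where available):

```python
# Minimal sketch of the LDAP {SSHA} password scheme:
# SHA-1 over password+salt, then base64 of digest+salt.
import base64
import hashlib
import os

def ssha(password, salt=None):
    # 4 random salt bytes, appended to the digest before encoding
    salt = os.urandom(4) if salt is None else salt
    digest = hashlib.sha1(password + salt).digest()
    return '{SSHA}' + base64.b64encode(digest + salt).decode()

print(ssha(b'1234'))  # output varies with the random salt
```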

Build an image, then create and run a container based on this image.

podman build -t recceiver:latest .
podman create --name recceiver --pod cf \
 localhost/recceiver:latest
podman start recceiver

Verification

Download and run the test-client.py script, and/or run wireshark/tshark (tshark -i any port 5049) on each host where IOCs will run. An announcement broadcast should be seen every 15 seconds.

$ python3 test-client.py 
...
>> (b'RC\x00\x00\xff\xff\xff\xff\x8a-\x00\x006\xcb\x03\x8b', ('10.0.2.100', 59730))
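The 16-byte announcement payload can be unpacked by hand. The field layout below (ID, version, server address, TCP port, reserved, key) is my reading of the recsync announcement format; check the protocol description in the recsync repository before relying on it. An address of 255.255.255.255 apparently means "use the packet's source address".

```python
# Unpack the recsync announcement packet captured above.
# Field layout is an assumption; verify against the recsync protocol docs.
import socket
import struct

pkt = b'RC\x00\x00\xff\xff\xff\xff\x8a-\x00\x006\xcb\x03\x8b'

ident, version, addr, port, _reserved, key = struct.unpack('>HH4sHHI', pkt)

print(hex(ident))              # 0x5243, i.e. ASCII 'RC'
print(version)                 # 0
print(socket.inet_ntoa(addr))  # 255.255.255.255
print(port)                    # TCP port the recceiver listens on
print(hex(key))                # per-instance key
```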

Automatic start with systemd

cat <<EOF > /etc/systemd/system/pod@.service
[Unit]
Description=pod %I

[Service]
Type=simple
ExecStart=/usr/bin/podman start -a %i
ExecStop=/usr/bin/podman stop -t 60 %i
Restart=always
RestartSec=10
EOF
mkdir /etc/systemd/system/pod@elastic.service.d
cat <<EOF > /etc/systemd/system/pod@elastic.service.d/override.conf
[Unit]
Description=ElasticSearch daemon
Requires=network.target
After=network.target
EOF
mkdir /etc/systemd/system/pod@channelfinder.service.d
cat <<EOF > /etc/systemd/system/pod@channelfinder.service.d/override.conf
[Unit]
Description=ChannelFinder directory service
Documentation=https://github.com/ChannelFinder/ChannelFinderService
Requires=network.target pod@elastic.service
After=network.target pod@elastic.service

[Service]
SuccessExitStatus=143
EOF
mkdir /etc/systemd/system/pod@recceiver.service.d
cat <<EOF > /etc/systemd/system/pod@recceiver.service.d/override.conf
[Unit]
Description=recsync recceiver daemon
Documentation=https://github.com/ChannelFinder/recsync
Requires=network.target pod@channelfinder.service
After=network.target pod@channelfinder.service
EOF
systemctl daemon-reload
systemctl start pod@recceiver.service
systemctl enable pod@recceiver.service

And finally a test reboot is recommended, then repeat the verification steps above.

Notes and tips

To troubleshoot network namespace configuration using host tools (e.g. netstat), first find the host PID of a containerized process.

# lsns | grep ' net '
4026531992 net       139     1 root             /sbin/init
4026532231 net         8   742 root             /pause

Here 742 is the host PID of the cf pod's infrastructure (/pause) process. Running the following commands:

nsenter -t 742 -n ip link
nsenter -t 742 -n ip addr
nsenter -t 742 -n ip route
nsenter -t 742 -n netstat -tulpn

will print something like:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 76:13:48:39:0d:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 76:13:48:39:0d:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.2.100/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fec0::7413:48ff:fe39:de9/64 scope site dynamic mngtmpaddr 
       valid_lft 85992sec preferred_lft 13992sec
    inet6 fe80::7413:48ff:fe39:de9/64 scope link 
       valid_lft forever preferred_lft forever
default via 10.0.2.1 dev eth0 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.100 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:39941           0.0.0.0:*               LISTEN      1352/python         
tcp6       0      0 :::8443                 :::*                    LISTEN      774/java            
tcp6       0      0 :::8389                 :::*                    LISTEN      774/java            
tcp6       0      0 :::9200                 :::*                    LISTEN      765/java            
tcp6       0      0 :::8080                 :::*                    LISTEN      774/java            
tcp6       0      0 :::5075                 :::*                    LISTEN      774/java            
tcp6       0      0 :::9300                 :::*                    LISTEN      765/java            
udp        0      0 0.0.0.0:58973           0.0.0.0:*                           1352/python         
udp6       0      0 :::45458                :::*                                774/java            
udp6       0      0 :::5076                 :::*                                774/java

Make extra noise with:

podman --log-level=debug ...