
Podman Autoupdate does not work with podman-compose systemd generated pods/containers #534

Closed
mholiv opened this issue Jul 28, 2022 · 9 comments
Labels
bug Something isn't working

Comments

@mholiv

mholiv commented Jul 28, 2022

Using the devel branch (as of 2022-07-28), pods/containers that carry the appropriate io.containers.autoupdate=registry label fail to update with the error: auto-updating container "LONG_HASH_HERE": no PODMAN_SYSTEMD_UNIT label found

To Reproduce
Steps to reproduce the behavior:

  1. Ensure the relevant containers have the io.containers.autoupdate=registry label in the compose file.
  2. Run sudo podman-compose systemd --action create-unit
  3. Run podman-compose systemd -a register -f myfile.yml
  4. Run systemctl --user enable --now 'podman-compose@myfile'
  5. Run podman auto-update

Expected behavior
Updated images are pulled, and the pods are relaunched with the appropriate new images.

Actual behavior
An error happens.

Output

$ podman-compose --version
podman-compose version: 1.0.4
['podman', '--version', '']
using podman version: 4.1.1
podman-compose version 1.0.4
podman --version 
podman version 4.1.1
exit code: 0

$ podman auto-update 
Error: 2 errors occurred:
        * auto-updating container "LONG_HASH": no PODMAN_SYSTEMD_UNIT label found
        * auto-updating container "LONG_HASH": no PODMAN_SYSTEMD_UNIT label found

Additional context

I was able to fix this by adding an additional PODMAN_SYSTEMD_UNIT label in the compose file (shown below), but shouldn't this be applied via the env file or somewhere else?

version: "3"

networks:
  gitea:
    external: false

services:
  server:
    labels:
      - "io.containers.autoupdate=registry"
      - "[email protected]"
    image: gitea/gitea:1
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - DB_TYPE=postgres
      - DB_HOST=db:5432
      - DB_NAME=
      - DB_USER=
      - DB_PASSWD=
    restart: always
    networks:
      - gitea
    volumes:
      - /opt/gitea/gitea:/data
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "22:22"
    depends_on:
      - db
  db:
    labels:
      - "io.containers.autoupdate=registry"
      - "[email protected]"
    image: postgres:14-alpine
    restart: always
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
    networks:
      - gitea
    volumes:
      - /opt/gitea/postgres:/var/lib/postgresql/data
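
The failure mode above can be illustrated with a rough sketch of the check podman performs (simplified, not podman's actual code; the unit name is the one from the workaround labels above): auto-update restarts containers through systemd, so it refuses any container whose PODMAN_SYSTEMD_UNIT label is missing.

```shell
# Simplified sketch of the auto-update label check (illustrative only).
check_autoupdate_label() {
  unit="$1"  # value of the container's PODMAN_SYSTEMD_UNIT label, "" if unset
  if [ -z "$unit" ]; then
    # This is the condition behind the error in the output above.
    echo 'auto-updating container: no PODMAN_SYSTEMD_UNIT label found' >&2
    return 1
  fi
  echo "restart via: systemctl restart $unit"
}

check_autoupdate_label "podman-compose@myfile.service"
```

Adding the label by hand, as in the compose file above, satisfies exactly this check.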
@mholiv mholiv added the bug Something isn't working label Jul 28, 2022
@muayyad-alsadi
Collaborator

Please test: I've added the label "PODMAN_SYSTEMD_UNIT=podman-compose@{PROJ}.service" as requested.
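
For reference, with a project named myfile the template should expand as sketched below ("myfile" is an example name; to confirm on a live container you could run podman inspect --format '{{ index .Config.Labels "PODMAN_SYSTEMD_UNIT" }}' <container>):

```shell
# Expansion of the PODMAN_SYSTEMD_UNIT label template for an example
# project name ("myfile" is illustrative).
PROJ=myfile
LABEL="PODMAN_SYSTEMD_UNIT=podman-compose@${PROJ}.service"
echo "$LABEL"   # PODMAN_SYSTEMD_UNIT=podman-compose@myfile.service
```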

@mikaellanger

Very nice change! Previously I'd run podman generate systemd and manually set the PODMAN_SYSTEMD_UNIT label to the generated unit in the compose file, which was frankly a hassle, so I'm glad to see this fixed.

Unfortunately, even with this change, auto-updates work neither with the above solution nor out of the box because of #466, which must be fixed for auto-updates to work.
For now I'm working around this by running systemctl edit --user --full 'podman-compose@<my-project>' and adding --force-recreate to the podman-compose up command, but I hope that will be fixed soon.

StefaBa added a commit to StefaBa/podman-compose that referenced this issue Feb 15, 2023
@StefaBa

StefaBa commented Aug 9, 2023

As of podman v4.5.0, auto-update is broken again.

The issue seems to stem from this commit in podman itself:
containers/podman@6dd7978

which results in auto-update expecting an infra container.

Currently testing whether manually generating an infra container (via e.g. --pod-args='--infra=true') can work around this. (It seems to work only if I use podman-compose.py just to generate the pod/containers, not the systemd service files, and then use regular podman generate systemd afterwards to create the service files.)

Edit: from the updated man page of podman-auto-update (vrothberg/libpod@0ef5def):

Moreover, the systemd units are expected to be generated with podman-generate-systemd --new, or similar units that create new containers in order to run the updated images.
Systemd units that start and stop a container cannot run a new image.

@mholiv
Author

mholiv commented Aug 9, 2023

I can confirm auto updates broke for me too.

@StefaBa

StefaBa commented Aug 9, 2023

As podman auto-update now seems to depend on pod recreation after an update (see my edit two posts above), I've been experimenting with the podman generate systemd command to generate the systemd service files instead of podman-compose's built-in version.
The following seems to work for a rootless pod, assuming the current directory contains a docker-compose.yml:

podman-compose down                                     # delete a potentially existing pod
podman-compose --pod-args='--share="ipc"' up --no-start # create a new pod with an infra container and shared IPC
                                                        # s.t. the state of the pod is not shown as degraded
BASENAME=$(basename "$PWD")
cd ~/.config/systemd/user/
podman generate systemd --new -f -n pod_$BASENAME 
systemctl --user daemon-reload
cd -
systemctl --user enable --now pod-pod_$BASENAME.service
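
The unit name in the last step follows from the naming used above: the pod is created as pod_<dir> and podman generate systemd prefixes pod- for the pod's own unit. A small sketch of the derivation (the path is illustrative):

```shell
# Derive the generated unit name from a compose directory (example path;
# use "$PWD" in practice, as in the script above).
COMPOSE_DIR=/home/user/gitea
BASENAME=$(basename "$COMPOSE_DIR")
UNIT="pod-pod_${BASENAME}.service"
echo "$UNIT"   # pod-pod_gitea.service
```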

Remarks:
podman generate systemd creates one service file for the pod itself and one for every container running inside it.
When stopping the pod via systemctl --user stop pod-pod_name.service, the container services are killed via SIGTERM.
Because of this, systemctl status shows the containers as failed, with exit status 143 (shut down gracefully on SIGTERM) or exit status 137 (did not shut down gracefully).

The default shared namespaces are ipc,net,uts; see https://github.com/containers/podman/blob/main/pkg/specgen/namespaces.go#L72
A working infra container seems to require only the ipc namespace, I think.

@Hoeze

Hoeze commented Nov 29, 2023

Please reopen. I am using podman 4.6.1 and all I get is the following:

Error: 2 errors occurred:
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container

This caught me totally by surprise when I wondered why my nextcloud instance was no longer getting security updates...

@simon-bueler

simon-bueler commented Mar 23, 2024

Still an issue with podman 4.9.3 and podman-compose 1.0.6

❯ podman auto-update
Error: 7 errors occurred:
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container
        * looking up pod's systemd unit: pod has no infra container: no such container

Edit:
maybe this will help: containers/podman#21399 (comment)

@simon-bueler

Still an issue with podman 5.0.3.
The problem is that podman changed how auto-update works (simplified answer).
If you want to make it work with podman-compose, it would be necessary to rewrite how the systemd integration is done (to match what podman does now).
I switched to podman's integrated functionality by using quadlets. For that I just generated Kubernetes YAML from my pods created by podman-compose (podman generate kube <pod-name> >> <desired-filename.yml>). Then I followed https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet on how to prepare the necessary files. After that you have services for everything and podman auto-update works fine.
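
For anyone following this route, a quadlet unit for the generated YAML looks roughly like this (a sketch based on the linked article; the file name, description, and YAML path are illustrative, and AutoUpdate=registry replaces the compose label; check podman-systemd.unit(5) for your podman version):

```ini
# ~/.config/containers/systemd/gitea.kube (illustrative; rootless location)
[Unit]
Description=Gitea pod via quadlet

[Kube]
# Path to the YAML produced by `podman generate kube` (%h = home directory)
Yaml=%h/gitea/gitea.yml
AutoUpdate=registry

[Install]
WantedBy=default.target
```

After systemctl --user daemon-reload, quadlet generates gitea.service from the .kube file, and podman auto-update can restart it.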

@Milor123

> Still an issue with podman 5.0.3. The problem is that podman changed how auto-update works (simplified answer). If you want to make it work with podman-compose, it would be necessary to rewrite how the systemd integration is done (to match what podman does now). I switched to podman's integrated functionality by using quadlets. For that I just generated Kubernetes YAML from my pods created by podman-compose (podman generate kube <pod-name> >> <desired-filename.yml>). Then I followed https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet on how to prepare the necessary files. After that you have services for everything and podman auto-update works fine.

Did you solve it? I have the same problem.


No branches or pull requests

7 participants