Podman auto-update command fails with error no PODMAN_SYSTEMD_UNIT label found for containers within a pod #21399

Closed
Taar opened this issue Jan 28, 2024 · 4 comments

Comments

@Taar

Taar commented Jan 28, 2024

Issue Description

Running podman auto-update --dry-run fails with the following error when a container service (created using Quadlet) is within a pod.

auto-updating container "a6c6f40399de07c22957ad83381e36966751373997ac6be53ffe00617705728a": no PODMAN_SYSTEMD_UNIT label found

Raw Command output

podman container ls

CONTAINER ID  IMAGE                                    COMMAND     CREATED         STATUS         PORTS       NAMES
855225ba1515  localhost/podman-pause:4.9.0-1706014507              51 minutes ago  Up 5 seconds               e2701fa17813-infra
89ea94abbab3  docker.io/traefik/whoami:latest                      11 seconds ago  Up 11 seconds              systemd-whoami
0921320dad41  docker.io/traefik/whoami:latest                      5 seconds ago   Up 5 seconds               systemd-whoami-pod

podman auto-update --dry-run

            UNIT            CONTAINER                      IMAGE                            POLICY      UPDATED
            whoami.service  89ea94abbab3 (systemd-whoami)  docker.io/traefik/whoami:latest  registry    false
Error: auto-updating container "0921320dad41c41e03f8dbebb6a457351f43bfe66930e628899f300cff9b1396": no PODMAN_SYSTEMD_UNIT label found

podman container inspect --format "{{ .Config.Labels }}" 0921320dad41

map[PODMAN_SYSTEMD_UNIT:whoami-pod.service io.containers.autoupdate:registry org.opencontainers.image.created:2023-07-12T14:02:18Z org.opencontainers.image.description:Tiny Go webserver that prints OS information and HTTP request to output org.opencontainers.image.documentation:https://github.com/traefik/whoami org.opencontainers.image.revision:87f25fc35b3e9051117dddfd11bbae5fbc986581 org.opencontainers.image.source:https://github.com/traefik/whoami org.opencontainers.image.title:whoami org.opencontainers.image.url:https://github.com/traefik/whoami org.opencontainers.image.version:1.10.1]
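
For reference, the unit label can also be extracted directly with a Go template (an illustrative check, not part of the original report); it confirms the label is indeed set on the failing container:

podman container inspect --format '{{ index .Config.Labels "PODMAN_SYSTEMD_UNIT" }}' 0921320dad41

whoami-pod.service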

Steps to reproduce the issue

  1. Create the files listed below so that Quadlet can generate the service files. I've created a gist here for easy download.

File location: ~/.config/systemd/user/whoami-create-pod.service
Description: Service file that creates the pod automatically

[Unit]
Description=Create WhoAmI Pod

[Service]
Type=oneshot
ExecStart=podman pod create --replace whoami
RemainAfterExit=true
StandardOutput=journal

[Install]
WantedBy=default.target

File location: ~/.config/containers/systemd/whoami.container
Description: whoami service with auto-update enabled

[Unit]
Description=Traefik WhoAmI Container

[Container]
Image=docker.io/traefik/whoami:latest
AutoUpdate=registry
LogDriver=journald

File location: ~/.config/containers/systemd/whoami-pod.container
Description: whoami service with auto-update enabled. Places the container within the whoami pod created by the service file above.

[Unit]
Description=Traefik WhoAmI Pod Container
After=whoami-create-pod.service
Requires=whoami-create-pod.service

[Container]
Image=docker.io/traefik/whoami:latest
AutoUpdate=registry
PodmanArgs=--pod=whoami
LogDriver=journald
  2. Run systemctl --user daemon-reload
  3. Run systemctl --user start whoami-create-pod.service
  4. Run systemctl --user start whoami.service
  5. Run systemctl --user start whoami-pod.service
  6. Make sure all containers are running with podman container ls
  7. If both whoami services are running and one is inside the whoami pod, execute: podman auto-update --dry-run
  8. The auto-update command should fail with output that looks something like this:
            UNIT            CONTAINER                      IMAGE                            POLICY      UPDATED
            whoami.service  89ea94abbab3 (systemd-whoami)  docker.io/traefik/whoami:latest  registry    false
Error: auto-updating container "0921320dad41c41e03f8dbebb6a457351f43bfe66930e628899f300cff9b1396": no PODMAN_SYSTEMD_UNIT label found

The container that triggers the error should be the one that is within the whoami pod.
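
Pod membership can be confirmed with podman ps --pod (an illustrative check, not part of the original report), which adds pod columns to the container listing; only systemd-whoami-pod should show the whoami pod:

podman ps --pod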

Describe the results you received

The command podman auto-update --dry-run fails with an error for all containers that are within a pod.

Describe the results you expected

The command podman auto-update --dry-run should run successfully and provide information about every running container, regardless of whether the container is within a pod.

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.3
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.1.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 99.96
    systemPercent: 0.03
    userPercent: 0.02
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: opensuse-tumbleweed
    version: "20240125"
  eventLogger: journald
  freeLocks: 2036
  hostname: twopeasinapod
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.7.1-1-default
  linkmode: dynamic
  logDriver: journald
  memFree: 113172480
  memTotal: 1914728448
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.9.0-1.1.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.9.0
    package: netavark-1.9.0-1.1.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.9.0
  ociRuntime:
    name: crun
    package: crun-1.12-1.1.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.12
      commit: ce429cb2e277d001c2179df1ac66a470f00802ae
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.1.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 2136825856
  swapTotal: 2148507648
  uptime: 40h 2m 32.00s (Approximately 1.67 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.opensuse.org
  - registry.suse.com
  - docker.io
store:
  configFile: /home/rtop/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 8
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/rtop/.local/share/containers/storage
  graphRootAllocated: 104687730688
  graphRootUsed: 4530941952
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 8
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/rtop/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.0
  Built: 1705968000
  BuiltTime: Mon Jan 22 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.21.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.0

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Testing was done on an openSUSE VM running under QEMU. I can provide more information upon request.

Additional information

I was able to reproduce this issue on the system that produced the podman info output above, and on a local machine running Arch Linux that also has Podman version 4.9.0.

This issue also existed in Podman version 4.8.3, though I didn't have time to report it until now.

@Taar Taar added the kind/bug Categorizes issue or PR as related to a bug. label Jan 28, 2024
@vrothberg
Member

Thanks for reaching out! To make use of auto-updates inside Pods with Quadlet, you need to wait for the upcoming Podman 5.0 release. The feature wasn't backported to 4.9, apologies.
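
For anyone reading along: Podman 5.0's Quadlet also adds .pod units, so the oneshot pod-creation service above can be replaced by a declarative pod definition. A minimal sketch based on the 5.0 Quadlet documentation (file names are illustrative):

File location: ~/.config/containers/systemd/whoami.pod

[Pod]
PodName=whoami

The container unit then references the pod unit instead of passing --pod through PodmanArgs:

[Container]
Image=docker.io/traefik/whoami:latest
AutoUpdate=registry
Pod=whoami.pod
LogDriver=journald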

@Lippiece

Apologies for the off-topic, but it's a bit frustrating to migrate containers to Quadlets when all the docs and commands say to do that if you want auto-updates, only to then find this out.

@vrothberg
Member

Thanks for sharing, @Lippiece!

It's something we could fix on https://docs.podman.io/en/latest/ where latest points to the current main branch.
@containers/podman-maintainers WDYT?

@mheon
Member

mheon commented Mar 19, 2024

We'll also have a 5.0 docs branch within the next ~2-3 hours when the 5.0 release happens. But no objection to making the change on main as well.

@stale-locking-app stale-locking-app bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Jun 18, 2024
@stale-locking-app stale-locking-app bot locked as resolved and limited conversation to collaborators Jun 18, 2024