
Incorrect process UIDs with podman compose top (running docker-compose v2) #22293

Closed
samuel-andres opened this issue Apr 6, 2024 · 5 comments · Fixed by #23096

Comments

@samuel-andres

Issue Description

I'm seeing the wrong UID in the output of podman compose top. I'm running rootless Podman with non-root users inside the containers, under my user namespace (PODMAN_USERNS=keep-id). The container user with UID 1000 in this case is called nonroot, and my host user is samuel.

I'm new to Podman, so I have a lot to learn; let me know if this issue would be a better fit on the docker-compose side.

Steps to reproduce the issue

  • Have a running compose environment with PODMAN_USERNS=keep-id (a minimal sketch follows this list).
  • Check the output of podman top, podman compose top, and ps on the host.
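
A minimal sketch of such a setup (the image, command, and service name below are placeholders, not my actual project):

$ cat compose.yaml
services:
  app:
    image: docker.io/library/python:3.12
    command: python3 -m http.server 8000
    user: "1000"

$ PODMAN_USERNS=keep-id podman compose up -d
$ podman top <project>-app-1      # UIDs as mapped inside the container
$ podman compose top app          # UIDs as reported via the compose provider
$ ps -ef | grep http.server       # UIDs from the host's point of view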

Describe the results you received

podman top output (UIDs as seen inside the container):

afy-backend(AFY-512-big-refactor)$ podman top afy-backend-app-1
USER        PID         PPID        %CPU        ELAPSED           TTY         TIME        COMMAND
nonroot     1           0           0.000       19m29.040208648s  ?           0s          /bin/sh /app/./scripts/runserver.dev.sh
nonroot     20          1           0.086       19m20.040264425s  ?           1s          python3 manage.py runserver 0.0.0.0:8000
nonroot     28          20          1.726       19m19.04029244s   ?           20s         /opt/venv/bin/python3 manage.py runserver 0.0.0.0:8000

podman compose top output (IT SHOWS ROOT!):

afy-backend(AFY-512-big-refactor)$ podman compose top app
>>>> Executing external compose provider "/usr/bin/docker-compose". Please refer to the documentation for details. <<<<

afy-backend-app-1
UID     PID  PPID  C  STIME  TTY  TIME      CMD
root    1    0     0  16:42  ?    00:00:00  /bin/sh /app/./scripts/runserver.dev.sh
root    20   1     0  16:42  ?    00:00:01  python3 manage.py runserver 0.0.0.0:8000
root    28   20    1  16:42  ?    00:00:20  /opt/venv/bin/python3 manage.py runserver 0.0.0.0:8000

From the host's perspective, the process shows UID 1000:

afy-backend(AFY-512-big-refactor)$ DID=$(docker inspect -f '{{.State.Pid}}' afy-backend-app-1); ps --ppid $DID -o user,uid,gid,pid,ppid,cmd

USER       UID   GID     PID    PPID CMD
samuel    1000  1000   25018   24067 python3 manage.py runserver 0.0.0.0:8000
afy-backend(AFY-512-big-refactor)$

Describe the results you expected

I expected the UID shown by podman compose top to be 1000, matching what is shown inside the container and on the host.
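
For reference, the container's effective id mapping can be checked from inside the container itself:

$ podman exec afy-backend-app-1 cat /proc/self/uid_map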

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc39.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 98.23
    systemPercent: 0.5
    userPercent: 1.27
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "39"
  eventLogger: journald
  freeLocks: 2022
  hostname: e14fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.7.11-200.fc39.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 9848229888
  memTotal: 16449613824
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.fc39.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.fc39.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.fc39.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240220.g1e6f92b-1.fc39.x86_64
    version: |
      pasta 0^20240220.g1e6f92b-1.fc39.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc39.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 0h 60m 48.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/samuel/.config/containers/storage.conf
  containerStore:
    number: 11
    paused: 0
    running: 6
    stopped: 5
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/samuel/.local/share/containers/storage
  graphRootAllocated: 240423796736
  graphRootUsed: 58886856704
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 53
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/samuel/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4
  Built: 1711445992
  BuiltTime: Tue Mar 26 06:39:52 2024
  GitCommit: ""
  GoVersion: go1.21.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

I'm on Fedora 39.

Additional information

No response

@samuel-andres samuel-andres added the kind/bug Categorizes issue or PR as related to a bug. label Apr 6, 2024
@francoism90

How did you get the podman compose top command? Because it's not in 1.0.6.

Is this included in Podman 5.0 now?

@samuel-andres (Author)

@francoism90

How did you get the podman compose top command? Because it's not in 1.0.6.

Is this included in Podman 5.0 now?

It's part of Compose V2: https://docs.docker.com/reference/cli/docker/compose/top/

@francoism90

Just to be sure, isn't this normal?
I thought Docker wasn't running rootless at all, unless you change some settings.

I think the IDs reported are correct, as you're not running it inside Podman.

Maybe I'm completely wrong here, but when I experimented with Docker rootless, it couldn't really handle UID/GID mapping the way Podman does.

@samuel-andres (Author)

@francoism90

Just to be sure, isn't this normal?
I thought Docker wasn't running rootless at all, unless you change some settings.

I think the IDs reported are correct, as you're not running it inside Podman.

Maybe I'm completely wrong here, but when I experimented with Docker rootless, it couldn't really handle UID/GID mapping the way Podman does.

I'm not running Docker, I'm running Podman; I'm just using docker-compose V2 to talk to the Podman REST API. If I'm not wrong, this has been possible since Podman v4.1.
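
For completeness, this is roughly how Compose V2 gets pointed at the rootless Podman socket (the socket path here matches the remoteSocket entry in the podman info output above; podman compose sets this up automatically when it delegates to the external provider):

$ systemctl --user start podman.socket
$ export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
$ docker-compose ps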


github-actions bot commented May 7, 2024

A friendly reminder that this issue had no activity for 30 days.

@Luap99 Luap99 self-assigned this Jun 25, 2024
@Luap99 Luap99 added In Progress This issue is actively being worked by the assignee, please do not work on this at this time. and removed stale-issue labels Jun 25, 2024
Luap99 added a commit to Luap99/libpod that referenced this issue Jun 26, 2024
When we execute ps(1) in the container and the container uses a userns
with a different id mapping, the user id field will be wrong.

To fix this we must join the userns in such case.

Fixes containers#22293

Signed-off-by: Paul Holzinger <[email protected]>
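
(For illustration, the effect of joining the user namespace can be seen from a shell, too. This is a rough sketch using the container from this report, not the actual patch: numeric UIDs are then translated through the container's id mapping, while user names are still resolved against the host's passwd.)

$ pid=$(podman inspect -f '{{.State.Pid}}' afy-backend-app-1)
$ nsenter --user --target "$pid" ps -ef
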
mheon pushed a commit to mheon/libpod that referenced this issue Jul 10, 2024
(Same commit message as above, cherry-picked; <MH: Fixed conflict in tests>)

Signed-off-by: Matt Heon <[email protected]>
@stale-locking-app stale-locking-app bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 25, 2024
@stale-locking-app stale-locking-app bot locked as resolved and limited conversation to collaborators Sep 25, 2024