
Support podman v.2 #966

Closed
abitrolly opened this issue Nov 26, 2020 · 65 comments
Labels
type/bug Issue that reports an unexpected behaviour.

Comments

@abitrolly
Contributor

abitrolly commented Nov 26, 2020

UPDATE:

A list of issues with podman 3.0.0-dev that need fixing to make it work with pack.


Looks like there should be a more specific issue than #564 to track the interaction between pack and podman.

As of podman 2.1.1, it is still impossible to use it with pack. I am not sure which bug this is; I don't see any errors in the podman logs.

✗ DOCKER_HOST=unix:///run/user/1000/podman/podman.sock pack -v build json2openapi -B heroku/buildpacks:18
Builder heroku/buildpacks:18 is trusted
Pulling image index.docker.io/heroku/buildpacks:18
d32588c49e27fa2d8cfc3ff16eb411487b12f6116054445b5c026112925a9911: pulling image () from docker.io/heroku/buildpacks:18
ERROR: failed to build: invalid builder heroku/buildpacks:18: provided image OS '' must be either 'linux' or 'windows'
✗ podman --version
podman version 2.1.1
✗ pack info
Pack:
  Version:  0.15.0+git-49ad805.build-1613
  OS/Arch:  linux/amd64

Default Lifecycle Version:  0.9.3

Supported Platform APIs:  0.3, 0.4

Config:
(no config file found at /home/anatoli/.pack/config.toml)
Test session for tmux

If, like me, you're not being paid to work on this, the following script will save you time when checking the podman debug logs.

#!/bin/bash

# Left pane: podman API service with debug logging, no timeout.
LEFT="./bin/podman system service --log-level debug -t 0"
# Right pane: pack build pointed at the podman socket.
RIGHT="DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock pack -v build json2openapi -B heroku/buildpacks:18"

# Open a tmux session with both commands running side by side.
tmux new-session\; send-keys "$LEFT" Enter\; split-window -h\; send-keys "$RIGHT" Enter
@abitrolly abitrolly added status/triage Issue or PR that requires contributor attention. type/bug Issue that reports an unexpected behaviour. labels Nov 26, 2020
@matejvasek
Contributor

@abitrolly could you please try this with podman built from master?

@matejvasek
Contributor

I guess v2.2.0 would be OK too, it's just 4 days old.

@matejvasek
Contributor

Nope, my fix will be merged after v2.2, so you need master.

@matejvasek
Contributor

matejvasek commented Dec 4, 2020

What is the project you tried to build?
I mean, what is in the directory where you are running the pack command?

@matejvasek
Contributor

Never mind, I was using an older version of pack at first; now I can reproduce it on podman master.

@abitrolly
Contributor Author

@matejvasek I tried to build podman a few days ago, but didn't have the time to hunt down all the dependencies. It is trivial, but I got distracted. Having a buildpack for podman would be a great test case once it bootstraps. :D

✗ BUILDTAGS="exclude_graphdriver_btrfs" make podman
Podman is being compiled without the systemd build tag. Install libsystemd on Ubuntu or systemd-devel on rpm based distro for journald support.
go build -mod=vendor  -gcflags 'all=-trimpath=/home/anatoli/f/libpod' -asmflags 'all=-trimpath=/home/anatoli/f/libpod' -ldflags '-X github.com/containers/podman/v2/libpod/define.gitCommit=85b412ddcdacb635e13ec67ecd2df5990dbdca02 -X github.com/containers/podman/v2/libpod/define.buildInfo=1607018344 -X github.com/containers/podman/v2/libpod/config._installPrefix=/usr/local -X github.com/containers/podman/v2/libpod/config._etcDir=/etc ' -tags "exclude_graphdriver_btrfs" -o bin/podman ./cmd/podman
# pkg-config --cflags  -- devmapper
Package devmapper was not found in the pkg-config search path.
Perhaps you should add the directory containing `devmapper.pc'
to the PKG_CONFIG_PATH environment variable
Package 'devmapper', required by 'virtual:world', not found
pkg-config: exit status 1
# github.com/mtrmac/gpgme
vendor/github.com/mtrmac/gpgme/data.go:4:11: fatal error: gpgme.h: No such file or directory
    4 | // #include <gpgme.h>
      |           ^~~~~~~~~
compilation terminated.
make: *** [Makefile:189: bin/podman] Error 2

@matejvasek
Contributor

Yeah, you need to install some dev libs.
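
For example, on an rpm-based distro the missing headers from the build errors above (devmapper.pc, gpgme.h) typically come from the packages below; the exact names are an assumption for dnf-based systems, and Debian/Ubuntu use libdevmapper-dev and libgpgme-dev instead.

# Assumed package names for a dnf-based distro (Fedora/RHEL-like).
sudo dnf install device-mapper-devel gpgme-devel systemd-devel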

@matejvasek
Contributor

Actually, I was wrong: it works OK in master. I forgot to fetch the right remote 🤦‍♂️. It's been fixed there by containers/podman#8494.

@matejvasek
Contributor

@abitrolly please try the latest master; I think it should work. (Well... it eventually fails because of the pushing-nonexistent-layers hack, but that's a separate issue.)

@matejvasek
Contributor

If you verify that it's been fixed, please close this issue.

@jromero
Member

jromero commented Dec 4, 2020

@ekcasey ^ this is relevant if you want to test it with buildpacks/imgutil#80.

@abitrolly
Contributor Author

Pulling is slow. Given the size of those buildpacks and builders, a progress bar would be nice.

@abitrolly
Contributor Author

Yep, it fails. I still don't get how an image without base layers can be valid.

✗ DOCKER_HOST=unix:///run/user/1000/podman/podman.sock pack -v build json2openapi -B heroku/buildpacks:18
Builder heroku/buildpacks:18 is trusted
Pulling image index.docker.io/heroku/buildpacks:18
d32588c49e27fa2d8cfc3ff16eb411487b12f6116054445b5c026112925a9911: pulling image () from docker.io/heroku/buildpacks:18
Selected run image heroku/pack:18
Pulling image heroku/pack:18
69d1cc930f67c112aa884c88b02b376c0b32f3c8f4a5a601c790c9d350cd53df: pulling image () from docker.io/heroku/pack:18
Creating builder with the following buildpacks:
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> heroku/[email protected]
-> salesforce/[email protected]
-> projectriff/[email protected]
-> projectriff/[email protected]
-> evergreen/[email protected]
ERROR: failed to build: failed to write image to the following tags: [pack.local/builder/67656775727a6b636370:latest: save image 'pack.local/builder/67656775727a6b636370:latest': Error: No such image: e1868ad35487df19adca4706313e76bc5bca431c09f93446a54bf08709e9b362]

@matejvasek
Contributor

@abitrolly it's not really valid; however, the missing layers are already present in the Docker storage because you just pulled them, so it somehow works on Docker. Podman validates inputs more strictly.

@matejvasek
Contributor

see containers/podman#8132

@abitrolly
Contributor Author

If the layers are already present, then why would there be a performance drop with podman? It could just return immediately with a success status when somebody tries to push a layer with the same content hash.

@matejvasek
Contributor

It could just return immediately with a success status when somebody tries to push a layer with the same content hash...

I think that's how an image registry works with docker push: you are not really uploading existing layers to the server.
However, here we are not using a registry (docker push) but something more like docker load xxx.tar, which doesn't use such an optimization.

I think that in containers/podman#8132 somebody suggests running a temporary local registry to solve this.
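
A minimal sketch of that idea, assuming the stock registry:2 image and a free port 5000 (the image name, port, and tags here are illustrative assumptions, not what #8132 prescribes):

# Throwaway local registry; a push skips layers the registry already has, by digest.
podman run -d --rm --name tmp-registry -p 5000:5000 docker.io/library/registry:2
# Push/pull through it instead of loading tarballs over the API.
podman push --tls-verify=false myimage:latest localhost:5000/myimage:latest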

@abitrolly
Contributor Author

Is there a chance to get rid of that tar cake in favour of more efficient storage, which would speed up tools like https://github.com/wagoodman/dive as well? Everything is hidden behind an API anyway. Maybe there is something like ostree that already does the job of efficiently merging filesystems. Maybe pack and podman can create a proper API for performance without the Docker hacks?

@abitrolly
Contributor Author

Checking the status with the latest podman 3.0.0-dev. First, building master.

$ BUILDTAGS="exclude_graphdriver_btrfs exclude_graphdriver_devicemapper" make podman

During the test, the podman service enters an interactive state, which is weird.

DEBU[0002] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18"
DEBU[0002] reference "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18" does not resolve to an image ID
? Please select an image:
  ▸ registry.fedoraproject.org/heroku/buildpacks:18
    registry.access.redhat.com/heroku/buildpacks:18
    registry.centos.org/heroku/buildpacks:18
    docker.io/heroku/buildpacks:18

@abitrolly
Contributor Author

Still fails.

ERROR: failed to build: failed to write image to the following tags: [pack.local/builder/6a64676f71796367696b:latest: save image 'pack.local/builder/6a64676f71796367696b:latest': Error: No such image: 723ea9bd848b9e4de194ae659c2a6838d33a60ddc80c4ed950cffc1617aeb82f]

@matejvasek
Contributor

@abitrolly note that there is another bug in the podman APIv2 implementation: containers/podman#8697.

hackaround:

diff --git a/pkg/api/handlers/utils/containers.go b/pkg/api/handlers/utils/containers.go
index 1439a3a75..927d85abc 100644
--- a/pkg/api/handlers/utils/containers.go
+++ b/pkg/api/handlers/utils/containers.go
@@ -1,6 +1,7 @@
 package utils
 
 import (
+       "fmt"
        "net/http"
        "time"
 
@@ -42,7 +43,34 @@ func WaitContainer(w http.ResponseWriter, r *http.Request) (int32, error) {
        }
        condition := define.ContainerStateStopped
        if _, found := r.URL.Query()["condition"]; found {
-               condition, err = define.StringToContainerStatus(query.Condition)
+               if query.Condition == "next-exit" {
+                       name := GetName(r)
+                       con, err := runtime.LookupContainer(name)
+                       if err != nil {
+                               ContainerNotFound(w, name, err)
+                               return 0, err
+                       }
+                       ch := make(chan struct{},1)
+
+                       s, _ := con.State()
+                       fmt.Println("\n\n### ", s.String())
+                       go func() {
+                               con.WaitForConditionWithInterval(interval, define.ContainerStateCreated)
+                               ch <- struct{}{}
+                       }()
+                       go func() {
+                               con.WaitForConditionWithInterval(interval, define.ContainerStateConfigured)
+                               ch <- struct{}{}
+                       }()
+                       go func() {
+                               con.WaitForConditionWithInterval(interval, define.ContainerStateRunning)
+                               ch <- struct{}{}
+                       }()
+                       <- ch
+                       return con.WaitForConditionWithInterval(interval, define.ContainerStateStopped)
+               } else {
+                       condition, err = define.StringToContainerStatus(query.Condition)
+               }
                if err != nil {
                        InternalServerError(w, err)
                        return 0, err

@matejvasek
Contributor

Also, I recommend this (to work around minor untar issues in pack):

diff --git a/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go b/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
index e0683b3cd..dd9a1803f 100644
--- a/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
+++ b/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
@@ -92,6 +92,13 @@ func (w *Writer) ensureSingleLegacyLayerLocked(layerID string, layerDigest diges
        if _, ok := w.legacyLayers[layerID]; !ok {
                // Create a symlink for the legacy format, where there is one subdirectory per layer ("image").
                // See also the comment in physicalLayerPath.
+
+               hdr, err := tar.FileInfoHeader(&tarFI{path: layerID, isDir: true}, "")
+               if err != nil {
+                       return nil
+               }
+               return w.tar.WriteHeader(hdr)
+
                physicalLayerPath := w.physicalLayerPath(layerDigest)
                if err := w.sendSymlinkLocked(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
                        return errors.Wrap(err, "Error creating layer symbolic link")
@@ -317,25 +324,32 @@ type tarFI struct {
        path      string
        size      int64
        isSymlink bool
+       isDir     bool
 }
 
 func (t *tarFI) Name() string {
        return t.path
 }
 func (t *tarFI) Size() int64 {
+       if t.isDir {
+               return 0
+       }
        return t.size
 }
 func (t *tarFI) Mode() os.FileMode {
        if t.isSymlink {
                return os.ModeSymlink
        }
+       if t.isDir {
+               return os.ModeDir
+       }
        return 0444
 }
 func (t *tarFI) ModTime() time.Time {
        return time.Unix(0, 0)
 }
 func (t *tarFI) IsDir() bool {
-       return false
+       return t.isDir
 }
 func (t *tarFI) Sys() interface{} {
        return nil

@abitrolly
Contributor Author

Nice. I can wait for a proper fix though.

@frenzymadness

There is a slight difference, but neither works:

$ echo $DOCKER_HOST
unix:///run/user/1000/podman/podman.sock

$ ./pack build --path apps/standalone-test-app/ --builder fedora-python-builder:f33 standalone-test-app-image
…
===> DETECTING
[detector] 3 of 4 buildpacks participating
[detector] buildpacks/python-venv   0.0.1
[detector] buildpacks/python-script 0.0.1
[detector] buildpacks/requirements  0.0.1
===> ANALYZING
ERROR: failed to build: executing lifecycle. This may be the result of using an untrusted builder: failed to create 'analyzer' container: Error response from daemon: container create: statfs /var/run/docker.sock: permission denied

$ ./pack build --docker-host=unix:///run/user/1000/podman/podman.sock --path apps/standalone-test-app/ --builder fedora-python-builder:f33 standalone-test-app-image
…
===> DETECTING
[detector] 3 of 4 buildpacks participating
[detector] buildpacks/python-venv   0.0.1
[detector] buildpacks/python-script 0.0.1
[detector] buildpacks/requirements  0.0.1
===> ANALYZING
[analyzer] ERROR: failed to get previous image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
ERROR: failed to build: executing lifecycle. This may be the result of using an untrusted builder: failed with status code: 1

@matejvasek
Contributor

@frenzymadness that's an SELinux issue, I believe. Either you will need proper labels somewhere, or run podman over a TCP socket instead of a Unix socket.

@matejvasek
Contributor

Just try using setenforce 0.
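
A quick, reversible test cycle (assuming you have root; permissive mode is for diagnosis only):

sudo setenforce 0   # switch SELinux to permissive, for testing only
# ... re-run the pack build ...
sudo setenforce 1   # restore enforcing mode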

@matejvasek
Contributor

matejvasek commented Mar 24, 2021

Using TCP:
podman system service --time=0 tcp:0.0.0.0:1234

export DOCKER_HOST=tcp://127.0.0.1:1234

./pack build --docker-host=tcp://127.0.0.1:1234 --network=host --path apps/standalone-test-app/ --builder fedora-python-builder:f33 standalone-test-app-image
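
Note that tcp:0.0.0.0:1234 exposes the unauthenticated podman API on all interfaces; if the client runs on the same host (an assumption here), binding to loopback is safer:

podman system service --time=0 tcp:127.0.0.1:1234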

@matejvasek
Contributor

SELinux doesn't like mounting files (like sockets) without proper labels.
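
If you want to stay on the Unix socket, relabeling it so containers are allowed to use it may also work; container_file_t is the usual type for container-accessible files, though whether this alone suffices here is an assumption:

chcon -t container_file_t $XDG_RUNTIME_DIR/podman/podman.sock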

@frenzymadness

Thank you very much for your help. It would be awesome to transfer these useful tips into some documentation for podman users. I can confirm that disabling SELinux, as well as switching to TCP, fixes the issue.

@dfreilich
Member

@frenzymadness 💯 . I opened buildpacks/docs#341 as a result.

@abitrolly
Contributor Author

Looks like there is no point in SELinux if everybody needs to disable it to work with podman. :D

@matejvasek
Contributor

I suggested turning off enforcing only to test whether the issue is indeed caused by SELinux. You shouldn't do that in general. @abitrolly

@matejvasek
Contributor

BTW, depending on how you installed Docker, you can have SELinux issues too. Fortunately, many installation repos/packages also include an SELinux policy package.
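
On Fedora/RHEL-like systems that policy typically ships as the container-selinux package (name assumed; other distros may differ):

sudo dnf install container-selinux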

@abitrolly
Contributor Author

@matejvasek yes, I hope for a proper solution so I can forget about SELinux problems.

@matejvasek
Contributor

The simplest option is to run podman on a TCP (not Unix) socket, IMHO.

@matejvasek
Contributor

containers/podman#9860

@dfreilich
Member

@matejvasek At this point, it seems like the main issue has been solved, though improvements can definitely still be made. Should we keep this issue open to track all podman concerns in general?

@jromero
Member

jromero commented Apr 5, 2021

Closing this issue in favor of having new issues created for more specific improvements/bugs.

@jromero jromero closed this as completed Apr 5, 2021
@FlorianLudwig

Hi @matejvasek

Following your suggestion here:

Using TCP:
podman system service --time=0 tcp:0.0.0.0:1234

export DOCKER_HOST=tcp://127.0.0.1:1234

./pack build --docker-host=tcp://127.0.0.1:1234 --network=host --path apps/standalone-test-app/ --builder fedora-python-builder:f33 standalone-test-app-image

I cannot get it working.

Using the example from buildpacks, since I could not find any fedora-python-builder (but I'd be interested ;))

$ pack -v --docker-host=tcp://127.0.0.1:1234 --network=host  build sample-app --builder cnbs/sample-builder:bionic
Builder cnbs/sample-builder:bionic is untrusted
As a result, the phases of the lifecycle which require root access will be run in separate trusted ephemeral containers.
For more information, see https://medium.com/buildpacks/faster-more-secure-builds-with-pack-0-11-0-4d0c633ca619
Pulling image index.docker.io/cnbs/sample-builder:bionic
4bbfd2c87b75: Already exists 
5c4cf4bf4c45: Already exists 
d2e110be24e1: Already exists 
58904b69a6b9: Already exists 
889a7173dcfe: Already exists 
9892987675a2: Already exists 
ece01c3b31c4: Already exists 
75dd0332c4e5: Already exists 
357fefdf9bc9: Already exists 
5c2e4179bee1: Already exists 
a697cc8a1d5c: Already exists 
c4e3bdcbb8c3: Already exists 
53a52c7f9926: Already exists 
04f9e5a54d38: Already exists 
c238db6a02a5: Already exists 
a302059dbdba: Already exists 
0cceee8a8cb0: Already exists 
db1bbcc47135: Already exists 
4f4fb700ef54: Already exists 
39da2bad90ed: Download complete 
Selected run image cnbs/sample-stack-run:bionic
Pulling image cnbs/sample-stack-run:bionic
ERROR: failed to build: invalid run-image 'cnbs/sample-stack-run:bionic': image cnbs/sample-stack-run:bionic does not exist on the daemon: not found

Any idea what could be wrong? Thanks!

@matejvasek
Contributor

@FlorianLudwig I think this could be related to podman's reluctance to pull image names that are not fully qualified, like 'cnbs/sample-stack-run:bionic'. If you are using, for instance, Fedora 34 (a fresh install), the default config forbids pulling images without a host (like docker.io) in the image name.

@matejvasek
Contributor

@FlorianLudwig of course the error message is confusing. I believe I fixed that in some newer version of podman.

@FlorianLudwig

@matejvasek It looks like you are right.

using:

pack -v --docker-host=tcp://127.0.0.1:1234 --network=host  build sample-app --builder docker.io/cnbs/sample-builder:bionic

instead of

pack -v --docker-host=tcp://127.0.0.1:1234 --network=host  build sample-app --builder cnbs/sample-builder:bionic

works better, but it's still not fully functional, as it will try to pull the next image without a fully qualified name.

Btw, I am on podman 3.2.1

@FlorianLudwig

For reference:

To work around the issue, comment out short-name-mode="enforcing" in /etc/containers/registries.conf.

@matejvasek
Contributor

Yeah, I had to put:

unqualified-search-registries = ["docker.io", "quay.io", "registry.fedoraproject.org", "registry.access.redhat.com"]
short-name-mode="permissive"

into ~/.config/containers/registries.conf

@matejvasek
Contributor

podman is just too paranoid: with a short name, somebody might spoof a malicious image.

@matejvasek
Contributor

IMHO it's kind of the fault of cnbs/sample-builder:bionic for using a short name (assuming everything is in docker.io).

@FlorianLudwig

IMHO it's kind of the fault of cnbs/sample-builder:bionic for using a short name (assuming everything is in docker.io).

I agree and opened #1218 :)

@gabomgp4

Yeah, I had to put:

unqualified-search-registries = ["docker.io", "quay.io", "registry.fedoraproject.org", "registry.access.redhat.com"]
short-name-mode="permissive"

into ~/.config/containers/registries.conf

I can't find this file on Windows... maybe the path is different on Windows?
