Support podman v.2 #966
@abitrolly could you please try this with podman built from master? |
I guess |
Nope, my fix will be merged after |
What is the project you tried to build? |
Never mind, I was using an older version of |
@matejvasek I've tried to build
|
Yeah, you need to install some dev libs. |
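For anyone following along, a minimal sketch of building podman from master; the dev-lib package names below assume Fedora and are my guess, see podman's install docs for your distro:

```sh
# Sketch: build podman from master (package names assume Fedora; adjust per distro)
sudo dnf install -y golang gpgme-devel libseccomp-devel btrfs-progs-devel device-mapper-devel
git clone https://github.com/containers/podman
cd podman
make binaries   # produces ./bin/podman
```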
Actually I was wrong, it works OK in master. I forgot to fetch the right remote 🤦♂️. It's been fixed there: containers/podman#8494. |
@abitrolly please try the latest master, I think it should work. (Well... it eventually fails because of the pushing-nonexistent-layers hack, but that's a separate issue.) |
If you verify that it's been fixed, please close this issue. |
@ekcasey ^ this is relevant if you want to test it with buildpacks/imgutil#80. |
Pulling is slow. Given the size of those buildpacks and builders, some progress bar would be nice. |
Yep. It fails. I still don't get how an image without base layers can be valid?
|
@abitrolly it's not really valid; however, the missing layers are already present in the |
If the layers are already present, then why will there be a performance drop with |
I think that's how the image registry works with … I think that in containers/podman#8132 somebody suggests running a temporary local registry to solve this. |
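A minimal sketch of that temporary-local-registry workaround; the image tag, port, and names are placeholders, not a fix verified in this thread:

```sh
# Throwaway local registry (registry:2 is the standard distribution image)
podman run -d --rm -p 5000:5000 --name tmp-registry docker.io/library/registry:2

# Round-trip the problematic image through it so all layers materialize
podman push --tls-verify=false myimage:latest localhost:5000/myimage:latest
podman pull --tls-verify=false localhost:5000/myimage:latest

podman stop tmp-registry
```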
Is there a chance to get rid of that tar cake in favour of more efficient storage, which would speed up tools like https://github.com/wagoodman/dive as well? Everything is hidden behind an API anyway. Maybe there is something like |
Checking the status with latest
During the test
|
Still fails.
|
@abitrolly note that there is another bug in the hackaround:

```diff
diff --git a/pkg/api/handlers/utils/containers.go b/pkg/api/handlers/utils/containers.go
index 1439a3a75..927d85abc 100644
--- a/pkg/api/handlers/utils/containers.go
+++ b/pkg/api/handlers/utils/containers.go
@@ -1,6 +1,7 @@
package utils
import (
+ "fmt"
"net/http"
"time"
@@ -42,7 +43,34 @@ func WaitContainer(w http.ResponseWriter, r *http.Request) (int32, error) {
}
condition := define.ContainerStateStopped
if _, found := r.URL.Query()["condition"]; found {
- condition, err = define.StringToContainerStatus(query.Condition)
+ if query.Condition == "next-exit" {
+ name := GetName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ ContainerNotFound(w, name, err)
+ return 0, err
+ }
+ ch := make(chan struct{},1)
+
+ s, _ := con.State()
+ fmt.Println("\n\n### ", s.String())
+ go func() {
+ con.WaitForConditionWithInterval(interval, define.ContainerStateCreated)
+ ch <- struct{}{}
+ }()
+ go func() {
+ con.WaitForConditionWithInterval(interval, define.ContainerStateConfigured)
+ ch <- struct{}{}
+ }()
+ go func() {
+ con.WaitForConditionWithInterval(interval, define.ContainerStateRunning)
+ ch <- struct{}{}
+ }()
+ <- ch
+ return con.WaitForConditionWithInterval(interval, define.ContainerStateStopped)
+ } else {
+ condition, err = define.StringToContainerStatus(query.Condition)
+ }
if err != nil {
InternalServerError(w, err)
return 0, err
```
|
Also I recommend this (to work around minor untar issues in …):

```diff
diff --git a/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go b/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
index e0683b3cd..dd9a1803f 100644
--- a/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
+++ b/vendor/github.com/containers/image/v5/docker/internal/tarfile/writer.go
@@ -92,6 +92,13 @@ func (w *Writer) ensureSingleLegacyLayerLocked(layerID string, layerDigest diges
if _, ok := w.legacyLayers[layerID]; !ok {
// Create a symlink for the legacy format, where there is one subdirectory per layer ("image").
// See also the comment in physicalLayerPath.
+
+ hdr, err := tar.FileInfoHeader(&tarFI{path: layerID, isDir: true}, "")
+ if err != nil {
+ return nil
+ }
+ return w.tar.WriteHeader(hdr)
+
physicalLayerPath := w.physicalLayerPath(layerDigest)
if err := w.sendSymlinkLocked(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
return errors.Wrap(err, "Error creating layer symbolic link")
@@ -317,25 +324,32 @@ type tarFI struct {
path string
size int64
isSymlink bool
+ isDir bool
}
func (t *tarFI) Name() string {
return t.path
}
func (t *tarFI) Size() int64 {
+ if t.isDir {
+ return 0
+ }
return t.size
}
func (t *tarFI) Mode() os.FileMode {
if t.isSymlink {
return os.ModeSymlink
}
+ if t.isDir {
+ return os.ModeDir
+ }
return 0444
}
func (t *tarFI) ModTime() time.Time {
return time.Unix(0, 0)
}
func (t *tarFI) IsDir() bool {
- return false
+ return t.isDir
}
func (t *tarFI) Sys() interface{} {
return nil
```
|
Nice. I can wait for a proper fix though. |
There is a slight difference, but neither works:
|
@frenzymadness that's an SELinux issue, I believe. Either you will need proper labels somewhere, or run |
Just try using |
Using TCP:
|
SELinux doesn't like mounting files (like sockets) without proper labels. |
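To summarize the two workarounds discussed here as a sketch; the socket path, port, and URI syntax are assumptions and differ across podman versions:

```sh
# Option 1: serve the podman API over TCP, sidestepping socket labels entirely
podman system service --time=0 tcp:127.0.0.1:8080 &
export DOCKER_HOST=tcp://127.0.0.1:8080

# Option 2: keep a unix socket (the path may need proper SELinux labels)
podman system service --time=0 unix:///tmp/podman.sock &
export DOCKER_HOST=unix:///tmp/podman.sock

pack build my-app --builder <builder-image>
```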
Thank you very much for your help. It would be awesome to transfer these useful tips into some documentation for podman users. I can confirm that disabling SELinux as well as switching to TCP fixes the issue.
@frenzymadness 💯 . I opened buildpacks/docs#341 as a result. |
Looks like there is no point in SELinux if everybody needs to disable it to work with |
I suggested turning off enforcing only to test whether the issue is indeed caused by SELinux. You shouldn't do that. @abitrolly |
BTW depending on how you installed |
@matejvasek yes, I hope for a proper solution so I can forget about SELinux problems. |
The simplest thing is to run |
@matejvasek At this point, it seems like the main issue has been solved, though there are definitely improvements to be made. Should we keep this issue open to generally track all podman concerns? |
Closing this issue in favor of having new issues created for more specific improvements/bugs. |
Hi @matejvasek Following your suggestion here:
I cannot get it working. Using the example from buildpack - since I could not find any
Any idea what could be wrong? Thanks! |
@FlorianLudwig I think this could be related to |
@FlorianLudwig of course the error message is confusing. I believe I fixed that in some newer version of |
@matejvasek It looks like you are right. Using:
instead of
works better - but it is still not fully functional, as it will try to pull the next image without a fully qualified name. BTW, I am on podman 3.2.1 |
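For illustration, the difference in question is presumably short vs. fully qualified image names; the exact builder image below is an assumption taken from the buildpacks docs:

```sh
# Short name - podman may refuse it or resolve it against the wrong registry:
#   pack build my-app --builder paketobuildpacks/builder:base

# Fully qualified name - unambiguous for podman:
pack build my-app --builder docker.io/paketobuildpacks/builder:base
```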
For reference: to work around the issue, comment out |
yeah I had to put:
into |
podman is just too paranoid; with a short name somebody might spoof a malicious image |
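For reference, the documented registries.conf setting that restores Docker-style short-name resolution; the file location depends on whether podman runs rootful or rootless:

```toml
# /etc/containers/registries.conf (rootful) or
# ~/.config/containers/registries.conf (rootless)

# Resolve unqualified names like "paketobuildpacks/builder" against Docker Hub.
unqualified-search-registries = ["docker.io"]

# Alternatively, downgrade the short-name prompt/error to a permissive lookup:
# short-name-mode = "permissive"
```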
IMHO it's kind of the fault of |
I agree and opened #1218 :) |
I can't find this file on Windows... maybe the path is different on Windows? |
UPDATE:
List of issues with podman 3.0.0-dev to make it work with pack. Looks like there should be a more specific issue than #564 to track the interaction between pack and podman.
As of podman 2.1.1 it is still impossible to use it with pack. I am not sure which bug this is. I don't see any errors in the podman logs.
pack info (collapsed details)
test session for tmux (collapsed details)
If you're not being paid to work on this like me, then the following script will save you time on checking podman debug logs.
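The script itself was not captured above; purely as a hypothetical stand-in, something along these lines could tail the API service's debug logs while re-running pack (every path and name here is an assumption, not the author's script):

```sh
# Hypothetical helper, not the author's script: capture podman debug logs
# from the API service while driving a pack build against it.
podman --log-level=debug system service --time=0 unix:///tmp/podman.sock \
    > podman-debug.log 2>&1 &
export DOCKER_HOST=unix:///tmp/podman.sock
pack build test-app --path . 2>&1 | tee pack.log
```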