Inconsistent full filesystem mount results with xfs vs other e.g. ext4 - can not mount xfs #23372
Comments
I don't see what this has to do with podman at all? If XFS fails in lsetxattr with "no space left on device", then this is nothing podman can ever fix.
And if you do not want to use lsetxattr, then remove the
@Luap99 I logged this against podman. From the higher-level tooling side, one recommendation is to not use XFS at all (which is really the wrong thing to recommend), or to expand the XFS filesystem to free up space before running a pod with the XFS filesystem attached.
@Luap99 right, so there may be a bug in how the kernel behaves on different filesystems in this case. For me it should not matter which filesystem is being used, as long as the behavior is consistent between them. Thank you!
Then do not use
Just for reference: it is possible to set the security context on an ext4 partition, whereas XFS gives an error:

```
$ sudo chcon -Rt svirt_sandbox_file_t /tmp/testmount_ext4/
$ echo $?
0
$ sudo chcon -Rt svirt_sandbox_file_t /tmp/testmount_xfs/
chcon: failed to change context of 'full_filesystem.abc' to 'unconfined_u:object_r:svirt_sandbox_file_t:s0': No space left on device
chcon: failed to change context of '/tmp/testmount_xfs/' to 'system_u:object_r:svirt_sandbox_file_t:s0': No space left on device
$ echo $?
1
```
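The chcon failure above comes from the lsetxattr(2) call that SELinux relabeling performs. The following is a hedged Python sketch of that call path, not podman's actual implementation; the helper names are illustrative, and actually writing `security.selinux` requires privilege (running this unprivileged simply exercises the error-handling branch):

```python
import errno
import os

def relabel(path, label):
    """Try to set an SELinux label the way chcon does; report why it failed."""
    try:
        # follow_symlinks=False makes this the equivalent of lsetxattr(2).
        os.setxattr(path, "security.selinux", label, follow_symlinks=False)
        return "ok"
    except OSError as e:
        return classify(e.errno)

def classify(err):
    """Map errno values from a failed xattr write to readable causes."""
    if err == errno.ENOSPC:
        return "no space left on device"  # the XFS failure reported here
    if err in (errno.ENOTSUP, errno.EPERM, errno.EACCES):
        return "xattrs unsupported or permission denied"
    return os.strerror(err)
```

On a completely full XFS filesystem the xattr write returns ENOSPC, while ext4 can usually still update the label in place, which is the inconsistency this issue describes.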
I believe this is a known XFS issue, but it should be opened against the kernel, not podman.
Issue Description
Using podman on a fully populated filesystem yields different results with XFS than with other filesystems such as ext4, making it impossible to free up disk space in some scenarios.
This issue is especially troublesome when it propagates to Kubernetes pods trying to mount such XFS filesystems.
Steps to reproduce the issue
Consider two scripts that give inconsistent results: with XFS the mount fails with an error, while with ext4 the filesystem mounts properly:
XFS use case:
EXT4 use case:
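The reproduction scripts themselves are not shown above, but the core idea can be sketched as follows. This is an assumption-laden illustration, not the reporter's script: it fills a directory until the kernel returns ENOSPC, bounded by `max_bytes` so it is safe to run anywhere. Pointing `directory` at a small loopback-mounted XFS filesystem such as `/tmp/testmount_xfs` would recreate the full-disk condition under which the subsequent relabel fails.

```python
import errno
import os
import tempfile

def fill_until_enospc(directory, max_bytes=1 << 20, chunk=64 * 1024):
    """Write zeros into `directory` until ENOSPC or max_bytes is written.

    Returns the number of bytes actually written. The file name matches the
    one in the chcon transcript above; max_bytes keeps this sketch from
    filling a real disk by accident.
    """
    written = 0
    path = os.path.join(directory, "full_filesystem.abc")
    # buffering=0: write errors such as ENOSPC surface immediately,
    # instead of being deferred to a later flush.
    with open(path, "wb", buffering=0) as f:
        while written < max_bytes:
            try:
                written += f.write(b"\0" * chunk)
            except OSError as e:
                if e.errno == errno.ENOSPC:
                    break  # the filesystem is now full
                raise
    return written

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        fill_until_enospc(d)  # on a roomy filesystem this stops at max_bytes
```

After filling the target mount, attempting the `chcon`/podman relabel step is what diverges between XFS and ext4.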
Describe the results you received
With XFS filesystem:
With the ext4 filesystem, the filesystem was mounted and the container started, allowing space to be manipulated or freed:
[root@e778ab568f3b /]#
Describe the results you expected
Both cases should behave consistently: either both fail with an error or both run the container.
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
No
Additional environment details
Additional information
Additional information like issue happens only occasionally or issue happens with a particular architecture or on a particular setting