LVM autoextend not working, what am I doing wrong? #167

Open
tigerblue77 opened this issue Nov 24, 2024 · 3 comments

Comments

@tigerblue77

tigerblue77 commented Nov 24, 2024

Hello!
I am running tests on VDO pools and LVM thin pools on a Debian 12.8 machine. I've added this to lvm.conf:

snapshot_autoextend_threshold = 90
snapshot_autoextend_percent = 10
thin_pool_autoextend_threshold = 90
thin_pool_autoextend_percent = 10
vdo_pool_autoextend_threshold = 90
vdo_pool_autoextend_percent = 10

then ran:

systemctl restart lvm2-monitor

Following this RHEL documentation: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/basic-logical-volume-management_configuring-and-managing-logical-volumes#automatically-extending-a-thin-pool_extending-a-thin-pool
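For reference, with these values a pool that crosses 90% usage should be grown by 10% of its current size, so for example a 10G pool would become 11G. To double-check that lvm actually picked the settings up (they belong in the activation section of lvm.conf), I think the effective values can be printed like this:

# print the autoextend settings as lvm currently sees them
lvmconfig activation/thin_pool_autoextend_threshold
lvmconfig activation/thin_pool_autoextend_percent
lvmconfig activation/vdo_pool_autoextend_threshold
lvmconfig activation/vdo_pool_autoextend_percent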
And my

lvs -o +seg_monitor

shows they are monitored:

  LV              VG   Attr       LSize   Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert Monitor
  VG-1_LV-THIN-1  VG-1 twi-aotz--  10.00g                       7.27   11.91                            monitored
  base-501-disk-0 VG-1 Vri---tz-k   8.00g VG-1_LV-THIN-1
  test            VG-1 vwi-aov--- 100.00g vpool0
  vpool0          VG-1 dwi-------  30.00g                                                               monitored
  VG-2_LV-THIN-1  VG-2 twi-a-tz--  10.00g                       0.00   10.94                            monitored
  data            pve  twi-a-tz-- <59.81g                       0.00   1.59                             monitored
  root            pve  -wi-ao---- <41.52g

But when I fill my VDO or thin LVs, they don't get expanded...

@zkabelac
Contributor

Hi

a) make sure you are running the latest upstream version of lvm2 ('lvm version')
b) it's unclear how much free space is in your VG (use command: 'vgs')
c) you seem to be using a 10G thin-pool and 10G + 10G thin LVs - not really used at the moment of your lvs
d) you seem to be using a 30G vdo-pool and a 100G test LV - inactive, so it's unknown how full they are.
e) auto expansion happens on POOL volumes (not on virtual thin or VDO LVs) - if you want to expand those, use 'lvextend -r' so the filesystem also grows (see the example below)

So you would need to capture 'lvs -a' at the moment of your 'full pool' state, to see whether auto resize is working or not.
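For example (the LV names below are just taken from your lvs output - adjust to your setup):

# grow a virtual thin/vdo LV together with the filesystem on it
lvextend -r -L +10G VG-1/test

# or manually apply the configured autoextend policy to a pool
lvextend --use-policies VG-1/vpool0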

@tigerblue77
Author

tigerblue77 commented Nov 25, 2024

Hi

Hello,

a) make sure you are running the latest upstream version of lvm2 ('lvm version')

LVM version:     2.03.16(2) (2022-05-18)
Library version: 1.02.185 (2022-05-18)
Driver version:  4.48.0

b) it's unclear how much free space is in your VG (use command: 'vgs')

VG   #PV #LV #SN Attr   VSize    VFree
VG-1   1   2   0 wz--n-  <40.02t 39.92t
VG-2   1   0   0 wz--n-   <1.82t <1.82t
pve    1   2   0 wz--n- <118.08g 14.75g

c) you seem to be using a 10G thin-pool and 10G + 10G thin LVs - not really used at the moment of your lvs

I'm not sure I get it. Did you mean "10G + 10G thin-pools and an 8G thin LV on one of the pools"? If yes, then you're right, one of the pools was unused.

d) you seem to be using a 30G vdo-pool and a 100G test LV - inactive, so it's unknown how full they are.

Strange, I didn't disable them, but sorry, I hadn't noticed that.

e) auto expansion happens on POOL volumes (not on virtual thin or VDO LVs) - if you want to expand those, use 'lvextend -r' so the filesystem also grows

Of course, it's very clear to me, thanks.

So you would need to capture 'lvs -a' at the moment of your 'full pool' state, to see whether auto resize is working or not.

Today I reinstalled the test server; let me sum up what I did:

nano /etc/lvm/lvm.conf # I add the lines I posted in my original post
systemctl restart lvm2-monitor
lvcreate --type vdo --name VDO-LV-1 --size 10G --virtualsize 20G VG-1

Output:

WARNING: vdo signature detected on /dev/VG-1/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/VG-1/vpool0.
    The VDO volume can address 6.00 GB in 3 data slabs, each 2.00 GB.
    It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "VDO-LV-1" created.
lvs -o +seg_monitor

Output:

  LV       VG   Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Monitor
  VDO-LV-1 VG-1 vwi-a-v---  20.00g vpool0
  vpool0   VG-1 dwi-------  10.00g                                                       monitored
  data     pve  twi-a-tz-- <59.81g               0.00   1.59                             monitored
  root     pve  -wi-ao---- <41.52g
vgs

Output:

  VG   #PV #LV #SN Attr   VSize    VFree
  VG-1   1   2   0 wz--n-  <40.02t <40.01t
  VG-2   1   0   0 wz--n-   <1.82t  <1.82t
  pve    1   2   0 wz--n- <118.08g  14.75g
mkfs.ext4 -E nodiscard /dev/VG-1/VDO-LV-1

Output:

mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 0b4a8dd4-6463-424d-a6ba-57cc7fc9d31b
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
mkdir /mnt/VDO-LV-1

Output:

mkdir: created directory '/mnt/VDO-LV-1'
mount /dev/VG-1/VDO-LV-1 /mnt/VDO-LV-1/
dd if=/dev/random of=/mnt/VDO-LV-1/test1.img bs=1M count=10000 oflag=dsync status=progress

Output:

6277824512 bytes (6.3 GB, 5.8 GiB) copied, 59 s, 106 MB/s
dd: error writing '/mnt/VDO-LV-1/test1.img': No space left on device
6024+0 records in
6023+0 records out
6315573248 bytes (6.3 GB, 5.9 GiB) copied, 59.6832 s, 106 MB/s
lvs -a

Output:

  LV              VG   Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  VDO-LV-1        VG-1 vwi-aov---  20.00g vpool0
  vpool0          VG-1 dwi-------  10.00g
  [vpool0_vdata]  VG-1 Dwi-ao----  10.00g
  data            pve  twi-a-tz-- <59.81g               0.00   1.59
  [data_tdata]    pve  Twi-ao---- <59.81g
  [data_tmeta]    pve  ewi-ao----   1.00g
  [lvol0_pmspare] pve  ewi-------   1.00g
  root            pve  -wi-ao---- <41.52g
df -h

Output:

Filesystem                    Size  Used Avail Use% Mounted on
udev                          126G     0  126G   0% /dev
tmpfs                          26G  2.6M   26G   1% /run
/dev/mapper/pve-root           41G  4.6G   34G  12% /
tmpfs                         126G   46M  126G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
efivarfs                      304K  184K  116K  62% /sys/firmware/efi/efivars
/dev/sdc2                    1022M   12M 1011M   2% /boot/efi
/dev/fuse                     128M   16K  128M   1% /etc/pve
tmpfs                          26G     0   26G   0% /run/user/1000
/dev/mapper/VG--1-VDO--LV--1   20G  5.9G   13G  32% /mnt/VDO-LV-1
rm /mnt/VDO-LV-1/*

Output:

rm: cannot remove '/mnt/VDO-LV-1/lost+found': Is a directory
rm: remove regular file '/mnt/VDO-LV-1/test1.img'? y
rm: cannot remove '/mnt/VDO-LV-1/test1.img': Read-only file system

I didn't repeat this for LVM thin, but the behavior is exactly the same.
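For the next run, my plan to catch the pool state at the moment it fills up (assuming data_percent is the right field to watch) is something like:

# in a second terminal, refresh pool usage every second while dd is running
watch -n 1 "lvs -a -o +data_percent,seg_monitor VG-1"

# and follow the journal for dmeventd / autoextend messages
journalctl -f | grep -iE 'dmeventd|lvm'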

@zkabelac
Contributor

Well - first upgrade to a recent upstream version - there is no point in hunting already-fixed issues in a 2-year-old version
(i.e. your version has a highly experimental version of VDO support, without usable support for auto-extension).

So please first switch to version 2.03.28 - then we will continue...
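A rough sketch of building from the upstream repo, in case it helps (the tag name and configure options here are my assumptions - check 'git tag' and the INSTALL file for the exact ones):

git clone https://github.com/lvmteam/lvm2.git
cd lvm2
git checkout v2_03_28            # assumed name of the release tag
./configure --enable-dmeventd    # dmeventd performs the monitoring/autoextension
make
sudo make install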
