The hard disks attached to a VM or a VM template are, in fact, VM disk images. In the case of the Debian VM created in the G020 guide, its hard disk is an image that was created as an LVM thin volume within a thinpool. This way, the disk image is not just a file, but a virtual storage device that contains the VM's entire filesystem. So, how do you locate and, when necessary, handle such an image? The following subsections give you a glimpse of how to do it.
The Proxmox VE web console only gives you a very limited range of actions to perform on a VM's hard disks, such as creation or size enlargement. Your system also has the `qemu-img` command to manipulate these images, but it's rather limited too. A much more powerful command toolkit for handling VM disk images is the one provided by the `libguestfs-tools` package.
- You don't have it installed on your Proxmox VE host, so open a shell on it and install the package with `apt`.

```
$ sudo apt install -y libguestfs-tools
```

This package's installation executes a considerable number of actions and installs several dependencies, so you'll see a lot of output lines during the process.
- Since this installation has done quite a bunch of things in your Proxmox VE host, it's better if you reboot it right after installing the `libguestfs-tools` package.

```
$ sudo reboot
```
The `libguestfs-tools` package comes with a big set of commands that allow you to handle VM disk images in complex ways. Check them out in the documentation available at the libguestfs official page.
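For instance, to get a first idea of what the toolkit offers, you can list the `virt-*` commands the package has installed on your host and check that one of them responds. This is just a quick sketch; the exact list of commands will depend on the package version.

```
# List the virt-* tools provided by libguestfs-tools.
$ ls /usr/bin/virt-*

# Confirm the toolkit works by asking one of the tools for its version.
$ virt-df --version
```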
So, where in your system is the VM disk image of your VM template?
- In the Proxmox VE web console, go to your VM template's `Hardware` view and read the `Hard Disk` line.

Remember the `ssd_disks:base-100-disk-0` string: it's the name of the hard disk volume within your Proxmox VE node.
- Next, open a shell terminal (as your administrator user) on your Proxmox VE host and execute the following `lvs` command.

```
$ sudo lvs -o lv_full_name,pool_lv,lv_attr,lv_size,lv_path,lv_dm_path
  LV                       Pool      Attr       LSize   Path                          DMPath
  hddint/hdd_data                    twi-a-tz-- 870.00g                               /dev/mapper/hddint-hdd_data
  hddint/hdd_templates               -wi-ao----  60.00g /dev/hddint/hdd_templates     /dev/mapper/hddint-hdd_templates
  hddusb/hddusb_bkpdata              twi-a-tz--  <1.31t                               /dev/mapper/hddusb-hddusb_bkpdata
  hddusb/hddusb_bkpvzdumps           -wi-ao---- 520.00g /dev/hddusb/hddusb_bkpvzdumps /dev/mapper/hddusb-hddusb_bkpvzdumps
  pve/root                           -wi-ao---- <37.50g /dev/pve/root                 /dev/mapper/pve-root
  pve/swap                           -wi-ao----  12.00g /dev/pve/swap                 /dev/mapper/pve-swap
  ssdint/base-100-disk-0   ssd_disks Vri---tz-k  10.00g /dev/ssdint/base-100-disk-0   /dev/mapper/ssdint-base--100--disk--0
  ssdint/ssd_disks                   twi-aotz-- 880.00g                               /dev/mapper/ssdint-ssd_disks
```
In the `lvs` output above, you can see your VM template's hard disk volume named as `ssdint/base-100-disk-0`. This means that it's a volume within the `ssdint` LVM volume group you created back in the G005 guide. Not only that, in the `Pool` column you see the name `ssd_disks`, which refers to the LVM thinpool you created in the G019 guide. The next `Attr` column gives you some information about the volume itself (`Vri---tz-k`):

- `V` indicates that the volume is virtual.
- `r` means that this volume is read-only.
- `i` refers to the storage allocation policy used by this volume, which in this case is inherited.
- `t` means that this volume uses the thin provisioning driver as its kernel target.
- `z` indicates that newly-allocated data blocks are overwritten with blocks of zeroes before use.
- `k` is a flag that makes the system skip this volume during activation.
NOTE
To know all the possible values in the `Attr` column, check the Notes section of the `lvs` manual (command `man lvs`).

Next to the `Attr` column you can see the size assigned to the volume, 10 GiB in this case, although this figure is only logical. If you return to the Proxmox VE web console and go to the `ssd_disks` thinpool's `Summary` view, you'll see in `Usage` that much less than 10 GiB is actually in use.

In this case, just 1.98 GiB of the thinpool is really in use and, at this point, only the `base-100-disk-0` volume is present there.
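You can also read that real usage from the shell. A minimal sketch, assuming the `ssdint/ssd_disks` thinpool shown in the previous `lvs` output, is to ask `lvs` for the data and metadata usage fields.

```
# Show how full the thinpool really is; Data% should roughly match
# the Usage figure displayed in the web console.
$ sudo lvs -o lv_name,lv_size,data_percent,metadata_percent ssdint/ssd_disks
```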
volume is present there.And what about the columns
Path
and theDMPath
of thelvs
output? They're the paths to the handler files used by the system to manage the light volumes. You can see them with thels
command, except the ones used for thebase-100-disk-0
volume. Since this volume is not active (remember thek
flag in theAttr
column), you won't find the corresponding files present in the system.In conclusion, with the storage structure you have setup in your system, mostly based on LVM thinpools, all your hard disk volumes will be virtual volumes within LVM thinpools. In the case of your VM template's sole hard disk, the concrete LVM location is as follows:
- Volume Group: `ssdint`.
- Thinpool: `ssd_disks`.
- Thin Volume: `base-100-disk-0`.
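As you create more VMs, their disks will appear in this same thinpool. A quick way to list only the volumes held by a given thinpool (a sketch based on the `lvs` select feature, using the `ssd_disks` pool from this setup) is:

```
# List only the logical volumes whose pool is ssd_disks.
$ sudo lvs -S 'pool_lv=ssd_disks' -o lv_full_name,lv_size,data_percent
```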
Remember that, in this scenario, the `ssdint` VG corresponds to one entire LVM physical volume, `/dev/sda4`, which, moreover, shares the real underlying SSD unit with the `/dev/sda3` PV (the one containing the `pve` volume group for the Proxmox VE system volumes).
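If you want to double-check this PV-to-VG mapping on your host, one quick way (assuming the partition layout described in this guide series) is to list the physical volumes with `pvs`.

```
# Show each physical volume, the volume group it belongs to and its size.
$ sudo pvs -o pv_name,vg_name,pv_size
```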
You've seen the LVM side of the story, but you can get more information about the VM disk image by using some libguestfs commands. In fact, you can even get inside the filesystem within the disk image. To do so, first you have to activate the disk image's logical volume since, when you turned the VM into a template, its hard disk was left deactivated and read-only in LVM to avoid further modifications.
- Reactivate the volume with the following `lvchange` command.

```
$ sudo lvchange -ay -K ssdint/base-100-disk-0
```

Since the previous command doesn't print any output on success, use the `lvs` command to verify the volume's status.

```
$ sudo lvs ssdint/base-100-disk-0
  LV              VG     Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-100-disk-0 ssdint Vri-a-tz-k 10.00g ssd_disks        18.56
```
Notice the `a` among the values in the `Attr` column: it means the volume is now active. Another command to check which logical volumes are active is `lvscan`.

```
$ sudo lvscan
  ACTIVE            '/dev/hddusb/hddusb_bkpvzdumps' [520.00 GiB] inherit
  ACTIVE            '/dev/hddusb/hddusb_bkpdata' [<1.31 TiB] inherit
  ACTIVE            '/dev/hddint/hdd_templates' [60.00 GiB] inherit
  ACTIVE            '/dev/hddint/hdd_data' [870.00 GiB] inherit
  ACTIVE            '/dev/ssdint/ssd_disks' [880.00 GiB] inherit
  ACTIVE            '/dev/ssdint/base-100-disk-0' [10.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [12.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [<37.50 GiB] inherit
```

This command shows you the state of all the present logical volumes, active or inactive. Also notice that it shows the full handler path to each volume.
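Now that the volume is active, its handler files do exist. A quick check, using the paths from the earlier `lvs` output, could be:

```
# The /dev/ssdint entry is a symlink to the device-mapper node...
$ ls -l /dev/ssdint/base-100-disk-0

# ...and the corresponding /dev/mapper handler file exists too.
$ ls -l /dev/mapper/ssdint-base--100--disk--0
```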
- With the `base-100-disk-0` volume now active, you can check out the status of the filesystems it contains with the libguestfs command `virt-df`.

```
$ sudo virt-df -h -a /dev/ssdint/base-100-disk-0
Filesystem                              Size       Used  Available  Use%
base-100-disk-0:/dev/sda1               469M        47M       398M   11%
base-100-disk-0:/dev/debiantpl-vg/root  8.3G       1.1G       6.8G   14%
```
This `virt-df` command is very similar to `df`, and gives you a view of how the storage space is used in the filesystems contained in the volume. Remember that this is the filesystem of your Debian VM template, which also has a swap volume, although the `virt-df` command doesn't show it.
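If you'd rather get this report in a machine-readable form, for instance to feed a monitoring script, `virt-df` also accepts a `--csv` flag; a small sketch with the same disk image:

```
# Same report as above, but as comma-separated values.
$ sudo virt-df --csv -a /dev/ssdint/base-100-disk-0
```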
- Another command you might like to try to get a much more complete picture of the filesystem within the `base-100-disk-0` volume is `virt-filesystems`.

```
$ sudo virt-filesystems -a /dev/ssdint/base-100-disk-0 --all --long --uuid -h
Name                      Type        VFS   Label  MBR  Size  Parent             UUID
/dev/sda1                 filesystem  ext2  -      -    469M  -                  ec1735f0-edf0-41c5-b54f-9012092a7a2c
/dev/debiantpl-vg/root    filesystem  ext4  -      -    8.3G  -                  41032d28-f6d8-416b-936b-bb0fd803e832
/dev/debiantpl-vg/swap_1  filesystem  swap  -      -    976M  -                  71cb513a-ef5f-480c-ad54-dea7734d9a97
/dev/debiantpl-vg/root    lv          -     -      -    8.5G  /dev/debiantpl-vg  0NOJ1z-HMPD-nHl6-OdQr-rgpR-nxtf-FI0i44
/dev/debiantpl-vg/swap_1  lv          -     -      -    976M  /dev/debiantpl-vg  Kci3BQ-ITxT-SNuC-ITux-BI1x-Wtr2-t9c2RG
/dev/debiantpl-vg         vg          -     -      -    9.5G  /dev/sda5          8cH6WfvK16w50hvYRVx1PyuB2gQrxLgW
/dev/sda5                 pv          -     -      -    9.5G  -                  Cx7CAvy5XaYDMWtdVERRC4f7aTfG2jDQ
/dev/sda1                 partition   -     -      83   487M  /dev/sda           -
/dev/sda2                 partition   -     -      05   1.0K  /dev/sda           -
/dev/sda5                 partition   -     -      8e   9.5G  /dev/sda           -
/dev/sda                  device      -     -      -    10G   -                  -
```
See how this command not only returns info about the two LVM logical volumes (`root` and `swap_1`), but also shows the `sda` partitions and the LVM physical volume `sda5`.
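Since libguestfs can read the guest's filesystem directly, you can even peek at files inside the image without ever booting the VM. A minimal sketch (the guest paths used here are just examples):

```
# List the contents of /etc inside the template's disk image.
$ sudo virt-ls -a /dev/ssdint/base-100-disk-0 /etc

# Print a single file from the guest, e.g. its hostname.
$ sudo virt-cat -a /dev/ssdint/base-100-disk-0 /etc/hostname
```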
- After you've finished checking the `base-100-disk-0` volume, it's better if you deactivate it. For this, use `lvchange` again.

```
$ sudo lvchange -an ssdint/base-100-disk-0
```
Verify its inactive status directly with the `lvscan` command.

```
$ sudo lvscan
  ACTIVE            '/dev/hddusb/hddusb_bkpvzdumps' [520.00 GiB] inherit
  ACTIVE            '/dev/hddusb/hddusb_bkpdata' [<1.31 TiB] inherit
  ACTIVE            '/dev/hddint/hdd_templates' [60.00 GiB] inherit
  ACTIVE            '/dev/hddint/hdd_data' [870.00 GiB] inherit
  ACTIVE            '/dev/ssdint/ssd_disks' [880.00 GiB] inherit
  inactive          '/dev/ssdint/base-100-disk-0' [10.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [12.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [<37.50 GiB] inherit
```
Remember that, when the volume is inactive, its handler files don't exist in the system. Also, know that there is a corresponding `/dev/mapper` handler file for each volume. For your `base-100-disk-0` volume, the full path would be `/dev/mapper/ssdint-base--100--disk--0`.
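One way to double-check this from the shell (assuming the volume names used throughout this guide) is to look for the handler file and ask the device mapper directly:

```
# While the volume is inactive, this path won't exist...
$ ls /dev/mapper/ssdint-base--100--disk--0

# ...and dmsetup won't list a ssdint-base--100--disk--0 device either.
$ sudo dmsetup ls | grep 'base--100--disk--0'
```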
- `/dev`
- `/dev/mapper`
- `/dev/ssdint`
- `/dev/mapper/ssdint-base--100--disk--0`
- `/dev/ssdint/base-100-disk-0`
- libguestfs official page
- virt-resize --shrink now works
- shrink virtual disk size of VM
- Proxmox VE wiki. Resize disks
- How to shrink KVM/qemu partition and image size
- LVM Resize – How to Decrease an LVM Partition
- How to Manage and Use LVM (Logical Volume Management) in Ubuntu
- How to Extend/Increase LVM’s (Logical Volume Resize) in Linux
- Red Hat Enterprise Linux 8. Configuring and managing logical volumes. Chapter 14. Logical volume activation
- Red Hat Enterprise Linux 8. Configuring and managing logical volumes. Chapter 4. Configuring LVM logical volumes
- How to Extend/Reduce LVM’s (Logical Volume Management) in Linux – Part II
- Linux Man Pages - lvchange (8)
<< Previous (G905. Appendix 05) | +Table Of Contents+ | Next (G907. Appendix 07) >>