Hello everyone. I have the following case. I created a volume group (named raid5) from three physical volumes and created two logical volumes in this volume group. After that, I added another physical volume to the volume group and performed a pvmove to replace the old physical volume with the new one.
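Roughly, the setup looked like this (device names and LV sizes below are placeholders, not my exact values; the commands are the standard LVM ones for this kind of layout):

# create the VG from three PVs and put two raid5 LVs on it
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate raid5 /dev/sdb1 /dev/sdc1 /dev/sdd1
lvcreate --type raid5 -i 2 -L 300G -n vol1 raid5
lvcreate --type raid5 -i 2 -L 300G -n vol2 raid5

# later: add a fourth PV and move the data off one of the original PVs
pvcreate /dev/sde1
vgextend raid5 /dev/sde1
pvmove /dev/sdd1 /dev/sde1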
Next, I rebooted the server and, after rebooting, executed vgchange -ay raid5, then waited for the resynchronization to finish.
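(To watch the resync, something like this works; sync_percent is a standard lvs field and the 5-second interval is arbitrary:)

watch -n 5 'lvs -a -o lv_name,sync_percent raid5'

After the synchronization was completed, pvdisplay showed the following: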
pvdisplay
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 95387
Allocated PE 0
PV UUID JXfmAb-sEsG-yAgO-5ebL-Ciqa-6Y2d-xR4ei5
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e764154-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 18585
Allocated PE 76802
PV UUID K0Rq2g-RwgE-NJuy-FkSw-4fFP-FVYu-H90uvt
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35001173101138874-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 18585
Allocated PE 76802
PV UUID 6B49n3-OsFw-Dt4V-1XWR-sB29-4zpi-0noQAB
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e27f588-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 18585
Allocated PE 76802
PV UUID Y5iLkE-bNTd-22Kq-6w3L-fBnF-8l31-huZmkU
At that point, raid5/vol1 had the status "lv_health_status":"refresh needed". Next, I ran lvconvert --repair -yf raid5/vol1, but it didn't help:
lvconvert --repair -yf raid5/vol1
Insufficient free space: 38401 extents needed, but only 0 available
Failed to replace faulty devices in raid5/vol1.
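(Just for reference, the per-image placement of the raid LVs can be inspected with something like the command below; lv_name, segtype, devices and lv_health_status are all standard lvs fields. This is only a sketch of how to see which PVs each raid image sits on, not output from my system.)

lvs -a -o lv_name,segtype,devices,lv_health_status raid5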
lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
{
    "report": [
        {
            "lv": [
                {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":"refresh needed"},
                {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
            ]
        }
    ]
}
Moreover, I repeated the case, but instead of executing vgreduce and pvremove, I used lvconvert. The LV started to repair, and after that I saw that /dev/disk/by-id/scsi-35000cca04e27f5dc-part1 had only 56986 free PE instead of 95387 (95387 - 38401 = 56986, i.e. 38401 extents, apparently one image of vol1, were allocated on it).
lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
{
    "report": [
        {
            "lv": [
                {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":"refresh needed"},
                {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
            ]
        }
    ]
}
lvconvert --repair -yf raid5/vol1
Faulty devices in raid5/vol1 successfully replaced.
lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
{
    "report": [
        {
            "lv": [
                {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"0.00", "lv_health_status":""},
                {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
            ]
        }
    ]
}
pvdisplay
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 56986
Allocated PE 38401
PV UUID TKzHB5-oG2R-h7Jy-DJm3-MCCb-bSXR-ScR00N
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e764154-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 18585
Allocated PE 76802
PV UUID r300jQ-kvfq-JaYJ-cfcp-n1Y7-Lu5K-wcFxXd
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35001173101138874-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 56986
Allocated PE 38401
PV UUID l7azMU-3nBo-tfEH-kxJz-Bd29-01WH-VvGC9N
--- Physical volume ---
PV Name /dev/disk/by-id/scsi-35000cca04e27f588-part1
VG Name raid5
PV Size <372.61 GiB / not usable <1.09 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95387
Free PE 18585
Allocated PE 76802
PV UUID g2rKya-AjZG-i5Uh-DOfP-2jEJ-tT0E-JEi3Wv
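If I read the PE numbers correctly, they are at least self-consistent (this is just arithmetic on the values shown above):

38401 + 56986 = 95387   (one 38401-extent image plus the remaining free PE)
38401 * 2     = 76802   (two 38401-extent images, one per LV, on the fully used PVs)
76802 + 18585 = 95387

So it looks like the repair allocated a 38401-extent image of vol1 on scsi-35000cca04e27f5dc-part1 and dropped the corresponding image from scsi-35001173101138874-part1.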
What could be causing this behavior?