feat: Added support for creating shared LVM setups #388

Merged · 2 commits · Dec 12, 2023 (changes shown from 1 commit)
11 changes: 11 additions & 0 deletions README.md
@@ -53,6 +53,17 @@ keys:
This specifies the type of pool to manage.
Valid values for `type`: `lvm`.

- `shared`

If set to `true`, the role creates or manages a shared volume group. Requires the
lvmlockd and dlm services to be configured and running.

Default: `false`

__WARNING__: Modifying the `shared` value on an existing pool is a
destructive operation. The pool itself will be removed as part of the
process.

- `disks`

A list which specifies the set of disks to use as backing storage for the pool.
2 changes: 2 additions & 0 deletions defaults/main.yml
@@ -27,6 +27,8 @@ storage_pool_defaults:
  raid_chunk_size: null
  raid_metadata_version: null

  shared: false

storage_volume_defaults:
  state: "present"
  type: lvm
3 changes: 2 additions & 1 deletion library/blivet.py
@@ -1527,7 +1527,7 @@ def _create(self):
        if not self._device:
            members = self._manage_encryption(self._create_members())
            try:
                pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members)
                pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members, shared=self._pool['shared'])
Contributor
Need some sort of logic here to avoid using the shared parameter if not supported

    if self._blivet.new_vg supports 'shared' parameter:
        pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members, shared=self._pool['shared'])
    else:
        pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members)

There's probably some way to use introspection to see if the new_vg method supports shared

or some other way to dynamically construct the new_vg arguments, e.g. new_vg_args = {}, then pass them like new_vg(**new_vg_args).

This is what is causing some of the test failures
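
For illustration, one possible shape of that kwargs idea (a hypothetical sketch, not code from this PR; it assumes new_vg exposes 'shared' as a named parameter that inspect.signature can see; if new_vg only forwards **kwargs, a blivet version check or a try/except TypeError fallback would be needed instead):

    import inspect

    # Build the new_vg() arguments dynamically; add 'shared' only when the
    # installed blivet advertises support for it.
    new_vg_kwargs = {'name': self._pool['name'], 'parents': members}
    if 'shared' in inspect.signature(self._blivet.new_vg).parameters:
        new_vg_kwargs['shared'] = self._pool['shared']
    pool_device = self._blivet.new_vg(**new_vg_kwargs)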

Collaborator Author
I have modified the condition so the `shared` parameter is not used when its value is the default (false). This should fix the tests.
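
Presumably the guarded call then looks roughly like this (a sketch inferred from the description above, not necessarily the exact committed code):

    # Pass 'shared' only when it is explicitly enabled, so older blivet
    # versions that do not accept the parameter keep working.
    if self._pool['shared']:
        pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members,
                                          shared=self._pool['shared'])
    else:
        pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members)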

            except Exception as e:
                raise BlivetAnsibleError("failed to set up pool '%s': %s" % (self._pool['name'], str(e)))

@@ -1823,6 +1823,7 @@ def run_module():
raid_spare_count=dict(type='int'),
raid_metadata_version=dict(type='str'),
raid_chunk_size=dict(type='str'),
shared=dict(type='bool'),
state=dict(type='str', default='present', choices=['present', 'absent']),
type=dict(type='str'),
volumes=dict(type='list', elements='dict', default=list(),
3 changes: 3 additions & 0 deletions tests/collection-requirements.yml
@@ -0,0 +1,3 @@
---
collections:
- fedora.linux_system_roles
14 changes: 14 additions & 0 deletions tests/test-verify-pool.yml
@@ -15,6 +15,20 @@
# compression
# deduplication

- name: Get VG shared value status
  command: vgs --noheadings --binary -o shared {{ storage_test_pool.name }}
  register: vgs_dump
  when: storage_test_pool.type == 'lvm' and storage_test_pool.state == 'present'
  changed_when: false

- name: Verify that VG shared value checks out
  assert:
    that: (storage_test_pool.shared | bool) == ('1' in vgs_dump.stdout)
    msg: >-
      Shared VG presence ({{ storage_test_pool.shared }})
      does not match its expected state ({{ '1' in vgs_dump.stdout }})
  when: storage_test_pool.type == 'lvm' and storage_test_pool.state == 'present'

- name: Verify pool subset
  include_tasks: "test-verify-pool-{{ storage_test_pool_subset }}.yml"
  loop: "{{ _storage_pool_tests }}"
152 changes: 152 additions & 0 deletions tests/tests_lvm_pool_shared.yml
@@ -0,0 +1,152 @@
---
Collaborator
I guess this entire test case should be skipped with older versions of blivet that don't support shared LVM. The support is not yet released on CentOS Stream/RHEL, but it should be available in Blivet 3.6.0-11 (EL9) and 3.6.0-8 (EL8), and in Fedora in Blivet 3.8.2-1 or newer.

Collaborator Author
I have added a switch that skips the test based on the platform and blivet version.

- name: Test LVM shared pools
  hosts: all
  become: true
  vars:
    storage_safe_mode: false
    storage_use_partitions: true
    mount_location1: '/opt/test1'
    volume1_size: '4g'

  tasks:
    - name: Change node from 'localhost' to '127.0.0.1'
      set_fact:
        inventory_hostname: "127.0.0.1"  # noqa: var-naming
      when: inventory_hostname == "localhost"

    - name: Run the role to install blivet
      include_role:
        name: linux-system-roles.storage

    - name: Gather package facts
Contributor
Suggested change:

    - name: Gather package facts

becomes

    - name: Run the role to install blivet
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools: []
        storage_volumes: []

    - name: Gather package facts
Contributor @richm (Nov 15, 2023)
Otherwise, the task `Set blivet package name` fails, as blivet is not installed.
Are you able to run this test locally on your laptop?

Collaborator Author

I moved the initial storage role run (which installs blivet) before the check.
While running the test I also noticed that the HA cluster role's test_setup.yml contains this task:

  - name: Set node name to 'localhost' for single-node clusters
    set_fact:
      inventory_hostname: localhost  # noqa: var-naming
    when: ansible_play_hosts_all | length == 1

I am not sure what its purpose is in the role, but it messed up the test when I tried to run it on a single remote node. I replaced it with a task that changes the inventory name from 'localhost' to '127.0.0.1', and that seems to do the trick.

      package_facts:

    - name: Set blivet package name
      set_fact:
        blivet_pkg_name: "{{ ansible_facts.packages |
          select('search', 'blivet') | select('search', 'python') | list }}"

    - name: Set blivet package version
      set_fact:
        blivet_pkg_version: "{{
          ansible_facts.packages[blivet_pkg_name[0]][0]['version'] +
          '-' + ansible_facts.packages[blivet_pkg_name[0]][0]['release'] }}"

    - name: Set distribution version
      set_fact:
        is_rhel9: "{{ (ansible_facts.distribution == 'CentOS' or
                       ansible_facts.distribution == 'Enterprise Linux' or
                       ansible_facts.distribution == 'RedHat') and
                      ansible_facts.distribution_major_version == '9' }}"
        is_rhel8: "{{ (ansible_facts.distribution == 'CentOS' or
                       ansible_facts.distribution == 'Enterprise Linux' or
                       ansible_facts.distribution == 'RedHat') and
                      ansible_facts.distribution_major_version == '8' }}"
        is_fedora: "{{ ansible_facts.distribution == 'Fedora' }}"

    - name: Skip test if the blivet version does not support shared VGs
      meta: end_host
      when: ((is_fedora and blivet_pkg_version is version("3.8.2-1", "<")) or
             (is_rhel8 and blivet_pkg_version is version("3.6.0-8", "<")) or
             (is_rhel9 and blivet_pkg_version is version("3.6.0-11", "<")))

    - name: Create cluster
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: rhel9-1node
        # Users should vault-encrypt the password
        ha_cluster_hacluster_password: hapasswd
        ha_cluster_extra_packages:
          - dlm
          - lvm2-lockd
        ha_cluster_cluster_properties:
          - attrs:
              # Don't do this in production
              - name: stonith-enabled
                value: 'false'
        ha_cluster_resource_primitives:
          - id: dlm
            agent: 'ocf:pacemaker:controld'
            instance_attrs:
              - attrs:
                  # Don't do this in production
                  - name: allow_stonith_disabled
                    value: 'true'
          - id: lvmlockd
            agent: 'ocf:heartbeat:lvmlockd'
        ha_cluster_resource_groups:
          - id: locking
            resource_ids:
              - dlm
              - lvmlockd

    - name: Get unused disks
      include_tasks: get_unused_disk.yml
      vars:
        max_return: 1

    - name: >-
        Create a disk device; specify disks as non-list mounted on
        {{ mount_location }}
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Repeat the previous step to verify idempotence
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: >-
        Remove the device created above
        {{ mount_location }}
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: absent
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Remove cluster
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: rhel9-1node
        ha_cluster_cluster_present: false