Determine too_large_size dynamically #81
Conversation
To test proper error handling the test playbook needs to attempt to create a pool that is too large for the underlying device. What is "too large" depends on the device though. We should not assume that the tests will be run against a VM with disks created by the provisioning script with the expected size, tests can be executed in any environment, like on a machine with a number of big physical disks. tests_lvm_errors has been using a fixed size as the "too large" size, making it fail in an environment with disks larger than that. Make it determine the size dynamically instead. The "too large" size is now just one sector more than the size of the disk that is going to be used for the test. Reported by @yizhanglinux.
[citest bad]
@xjezda00 rhel-x tasks are failing surprisingly quickly. Can you please check what's wrong?
-    too_large_size: '500 GiB'
+    unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
+    too_large_size: '{{ (unused_disk_subfact.sectors|int + 1) *
+                        unused_disk_subfact.sectorsize|int }}'
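As a rough illustration of what this expression computes (all numbers here are made up for the example, not taken from the PR), a hypothetical 10 GiB disk with 512-byte sectors yields a requested size exactly one sector past the end of the device:

```yaml
# Hypothetical numbers for illustration only: a 10 GiB disk reported by
# Ansible facts as 20971520 sectors of 512 bytes each.
#   disk size      = 20971520 * 512       = 10737418240 bytes (10 GiB)
#   too_large_size = (20971520 + 1) * 512 = 10737418752 bytes
# i.e. 512 bytes more than the disk can hold, so pool creation must fail.
- name: Show the computed "too large" size (sketch)
  debug:
    msg: "{{ (unused_disk_subfact.sectors|int + 1) * unused_disk_subfact.sectorsize|int }}"
  vars:
    unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
```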
Didn't know you could use a previously defined value in a `vars` list in a subsequent variable declaration - /me makes a note of that for future reference.
lgtm
I don't think it is even correct to think of these as a previous definition and a subsequent definition. `vars:` is a dict, and dicts are not ordered, so there is no notion of "previous" and "subsequent" for variables defined in the same `vars:` block. So this would have worked too:
too_large_size: '{{ (unused_disk_subfact.sectors|int + 1) *
unused_disk_subfact.sectorsize|int }}'
unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
This is a surprising [1], sometimes useful, sometimes not [2], feature of Jinja2 template expansions in variables in Ansible. Ansible expands the templates in the variable value when the variable is used, not when it is defined, i.e. variables are evaluated lazily [3]. So you can do this:
vars:
foo: "{{ bar }}"
tasks:
- set_fact:
bar: quux
- name: use foo
somemodule:
someparameter: "{{ foo }}"
You see, `foo` is defined before the tasks even start, but it can refer to a value which gets defined only during the execution of the tasks! So even in cases where there is a notion of "previous" and "subsequent", you can use a subsequently defined value in a previous variable declaration and things continue to work.
Note that in this PR I used this feature: when defining `unused_disk_subfact`, I used the variable `unused_disks`. But this variable is not available yet; it is set by `get_unused_disk.yml`. Still, it works, because `unused_disk_subfact` is actually expanded only after the call to `get_unused_disk.yml`, when `unused_disks` is already defined.
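A minimal sketch of that ordering (the task file name is from this PR; the surrounding play is simplified):

```yaml
# Sketch: unused_disk_subfact references unused_disks, which does not
# exist yet when the play starts. That is fine: the template is only
# expanded when the variable is actually used, i.e. after
# get_unused_disk.yml has set unused_disks.
vars:
  unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
tasks:
  - include_tasks: get_unused_disk.yml   # sets unused_disks
  - debug:
      var: unused_disk_subfact           # expanded here, so it works
```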
Another place where this is used is here:
https://github.com/linux-system-roles/timesync/blob/9050dd95314406236bc8869eac2307a6fa932edc/vars/main.yml#L1
Role variables are loaded when the role starts, and `ansible_facts.packages` is not defined yet (it is not among the facts collected by default; we collect it during the role execution). But this is not a problem when defining `__timesync_chrony_version`, because the template does not get expanded at this moment. It gets expanded only when `__timesync_chrony_version` is used, and if that is after we collect the fact, everything is fine.
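The shape of that pattern looks roughly like this (a sketch only, not the literal contents of the linked `vars/main.yml`; the package name and expression are assumptions for illustration):

```yaml
# Sketch of the pattern: the template references ansible_facts.packages,
# which is only collected later by the package_facts module during role
# execution. That is fine, because this template is expanded only when
# __timesync_chrony_version is actually used, after the fact exists.
__timesync_chrony_version: "{{ ansible_facts.packages['chrony'][0].version | default('') }}"
```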
(Beware though: `set_fact` expands variables, because it is like a module call. So in `set_fact` you cannot refer to variables whose values contain templates referring to yet-undefined variables. In this way `set_fact` serves as a checkpoint that expands everything given to it that is not expanded yet. This can be useful; too bad that facts are global and cannot be unset.)
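To make the checkpoint behaviour concrete, here is a sketch reusing the `foo`/`bar` example from above (variable names are illustrative):

```yaml
# Sketch: foo's value contains a template referring to bar.
vars:
  foo: "{{ bar }}"
tasks:
  # This set_fact expands foo, and therefore bar, immediately.
  # If bar is still undefined at this point, the task fails with an
  # "undefined variable" error. If it ran after bar was set, frozen_foo
  # would hold bar's value at that moment, fixed from then on.
  - set_fact:
      frozen_foo: "{{ foo }}"
```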
[1] Have you seen it documented anywhere?
[2] In a role, you cannot pass a variable containing a reference to something like `role_name` to another role and use it there to refer to the calling role, because the variable only gets expanded in the called role, where it resolves to something else: `role_name` represents the currently executing role.
[3] Another well-known language with this style of variable expansion is make: https://www.gnu.org/software/make/manual/html_node/Flavors.html
> [1] Have you seen it documented anywhere?

No.

> [2]

Also good to know.
I put this in oasis-roles/meta_standards#19
[citest bad]
@dwlehman ok to merge? @yizhanglinux I suppose it is ok from your perspective?
I'm not sure if the keys in `ansible_devices` will always match our device paths/names, but that should not matter until/unless someone is running these tests on some pretty unusual hardware (e.g. firmware RAID, multipath). I don't think it's a problem.
Looks okay to me.
LGTM, feel free to add