Move molecule config for playbooks into playbooks/ folder #43
Conversation
This is to simplify setting the `install_python` variable; it also removes the monitoring tags that were set.
The variables are not available to this playbook.
```diff
@@ -6,7 +6,7 @@
   tasks:
     - name: Check alertmanager exporter container is running
       community.docker.docker_container_info:
-        name: "{{ monitoring_server_alertmanager.container_name }}"
+        name: alertmanager
```
The variables are not available to the verify playbook, so I've hardcoded the container names.
Looks great! Minor changes and a question about the `check_default_version` from the `mirsg.install_python` role.
`playbooks/README.md` (outdated)
```yaml
pull_request:
  paths:
    - "playbooks/my_playbook.yml"
    - ".github/workflows/molecule.yml"
```
Is this needed? It has been removed above in the `molecule-monitoring.yml` workflow.
I don't think it is, as the `molecule-monitoring.yml` workflow doesn't use the reusable workflow.
Good point!
```diff
@@ -25,6 +25,7 @@ platforms:
       - all
       - monitoring_client
       - monitoring_service
+      - centos7
```
Is there a reason not to use the `check_default_version` task in the `install_python` role instead of adding the `centos7` and `rocky9` groups (which are in both `resources/monitoring` and `resources/xnat`) to set the `install_python` var?
I think @paddyroddy was having issues with those pre-tasks when trying to use a different image for testing? And I thought it seemed easier to follow to have variables defined in the group vars rather than in tasks that set them, but I'm happy to change it.
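As a concrete illustration of the group-vars approach (the file path and the variable's value are assumptions, not taken from this PR), the `centos7` group could carry the setting like so:

```yaml
# group_vars/centos7.yml (illustrative path and value)
---
# mirsg.infrastructure.install_python
install_python: true  # assumed value; CentOS 7 images may need a newer Python installed
```

Any Molecule platform placed in the `centos7` group then picks this up automatically, with no pre-tasks needed.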
> I think @paddyroddy was having issues with those pre-tasks
Ah, I didn't realise. It's just that the group vars are duplicated for both sets of playbook tests but let's leave it as-is for now.
Ah yeah, I thought about that. We could have a single inventory for both, and combine the scenarios so we have `centos7` and `rocky9` scenarios rather than playbook-specific ones. Then in the converge playbook etc. we could tag the tasks like we were doing for the roles. But that felt like a lot of changes for an already large PR!
```diff
@@ -0,0 +1,12 @@
+---
+# mirsg.infrastructure.install_python
+install_python:
```
See comment above about `check_default_version`.
```yaml
---
- name: Check monitoring host
  hosts: monitoring_host
  become: true
  gather_facts: true
  vars:
    container_names:
      - alertmanager
      - blackbox-exporter
      - cadvisor
      - grafana
      - nginx
      - prometheus
  tasks:
    - name: Get container info
      community.docker.docker_container_info:
        name: "{{ item }}"
      loop: "{{ container_names }}"
      register: container_info

    - name: Check containers exist and are running
      ansible.builtin.assert:
        that:
          - item.exists
          - item.container.State.Status == "running"
      loop: "{{ container_info.results }}"

- name: Check monitoring client
  hosts: monitoring_client
  become: true
  gather_facts: true
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Ensure that node exporter has started
      ansible.builtin.assert:
        that:
          - "'node_exporter.service' in ansible_facts.services"
          - ansible_facts.services['node_exporter.service'].state == "running"
```
I think none of these checks were running before, as they didn't have a `monitoring` tag applied to them (and the workflow was using that tag). I've updated the checks to ensure the containers exist and are running, using a list of container names to loop over.
Thanks for catching that! It is annoying that the `verify` playbook doesn't error if something like that happens.
Yeah, it really is. It should really fail if there are no checks.
Co-authored-by: Daniel Matthews <d.matthews@ucl.ac.uk>
👍
Further simplify the testing setup:

- … `playbooks/` folder
- Rename the `test/` folder to `molecule_configs`, as it now contains only the Molecule base configurations
- Rename the `centos7_install_xnat` and `rocky9_install_xnat` scenarios to `centos7_xnat` and `rocky9_monitoring` (to be in line with the naming for monitoring scenarios)
- … `playbooks/` and `molecule_configs/` folders