
Move molecule config for playbooks into playbooks/ folder #43

Merged: 16 commits merged into main from maint/playbook-testing on Jan 12, 2024

Conversation

@p-j-smith (Contributor) commented on Jan 12, 2024:

Further simplify the testing setup

  • move playbook testing into the playbooks/ folder (see the illustrative layout sketch after this list)
  • rename the test/ folder to molecule_configs/, as it now contains only the Molecule base configurations
  • rename the centos7_install_xnat and rocky9_install_xnat scenarios to centos7_xnat and rocky9_xnat (in line with the naming of the monitoring scenarios)
  • update README to reflect changes
  • add READMEs for the playbooks/ and molecule_configs/ folders
  • update workflows to reflect these changes. Note: the playbook scenarios do not use the base configs, as there are many differences between the playbook configs and the base configs for the roles, so it seemed easier to follow if everything is in one file
  • remove the use of tags from the monitoring molecule setup - they're not needed anymore
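
For orientation, an illustrative sketch of the resulting layout (the folder names come from this PR; the exact files inside them are assumptions):

.
├── molecule_configs/        # Molecule base configurations (previously test/)
│   └── README.md
├── playbooks/
│   ├── README.md
│   └── molecule/            # playbook test scenarios, e.g. centos7_xnat, rocky9_xnat
└── .github/
    └── workflows/           # updated to reference the new paths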

@@ -6,7 +6,7 @@
   tasks:
     - name: Check alertmanager exporter container is running
       community.docker.docker_container_info:
-        name: "{{ monitoring_server_alertmanager.container_name }}"
+        name: alertmanager
@p-j-smith (Contributor, Author) commented:

the variables are not available to the verify playbook so I've hardcoded the container names
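
For context, Molecule can also inject variables into the verify playbook via the provisioner inventory, so an alternative to hardcoding would be something like the sketch below (the values shown are illustrative, not what this PR does):

# molecule.yml (illustrative excerpt)
provisioner:
  name: ansible
  inventory:
    group_vars:
      all:
        monitoring_server_alertmanager:
          container_name: alertmanager

The verify playbook could then keep using "{{ monitoring_server_alertmanager.container_name }}".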

p-j-smith requested a review from drmatthews on January 12, 2024 at 12:33
@drmatthews (Contributor) left a review:

Looks great! Minor changes and a question about the check_default_version from the mirsg.install_python role.

Resolved review threads (now outdated): README.md (×2), playbooks/README.md (×4)
pull_request:
  paths:
    - "playbooks/my_playbook.yml"
    - ".github/workflows/molecule.yml"
@drmatthews (Contributor) commented:

Is this needed? It has been removed above in the molecule-monitoring.yml workflow.

@p-j-smith (Contributor, Author) replied:

I don't think it is, as the molecule-monitoring.yml workflow doesn't use the reusable workflow

@drmatthews (Contributor) replied:

Good point!
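
For reference, the distinction in this thread: a paths filter only matters in workflows that define their own pull_request trigger. A minimal sketch of a caller workflow that uses the reusable molecule.yml workflow (the file name, job name, and filter values here are assumptions):

# .github/workflows/molecule-playbooks.yml (illustrative)
on:
  pull_request:
    paths:
      - "playbooks/**"
      - ".github/workflows/molecule.yml"

jobs:
  molecule:
    # call the reusable workflow, which declares `on: workflow_call`
    uses: ./.github/workflows/molecule.yml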

Resolved review thread (now outdated): playbooks/README.md
@@ -25,6 +25,7 @@ platforms:
       - all
       - monitoring_client
       - monitoring_service
+      - centos7
@drmatthews (Contributor) commented:

Is there a reason not to use the check_default_version task in the install_python role instead of adding the centos7 and rocky9 groups (which are in both resources/monitoring and resources/xnat) to set the install_python var?

@p-j-smith (Contributor, Author) replied:

I think @paddyroddy was having issues with those pre-tasks when trying to use a different image for testing? And I thought it seemed easier to follow to have variables defined in the group vars rather than tasks that set them, but happy to change it

@drmatthews (Contributor) commented on Jan 12, 2024:

> I think @paddyroddy was having issues with those pre-tasks

Ah, I didn't realise. It's just that the group vars are duplicated for both sets of playbook tests but let's leave it as-is for now.

@p-j-smith (Contributor, Author) replied:

ah yeah, I thought about that. We could have a single inventory for both, and combine the scenarios so we have centos7 and rocky9 scenarios rather than playbook-specific ones. Then in the converge playbook we could tag the tasks like we were doing for the roles. But that felt like a lot of changes for an already large PR!
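
A minimal sketch of that tagged-converge idea, assuming illustrative playbook paths and tag names:

# converge.yml (illustrative)
---
- name: Converge the monitoring playbook
  ansible.builtin.import_playbook: ../../monitoring.yml
  tags: monitoring

- name: Converge the xnat playbook
  ansible.builtin.import_playbook: ../../install_xnat.yml
  tags: xnat

A single scenario could then be run selectively, e.g. `molecule converge -- --tags monitoring`.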

@@ -0,0 +1,12 @@
+---
+# mirsg.infrastructure.install_python
+install_python:
@drmatthews (Contributor) commented:

See comment above about check_default_version.

Comment on lines 1 to 40
---
- name: Check monitoring host
  hosts: monitoring_host
  become: true
  gather_facts: true
  vars:
    container_names:
      - alertmanager
      - blackbox-exporter
      - cadvisor
      - grafana
      - nginx
      - prometheus
  tasks:
    - name: Get container info
      community.docker.docker_container_info:
        name: "{{ item }}"
      loop: "{{ container_names }}"
      register: container_info

    - name: Check containers exist and are running
      ansible.builtin.assert:
        that:
          - item.exists
          - item.container.State.Status == "running"
      loop: "{{ container_info.results }}"

- name: Check monitoring client
  hosts: monitoring_client
  become: true
  gather_facts: true
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Ensure that node exporter has started
      ansible.builtin.assert:
        that:
          - "{{ 'node_exporter.service' in ansible_facts.services }}"
          - ansible_facts.services['node_exporter.service'].state == "running"
@p-j-smith (Contributor, Author) commented:

I think none of these checks were running before, as they didn't have a monitoring tag applied to them (and the workflow was using that tag). I've updated the checks to ensure the containers exist and are running, using a list of container names to loop over.

@drmatthews (Contributor) replied:

Thanks for catching that! It is annoying that the verify playbook doesn't error if something like that happens.

@p-j-smith (Contributor, Author) replied:

yeah it really is - it should really fail if there are no checks

Resolved review thread (now outdated): playbooks/README.md
Co-authored-by: Daniel Matthews <d.matthews@ucl.ac.uk>
drmatthews self-requested a review on January 12, 2024 at 13:57
@drmatthews (Contributor) left a review:

👍

p-j-smith self-assigned this on Jan 12, 2024
p-j-smith merged commit 68457e2 into main on Jan 12, 2024 (15 checks passed)
p-j-smith deleted the maint/playbook-testing branch on January 12, 2024 at 14:43