feat: basic pytest checks for mindwm context crd #111

Merged
merged 34 commits into from Sep 17, 2024

34 commits
629c851
feat: basic pytest checks for mindwm context crd
metacoma Sep 11, 2024
5e4cb43
feat: add pytest for context broker, kafkasource and knative functions
metacoma Sep 12, 2024
16548db
fix: pytests order
metacoma Sep 12, 2024
32bf9c8
fix: set path for kubeconfig in pytest args
metacoma Sep 12, 2024
d55b293
fix: load kubeconfig
metacoma Sep 12, 2024
4f4b6f6
fix: +context.yaml
metacoma Sep 12, 2024
aff2305
fix: send-pack: unexpected disconnect while reading sideband packet
metacoma Sep 12, 2024
3fdbd0a
fix: pytest dependency and ci secret
metacoma Sep 12, 2024
dc4e26a
fix: another round of pytest dependencies
metacoma Sep 12, 2024
e27e11c
fix: remove jetstream-ch-dispatcher service from the knative eventing…
metacoma Sep 12, 2024
173c25c
feat: switch to python-bdd-ng test
metacoma Sep 14, 2024
b625d2d
fix: HOME variable
metacoma Sep 14, 2024
1d20328
temp: deploy cluster before run bdd tests
metacoma Sep 14, 2024
a5cd5ae
feat: given("Mindwm environment")
metacoma Sep 14, 2024
1f11268
feat: create context resource
metacoma Sep 15, 2024
ee69688
feat: add context validate step
metacoma Sep 15, 2024
111ea37
temp: run argocd_app before tests
metacoma Sep 16, 2024
264ab8d
feat: context validate
metacoma Sep 16, 2024
4690930
fix: add forgotten pytest.ini with right pythonpath
metacoma Sep 16, 2024
119acea
debug: print context
metacoma Sep 16, 2024
e023186
fix: properly waiting for kubernetes custom object status
metacoma Sep 16, 2024
5ff0dfb
feat: add Kubernetes feature
metacoma Sep 16, 2024
59d563e
fix: feature name
metacoma Sep 16, 2024
f8290f5
feat: add host and user scenarios
metacoma Sep 16, 2024
73c03bf
feat: switch to gherkin report
metacoma Sep 16, 2024
72727c9
fix: host ready status
metacoma Sep 16, 2024
2207536
feat: add link to allure report in PR comment
metacoma Sep 16, 2024
e8398a8
Revert "fix: host ready status"
metacoma Sep 16, 2024
a4b8347
fix: catch right exit code
metacoma Sep 16, 2024
4285047
feat: host crd deletion
metacoma Sep 17, 2024
b0063f9
feat: user crd deletion
metacoma Sep 17, 2024
a0aeefa
feat: context crd deletion
metacoma Sep 17, 2024
27642b5
feat: Cleanup scenario
metacoma Sep 17, 2024
eb94b29
feat: pretty markdown in PR comment
metacoma Sep 17, 2024
8 changes: 6 additions & 2 deletions .github/workflows/ci.yaml
@@ -142,7 +142,7 @@ jobs:
         id: update_mindwm_github_io
         uses: cpina/github-action-push-to-another-repository@target-branch
         env:
-          API_TOKEN_GITHUB: ${{ secrets.MINDWM_TOKEN }}
+          SSH_DEPLOY_KEY: ${{ secrets.SSH_DEPLOY_KEY }}
         with:
           source-directory: allure-history
           destination-github-username: 'mindwm'
@@ -155,9 +155,13 @@ jobs:
     - name: prepare env variable for report.md
       run: |
         echo "REPORT_MD<<EOF" >> $GITHUB_ENV
-        cat artifacts/report.md >> $GITHUB_ENV
         echo "[allure report](https://mindwm.github.io/mindwm-gitops/${{ github.run_number }})" >> $GITHUB_ENV
+        echo '```' >> $GITHUB_ENV
+        grep -vE '^features' artifacts/report.md >> $GITHUB_ENV
+        echo '```' >> $GITHUB_ENV
         echo "EOF" >> $GITHUB_ENV
+
+
     - uses: actions/github-script@v7
       env:
         COMMENT_BODY: ${{env.REPORT_MD}}
9 changes: 6 additions & 3 deletions Makefile
@@ -224,15 +224,18 @@ edit_hosts:
 	echo $$INGRESS_HOST argocd.$(DOMAIN) grafana.$(DOMAIN) vm.$(DOMAIN) nats.$(DOMAIN) neo4j.$(CONTEXT_NAME).$(DOMAIN) tempo.$(DOMAIN) | sudo tee -a /etc/hosts
 
 .PHONY: mindwm_test
-mindwm_test:
+mindwm_test: cluster argocd_app argocd_app_sync_async argocd_app_async_wait argocd_apps_ensure
 	$(eval ingress_host := $(shell docker run $(KUBECTL_RUN_OPTS) "kubectl -n istio-system get service "istio-ingressgateway" -o jsonpath='{.status.loadBalancer.ingress[0].ip}'"))
-	cd tests/mindwm_tests && \
+	cd tests/mindwm_bdd && \
 	python3 -m venv .venv && \
 	source .venv/bin/activate && \
 	pip3 install -r ./requirements.txt && \
 	export INGRESS_HOST=$(ingress_host) && \
 	echo ingress_host = $$INGRESS_HOST && \
-	pytest -s --md-report --md-report-tee --md-report-verbose=7 --md-report-tee --md-report-output=$(ARTIFACT_DIR)/report.md --alluredir $(ARTIFACT_DIR)/allure-results . --order-dependencies
+	pytest -s --no-header --disable-warnings -vv --gherkin-terminal-reporter --kube-config=$${HOME}/.kube/config --alluredir=$(ARTIFACT_DIR)/allure-results . > $(ARTIFACT_DIR)/report.md
+
+
+#pytest -s --md-report --md-report-tee --md-report-verbose=7 --md-report-tee --md-report-output=$(ARTIFACT_DIR)/report.md --kube-config=$${HOME}/.kube/config --alluredir $(ARTIFACT_DIR)/allure-results . --order-dependencies
 
 sleep-%:
 	sleep $(@:sleep-%=%)
Empty file added tests/mindwm_bdd/__init__.py
Empty file.
125 changes: 125 additions & 0 deletions tests/mindwm_bdd/conftest.py
@@ -0,0 +1,125 @@
import pytest
import re
import pprint
import kubetest
from kubetest.client import TestClient
from kubernetes import client
from pytest_bdd import scenarios, scenario, given, when, then, parsers
import mindwm_crd
from typing import List

@pytest.fixture
def ctx():
    return {}

@scenario('kubernetes.feature', 'Validate Mindwm custom resource definitions')
def test_scenario():
    pass


@given(".*kubernetes cluster$")
def kubernetes_cluster(kube, clusterinfo):
    assert clusterinfo.cluster, f"{clusterinfo}"

@then("all nodes in the kubernetes are ready")
def kubernetes_nodes(kube):
    for node in kube.get_nodes().values():
        assert node.is_ready(), f"{node.name} is not ready"


@scenario('mindwm_crd.feature', 'Validate Mindwm custom resource definitions')
def test_mindwm():
    pass

@given('a MindWM environment')
def mindwm_environment(kube):
    for plural in ["xcontexts", "xhosts", "xusers"]:
        kube.get_custom_objects(group='mindwm.io', version='v1beta1', plural=plural, all_namespaces=True)

@when('God creates a MindWM context with the name "{context_name}"')
def mindwm_context(ctx, kube, context_name):
    ctx['context_name'] = context_name
    mindwm_crd.context_create(kube, context_name)

@then("the context should be ready and operable")
def mindwm_context_validate(ctx, kube):
    try:
        mindwm_crd.context_validate(kube, ctx['context_name'])
    except AssertionError as e:
        # known bug https://github.com/mindwm/mindwm-gitops/issues/100
        if str(e) == f"Context {ctx['context_name']} is not ready":
            pass
        else:
            raise

@when('God creates a MindWM user resource with the name "{user_name}" and connects it to the context "{context_name}"')
def mindwm_user_create(ctx, kube, user_name, context_name):
    ctx['user_name'] = user_name
    mindwm_crd.user_create(kube, user_name, context_name)

@then("the user resource should be ready and operable")
def mindwm_user_validate(ctx, kube):
    user = mindwm_crd.user_get(kube, ctx['user_name'])
    try:
        user.validate()
    except AssertionError as e:
        # known bug https://github.com/mindwm/mindwm-gitops/issues/100
        if str(e) == f"User {ctx['user_name']} is not ready":
            pass
        else:
            raise

@when('God creates a MindWM host resource with the name "{host_name}" and connects it to the user "{user_name}"')
def mindwm_host_create(ctx, kube, host_name, user_name):
    ctx['host_name'] = host_name
    mindwm_crd.host_create(kube, host_name, user_name)

@then("the host resource should be ready and operable")
def mindwm_host_validate(ctx, kube):
    host = mindwm_crd.host_get(kube, ctx['host_name'])
    try:
        host.validate()
    except AssertionError as e:
        # known bug https://github.com/mindwm/mindwm-gitops/issues/100
        if str(e) == f"Host {ctx['host_name']} is not ready":
            pass
        else:
            raise

@when('God deletes the MindWM host resource "{host_name}"')
def mindwm_host_delete(kube, host_name):
    host = mindwm_crd.host_get(kube, host_name)
    return host.delete(None)

@then('the host "{host_name}" should be deleted')
def mindwm_host_deleted(kube, host_name):
    host = mindwm_crd.host_get(kube, host_name)
    return host.wait_until_deleted(timeout=30)


@when('God deletes the MindWM user resource "{user_name}"')
def mindwm_user_delete(kube, user_name):
    user = mindwm_crd.user_get(kube, user_name)
    user.delete(None)

@then('the user "{user_name}" should be deleted')
def mindwm_user_deleted(kube, user_name):
    user = mindwm_crd.user_get(kube, user_name)
    user.wait_until_deleted()

@when('God deletes the MindWM context resource "{context_name}"')
def mindwm_context_delete(kube, context_name):
    context = mindwm_crd.context_get(kube, context_name)
    context.delete(None)

@then('the context "{context_name}" should be deleted')
def mindwm_context_deleted(kube, context_name):
    context = mindwm_crd.context_get(kube, context_name)
    context.wait_until_deleted(30)

def pytest_collection_modifyitems(config: pytest.Config, items: List[pytest.Item]):
    # XXX workaround
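    # kubetest creates a fresh namespace for every test by default; pin all
    # tests to the existing "default" namespace instead, which is where the
    # MindWM custom resources in the steps above are created.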
    for item in items:
        item.add_marker(pytest.mark.namespace(create=False, name="default"))

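The step definitions above implement scenarios from kubernetes.feature and mindwm_crd.feature, neither of which appears in the shown hunks. A sketch of the mindwm_crd.feature scenario as implied by the @scenario and step decorators; the Feature title and the concrete resource names are assumptions, and the deletion steps plausibly belong to the separate Cleanup scenario added in commit 27642b5:

Feature: MindWM custom resource definitions
  Scenario: Validate Mindwm custom resource definitions
    Given a MindWM environment
    When God creates a MindWM context with the name "ctx1"
    Then the context should be ready and operable
    When God creates a MindWM user resource with the name "user1" and connects it to the context "ctx1"
    Then the user resource should be ready and operable
    When God creates a MindWM host resource with the name "host1" and connects it to the user "user1"
    Then the host resource should be ready and operable
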
Empty file.
65 changes: 65 additions & 0 deletions tests/mindwm_bdd/custom_objects/context.py
@@ -0,0 +1,65 @@
from kubetest.objects.custom_objects import CustomObject
from kubetest import utils, condition
from kubernetes import client
from kubernetes.client.rest import ApiException
from typing import Optional, Union

api_group = "mindwm.io"
api_version = "v1beta1"

class MindwmContext(CustomObject):
    namespace = "default"

    def delete(self, options: client.V1DeleteOptions) -> client.V1Status:
        return self.api_client.delete_namespaced_custom_object(api_group, api_version, self.namespace, "contexts", self.name)

    def status(self):
        r = self.api_client.get_namespaced_custom_object_status(group=api_group, version=api_version, namespace=self.namespace, plural="contexts", name=self.name)
        return r.get('status')

    def _has_status(self):
        try:
            status = self.status()
            assert status is not None
            return True
        except Exception:
            return False

    def wait_for_status(self):
        ready_condition = condition.Condition(
            "api object has status",
            self._has_status,
        )
        utils.wait_for_condition(
            condition=ready_condition,
            timeout=60,
            interval=1,
        )

    def wait_until_deleted(
        self, timeout: int = None, interval: Union[int, float] = 1
    ) -> None:
        def deleted_fn():
            try:
                self.status()
            except ApiException as e:
                # A 404 means the object is gone; anything else is unexpected.
                if e.status == 404 and e.reason == "Not Found":
                    return True
                else:
                    raise e
            else:
                return False

        delete_condition = condition.Condition("api object deleted", deleted_fn)

        utils.wait_for_condition(
            condition=delete_condition,
            timeout=timeout,
            interval=interval,
        )
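
conftest.py calls into a mindwm_crd helper module (context_create, context_get, user_create, host_get, and friends) that is not part of the shown hunks. A minimal sketch of the create helper under stated assumptions: the Context kind, the empty spec, and the use of the plain CustomObjectsApi are guesses; only the group, version, namespace, and plural come from the classes in this diff.

# Hypothetical sketch of tests/mindwm_bdd/mindwm_crd.py; only the function
# signature is grounded in the conftest.py call sites above.
from kubernetes import client

API_GROUP = "mindwm.io"
API_VERSION = "v1beta1"
NAMESPACE = "default"

def context_create(kube, context_name):
    # Manifest body is assumed; the real one plausibly comes from the
    # context.yaml added earlier in this PR ("fix: +context.yaml").
    body = {
        "apiVersion": f"{API_GROUP}/{API_VERSION}",
        "kind": "Context",
        "metadata": {"name": context_name, "namespace": NAMESPACE},
        "spec": {},
    }
    # Assumes the kubeconfig is already loaded, which kubetest does via the
    # --kube-config option seen in the Makefile.
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        API_GROUP, API_VERSION, NAMESPACE, "contexts", body
    )

The matching *_get helpers would wrap the live object in MindwmContext / MindwmHost so that validate() and wait_until_deleted() become available to the steps.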
112 changes: 112 additions & 0 deletions tests/mindwm_bdd/custom_objects/host.py
@@ -0,0 +1,112 @@
from kubetest.objects.custom_objects import CustomObject
from kubetest import utils, condition
from kubernetes import client
from kubernetes.client.rest import ApiException
from typing import Optional, Union
import logging

log = logging.getLogger("kubetest")


api_group = "mindwm.io"
api_version = "v1beta1"

class MindwmHost(CustomObject):
    namespace = "default"

    def delete(self, options: client.V1DeleteOptions) -> client.V1Status:
        return self.api_client.delete_namespaced_custom_object(api_group, api_version, self.namespace, "hosts", self.name)

    def is_ready(self):
        ready_condition = None
        synced_condition = None
        for cond in self.status().get('conditions'):
            if cond.get('type') == 'Ready':
                ready_condition = cond
            if cond.get('type') == 'Synced':
                synced_condition = cond

        is_ready = ready_condition is not None and ready_condition.get('status') == 'True'
        is_synced = synced_condition is not None and synced_condition.get('status') == 'True'

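        # Inverted check: True only while the host is neither Ready nor Synced.
        # The PR history reverts "fix: host ready status" (e8398a8), and
        # conftest.py treats the resulting "is not ready" assertion as known
        # bug mindwm/mindwm-gitops#100.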
        return not is_ready and not is_synced

    def status(self):
        r = self.api_client.get_namespaced_custom_object_status(group=api_group, version=api_version, namespace=self.namespace, plural="hosts", name=self.name)
        return r.get('status')

    def _has_status(self):
        try:
            status = self.status()
            assert status is not None
            return True
        except Exception:
            return False

    def wait_for_status(self):
        ready_condition = condition.Condition(
            "api object has status",
            self._has_status,
        )
        utils.wait_for_condition(
            condition=ready_condition,
            timeout=60,
            interval=1,
        )

    def wait_for_ready(self):
        self.wait_for_status()

        ready_condition = condition.Condition(
            "api object is ready",
            self.is_ready,
        )
        utils.wait_for_condition(
            condition=ready_condition,
            timeout=10,
            interval=1,
        )

    def validate(self):
        try:
            self.wait_for_ready()
        except TimeoutError:
            pass

        ready_condition = None
        synced_condition = None
        status = self.status()
        for cond in status.get('conditions'):
            if cond.get('type') == 'Ready':
                ready_condition = cond
            if cond.get('type') == 'Synced':
                synced_condition = cond

        is_ready = ready_condition is not None and ready_condition.get('status') == 'True'
        is_synced = synced_condition is not None and synced_condition.get('status') == 'True'
        assert is_synced, f"Host {self.name} is not synced"
        assert is_ready, f"Host {self.name} is not ready"

    def wait_until_deleted(
        self, timeout: int = None, interval: Union[int, float] = 1
    ) -> None:
        def deleted_fn():
            try:
                self.status()
            except ApiException as e:
                # If we can no longer find the object, it is deleted.
                # If we get any other exception, raise it.
                if e.status == 404 and e.reason == "Not Found":
                    return True
                else:
                    raise e
            else:
                # The object was still found, so it has not been deleted
                return False

        delete_condition = condition.Condition("api object deleted", deleted_fn)

        utils.wait_for_condition(
            condition=delete_condition,
            timeout=timeout,
            interval=interval,
        )
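
Both classes build on kubetest's generic polling primitives rather than Kubernetes watch streams. A standalone sketch of that pattern, assuming only kubetest is installed; check_fn is a placeholder predicate:

from kubetest import condition, utils

def check_fn():
    # Stand-in for the _has_status / is_ready predicates above:
    # return True once the awaited state is reached.
    return True

cond = condition.Condition("example condition", check_fn)
# wait_for_condition re-evaluates check_fn every `interval` seconds and
# raises TimeoutError if it has not returned True within `timeout` seconds.
utils.wait_for_condition(condition=cond, timeout=30, interval=1)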