
feat: ping scenario #115

Merged (12 commits into master, Sep 20, 2024)
Conversation

metacoma (Contributor)

No description provided.

@metacoma metacoma self-assigned this Sep 19, 2024
@mindwm mindwm deleted a comment from github-actions bot Sep 20, 2024
@mindwm mindwm deleted a comment from github-actions bot Sep 20, 2024
@mindwm mindwm deleted a comment from github-actions bot Sep 20, 2024

allure report
============================= test session starts ==============================
collecting ... collected 17 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exists (PASSED)
And namespace "knative-eventing" should exists (PASSED)
And namespace "knative-operator" should exists (PASSED)
And following deployments is in ready state in "knative-serving" namespace (PASSED)
And following deployments is in ready state in "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exists (PASSED)
And following deployments is in ready state in "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exists (PASSED)
And following deployments is in ready state in "redpanda" namespace (FAILED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] _

self = <pytest_bdd.runner.ScenarioRunner object at 0x7f67c797bc20>
item = <Function test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda]>

def pytest_runtest_call(self, item: Item):
    if "pytest_bdd_scenario" in list(map(attrgetter("name"), item.iter_markers())):
        self.request = item._request
        self.feature = self.request.getfixturevalue("feature")
        self.scenario = self.request.getfixturevalue("scenario")
        self.plugin_manager = self.request.config.hook
        self.plugin_manager.pytest_bdd_before_scenario(
            request=self.request, feature=self.feature, scenario=self.scenario
        )
        try:
          self.plugin_manager.pytest_bdd_run_scenario(
                request=self.request,
                feature=self.feature,
                scenario=self.scenario,
            )

.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:37:


.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:59: in pytest_bdd_run_scenario
return step_dispatcher(steps)
.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:69: in dispatcher
self.plugin_manager.pytest_bdd_run_step(
.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:128: in pytest_bdd_run_step
step_result = step_caller()
conftest.py:435: in deployment_ready
deployment = utils.deployment_wait_for(kube, deployment_name, namespace)
utils.py:286: in deployment_wait_for
kubetest_utils.wait_for_condition(


condition = <Condition (name: deployment neo4j-cdc-console exists, met: False)>
timeout = 180, interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: deployment neo4j-cdc-console exists, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_1_mindwm_eda.feature::test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: deployment neo4j-cdc-console exists, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============= 1 failed, 3 passed, 7 warnings in 910.28s (0:15:10) ==============
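The timeout above is raised by kubetest's wait_for_condition, quoted in the traceback: the "deployment neo4j-cdc-console exists" condition was re-checked every 5 s and never became true within the 180 s limit. A minimal, self-contained sketch of that polling pattern is shown below; the Condition class here is an illustrative stand-in, not the real kubetest API.

```python
import time


class Condition:
    """Illustrative stand-in for kubetest's Condition: a named check callable."""

    def __init__(self, name, fn, *args):
        self.name = name
        self.fn = fn
        self.args = args

    def check(self):
        return bool(self.fn(*self.args))

    def __repr__(self):
        return f"<Condition (name: {self.name})>"


def wait_for_condition(condition, timeout=None, interval=1):
    """Poll `condition` every `interval` seconds until it is met.

    Raises TimeoutError once `timeout` seconds elapse -- the behaviour seen
    in the Redpanda failure above, where the deployment-exists condition
    was never met within 180 s.
    """
    max_time = time.time() + timeout if timeout is not None else None
    while True:
        if max_time and time.time() >= max_time:
            raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )
        if condition.check():
            return
        time.sleep(interval)
```

In the real run the condition wraps a Kubernetes API lookup, so a missing deployment (here, neo4j-cdc-console never being created in the redpanda namespace) simply keeps the loop spinning until the timeout fires.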

@metacoma metacoma merged commit 4eb142b into master Sep 20, 2024
1 check passed

allure report
============================= test session starts ==============================
collecting ... collected 17 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exists (PASSED)
And namespace "knative-eventing" should exists (PASSED)
And namespace "knative-operator" should exists (PASSED)
And following deployments is in ready state in "knative-serving" namespace (PASSED)
And following deployments is in ready state in "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exists (PASSED)
And following deployments is in ready state in "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exists (PASSED)
And following deployments is in ready state in "redpanda" namespace (PASSED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Cert manager
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "cert-manager" should exists (PASSED)
And following deployments is in ready state in "cert-manager" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Nats
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "nats" should exists (PASSED)
And following deployments is in ready state in "nats" namespace (PASSED)
And statefulset "nats" in namespace "nats" is in ready state (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Monitoring
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "monitoring" should exists (PASSED)
And following deployments is in ready state in "monitoring" namespace (PASSED)
And statefulset "loki" in namespace "monitoring" is in ready state (PASSED)
And statefulset "tempo" in namespace "monitoring" is in ready state (PASSED)
And statefulset "vmalertmanager-vm-aio-victoria-metrics-k8s-stack" in namespace "monitoring" is in ready state (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create Context
Given a MindWM environment (PASSED)
When God creates a MindWM context with the name "xxx3" (PASSED)
Then the context should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create User
Given a MindWM environment (PASSED)
When God creates a MindWM user resource with the name "alice" and connects it to the context "xxx3" (PASSED)
Then the user resource should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create Host
Given a MindWM environment (PASSED)
When God creates a MindWM host resource with the name "laptop" and connects it to the user "alice" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Delete Resources and Verify Cleanup
Given a MindWM environment (PASSED)
When God deletes the MindWM host resource "laptop" (PASSED)
Then the host "laptop" should be deleted (PASSED)
When God deletes the MindWM user resource "alice" (PASSED)
Then the user "alice" should be deleted (PASSED)
When God deletes the MindWM context resource "xxx3" (PASSED)
Then the context "xxx3" should be deleted (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/1_mindwm_crd.feature-MindWM Custom Resource Definition-Delete Resources and Verify Cleanup] _

self = <pytest_bdd.runner.ScenarioRunner object at 0x7178adca98e0>
item = <Function test_scenarios[file:features/1_mindwm_crd.feature-MindWM Custom Resource Definition-Delete Resources and Verify Cleanup]>

def pytest_runtest_call(self, item: Item):
    if "pytest_bdd_scenario" in list(map(attrgetter("name"), item.iter_markers())):
        self.request = item._request
        self.feature = self.request.getfixturevalue("feature")
        self.scenario = self.request.getfixturevalue("scenario")
        self.plugin_manager = self.request.config.hook
        self.plugin_manager.pytest_bdd_before_scenario(
            request=self.request, feature=self.feature, scenario=self.scenario
        )
        try:
          self.plugin_manager.pytest_bdd_run_scenario(
                request=self.request,
                feature=self.feature,
                scenario=self.scenario,
            )

.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:37:


.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:59: in pytest_bdd_run_scenario
return step_dispatcher(steps)
.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:69: in dispatcher
self.plugin_manager.pytest_bdd_run_step(
.venv/lib/python3.12/site-packages/pytest_bdd/runner.py:128: in pytest_bdd_run_step
step_result = step_caller()
conftest.py:181: in mindwm_context_deleted
context= mindwm_crd.context_get(kube, context_name)
mindwm_crd.py:52: in context_get
resource = api_instance.get_namespaced_custom_object(
.venv/lib/python3.12/site-packages/kubernetes/client/api/custom_objects_api.py:1627: in get_namespaced_custom_object
return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs) # noqa: E501
.venv/lib/python3.12/site-packages/kubernetes/client/api/custom_objects_api.py:1734: in get_namespaced_custom_object_with_http_info
return self.api_client.call_api(
.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py:348: in call_api
return self.__call_api(resource_path, method,
.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py:180: in __call_api
response_data = self.request(
.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py:373: in request
return self.rest_client.GET(url,
.venv/lib/python3.12/site-packages/kubernetes/client/rest.py:244: in GET
return self.request("GET", url,


self = <kubernetes.client.rest.RESTClientObject object at 0x7178acc88bc0>
method = 'GET'
url = 'https://127.0.0.1:6443/apis/mindwm.io/v1beta1/namespaces/default/contexts/xxx3'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/30.1.0/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

def request(self, method, url, query_params=None, headers=None,
            body=None, post_params=None, _preload_content=True,
            _request_timeout=None):
    """Perform requests.

    :param method: http request method
    :param url: http request url
    :param query_params: query parameters in the url
    :param headers: http request headers
    :param body: request json body, for `application/json`
    :param post_params: request post parameters,
                        `application/x-www-form-urlencoded`
                        and `multipart/form-data`
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
    :param _request_timeout: timeout setting for this request. If one
                             number provided, it will be total request
                             timeout. It can also be a pair (tuple) of
                             (connection, read) timeouts.
    """
    method = method.upper()
    assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                      'PATCH', 'OPTIONS']

    if post_params and body:
        raise ApiValueError(
            "body parameter cannot be used with post_params parameter."
        )

    post_params = post_params or {}
    headers = headers or {}

    timeout = None
    if _request_timeout:
        if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
            timeout = urllib3.Timeout(total=_request_timeout)
        elif (isinstance(_request_timeout, tuple) and
              len(_request_timeout) == 2):
            timeout = urllib3.Timeout(
                connect=_request_timeout[0], read=_request_timeout[1])

    if 'Content-Type' not in headers:
        headers['Content-Type'] = 'application/json'

    try:
        # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
        if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
            if query_params:
                url += '?' + urlencode(query_params)
            if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                    headers['Content-Type'] == 'application/apply-patch+yaml'):
                if headers['Content-Type'] == 'application/json-patch+json':
                    if not isinstance(body, list):
                        headers['Content-Type'] = \
                            'application/strategic-merge-patch+json'
                request_body = None
                if body is not None:
                    request_body = json.dumps(body)
                r = self.pool_manager.request(
                    method, url,
                    body=request_body,
                    preload_content=_preload_content,
                    timeout=timeout,
                    headers=headers)
            elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                r = self.pool_manager.request(
                    method, url,
                    fields=post_params,
                    encode_multipart=False,
                    preload_content=_preload_content,
                    timeout=timeout,
                    headers=headers)
            elif headers['Content-Type'] == 'multipart/form-data':
                # must del headers['Content-Type'], or the correct
                # Content-Type which generated by urllib3 will be
                # overwritten.
                del headers['Content-Type']
                r = self.pool_manager.request(
                    method, url,
                    fields=post_params,
                    encode_multipart=True,
                    preload_content=_preload_content,
                    timeout=timeout,
                    headers=headers)
            # Pass a `string` parameter directly in the body to support
            # other content types than Json when `body` argument is
            # provided in serialized form
            elif isinstance(body, str) or isinstance(body, bytes):
                request_body = body
                r = self.pool_manager.request(
                    method, url,
                    body=request_body,
                    preload_content=_preload_content,
                    timeout=timeout,
                    headers=headers)
            else:
                # Cannot generate the request from given parameters
                msg = """Cannot prepare a request message for provided
                         arguments. Please check that your arguments match
                         declared content type."""
                raise ApiException(status=0, reason=msg)
        # For `GET`, `HEAD`
        else:
            r = self.pool_manager.request(method, url,
                                          fields=query_params,
                                          preload_content=_preload_content,
                                          timeout=timeout,
                                          headers=headers)
    except urllib3.exceptions.SSLError as e:
        msg = "{0}\n{1}".format(type(e).__name__, str(e))
        raise ApiException(status=0, reason=msg)

    if _preload_content:
        r = RESTResponse(r)

        # In the python 3, the response.data is bytes.
        # we need to decode it to string.
        if six.PY3:
            r.data = r.data.decode('utf8')

        # log response body
        logger.debug("response body: %s", r.data)

    if not 200 <= r.status <= 299:
      raise ApiException(http_resp=r)

E kubernetes.client.exceptions.ApiException: (404)
E Reason: Not Found
E HTTP response headers: HTTPHeaderDict({'Audit-Id': 'd8b94769-db2e-4061-a43e-90ea87af1d53', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '28fdc954-7bda-41c0-825a-a61e49a66a61', 'X-Kubernetes-Pf-Prioritylevel-Uid': '36281f7c-23d0-4baf-9034-352d586431b1', 'Date': 'Fri, 20 Sep 2024 19:06:28 GMT', 'Content-Length': '214'})
E HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"contexts.mindwm.io "xxx3" not found","reason":"NotFound","details":{"name":"xxx3","group":"mindwm.io","kind":"contexts"},"code":404}

.venv/lib/python3.12/site-packages/kubernetes/client/rest.py:238: ApiException
=========================== short test summary info ============================
FAILED features/1_mindwm_crd.feature::test_scenarios[file:features/1_mindwm_crd.feature-MindWM Custom Resource Definition-Delete Resources and Verify Cleanup] - kubernetes.client.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'd8b94769-db2e-4061-a43e-90ea87af1d53', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '28fdc954-7bda-41c0-825a-a61e49a66a61', 'X-Kubernetes-Pf-Prioritylevel-Uid': '36281f7c-23d0-4baf-9034-352d586431b1', 'Date': 'Fri, 20 Sep 2024 19:06:28 GMT', 'Content-Length': '214'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"contexts.mindwm.io "xxx3" not found","reason":"NotFound","details":{"name":"xxx3","group":"mindwm.io","kind":"contexts"},"code":404}
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============ 1 failed, 10 passed, 14 warnings in 1004.09s (0:16:44) ============
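The cleanup scenario fails because the "should be deleted" step calls context_get, which invokes get_namespaced_custom_object directly; once "xxx3" is actually gone, the API's 404 surfaces as an unhandled ApiException. For a deletion assertion, a 404 is the success case. A minimal, library-agnostic sketch of that idea follows; the helper name is hypothetical, and in the suite the getter would be the get_namespaced_custom_object call shown in the traceback.

```python
def resource_is_deleted(get_resource):
    """Return True once get_resource() raises a 404, False while it still
    succeeds (the resource exists). Any error carrying a non-404 `status`
    attribute is re-raised, since those are genuine failures.

    kubernetes.client.exceptions.ApiException exposes the HTTP code on
    `.status`, which is what the getattr check relies on here.
    """
    try:
        get_resource()
    except Exception as e:
        if getattr(e, "status", None) == 404:
            return True  # already gone: deletion succeeded
        raise  # other API errors are real failures
    return False  # still present
```

Wrapping a check like this in the wait loop from the first failure would let the step poll until the context disappears instead of crashing on the 404.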

@metacoma metacoma mentioned this pull request Oct 3, 2024
40 tasks