Redpanda operator without flux #129

Merged
merged 15 commits on Dec 13, 2024

Conversation

metacoma
Contributor

No description provided.

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (FAILED)
When God executes "make crossplane_rolebinding_workaround" (FAILED)
Then the following roles should exist: (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] _

kube = <kubetest.client.TestClient object at 0x72971b227470>
step = PickleStep(argument=PickleStepArgument(doc_string=None, data_table=PickleTable(rows=[PickleTableRow(cells=[PickleTable...), ast_node_ids=['28'], id='46', type=<Type.outcome: 'Outcome'>, text='all argocd applications are in a healthy state')

@then("all argocd applications are in a healthy state")
def argocd_applications_check(kube, step):
    title_row, *rows = step.data_table.rows

    for row in rows:
        application_name = row.cells[0].value
      argocd_application_in_progress(kube, application_name, "argocd")

conftest.py:284:


conftest.py:268: in argocd_application_in_progress
utils.argocd_application_wait_status(kube, application_name, namespace)
utils.py:87: in argocd_application_wait_status
kubetest_utils.wait_for_condition(


condition = <Condition (name: api object deleted, met: False)>, timeout = 180
interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_0_mindwm_lifecycle.feature::test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================== 1 failed, 49 warnings in 370.51s (0:06:10) ==================

invalid: metadata.annotations: Too long: must have at most 262144 bytes
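The run above fails with a plain timeout: the "all argocd applications are in a healthy state" step keeps polling until kubetest's wait_for_condition gives up after 180 s. For orientation only, a minimal sketch of the kind of check that step implies, assuming the standard argoproj.io/v1alpha1 Application resource and the official kubernetes Python client; the helper name wait_for_application_healthy is hypothetical and not part of the test suite:

import time
from kubernetes import client, config

def wait_for_application_healthy(name: str, namespace: str = "argocd",
                                 timeout: int = 180, interval: int = 5) -> None:
    # Poll the Argo CD Application until it reports Healthy/Synced or time runs out.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout
    while time.time() < deadline:
        app = api.get_namespaced_custom_object(
            group="argoproj.io", version="v1alpha1",
            namespace=namespace, plural="applications", name=name)
        status = app.get("status", {})
        health = status.get("health", {}).get("status")
        sync = status.get("sync", {}).get("status")
        if health == "Healthy" and sync == "Synced":
            return
        time.sleep(interval)
    raise TimeoutError(f"application {name} not Healthy/Synced after {timeout}s")

Under those assumptions it would be invoked as wait_for_application_healthy("mindwm-gitops") with the same 180 s budget seen in the log.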
allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exist (PASSED)
And namespace "knative-eventing" should exist (PASSED)
And namespace "knative-operator" should exist (PASSED)
And the following deployments are in a ready state in the "knative-serving" namespace (PASSED)
And the following deployments are in a ready state in the "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exist (PASSED)
And the following deployments are in a ready state in the "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exist (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
And helm release "neo4j-cdc" is deployed in "redpanda" namespace (FAILED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (FAILED)
And the following deployments are in a ready state in the "redpanda" namespace (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] _

kube = <kubetest.client.TestClient object at 0x7376206dfad0>
helm_release = 'neo4j-cdc', namespace = 'redpanda'

@then("helm release \"{helm_release}\" is deployed in \"{namespace}\" namespace" )
def helm_release_deploeyd(kube, helm_release, namespace):
    #info = utils.helm_release_info(kube, helm_release, namespace)
  info = utils.helm_release_is_ready(kube, helm_release, namespace)

conftest.py:255:


utils.py:52: in helm_release_is_ready
kubetest_utils.wait_for_condition(


condition = <Condition (name: helm release has status and info, met: False)>
timeout = 600, interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_1_mindwm_eda.feature::test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] - TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============ 1 failed, 3 passed, 13 warnings in 1165.86s (0:19:25) =============
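Here the 600 s timeout comes from utils.helm_release_is_ready waiting for the "neo4j-cdc" release in the "redpanda" namespace. As a rough sketch only, not the project's implementation, one way to express such a check is to look for the Helm v3 release Secret, which Helm labels with the release name and status; helm_release_deployed below is a hypothetical helper:

from kubernetes import client, config

def helm_release_deployed(release: str, namespace: str) -> bool:
    # Helm v3 stores each release revision in a Secret of type
    # "helm.sh/release.v1", labelled with owner=helm, name=<release> and status.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    secrets = v1.list_namespaced_secret(
        namespace,
        label_selector=f"owner=helm,name={release},status=deployed",
    )
    return len(secrets.items) > 0

e.g. helm_release_deployed("neo4j-cdc", "redpanda") under those assumptions.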

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exist (PASSED)
And namespace "knative-eventing" should exist (PASSED)
And namespace "knative-operator" should exist (PASSED)
And the following deployments are in a ready state in the "knative-serving" namespace (PASSED)
And the following deployments are in a ready state in the "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exist (PASSED)
And the following deployments are in a ready state in the "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exist (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
And helm release "neo4j-cdc" is deployed in "redpanda" namespace (FAILED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (FAILED)
And the following deployments are in a ready state in the "redpanda" namespace (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] _

kube = <kubetest.client.TestClient object at 0x7127916f07a0>
helm_release = 'neo4j-cdc', namespace = 'redpanda'

@then("helm release \"{helm_release}\" is deployed in \"{namespace}\" namespace" )
def helm_release_deploeyd(kube, helm_release, namespace):
    #info = utils.helm_release_info(kube, helm_release, namespace)
  info = utils.helm_release_is_ready(kube, helm_release, namespace)

conftest.py:255:


utils.py:52: in helm_release_is_ready
kubetest_utils.wait_for_condition(


condition = <Condition (name: helm release has status and info, met: False)>
timeout = 600, interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_1_mindwm_eda.feature::test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] - TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============ 1 failed, 3 passed, 13 warnings in 1253.98s (0:20:53) =============

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (FAILED)
When God executes "make crossplane_rolebinding_workaround" (FAILED)
Then the following roles should exist: (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] _

kube = <kubetest.client.TestClient object at 0x75200109e870>
step = PickleStep(argument=PickleStepArgument(doc_string=None, data_table=PickleTable(rows=[PickleTableRow(cells=[PickleTable...), ast_node_ids=['28'], id='46', type=<Type.outcome: 'Outcome'>, text='all argocd applications are in a healthy state')

@then("all argocd applications are in a healthy state")
def argocd_applications_check(kube, step):
    title_row, *rows = step.data_table.rows

    for row in rows:
        application_name = row.cells[0].value
      argocd_application_in_progress(kube, application_name, "argocd")

conftest.py:284:


conftest.py:268: in argocd_application_in_progress
utils.argocd_application_wait_status(kube, application_name, namespace)
utils.py:87: in argocd_application_wait_status
kubetest_utils.wait_for_condition(


condition = <Condition (name: api object deleted, met: False)>, timeout = 180
interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_0_mindwm_lifecycle.feature::test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================== 1 failed, 49 warnings in 372.40s (0:06:12) ==================

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (FAILED)
When God executes "make crossplane_rolebinding_workaround" (FAILED)
Then the following roles should exist: (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] _

kube = <kubetest.client.TestClient object at 0x767aae39a270>
step = PickleStep(argument=PickleStepArgument(doc_string=None, data_table=PickleTable(rows=[PickleTableRow(cells=[PickleTable...), ast_node_ids=['28'], id='46', type=<Type.outcome: 'Outcome'>, text='all argocd applications are in a healthy state')

@then("all argocd applications are in a healthy state")
def argocd_applications_check(kube, step):
    title_row, *rows = step.data_table.rows

    for row in rows:
        application_name = row.cells[0].value
      argocd_application_in_progress(kube, application_name, "argocd")

conftest.py:284:


conftest.py:268: in argocd_application_in_progress
utils.argocd_application_wait_status(kube, application_name, namespace)
utils.py:87: in argocd_application_wait_status
kubetest_utils.wait_for_condition(


condition = <Condition (name: api object deleted, met: False)>, timeout = 180
interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_0_mindwm_lifecycle.feature::test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================== 1 failed, 49 warnings in 402.03s (0:06:42) ==================

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exist (PASSED)
And namespace "knative-eventing" should exist (PASSED)
And namespace "knative-operator" should exist (PASSED)
And the following deployments are in a ready state in the "knative-serving" namespace (PASSED)
And the following deployments are in a ready state in the "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exist (PASSED)
And the following deployments are in a ready state in the "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exist (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
And helm release "neo4j-cdc" is deployed in "redpanda" namespace (FAILED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (FAILED)
And the following deployments are in a ready state in the "redpanda" namespace (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] _

kube = <kubetest.client.TestClient object at 0x796b0332d4f0>
helm_release = 'neo4j-cdc', namespace = 'redpanda'

@then("helm release \"{helm_release}\" is deployed in \"{namespace}\" namespace" )
def helm_release_deploeyd(kube, helm_release, namespace):
    #info = utils.helm_release_info(kube, helm_release, namespace)
  info = utils.helm_release_is_ready(kube, helm_release, namespace)

conftest.py:255:


utils.py:52: in helm_release_is_ready
kubetest_utils.wait_for_condition(


condition = <Condition (name: helm release has status and info, met: False)>
timeout = 600, interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_1_mindwm_eda.feature::test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] - TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============ 1 failed, 3 passed, 13 warnings in 1696.69s (0:28:16) =============

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exist (PASSED)
And namespace "knative-eventing" should exist (PASSED)
And namespace "knative-operator" should exist (PASSED)
And the following deployments are in a ready state in the "knative-serving" namespace (PASSED)
And the following deployments are in a ready state in the "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exist (PASSED)
And the following deployments are in a ready state in the "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exist (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
And helm release "neo4j-cdc" is deployed in "redpanda" namespace (FAILED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (FAILED)
And the following deployments are in a ready state in the "redpanda" namespace (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] _

kube = <kubetest.client.TestClient object at 0x75ddcaf3e2a0>
helm_release = 'neo4j-cdc', namespace = 'redpanda'

@then("helm release \"{helm_release}\" is deployed in \"{namespace}\" namespace" )
def helm_release_deploeyd(kube, helm_release, namespace):
    #info = utils.helm_release_info(kube, helm_release, namespace)
  info = utils.helm_release_is_ready(kube, helm_release, namespace)

conftest.py:255:


utils.py:52: in helm_release_is_ready
kubetest_utils.wait_for_condition(


condition = <Condition (name: helm release has status and info, met: False)>
timeout = 600, interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_1_mindwm_eda.feature::test_scenarios[file:features/0_1_mindwm_eda.feature-Mindwm event driven architecture-Redpanda] - TimeoutError: timed out (600s) while waiting for condition <Condition (name: helm release has status and info, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============ 1 failed, 3 passed, 13 warnings in 1378.15s (0:22:58) =============

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (FAILED)
When God executes "make crossplane_rolebinding_workaround" (FAILED)
Then the following roles should exist: (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] _

kube = <kubetest.client.TestClient object at 0x70ce6f007e60>
step = PickleStep(argument=PickleStepArgument(doc_string=None, data_table=PickleTable(rows=[PickleTableRow(cells=[PickleTable...), ast_node_ids=['28'], id='46', type=<Type.outcome: 'Outcome'>, text='all argocd applications are in a healthy state')

@then("all argocd applications are in a healthy state")
def argocd_applications_check(kube, step):
    title_row, *rows = step.data_table.rows

    for row in rows:
        application_name = row.cells[0].value
      argocd_application_in_progress(kube, application_name, "argocd")

conftest.py:284:


conftest.py:268: in argocd_application_in_progress
utils.argocd_application_wait_status(kube, application_name, namespace)
utils.py:87: in argocd_application_wait_status
kubetest_utils.wait_for_condition(


condition = <Condition (name: api object deleted, met: False)>, timeout = 180
interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_0_mindwm_lifecycle.feature::test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================== 1 failed, 49 warnings in 384.05s (0:06:24) ==================

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (FAILED)
When God executes "make crossplane_rolebinding_workaround" (FAILED)
Then the following roles should exist: (FAILED)
FAILED

=================================== FAILURES ===================================
_ test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] _

kube = <kubetest.client.TestClient object at 0x7ca706b2ae70>
step = PickleStep(argument=PickleStepArgument(doc_string=None, data_table=PickleTable(rows=[PickleTableRow(cells=[PickleTable...), ast_node_ids=['28'], id='46', type=<Type.outcome: 'Outcome'>, text='all argocd applications are in a healthy state')

@then("all argocd applications are in a healthy state")
def argocd_applications_check(kube, step):
    title_row, *rows = step.data_table.rows

    for row in rows:
        application_name = row.cells[0].value
      argocd_application_in_progress(kube, application_name, "argocd")

conftest.py:284:


conftest.py:268: in argocd_application_in_progress
utils.argocd_application_wait_status(kube, application_name, namespace)
utils.py:87: in argocd_application_wait_status
kubetest_utils.wait_for_condition(


condition = <Condition (name: api object deleted, met: False)>, timeout = 180
interval = 5, fail_on_api_error = True

def wait_for_condition(
    condition: Condition,
    timeout: int = None,
    interval: Union[int, float] = 1,
    fail_on_api_error: bool = True,
) -> None:
    """Wait for a condition to be met.

    Args:
        condition: The Condition to wait for.
        timeout: The maximum time to wait, in seconds, for the condition to be met.
            If unspecified, this function will wait indefinitely. If specified and
            the timeout is met or exceeded, a TimeoutError will be raised.
        interval: The time, in seconds, to wait before re-checking the condition.
        fail_on_api_error: Fail the condition checks if a Kubernetes API error is
            incurred. An API error can be raised for a number of reasons, including
            a Pod being restarted and temporarily unavailable. Disabling this will
            cause those errors to be ignored, allowing the check to continue until
            timeout or resolution. (default: True).

    Raises:
        TimeoutError: The specified timeout was exceeded.
    """
    log.info(f"waiting for condition: {condition}")

    # define the maximum time to wait. once this is met, we should
    # stop waiting.
    max_time = None
    if timeout is not None:
        max_time = time.time() + timeout

    # start the wait block
    start = time.time()
    while True:
        if max_time and time.time() >= max_time:
          raise TimeoutError(
                f"timed out ({timeout}s) while waiting for condition {condition}"
            )

E TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>

.venv/lib/python3.12/site-packages/kubetest/utils.py:130: TimeoutError
=========================== short test summary info ============================
FAILED features/0_0_mindwm_lifecycle.feature::test_scenarios[file:features/0_0_mindwm_lifecycle.feature-Mindwm Lifecycle Management-Deploy Mindwm Cluster and Applications] - TimeoutError: timed out (180s) while waiting for condition <Condition (name: api object deleted, met: False)>
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================= 1 failed, 49 warnings in 1007.61s (0:16:47) ==================

allure report

gherkin output

============================= test session starts ==============================
collecting ... collected 56 items

Feature: Mindwm Lifecycle Management
Scenario: Deploy Mindwm Cluster and Applications
Given an Ubuntu 24.04 system with 6 CPUs and 16 GB of RAM (PASSED)
And the mindwm-gitops repository is cloned into the "~/mindwm-gitops" directory (PASSED)
When God executes "make cluster" (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God executes "make argocd" (PASSED)
Then helm release "argocd" is deployed in "argocd" namespace (PASSED)
When God executes "make argocd_app" (PASSED)
Then the argocd "mindwm-gitops" application appears in "argocd" namespace (PASSED)
When God executes "make argocd_app_sync_async" (PASSED)
Then the argocd "mindwm-gitops" application is argocd namespace in a progressing status (PASSED)
When God executes "make argocd_app_async_wait" (PASSED)
Then all argocd applications are in a healthy state (PASSED)
When God executes "make crossplane_rolebinding_workaround" (PASSED)
Then the following roles should exist: (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Knative
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "knative-serving" should exist (PASSED)
And namespace "knative-eventing" should exist (PASSED)
And namespace "knative-operator" should exist (PASSED)
And the following deployments are in a ready state in the "knative-serving" namespace (PASSED)
And the following deployments are in a ready state in the "knative-eventing" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Istio
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "istio-system" should exist (PASSED)
And the following deployments are in a ready state in the "istio-system" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Redpanda
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "redpanda" should exist (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
And helm release "neo4j-cdc" is deployed in "redpanda" namespace (PASSED)
And statefulset "neo4j-cdc" in namespace "redpanda" is in ready state (PASSED)
And the following deployments are in a ready state in the "redpanda" namespace (PASSED)
PASSED
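For the statefulset steps in this run, a minimal sketch of an equivalent readiness check with the kubernetes Python client; statefulset_ready is a hypothetical helper, not the suite's utils implementation:

from kubernetes import client, config

def statefulset_ready(name: str, namespace: str) -> bool:
    # Treat the StatefulSet as ready once all desired replicas report ready.
    config.load_kube_config()
    sts = client.AppsV1Api().read_namespaced_stateful_set(name, namespace)
    desired = sts.spec.replicas or 0
    return (sts.status.ready_replicas or 0) >= desired

e.g. statefulset_ready("neo4j-cdc", "redpanda") under those assumptions.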

Feature: Mindwm event driven architecture
Scenario: Cert manager
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "cert-manager" should exist (PASSED)
And the following deployments are in a ready state in the "cert-manager" namespace (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Nats
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "nats" should exist (PASSED)
And the following deployments are in a ready state in the "nats" namespace (PASSED)
And statefulset "nats" in namespace "nats" is in ready state (PASSED)
PASSED

Feature: Mindwm event driven architecture
Scenario: Monitoring
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
And namespace "monitoring" should exist (PASSED)
And the following deployments are in a ready state in the "monitoring" namespace (PASSED)
And statefulset "loki" in namespace "monitoring" is in ready state (PASSED)
And statefulset "tempo" in namespace "monitoring" is in ready state (PASSED)
And statefulset "vmalertmanager-vm-aio-victoria-metrics-k8s-stack" in namespace "monitoring" is in ready state (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create Context
Given a MindWM environment (PASSED)
When God creates a MindWM context with the name "xxx3" (PASSED)
Then the context should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create User
Given a MindWM environment (PASSED)
When God creates a MindWM user resource with the name "alice" and connects it to the context "xxx3" (PASSED)
Then the user resource should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Create Host
Given a MindWM environment (PASSED)
When God creates a MindWM host resource with the name "laptop" and connects it to the user "alice" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Feature: MindWM Custom Resource Definition
Scenario: Delete Resources and Verify Cleanup
Given a MindWM environment (PASSED)
When God deletes the MindWM host resource "laptop" (PASSED)
Then the host "laptop" should be deleted (PASSED)
When God deletes the MindWM user resource "alice" (PASSED)
Then the user "alice" should be deleted (PASSED)
When God deletes the MindWM context resource "xxx3" (PASSED)
Then the context "xxx3" should be deleted (PASSED)
PASSED

Feature: MindWM Custom kubernetes resources
Scenario: Create Context and check k8s resources
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "cyan" (PASSED)
Then the context should be ready and operable (PASSED)
And namespace "context-cyan" should exist (PASSED)
And helm release "cyan-neo4j" is deployed in "context-cyan" namespace (PASSED)
And statefulset "cyan-neo4j" in namespace "context-cyan" is in ready state (PASSED)
And the following knative services are in a ready state in the "context-cyan" namespace (PASSED)
And the following knative triggers are in a ready state in the "context-cyan" namespace (PASSED)
And the following knative brokers are in a ready state in the "context-cyan" namespace (PASSED)
And kafka topic "context-cyan-cdc" is in ready state in "redpanda" namespace (PASSED)
And kafka source "context-cyan-cdc-kafkasource" is in ready state in "context-cyan" namespace (PASSED)
PASSED
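For the kafka source step above, a minimal sketch of an equivalent check, assuming the Knative Eventing sources.knative.dev/v1beta1 KafkaSource API and the kubernetes Python client; kafkasource_ready is a hypothetical helper:

from kubernetes import client, config

def kafkasource_ready(name: str, namespace: str) -> bool:
    # A KafkaSource reports readiness as a Ready=True condition in its status.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    src = api.get_namespaced_custom_object(
        group="sources.knative.dev", version="v1beta1",
        namespace=namespace, plural="kafkasources", name=name)
    conditions = src.get("status", {}).get("conditions", [])
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)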

Feature: MindWM Custom kubernetes resources
Scenario: Create User and check k8s resources
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM user resource with the name "bob" and connects it to the context "cyan" (PASSED)
Then the user resource should be ready and operable (PASSED)
And namespace "user-bob" should exist (PASSED)
And the following knative brokers are in a ready state in the "user-bob" namespace (PASSED)
And the following knative triggers are in a ready state in the "context-cyan" namespace (PASSED)
And the following knative triggers are in a ready state in the "user-bob" namespace (PASSED)
PASSED

'kind': 'NatsJetStreamChannel',
'metadata': {'annotations': {'eventing.knative.dev/scope': 'cluster',
'messaging.knative.dev/creator': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/lastModifier': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/subscribable': 'v1'},
'creationTimestamp': '2024-12-13T18:26:21Z',
'finalizers': ['natsjetstreamchannels.messaging.knative.dev'],
'generation': 2,
'labels': {'eventing.knative.dev/broker': 'workstation-host-broker',
'eventing.knative.dev/brokerEverything': 'true'},
'managedFields': [{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:annotations': {'.': {},
'f:eventing.knative.dev/scope': {}},
'f:labels': {'.': {},
'f:eventing.knative.dev/broker': {},
'f:eventing.knative.dev/brokerEverything': {}},
'f:ownerReferences': {'.': {},
'k:{"uid":"a877d2de-c22b-44ff-ad7f-5b147de7c256"}': {}}},
'f:spec': {'.': {},
'f:delivery': {'.': {},
'f:backoffDelay': {},
'f:backoffPolicy': {},
'f:retry': {}},
'f:stream': {'.': {},
'f:config': {'.': {},
'f:additionalSubjects': {}}}}},
'manager': 'mtchannel_broker',
'operation': 'Update',
'time': '2024-12-13T18:26:21Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {},
'v:"natsjetstreamchannels.messaging.knative.dev"': {}}},
'f:spec': {'f:subscribers': {}}},
'manager': 'controller',
'operation': 'Update',
'time': '2024-12-13T18:26:24Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'.': {},
'f:address': {'.': {},
'f:url': {}},
'f:conditions': {},
'f:observedGeneration': {}}},
'manager': 'controller',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:26:24Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'f:subscribers': {}}},
'manager': 'dispatcher',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:26:24Z'}],
'name': 'workstation-host-broker-kne-trigger',
'namespace': 'user-bob',
'ownerReferences': [{'apiVersion': 'eventing.knative.dev/v1',
'blockOwnerDeletion': True,
'controller': True,
'kind': 'Broker',
'name': 'workstation-host-broker',
'uid': 'a877d2de-c22b-44ff-ad7f-5b147de7c256'}],
'resourceVersion': '18247',
'uid': '725d52c6-4071-4494-93df-98b5397780b8'},
'spec': {'delivery': {'backoffDelay': 'PT0.2S',
'backoffPolicy': 'exponential',
'retry': 10},
'stream': {'config': {'additionalSubjects': ['org.mindwm.bob.workstation.>'],
'duplicateWindow': '0s',
'maxAge': '0s'}},
'subscribers': [{'delivery': {'deadLetterSink': {'uri': 'http://dead-letter.user-bob.svc.cluster.local'}},
'generation': 1,
'name': 'workstation-host-broker-worksta3ef373992fe64250974e6a5197e14f89',
'replyUri': 'http://broker-ingress.knative-eventing.svc.cluster.local/user-bob/workstation-host-broker',
'subscriberUri': 'http://broker-filter.knative-eventing.svc.cluster.local/triggers/user-bob/workstation-trigger-to-user-broker-trigger/1d775b88-aa87-441f-9c57-6f064ce41985',
'uid': 'c240c7f1-bacf-459d-88af-3e0e18e365ac'}]},
'status': {'address': {'url': 'http://workstation-host-broker-kne-trigger-kn-jsm-channel.user-bob.svc.cluster.local'},
'conditions': [{'lastTransitionTime': '2024-12-13T18:26:24Z',
'status': 'True',
'type': 'Addressable'},
{'lastTransitionTime': '2024-12-13T18:26:24Z',
'status': 'True',
'type': 'ChannelServiceReady'},
{'lastTransitionTime': '2024-12-13T18:26:24Z',
'status': 'True',
'type': 'DispatcherReady'},
{'lastTransitionTime': '2024-12-13T18:26:24Z',
'status': 'True',
'type': 'EndpointsReady'},
{'lastTransitionTime': '2024-12-13T18:26:24Z',
'status': 'True',
'type': 'Ready'},
{'lastTransitionTime': '2024-12-13T18:26:22Z',
'status': 'True',
'type': 'ServiceReady'},
{'lastTransitionTime': '2024-12-13T18:26:23Z',
'status': 'True',
'type': 'StreamReady'}],
'observedGeneration': 2,
'subscribers': [{'observedGeneration': 1,
'ready': 'True',
'uid': 'c240c7f1-bacf-459d-88af-3e0e18e365ac'}]}}

Feature: MindWM Custom kubernetes resources
Scenario: Create Host and check k8s resources
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM host resource with the name "workstation" and connects it to the user "bob" (PASSED)
Then the host resource should be ready and operable (PASSED)
And NatsJetStreamChannel "workstation-host-broker-kne-trigger" is ready in "user-bob" namespace (PASSED)
And the following knative triggers are in a ready state in the "user-bob" namespace (PASSED)
And the following knative brokers are in a ready state in the "user-bob" namespace (PASSED)
PASSED
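The status dump above shows the Ready=True condition this scenario's NatsJetStreamChannel step relies on. A minimal sketch of an equivalent readiness check, using the messaging.knative.dev/v1alpha1 API visible in the dump; natsjetstreamchannel_ready is a hypothetical helper:

from kubernetes import client, config

def natsjetstreamchannel_ready(name: str, namespace: str) -> bool:
    # Look for a Ready=True condition in the channel's status, as in the dump above.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    channel = api.get_namespaced_custom_object(
        group="messaging.knative.dev", version="v1alpha1",
        namespace=namespace, plural="natsjetstreamchannels", name=name)
    conditions = channel.get("status", {}).get("conditions", [])
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)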

Feature: MindWM Custom kubernetes resources
Scenario: Delete Resources and Verify Cleanup
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "workstation" (PASSED)
Then the host "workstation" should be deleted (PASSED)
When God deletes the MindWM user resource "bob" (PASSED)
Then the user "bob" should be deleted (PASSED)
And namespace "user-bob" should not exist (PASSED)
When God deletes the MindWM context resource "cyan" (PASSED)
Then the context "cyan" should be deleted (PASSED)
And namespace "context-cyan" should not exist (PASSED)
PASSED

'kind': 'NatsJetStreamChannel',
'metadata': {'annotations': {'eventing.knative.dev/scope': 'cluster',
'messaging.knative.dev/creator': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/lastModifier': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/subscribable': 'v1'},
'creationTimestamp': '2024-12-13T18:29:02Z',
'finalizers': ['natsjetstreamchannels.messaging.knative.dev'],
'generation': 2,
'labels': {'eventing.knative.dev/broker': 'pi6-host-host-broker',
'eventing.knative.dev/brokerEverything': 'true'},
'managedFields': [{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:annotations': {'.': {},
'f:eventing.knative.dev/scope': {}},
'f:labels': {'.': {},
'f:eventing.knative.dev/broker': {},
'f:eventing.knative.dev/brokerEverything': {}},
'f:ownerReferences': {'.': {},
'k:{"uid":"7200f914-e88d-4241-b9eb-4bcd880b4122"}': {}}},
'f:spec': {'.': {},
'f:delivery': {'.': {},
'f:backoffDelay': {},
'f:backoffPolicy': {},
'f:retry': {}},
'f:stream': {'.': {},
'f:config': {'.': {},
'f:additionalSubjects': {}}}}},
'manager': 'mtchannel_broker',
'operation': 'Update',
'time': '2024-12-13T18:29:02Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {},
'v:"natsjetstreamchannels.messaging.knative.dev"': {}}},
'f:spec': {'f:subscribers': {}}},
'manager': 'controller',
'operation': 'Update',
'time': '2024-12-13T18:29:06Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'.': {},
'f:address': {'.': {},
'f:url': {}}}},
'manager': 'controller',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:29:08Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'f:conditions': {},
'f:observedGeneration': {},
'f:subscribers': {}}},
'manager': 'dispatcher',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:29:08Z'}],
'name': 'pi6-host-host-broker-kne-trigger',
'namespace': 'user-amanda4',
'ownerReferences': [{'apiVersion': 'eventing.knative.dev/v1',
'blockOwnerDeletion': True,
'controller': True,
'kind': 'Broker',
'name': 'pi6-host-host-broker',
'uid': '7200f914-e88d-4241-b9eb-4bcd880b4122'}],
'resourceVersion': '23896',
'uid': '68617f0f-0f1b-4e3b-b514-b67c5b2b05ab'},
'spec': {'delivery': {'backoffDelay': 'PT0.2S',
'backoffPolicy': 'exponential',
'retry': 10},
'stream': {'config': {'additionalSubjects': ['org.mindwm.amanda4.pi6-host.>'],
'duplicateWindow': '0s',
'maxAge': '0s'}},
'subscribers': [{'delivery': {'deadLetterSink': {'uri': 'http://dead-letter.user-amanda4.svc.cluster.local'}},
'generation': 1,
'name': 'pi6-host-host-broker-pi6-host-t01c6da9eb34d2cf76a4345b8e79826b4',
'replyUri': 'http://broker-ingress.knative-eventing.svc.cluster.local/user-amanda4/pi6-host-host-broker',
'subscriberUri': 'http://broker-filter.knative-eventing.svc.cluster.local/triggers/user-amanda4/pi6-host-trigger-to-user-broker-trigger/ea448876-7c31-4cf5-afa5-167751aa526c',
'uid': '390ae73f-0b7f-466d-8d34-879b2f5cb9a7'}]},
'status': {'address': {'url': 'http://pi6-host-host-broker-kne-trigger-kn-jsm-channel.user-amanda4.svc.cluster.local'},
'conditions': [{'lastTransitionTime': '2024-12-13T18:29:06Z',
'status': 'True',
'type': 'Addressable'},
{'lastTransitionTime': '2024-12-13T18:29:06Z',
'status': 'True',
'type': 'ChannelServiceReady'},
{'lastTransitionTime': '2024-12-13T18:29:06Z',
'status': 'True',
'type': 'DispatcherReady'},
{'lastTransitionTime': '2024-12-13T18:29:06Z',
'status': 'True',
'type': 'EndpointsReady'},
{'lastTransitionTime': '2024-12-13T18:29:06Z',
'status': 'True',
'type': 'Ready'},
{'lastTransitionTime': '2024-12-13T18:29:02Z',
'status': 'True',
'type': 'ServiceReady'},
{'lastTransitionTime': '2024-12-13T18:29:05Z',
'status': 'True',
'type': 'StreamReady'}],
'observedGeneration': 2,
'subscribers': [{'observedGeneration': 1,
'ready': 'True',
'uid': '390ae73f-0b7f-466d-8d34-879b2f5cb9a7'}]}}

Feature: MindWM Ping-pong EDA test
Scenario: Prepare environment for ping tests
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "green4" (PASSED)
Then the context should be ready and operable (PASSED)
And the following knative services are in a ready state in the "context-green4" namespace (PASSED)
When God creates a MindWM user resource with the name "amanda4" and connects it to the context "green4" (PASSED)
Then the user resource should be ready and operable (PASSED)
When God creates a MindWM host resource with the name "pi6-host" and connects it to the user "amanda4" (PASSED)
Then the host resource should be ready and operable (PASSED)
And NatsJetStreamChannel "pi6-host-host-broker-kne-trigger" is ready in "user-amanda4" namespace (PASSED)
When God starts reading message from NATS topic ">" (PASSED)
PASSED
Connected to NATS server at nats://root:[email protected]:4222
Subscribed to topic '>'
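
The "God starts reading message from NATS topic \">\"" step and the connection lines above amount to a wildcard subscription that buffers everything published on the cluster bus for later assertions. A rough, self-contained equivalent with nats-py is sketched below; the connection URL is read from an environment variable because the real credentials are environment-specific, and the collector name and duration are illustrative.

import asyncio
import os
import nats

async def collect_messages(duration=10):
    # NATS_URL is an assumed environment variable, e.g. nats://user:pass@host:4222.
    nc = await nats.connect(os.environ["NATS_URL"])
    received = []

    async def handler(msg):
        # Keep subject, CloudEvent headers, and payload for later assertions.
        received.append((msg.subject, dict(msg.headers or {}), msg.data))

    await nc.subscribe(">", cb=handler)  # '>' matches every subject
    await asyncio.sleep(duration)
    await nc.drain()
    return received

# e.g. messages = asyncio.run(collect_messages())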

Feature: MindWM Ping-pong EDA test
Scenario: Send ping to knative ping service
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.23.36.iodocument" (PASSED)
And sends cloudevent to knative service "pong" in "context-green4" namespace (PASSED)
Then the response http code should be "200" (PASSED)
Then the following deployments are in a ready state in the "context-green4" namespace (PASSED)
PASSED
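
The steps above send a binary-mode CloudEvent straight to the pong Knative service and expect HTTP 200. A minimal sketch with plain requests is shown below; the service URL is a placeholder (the suite resolves the real ksvc address from the cluster), and the empty JSON body stands in for the actual iodocument payload.

import uuid
import requests

# Placeholder URL; the tests resolve the actual ksvc endpoint from Kubernetes.
url = "http://pong.context-green4.svc.cluster.local"

headers = {
    "ce-specversion": "1.0",
    "ce-id": str(uuid.uuid4()),
    "ce-subject": "#ping",
    "ce-type": "org.mindwm.v1.iodocument",
    "ce-source": "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.23.36.iodocument",
}
response = requests.post(url, headers=headers, json={})  # binary-mode CloudEvent
assert response.status_code == 200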

Feature: MindWM Ping-pong EDA test
Scenario: Send ping via broker-ingress.knative-eventing/context-green4/context-broker
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "traceparent" to "00-5df92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.23.36.iodocument" (PASSED)
And sends cloudevent to "broker-ingress.knative-eventing/context-green4/context-broker" (PASSED)
Then the response http code should be "202" (PASSED)
Then the following deployments are in a ready state in the "context-green4" namespace (PASSED)
Then the trace with "00-5df92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00" should appear in TraceQL (PASSED)
And the trace should contain (PASSED)
And a cloudevent with type == "org.mindwm.v1.pong" should have been received from the NATS topic "user-amanda4.pi6-host-host-broker-kne-trigger._knative" (PASSED)
PASSED
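
The TraceQL assertion above boils down to "the trace id embedded in the traceparent header eventually shows up in Tempo". A hedged sketch of that polling loop follows; the Tempo URL is an assumption (the suite discovers its own endpoint), and only the HTTP status of the trace-by-id lookup is checked here.

import time
import requests

TEMPO_URL = "http://tempo:3200"  # assumed endpoint; environment-specific in practice

def trace_appears(traceparent, timeout=180, interval=5):
    # traceparent is "00-<trace-id>-<span-id>-<flags>"; Tempo is queried by the trace id.
    trace_id = traceparent.split("-")[1]
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{TEMPO_URL}/api/traces/{trace_id}")
        if resp.status_code == 200:
            return True
        time.sleep(interval)
    return False

# e.g. trace_appears("00-5df92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00")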

Feature: MindWM Ping-pong EDA test
Scenario: Send ping via broker-ingress.knative-eventing/user-amanda4/user-broker
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "traceparent" to "00-6df93f3577b34da6a3ce929d0e0e4742-00f067aa0ba902b7-00" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.23.36.iodocument" (PASSED)
And sends cloudevent to "broker-ingress.knative-eventing/user-amanda4/user-broker" (PASSED)
Then the response http code should be "202" (PASSED)
Then the following deployments are in a ready state in the "context-green4" namespace (PASSED)
Then the trace with "00-6df93f3577b34da6a3ce929d0e0e4742-00f067aa0ba902b7-00" should appear in TraceQL (PASSED)
And the trace should contain (PASSED)
And a cloudevent with type == "org.mindwm.v1.pong" should have been received from the NATS topic "user-amanda4.pi6-host-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM Ping-pong EDA test
Scenario: Send ping via nats
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.36.23.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.3623.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-green4" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.pong" should have been received from the NATS topic "user-amanda4.pi6-host-host-broker-kne-trigger._knative" (PASSED)
PASSED
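
Here the iodocument enters the system over NATS rather than HTTP. A minimal publishing sketch with nats-py follows; it assumes the receiving adapter reads CloudEvent attributes from ce-* message headers (the suite's own publisher may encode events differently), and the connection URL again comes from an environment variable.

import asyncio
import json
import os
import uuid
import nats

async def publish_ping():
    # NATS_URL is an assumed environment variable; real credentials are environment-specific.
    nc = await nats.connect(os.environ["NATS_URL"])
    subject = "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.3623.iodocument"
    headers = {
        "ce-specversion": "1.0",
        "ce-id": str(uuid.uuid4()),
        "ce-subject": "#ping",
        "ce-type": "org.mindwm.v1.iodocument",
        "ce-source": "org.mindwm.amanda4.pi6-host.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.36.23.iodocument",
    }
    await nc.publish(subject, json.dumps({}).encode(), headers=headers)
    await nc.drain()

asyncio.run(publish_ping())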

Feature: MindWM Ping-pong EDA test
Scenario: Cleanup amanda4@pi6-host in green4
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "pi6-host" (PASSED)
Then the host "pi6-host" should be deleted (PASSED)
When God deletes the MindWM user resource "amanda4" (PASSED)
When God deletes the MindWM context resource "green4" (PASSED)
PASSED

<Record n=<Node element_id='1' labels=frozenset({'Host'}) properties={'hostname': 'tablet', 'atime': 0, 'traceparent': '00-7df92f3577b34da6a3ce930d0e0e4734-2e76dbeed06417a4-01', 'type': 'org.mindwm.v1.graph.node.host'}>>
<Record n=<Node element_id='5' labels=frozenset({'IoDocument'}) properties={'output': 'uid=1000(pion) gid=1000(pion) groups=1000(pion),4(adm),100(users),112(tmux),988(docker)', 'input': 'id', 'atime': 0, 'traceparent': '00-7df92f3577b34da6a3ce930d0e0e4734-2e76dbeed06417a4-01', 'type': 'org.mindwm.v1.graph.node.iodocument', 'uuid': '0c7cc1832a7044d5ba5e13dd3e63ece9', 'ps1': 'pion@mindwm-stg1:~/work/dev/mindwm-manager$'}>>

Feature: MindWM io context function test
Scenario: io context red
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "red" (PASSED)
Then the context should be ready and operable (PASSED)
Then the following knative services are in a ready state in the "context-red" namespace (PASSED)
And statefulset "red-neo4j" in namespace "context-red" is in ready state (PASSED)
When God creates a MindWM user resource with the name "kitty" and connects it to the context "red" (PASSED)
Then the user resource should be ready and operable (PASSED)
When God creates a MindWM host resource with the name "tablet" and connects it to the user "kitty" (PASSED)
Then the host resource should be ready and operable (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-id" to "442af213-c860-4535-b639-355f13b2d883" (PASSED)
And sets cloudevent header "traceparent" to "00-7df92f3577b34da6a3ce930d0e0e4734-00f064aa0ba902b8-00" (PASSED)
And sets cloudevent header "ce-subject" to "id" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.kitty.tablet.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.09fb195c-c419-6d62-15e0-51b6ee990922.23.36.iodocument" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sends cloudevent to "broker-ingress.knative-eventing/context-red/context-broker" (PASSED)
Then the response http code should be "202" (PASSED)
Then the following deployments are in a ready state in the "context-red" namespace (PASSED)
Then the trace with "00-7df92f3577b34da6a3ce930d0e0e4734-00f064aa0ba902b8-00" should appear in TraceQL (PASSED)
And the trace should contain (PASSED)
Then graph have node "User" with property "username" = "kitty" in context "red" (PASSED)
And graph have node "Host" with property "hostname" = "tablet" in context "red" (PASSED)
And graph have node "IoDocument" with property "input" = "id" in context "red" (PASSED)
PASSED
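
The "graph have node ... with property ..." steps query the context's Neo4j instance directly. A minimal sketch with the neo4j Python driver follows; the bolt URI and credentials are placeholders (the suite reaches the red-neo4j statefulset inside the cluster and uses its own secrets), and the helper name is illustrative.

from neo4j import GraphDatabase

# Placeholder connection details; the real ones come from the context's Neo4j secret.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def node_exists(label, prop, value):
    # Label and property names come straight from the steps above; the value is parameterized.
    query = f"MATCH (n:{label}) WHERE n.{prop} = $value RETURN n LIMIT 1"
    with driver.session() as session:
        return session.run(query, value=value).single() is not None

# e.g. node_exists("IoDocument", "input", "id")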

Feature: MindWM io context function test
Scenario: Cleanup kitty@tablet in red
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "tablet" (PASSED)
Then the host "tablet" should be deleted (PASSED)
When God deletes the MindWM user resource "kitty" (PASSED)
When God deletes the MindWM context resource "red" (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM kafka_cdc function test
Scenario: io context blue
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "blue" (PASSED)
Then the context should be ready and operable (PASSED)
Then the following knative services are in a ready state in the "context-blue" namespace (PASSED)
And statefulset "blue-neo4j" in namespace "context-blue" is in ready state (PASSED)
When God creates a MindWM user resource with the name "garmr" and connects it to the context "blue" (PASSED)
Then the user resource should be ready and operable (PASSED)
When God creates a MindWM host resource with the name "helheim" and connects it to the user "garmr" (PASSED)
Then the host resource should be ready and operable (PASSED)
When God starts reading message from NATS topic ">" (PASSED)
And God makes graph query in context "blue" (PASSED)
Then the following knative services are in a ready state in the "context-blue" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-garmr.helheim-host-broker-kne-trigger._knative" (PASSED)
When God makes graph query in context "blue" (PASSED)
Then the following knative services are in a ready state in the "context-blue" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.updated" should have been received from the NATS topic "user-garmr.helheim-host-broker-kne-trigger._knative" (PASSED)
When God makes graph query in context "blue" (PASSED)
Then the following knative services are in a ready state in the "context-blue" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.deleted" should have been received from the NATS topic "user-garmr.helheim-host-broker-kne-trigger._knative" (PASSED)
PASSED
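
The three assertions above differ only in the expected CloudEvent type (graph.created, graph.updated, graph.deleted) arriving on the user's channel subject. Given a list of (subject, headers, data) tuples such as the wildcard collector sketched earlier would produce, the check reduces to a filter; the helper below is illustrative, not the suite's implementation.

def received_event(messages, topic, ce_type):
    # messages: iterable of (subject, headers, data) tuples captured from NATS.
    return any(
        subject == topic and headers.get("ce-type") == ce_type
        for subject, headers, _data in messages
    )

# e.g. received_event(messages,
#                     "user-garmr.helheim-host-broker-kne-trigger._knative",
#                     "org.mindwm.v1.graph.created")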

Feature: MindWM kafka_cdc function test
Scenario: Cleanup garmr@helheim in blue
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "helheim" (PASSED)
Then the host "helheim" should be deleted (PASSED)
When God deletes the MindWM user resource "garmr" (PASSED)
When God deletes the MindWM context resource "blue" (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Create context varanasi and user shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "varanasi" (PASSED)
Then the context should be ready and operable (PASSED)
Then the following knative services are in a ready state in the "context-varanasi" namespace (PASSED)
And statefulset "varanasi-neo4j" in namespace "context-varanasi" is in ready state (PASSED)
When God creates a MindWM user resource with the name "shesha" and connects it to the context "varanasi" (PASSED)
Then the user resource should be ready and operable (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Create workstation01 for user shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM host resource with the name "workstation01" and connects it to the user "shesha" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Create travellaptop for user shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM host resource with the name "travellaptop" and connects it to the user "shesha" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM two hosts one user function test
Scenario: Send ping via nats host: workstation01, user: shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.shesha.workstation01.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.36.23.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.shesha.workstation01.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.36.23.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-varanasi" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.pong" should have been received from the NATS topic "user-shesha.workstation01-host-broker-kne-trigger._knative" (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM two hosts one user function test
Scenario: Send ping via nats host: travellaptop, user: shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "#ping" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.shesha.travellaptop.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.36.23.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.shesha.travellaptop.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.36.23.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-varanasi" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.pong" should have been received from the NATS topic "user-shesha.travellaptop-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Send iodocument via nats host: workstation01, user: shesha and check that the second host received the graph update
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "id" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.shesha.workstation01.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.36.23.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.shesha.workstation01.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.3623.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-varanasi" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-shesha.travellaptop-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Cleanup hosts workstation01 in user shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "workstation01" (PASSED)
Then the host "workstation01" should be deleted (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Cleanup hosts travellaptop in user shesha
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "travellaptop" (PASSED)
Then the host "travellaptop" should be deleted (PASSED)
PASSED

Feature: MindWM two hosts one user function test
Scenario: Cleanup user shesha and context varanasi
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM user resource "shesha" (PASSED)
Then the user "shesha" should be deleted (PASSED)
When God deletes the MindWM context resource "varanasi" (PASSED)
Then the context "varanasi" should be deleted (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Create context tokyo
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "tokyo" (PASSED)
Then the context should be ready and operable (PASSED)
Then the following knative services are in a ready state in the "context-tokyo" namespace (PASSED)
And statefulset "tokyo-neo4j" in namespace "context-tokyo" is in ready state (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Create godzilla and connect it to tokyo
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM user resource with the name "godzilla" and connects it to the context "tokyo" (PASSED)
Then the user resource should be ready and operable (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Create tengu and connect it to tokyo
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM user resource with the name "tengu" and connects it to the context "tokyo" (PASSED)
Then the user resource should be ready and operable (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Create laptop resource and connect it to the godzilla user
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM host resource with the name "laptop" and connects it to the user "godzilla" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Create tablet resource and connect it to the tengu user
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM host resource with the name "tablet" and connects it to the user "tengu" (PASSED)
Then the host resource should be ready and operable (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM two users one context function test
Scenario: Send iodocument via nats host: , user: and check that the second user received the graph update
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "id" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.godzilla.laptop.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.23.36.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.godzilla.laptop.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.23.36.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-tokyo" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-tengu.tablet-host-broker-kne-trigger._knative" (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM two users one context function test
Scenario: Send iodocument via nats host: , user: and check that the second user received the graph update
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "id" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.iodocument" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.tengu.tablet.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.23.36.iodocument" (PASSED)
And sends cloudevent to nats topic "org.mindwm.tengu.tablet.L3RtcC90bXV4LTEwMDAvZGVmYXVsdA==.8d839f82-79da-11ef-bc9f-f74fac7543ac.23.36.iodocument" (PASSED)
Then the following deployments are in a ready state in the "context-tokyo" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-godzilla.laptop-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Cleanup laptop host for godzilla username
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "laptop" (PASSED)
Then the host "laptop" should be deleted (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Cleanup tablet host for tengu username
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "tablet" (PASSED)
Then the host "tablet" should be deleted (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Cleanup godzilla username
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM user resource "godzilla" (PASSED)
Then the user "godzilla" should be deleted (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Cleanup tengu username
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM user resource "tengu" (PASSED)
Then the user "tengu" should be deleted (PASSED)
PASSED

Feature: MindWM two users one context function test
Scenario: Cleanup tokyo context
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM context resource "tokyo" (PASSED)
Then the context "tokyo" should be deleted (PASSED)
PASSED

Feature: Context Resource Readiness and Cleanup Verification
Scenario: Create Contexts and Verify All Related Resources Are in Ready State
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "aphrodite" (PASSED)
Then the context should be ready and operable (PASSED)
And namespace "context-aphrodite" should exist (PASSED)
And the following deployments are in a ready state in the "context-aphrodite" namespace (PASSED)
And statefulset "aphrodite-neo4j" in namespace "context-aphrodite" is in ready state (PASSED)
And the following knative services are in a ready state in the "context-aphrodite" namespace (PASSED)
And the following knative triggers are in a ready state in the "context-aphrodite" namespace (PASSED)
And the following knative brokers are in a ready state in the "context-aphrodite" namespace (PASSED)
And kafka topic "context-aphrodite-cdc" is in ready state in "redpanda" namespace (PASSED)
And kafka source "context-aphrodite-cdc-kafkasource" is in ready state in "context-aphrodite" namespace (PASSED)
PASSED
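
The kafka source readiness step can be reproduced the same way as the channel check shown earlier, just against the Knative KafkaSource CRD. A sketch follows; the group/version/plural values are the upstream Knative eventing-kafka ones and are an assumption here, since the suite's own helper may resolve them differently.

from kubernetes import client, config

def kafkasource_is_ready(name, namespace):
    # Assumed CRD coordinates for Knative KafkaSource (sources.knative.dev/v1beta1).
    config.load_kube_config()
    api = client.CustomObjectsApi()
    obj = api.get_namespaced_custom_object(
        group="sources.knative.dev",
        version="v1beta1",
        namespace=namespace,
        plural="kafkasources",
        name=name,
    )
    conditions = obj.get("status", {}).get("conditions", [])
    return any(c.get("type") == "Ready" and c.get("status") == "True" for c in conditions)

# e.g. kafkasource_is_ready("context-aphrodite-cdc-kafkasource", "context-aphrodite")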

Feature: Context Resource Readiness and Cleanup Verification
Scenario: Create Contexts and Verify All Related Resources Are in Ready State
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "kypros" (PASSED)
Then the context should be ready and operable (PASSED)
And namespace "context-kypros" should exist (PASSED)
And the following deployments are in a ready state in the "context-kypros" namespace (PASSED)
And statefulset "kypros-neo4j" in namespace "context-kypros" is in ready state (PASSED)
And the following knative services are in a ready state in the "context-kypros" namespace (PASSED)
And the following knative triggers are in a ready state in the "context-kypros" namespace (PASSED)
And the following knative brokers are in a ready state in the "context-kypros" namespace (PASSED)
And kafka topic "context-kypros-cdc" is in ready state in "redpanda" namespace (PASSED)
And kafka source "context-kypros-cdc-kafkasource" is in ready state in "context-kypros" namespace (PASSED)
PASSED

Feature: Context Resource Readiness and Cleanup Verification
Scenario: Cleanup Contexts and Verify Resources Are Deleted
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM context resource "aphrodite" (PASSED)
Then the context "aphrodite" should be deleted (PASSED)
And namespace "context-aphrodite" should not exist (PASSED)
PASSED

Feature: Context Resource Readiness and Cleanup Verification
Scenario: Cleanup Contexts and Verify Resources Are Deleted
Given a MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM context resource "kypros" (PASSED)
Then the context "kypros" should be deleted (PASSED)
And namespace "context-kypros" should not exist (PASSED)
PASSED

'kind': 'NatsJetStreamChannel',
'metadata': {'annotations': {'eventing.knative.dev/scope': 'cluster',
'messaging.knative.dev/creator': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/lastModifier': 'system:serviceaccount:knative-eventing:eventing-controller',
'messaging.knative.dev/subscribable': 'v1'},
'creationTimestamp': '2024-12-13T18:45:39Z',
'finalizers': ['natsjetstreamchannels.messaging.knative.dev'],
'generation': 2,
'labels': {'eventing.knative.dev/broker': 'the-host-host-broker',
'eventing.knative.dev/brokerEverything': 'true'},
'managedFields': [{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:annotations': {'.': {},
'f:eventing.knative.dev/scope': {}},
'f:labels': {'.': {},
'f:eventing.knative.dev/broker': {},
'f:eventing.knative.dev/brokerEverything': {}},
'f:ownerReferences': {'.': {},
'k:{"uid":"1716aeb9-d962-4cfe-bb45-53fb8ecc4413"}': {}}},
'f:spec': {'.': {},
'f:delivery': {'.': {},
'f:backoffDelay': {},
'f:backoffPolicy': {},
'f:retry': {}},
'f:stream': {'.': {},
'f:config': {'.': {},
'f:additionalSubjects': {}}}}},
'manager': 'mtchannel_broker',
'operation': 'Update',
'time': '2024-12-13T18:45:39Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {},
'v:"natsjetstreamchannels.messaging.knative.dev"': {}}},
'f:spec': {'f:subscribers': {}}},
'manager': 'controller',
'operation': 'Update',
'time': '2024-12-13T18:45:41Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'.': {},
'f:address': {'.': {},
'f:url': {}},
'f:observedGeneration': {}}},
'manager': 'controller',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:45:41Z'},
{'apiVersion': 'messaging.knative.dev/v1alpha1',
'fieldsType': 'FieldsV1',
'fieldsV1': {'f:status': {'f:conditions': {},
'f:subscribers': {}}},
'manager': 'dispatcher',
'operation': 'Update',
'subresource': 'status',
'time': '2024-12-13T18:45:41Z'}],
'name': 'the-host-host-broker-kne-trigger',
'namespace': 'user-flukeman',
'ownerReferences': [{'apiVersion': 'eventing.knative.dev/v1',
'blockOwnerDeletion': True,
'controller': True,
'kind': 'Broker',
'name': 'the-host-host-broker',
'uid': '1716aeb9-d962-4cfe-bb45-53fb8ecc4413'}],
'resourceVersion': '61569',
'uid': 'bebf963a-2d8e-4087-942f-98b9b6ec28e8'},
'spec': {'delivery': {'backoffDelay': 'PT0.2S',
'backoffPolicy': 'exponential',
'retry': 10},
'stream': {'config': {'additionalSubjects': ['org.mindwm.flukeman.the-host.>'],
'duplicateWindow': '0s',
'maxAge': '0s'}},
'subscribers': [{'delivery': {'deadLetterSink': {'uri': 'http://dead-letter.user-flukeman.svc.cluster.local'}},
'generation': 1,
'name': 'the-host-host-broker-the-host-t10b1108d52e9822d087d9c0555b03161',
'replyUri': 'http://broker-ingress.knative-eventing.svc.cluster.local/user-flukeman/the-host-host-broker',
'subscriberUri': 'http://broker-filter.knative-eventing.svc.cluster.local/triggers/user-flukeman/the-host-trigger-to-user-broker-trigger/adf02b88-6d8d-4098-b7b2-e35cf7b1df7e',
'uid': 'b538d75a-1a1d-422f-9037-4b7ed3be96b9'}]},
'status': {'address': {'url': 'http://the-host-host-broker-kne-trigger-kn-jsm-channel.user-flukeman.svc.cluster.local'},
'conditions': [{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'Addressable'},
{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'ChannelServiceReady'},
{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'DispatcherReady'},
{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'EndpointsReady'},
{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'Ready'},
{'lastTransitionTime': '2024-12-13T18:45:39Z',
'status': 'True',
'type': 'ServiceReady'},
{'lastTransitionTime': '2024-12-13T18:45:41Z',
'status': 'True',
'type': 'StreamReady'}],
'observedGeneration': 2,
'subscribers': [{'observedGeneration': 1,
'ready': 'True',
'uid': 'b538d75a-1a1d-422f-9037-4b7ed3be96b9'}]}}

Feature: MindWM clipboard EDA test
Scenario: Prepare environment for ping tests
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a MindWM context with the name "philadelphia" (PASSED)
Then the context should be ready and operable (PASSED)
And the following knative services are in a ready state in the "context-philadelphia" namespace (PASSED)
And statefulset "philadelphia-neo4j" in namespace "context-philadelphia" is in ready state (PASSED)
When God creates a MindWM user resource with the name "flukeman" and connects it to the context "philadelphia" (PASSED)
Then the user resource should be ready and operable (PASSED)
When God creates a MindWM host resource with the name "the-host" and connects it to the user "flukeman" (PASSED)
Then the host resource should be ready and operable (PASSED)
And NatsJetStreamChannel "the-host-host-broker-kne-trigger" is ready in "user-flukeman" namespace (PASSED)
When God starts reading message from NATS topic ">" (PASSED)
PASSED
Connected to NATS server at nats://root:[email protected]:4222
Subscribed to topic '>'

Feature: MindWM clipboard EDA test
Scenario: Send clipboard to knative ping service
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-subject" to "clipboard" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.clipboard" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.flukeman.the-host.clipboard" (PASSED)
And sets cloudevent header "traceparent" to "00-6ef92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00" (PASSED)
And sends cloudevent to knative service "clipboard" in "context-philadelphia" namespace (PASSED)
Then the response http code should be "200" (PASSED)
Then the following deployments are in a ready state in the "context-philadelphia" namespace (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM clipboard EDA test
Scenario: Send ping directly to function broker-ingress.knative-eventing/context-philadelphia/context-broker
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "clipboard" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.clipboard" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.flukeman.the-host.clipboard" (PASSED)
And sets cloudevent header "traceparent" to "00-5df92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00" (PASSED)
And sends cloudevent to "broker-ingress.knative-eventing/context-philadelphia/context-broker" (PASSED)
Then the response http code should be "202" (PASSED)
Then the following deployments are in a ready state in the "context-philadelphia" namespace (PASSED)
Then the trace with "00-5df92f3577b34da6a3ce929d0e0e4734-00f067aa0ba902b7-00" should appear in TraceQL (PASSED)
And the trace should contain (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-flukeman.the-host-host-broker-kne-trigger._knative" (PASSED)
PASSED

Subscribed to topic '>'

Feature: MindWM clipboard EDA test
Scenario: Send ping directly to function broker-ingress.knative-eventing/user-flukeman/user-broker
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And God starts reading message from NATS topic ">" (PASSED)
And sets cloudevent header "ce-subject" to "clipboard" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.clipboard" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.flukeman.the-host.clipboard" (PASSED)
And sets cloudevent header "traceparent" to "00-6df93f3577b34da6a3ce929d0e0e4742-00f067aa0ba902b7-00" (PASSED)
And sends cloudevent to "broker-ingress.knative-eventing/user-flukeman/user-broker" (PASSED)
Then the response http code should be "202" (PASSED)
Then the following deployments are in a ready state in the "context-philadelphia" namespace (PASSED)
Then the trace with "00-6df93f3577b34da6a3ce929d0e0e4742-00f067aa0ba902b7-00" should appear in TraceQL (PASSED)
And the trace should contain (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-flukeman.the-host-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM clipboard EDA test
Scenario: Send ping via nats
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God creates a new cloudevent (PASSED)
And sets cloudevent header "ce-id" to "09fb195c-c419-6d62-15e0-51b6ee990922" (PASSED)
And sets cloudevent header "ce-subject" to "clipboard" (PASSED)
And sets cloudevent header "ce-type" to "org.mindwm.v1.clipboard" (PASSED)
And sets cloudevent header "ce-source" to "org.mindwm.flukeman.the-host.clipboard" (PASSED)
And sets cloudevent header "traceparent" to "00-8af92f3577b34da6a3ce929d0e0e4742-00f067aa0ba902b7-00" (PASSED)
And sends cloudevent to nats topic "org.mindwm.flukeman.the-host.clipboard" (PASSED)
Then the following deployments are in a ready state in the "context-philadelphia" namespace (PASSED)
And a cloudevent with type == "org.mindwm.v1.graph.created" should have been received from the NATS topic "user-flukeman.the-host-host-broker-kne-trigger._knative" (PASSED)
PASSED

Feature: MindWM clipboard EDA test
Scenario: Cleanup flukeman@the-host in philadelphia
Given A MindWM environment (PASSED)
Then all nodes in Kubernetes are ready (PASSED)
When God deletes the MindWM host resource "the-host" (PASSED)
Then the host "the-host" should be deleted (PASSED)
When God deletes the MindWM user resource "flukeman" (PASSED)
When God deletes the MindWM context resource "philadelphia" (PASSED)
PASSED

================= 56 passed, 72 warnings in 2033.99s (0:33:53) =================

@metacoma metacoma merged commit 33826a1 into master Dec 13, 2024
1 check passed