document incomplete_test_app decorator #3561

9 changes: 7 additions & 2 deletions docs/edit/skip-tests.md
Four decorators allow you to skip test functions or classes for a library:
* `@irrelevant`: The tested feature/behavior is irrelevant to the library, meaning the feature is either purposefully not supported by the lib or cannot reasonably be implemented
* `@bug`: The lib does not implement the feature correctly/up to spec
* `@flaky` (subclass of `bug`): The feature sometimes fails, sometimes passes. It's not reliable, so don't run it.
* `@missing_feature`: The tested feature/behavior does not exist in the library
* `@incomplete_test_app`: There is a deficit in the weblog/parametric apps or the testing interface that prevents us from validating a feature across different applications

To skip specific test functions within a test class, use them as in-line decorators (Example below).
To skip test classes or test files, use the decorator in the library's [manifest file](./manifest.md).
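
For class- or file-level skips, a minimal sketch of what a manifest entry could look like is shown below. The authoritative schema lives in the manifest file documentation; the path, class name, and `incomplete_test_app` support shown here are illustrative assumptions:

```yaml
# Hypothetical manifest entry (illustrative only; see manifest.md for the real schema)
tests/:
  test_awesome.py:
    # Skip the whole class for this library, with a reason
    Test_AwesomeFeature: incomplete_test_app (trace/span/start endpoint does not exist)
```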
The decorators take several arguments:


```python
from utils import irrelevant, incomplete_test_app, bug, missing_feature


@irrelevant(library="nodejs")
class Test_AwesomeFeature:
    # ...

    @missing_feature(reason="Maybe too soon")
    def test_full(self):
        assert 42

    @incomplete_test_app(library="python", reason="trace/span/start endpoint does not exist")
    def test_span_creation(self):
        assert 68
```
16 changes: 8 additions & 8 deletions docs/execute/test-outcomes.md
Each test can be flagged with an expected outcome, with a declaration in manifest files.

These declarations are interpreted by system-tests and impact both the test execution and the outcome of the entire run:

| Declaration | Test is executed | Test actual outcome | System test output | Comment
| - | - | - | - | -
| \<no_declaration> | Yes | ✅ Pass | 🟢 Success | All good :sunglasses:
| Missing feature, bug, or incomplete test app | Yes | ❌ Fail | 🟢 Success | Expected failure
| Missing feature, bug, or incomplete test app | Yes | ✅ Pass | 🟠 Success | XPASS: the feature has been implemented or the bug has been fixed -> easy win
| Flaky | No | N.A. | N.A. | A flaky test doesn't provide any useful information, so it is not executed
| Irrelevant | No | N.A. | N.A. | There is no purpose in running such a test
| \<no_declaration> | Yes | ❌ Fail | 🔴 Fail | The only case where system-tests fails: the test should have passed, but did not
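
To make the table concrete, here is a minimal sketch of how a runner could translate these declarations into pytest outcomes. This is an illustration of the semantics above, not the actual system-tests implementation; `apply_declaration` is a hypothetical helper:

```python
import pytest


def apply_declaration(item: pytest.Item, declaration: str | None, reason: str = "") -> None:
    """Hypothetical helper: map a declaration onto a pytest marker."""
    if declaration in ("flaky", "irrelevant"):
        # Not executed: a flaky test gives no useful signal, an irrelevant one has no purpose.
        item.add_marker(pytest.mark.skip(reason=reason))
    elif declaration in ("missing_feature", "bug", "incomplete_test_app"):
        # Executed, failure expected: a fail keeps the run green (XFAIL),
        # while a pass is reported as XPASS -> the feature or fix has landed.
        item.add_marker(pytest.mark.xfail(reason=reason, strict=False))
    # With no declaration the test runs normally: a failure fails the run.
```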