ZachG update parametric documentation and add contributing doc #3325

Merged · 52 commits · Dec 2, 2024

Commits
79714dc
add locations of all tracer servers and also the ht
ZStriker19 Oct 25, 2024
5333bff
starting contributing doc
ZStriker19 Oct 25, 2024
7973fd3
Merge branch 'main' into zachg/docs_contributing
ZStriker19 Nov 14, 2024
df56c38
rough draft of finished parametric contributing, also nits and added …
ZStriker19 Nov 14, 2024
ba71013
Update docs/scenarios/parametric.md
ZStriker19 Nov 14, 2024
a745414
polish
ZStriker19 Nov 14, 2024
b213da8
Merge branch 'zachg/docs_contributing' of github.com:DataDog/system-t…
ZStriker19 Nov 14, 2024
3eba0af
Update docs/edit/features.md
ZStriker19 Nov 25, 2024
26d6a74
Update docs/scenarios/parametric.md
ZStriker19 Nov 25, 2024
605deaf
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
6175f7d
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
2feab45
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
3682261
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
1d113cb
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
ec5140a
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
edf81a1
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
fcd3d5a
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
f59bf65
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
c250c5d
Update docs/scenarios/parametric.md
ZStriker19 Nov 25, 2024
85b6996
Update docs/scenarios/parametric.md
ZStriker19 Nov 25, 2024
a2ac809
Update docs/scenarios/parametric.md
ZStriker19 Nov 25, 2024
f207cbd
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
d1d1fc0
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 25, 2024
bae326b
more linking and clarifying
ZStriker19 Nov 26, 2024
6096f42
add to use cases section
ZStriker19 Nov 26, 2024
52d9525
Update docs/scenarios/parametric.md
ZStriker19 Nov 26, 2024
b3419d3
fix merge conflict
ZStriker19 Nov 26, 2024
53aa1e9
add contributing doc link
ZStriker19 Nov 26, 2024
aa30368
Update docs/edit/README.md
ZStriker19 Nov 26, 2024
2500cc3
Update docs/scenarios/parametric.md
ZStriker19 Nov 26, 2024
7a70f4a
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 26, 2024
959f962
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 26, 2024
50116bc
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 26, 2024
eb58191
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 26, 2024
7d1a30a
Apply suggestions from code review
ZStriker19 Nov 26, 2024
5242acc
Merge branch 'main' into zachg/docs_contributing
ZStriker19 Nov 26, 2024
352718d
Update docs/edit/add-new-test.md
ZStriker19 Nov 26, 2024
5fac381
remove repeating info in features.md
ZStriker19 Nov 26, 2024
afd6eff
Merge branch 'zachg/docs_contributing' of github.com:DataDog/system-t…
ZStriker19 Nov 26, 2024
5bb0bc3
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 26, 2024
e1525b2
Apply suggestions from code review
ZStriker19 Nov 26, 2024
2a0f6cd
Apply suggestions from code review
ZStriker19 Nov 26, 2024
e8e06b7
Merge branch 'main' into zachg/docs_contributing
ZStriker19 Nov 26, 2024
a5f9c6a
add link to contributing doc in main readme.md
ZStriker19 Nov 27, 2024
f9cc7b5
Update docs/scenarios/parametric_contributing.md
ZStriker19 Nov 27, 2024
7fd96ed
Update docs/edit/features.md
ZStriker19 Dec 2, 2024
ed9a72c
Update docs/edit/features.md
ZStriker19 Dec 2, 2024
9315e67
Update docs/scenarios/parametric.md
ZStriker19 Dec 2, 2024
cc40375
Update docs/edit/README.md
ZStriker19 Dec 2, 2024
6c0335b
remove todo for link
ZStriker19 Dec 2, 2024
750fe09
Merge branch 'zachg/docs_contributing' of github.com:DataDog/system-t…
ZStriker19 Dec 2, 2024
605f18f
Merge branch 'main' into zachg/docs_contributing
ZStriker19 Dec 2, 2024
docs/edit/README.md (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
-## Run the test loccally
+## Run the test locally

Please have a look at the [weblog](../execute/)

docs/edit/features.md (17 changes: 16 additions & 1 deletion)
@@ -1,4 +1,4 @@
System tests are feature-oriented. It means that "features" drives how tests are organized.
System tests are feature-oriented. It means that "features" drive how tests are organized.

Let's take an example with a new `Awesome feature`, part of the meta feature `stuffs`: we add a new file called `tests/test_stuffs.py` with a test class, some boilerplate code, and a basic test:

@@ -15,6 +15,21 @@ class Test_AwesomeFeature:

Several key points:

* Each new feature should be defined in [_features.py](/utils/_features.py). This consists of adding the feature in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/) to get its feature ID, then copying one of the already-added features and changing the name, the feature ID in the URL, and the feature number. In this case we'd add:

```python
@staticmethod
def awesome_feature(test_object):
    """
    Awesome Feature for Awesomeness

    https://feature-parity.us1.prod.dog/#/?feature=291
    """
    pytest.mark.features(feature_id=291)(test_object)
    return test_object
```

* One class tests one feature
* One class can have several tests
* The link to the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/) is declared with the `@features` decorator, as sketched below
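
A minimal sketch of applying the decorator to a test class (the import path for `features` is an assumption here; copy it from an existing test file):

```python
from utils import features  # import path assumed; match existing test files


@features.awesome_feature
class Test_AwesomeFeature:
    def test_basic(self):
        assert True
```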
docs/scenarios/parametric.md (36 changes: 32 additions & 4 deletions)
@@ -50,8 +50,8 @@ def test_datadog_spans(library_env, test_library, test_agent):
```

- This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`.
-- The test case creates a new span and sets a tag on it using the shared GRPC/HTTP interface.
-- The implementations of the GRPC/HTTP interface, by language, are in `utils/build/docker/<lang>/parametric`.
+- The test case creates a new span and sets a tag on it using the shared HTTP interface.
+- The implementations of the HTTP interface, by language, are in `utils/build/docker/<lang>/parametric`. See [HTTP Server Implementations](#http-server-implementations) for the exact location per language.
- Data is flushed to the test agent after the `with test_library` block closes.
- Data is retrieved using the `test_agent` fixture and asserted on, as in the sketch below.
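
Putting these pieces together, a test of this shape might look like the following minimal sketch (the `test_library` span-creation method names are illustrative assumptions; copy the exact names from existing tests in `tests/parametric`):

```python
import pytest


@pytest.mark.parametrize("library_env", [{"DD_ENV": "prod"}, {"DD_ENV": "staging"}])
def test_env_tag(library_env, test_library, test_agent):
    # Create a span via the shared HTTP interface; it is flushed to the
    # test agent when the `with test_library` block closes.
    with test_library:
        with test_library.start_span("web.request") as span:  # method name assumed
            span.set_meta("component", "example")  # method name assumed

    # Retrieve the flushed data from the test agent and assert on it.
    traces = test_agent.wait_for_num_traces(1)
    assert len(traces) == 1
```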

@@ -93,7 +93,7 @@ TEST_LIBRARY=dotnet ./run.sh PARAMETRIC -k test_metrics_
Tests can be aborted using CTRL-C, but note that containers may still be running and will have to be shut down.

### Running the tests for a custom tracer
-To run tests against custom tracers, refer to the [Binaries Documentation](../execute/binaries.md)
+To run tests against custom tracer builds, refer to the [Binaries Documentation](../execute/binaries.md)

#### After Testing with a Custom Tracer:
Note: Most of the ways to run system-tests with a custom tracer version involve modifying the binaries directory. Modifying the binaries will alter the tracer version used across your local machine. Once you're done testing with the custom tracer, ensure you **remove** it. For example, for Python:
@@ -199,19 +199,22 @@ See the steps below in the HTTP section to run the Python server and view the sp
### Shared Interface

To view the available HTTP endpoints, follow these steps:
Note: These are based on the Python tracer's HTTP server, which should be treated as the standard example interface across implementations.


1. `./utils/scripts/parametric/run_reference_http.sh`
2. Navigate to http://localhost:8000/docs in your web browser to access the documentation.
3. You can download the OpenAPI schema from http://localhost:8000/openapi.json, as in the sketch below. This schema can be imported into tools like [Postman](https://learning.postman.com/docs/integrations/available-integrations/working-with-openAPI/) or other API clients to facilitate development and testing.
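
A minimal sketch of inspecting the schema programmatically, using only the URL above (run it while the reference server is up):

```python
import json
import urllib.request

# Download the OpenAPI schema from the running reference server.
with urllib.request.urlopen("http://localhost:8000/openapi.json") as resp:
    schema = json.load(resp)

# List the shared endpoint routes.
for path in sorted(schema["paths"]):
    print(path)
```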


### Architecture: How System-tests work

Below is an overview of how the testing architecture is structured:

- Shared Tests in Python: We write shared test cases using Python's pytest framework. These tests are designed to be generic and interact with the tracers through an HTTP interface.
- HTTP Servers in Docker: For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic.
- [Test Agent](https://github.com/DataDog/dd-apm-test-agent/) in Docker: We start a test agent in a separate Docker container. This agent collects data (such as spans and traces) submitted by the HTTP servers. It serves as a centralized point for aggregating and accessing test data.
-- Test Execution: The Python test cases use an HTTP client to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (usually traces) and perform assertions to verify correct behavior.
+- Test Execution: The Python test cases use an [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (often traces) and perform assertions to verify correct behavior.

An example of how to get a span from the test agent:
```python
span = find_only_span(test_agent.wait_for_num_traces(1))
```

This architecture allows us to ensure that all tracers conform to the same interface and behavior, making it easier to maintain consistency across different languages and implementations.

#### HTTP Server Implementations

The HTTP server implementations for each tracer can be found at the following locations:
- [Python](/utils/build/docker/python/parametric/apm_test_client/server.py)
- [Ruby](/utils/build/docker/ruby/parametric/server.rb)
- [PHP](/utils/build/docker/php/parametric/server.php)
- [Node.js](/utils/build/docker/nodejs/parametric/server.js)
- [Java Datadog](/utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java)
- [Java OTel](/utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java)
- [.NET Datadog](/utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs)
- [.NET OTel](/utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs)
- [Go Datadog](/utils/build/docker/golang/parametric/main.go)
- [Go OTel](/utils/build/docker/golang/parametric/otel.go)


![image](https://github.com/user-attachments/assets/fc144fc1-95aa-4d50-97c5-cda8fdbcefef)

<img width="869" alt="image" src="https://user-images.githubusercontent.com/6321485/182887064-e241d65c-5e29-451b-a8a8-e8d18328c083.png">
docs/scenarios/parametric_contributing.md (62 changes: 62 additions & 0 deletions)
@@ -0,0 +1,62 @@
# Contributing to Parametric System-tests

Note: a more in-depth overview of parametric system-tests can be found in [parametric.md](parametric.md).

**MUST:** Acquaint yourself with [how system tests work](parametric.md#architecture-how-system-tests-work) before proceeding.

## Use cases

Let's figure out if your feature is a good candidate to be tested with parametric system-tests.

Parametric system-tests are great for ensuring uniform behavior across tracers, e.g. [environment variable configuration effects on API methods, sampling, propagation, configuration, and telemetry](/tests/parametric).

Parametric tests make requests to [HTTP endpoints](/tests/parametric) dedicated to various tracer methods for creating and modifying spans (manual instrumentation). If you want to test automatic instrumentation behavior, weblog system-tests may be a better fit.

System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case.

## Getting set up

We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library.

To begin, we need to point system-tests toward a tracer that has the feature implemented (published or on a branch).
Follow the [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up.

[Try running the tests for your tracer language](parametric.md#running-the-tests) and make sure some pass (no need to run the whole suite; you can stop the tests with `ctrl+c`). If you have an issue, check out the [debugging section](parametric.md#debugging) to troubleshoot.

## Writing the tests

Now that we're all set up with a working test suite and a tracer that implements the feature, we can begin writing the new tests.

First, take a look at the [existing tests](/tests/parametric) and see whether what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true).

For a list of methods that already exist, refer to `class APMLibrary` in [_library_client.py](/utils/parametric/_library_client.py). If you're wondering what the methods do, you can take a look at the respective endpoints they call in that same file, in `class APMLibraryClient`. The pairing is sketched below.
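
A hedged sketch of that pairing (the class names are real, but the method, endpoint path, and payload here are illustrative rather than copied from the file):

```python
import requests


class APMLibraryClient:
    # Low-level client: each method maps to one HTTP endpoint on the tracer's server.
    def __init__(self, base_url: str):
        self._base_url = base_url
        self._session = requests.Session()

    def span_set_resource(self, span_id: int, resource: str) -> None:
        # Illustrative endpoint; the real paths live in _library_client.py.
        self._session.post(
            self._base_url + "/trace/span/set_resource",
            json={"span_id": span_id, "resource": resource},
        )


class APMLibrary:
    # High-level wrapper that tests call; it delegates to the HTTP client.
    def __init__(self, client: APMLibraryClient):
        self._client = client

    def span_set_resource(self, span_id: int, resource: str) -> None:
        self._client.span_set_resource(span_id, resource)
```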

The endpoints (where the actual tracer code runs) are defined in the per-tracer HTTP server implementations [listed here](parametric.md#http-server-implementations). Click the one for your language to take a look at its endpoints. In some cases you may only need to slightly modify an existing endpoint rather than add a new one.

### If you need to add additional endpoints to test your new feature

*Note:* please refer to the [architecture section](parametric.md#architecture-how-system-tests-work) if you get confused at any point in this process.

We need to do the following:

* Determine what you want the endpoint to be called and what you need it to do, and add it to your tracer's HTTP server (see the sketch after this list).

*Note:* If adding a new endpoint, please let a Python implementer know so they can add it as well (see the [shared interface](parametric.md#shared-interface) section).
* In [_library_client.py](/utils/parametric/_library_client.py), add both the endpoint call in `class APMLibraryClient` and the method that invokes it in `class APMLibrary`. Use the other implementations for reference.
* We now have our new method! Use it in the tests you write, following the [section below](#if-the-methods-you-need-to-run-your-tests-are-already-written).
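
As a concrete, hypothetical example, adding a `set_resource` endpoint to the Python reference server might look like the sketch below. It assumes the server uses FastAPI (which the `/docs` and `/openapi.json` endpoints above suggest); the route, request model, and span registry are illustrative, not copied from the real [server.py](/utils/build/docker/python/parametric/apm_test_client/server.py):

```python
from typing import Any, Dict

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
spans: Dict[int, Any] = {}  # registry of live spans keyed by span id (assumed)


class SpanSetResourceArgs(BaseModel):
    span_id: int
    resource: str


@app.post("/trace/span/set_resource")
def span_set_resource(args: SpanSetResourceArgs) -> dict:
    # Look up the live span and exercise the tracer API under test.
    span = spans[args.span_id]
    span.resource = args.resource  # hypothetical tracer call
    return {}
```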

### If the methods you need to run your tests are already written

Make a new test file in `tests/parametric`, copying in the testing code you want to use as a base/guideline (usually the class and one of the test methods in it).

Then:

* [Change the feature annotation to the feature your test fits under on the feature parity board](/docs/edit/features.md) (not always needed, e.g. `@features.datadog_headers_propagation` is used for all the propagation features).
* Change the class and method name to fit what you're testing.
* [Change your tracer's respective manifest.yml file](/docs/edit/manifest.md), or else the test runner won't know to run your new test. If you're unsure how to do this properly, search the manifest for the file you copied the test from and see how it's specified; you can probably copy that entry for your new file (make sure the path is the same). For the version value, specify the current release your tracer is on so your test runs: this is the minimum tracer version the runner will run your test against, and if you set it too high your test will be skipped.
* Write the test, pulling from examples of other tests already written. Remember you're almost always following the pattern of creating spans, getting them from the test agent, and then verifying values on them.

**Finally:**
[Try running your test!](parametric.md#running-the-tests)
If you have an issue, check out the [debugging section](parametric.md#debugging) to troubleshoot.