From 79714dc0e7c3b920ecc559da3c9be5fdeccd904e Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Fri, 25 Oct 2024 15:40:18 -0400 Subject: [PATCH 01/44] add locations of all tracer servers and also the ht tp client in parametric.md --- docs/scenarios/parametric.md | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index e9691f4f06..0c76d88ce3 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -93,7 +93,7 @@ TEST_LIBRARY=dotnet ./run.sh PARAMETRIC -k test_metrics_ Tests can be aborted using CTRL-C but note that containers maybe still be running and will have to be shut down. ### Running the tests for a custom tracer -To run tests against custom tracers, refer to the [Binaries Documentation](../execute/binaries.md) +To run tests against custom tracer builds, refer to the [Binaries Documentation](../execute/binaries.md) #### After Testing with a Custom Tracer: Note: Most of the ways to run system-tests with a custom tracer version involve modifying the binaries directory. Modifying the binaries will alter the tracer version used across your local computer. Once you're done testing with the custom tracer, ensure you **remove** it. For example for Python: @@ -254,10 +254,11 @@ Then you should have updated proto files. This script will generate weird files, ### Architecture: How System-tests work Below is an overview of how the testing architecture is structured: + - Shared Tests in Python: We write shared test cases using Python's pytest framework. These tests are designed to be generic and interact with the tracers through an HTTP interface. - HTTP Servers in Docker: For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic. - [Test Agent](https://github.com/DataDog/dd-apm-test-agent/) in Docker: We start a test agent in a separate Docker container. This agent collects data (such as spans and traces) submitted by the HTTP servers. It serves as a centralized point for aggregating and accessing test data. -- Test Execution: The Python test cases use an HTTP client to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (usually traces) and perform assertions to verify correct behavior. +- Test Execution: The Python test cases use an [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (usually traces) and perform assertions to verify correct behavior. An example of how to get a span from the test agent: ```python @@ -266,6 +267,23 @@ span = find_only_span(test_agent.wait_for_num_traces(1)) This architecture allows us to ensure that all tracers conform to the same interface and behavior, making it easier to maintain consistency across different languages and implementations. 
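For orientation, here is a minimal sketch of what a complete parametric test can look like, combining the pieces described above. The fixtures (`library_env`, `test_library`, `test_agent`) and the helpers (`dd_start_span`, `set_meta`, `wait_for_num_traces`, `find_only_span`) are the ones referenced elsewhere in this document; the span name, service, keyword arguments, tag key and value, and the `DD_ENV` value are illustrative only, and the import for `find_only_span` (omitted here) should be copied from an existing test under `tests/parametric`.

```python
import pytest


@pytest.mark.parametrize("library_env", [{"DD_ENV": "system-tests"}])
def test_span_has_expected_metadata(library_env, test_library, test_agent):
    # Drive the tracer through its HTTP server; the span is flushed to the
    # test agent when the `with` block exits.
    with test_library.dd_start_span(name="web.request", service="my-service") as span:
        span.set_meta("example.tag", "example.value")

    # Query the test agent for the single expected trace and assert on it.
    # `find_only_span` is the helper shown in the snippet above; import it the
    # same way the existing tests under tests/parametric do.
    span_data = find_only_span(test_agent.wait_for_num_traces(1))
    assert span_data["name"] == "web.request"
    assert span_data["meta"]["example.tag"] == "example.value"
```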
+#### Http Server Implementations + +The http server implementations for each tracer can be found at the following locations: + +[Python](/utils/build/docker/python/parametric/apm_test_client/server.py) +[Ruby](utils/build/docker/ruby/parametric/server.rb) +[Php](utils/build/docker/php/parametric/server.php) +[Nodejs](utils/build/docker/nodejs/parametric/server.js) +[Java Datadog](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java) +[Java Otel](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java) +[Dotnet Datadog](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs) +[Dotnet Otel](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs) +[Go Datadog](utils/build/docker/golang/parametric/main.go) +[Go Otel](utils/build/docker/golang/parametric/otel.go) + + + image [1]: https://github.com/DataDog/dd-trace-cpp From 5333bff9cfc08a5346597c299b850c9b38fddb1f Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Fri, 25 Oct 2024 15:41:31 -0400 Subject: [PATCH 02/44] starting contributing doc --- docs/scenarios/parametric_contributing.md | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) create mode 100644 docs/scenarios/parametric_contributing.md diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md new file mode 100644 index 0000000000..87dbfdb86d --- /dev/null +++ b/docs/scenarios/parametric_contributing.md @@ -0,0 +1,17 @@ +# Contributing to Parametric System-tests + +Note: a more in-depth overview of parametric system-tests can be found in [parametric.md](parametric.md). + +## Use cases + +Let's figure out if your feature is a good candidate to be tested with parametric system-tests. Parametric system-tests are great for assuring uniform behavior between tracers e.g. environment variable configuration effects, sampling, propagation. + +Parametric system-tests are horrible for testing internal tracer behavior or testing niche tracer behavior. Tests for those should exist on the tracer repos since they're only applicable for that specific tracer. + +## How to write some parametric tests + +Usually the system-tests writer is writing for a new feature, potentially one that hasn't been completed across all tracers yet. Therefore they'll want to focus on writing and getting the tests to pass for their tracer first. + +To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular language to set this up. 
+ +Now that we can test against a tracer that has the feature let's \ No newline at end of file From df56c38704090a649fb10345f2c4ec913dd1790b Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Thu, 14 Nov 2024 17:54:08 +0000 Subject: [PATCH 03/44] rough draft of finished parametric contributing, also nits and added server links for parametric.md --- docs/edit/README.md | 2 +- docs/edit/features.md | 17 ++++++- docs/scenarios/parametric.md | 13 +++++- docs/scenarios/parametric_contributing.md | 54 ++++++++++++++++++++--- 4 files changed, 76 insertions(+), 10 deletions(-) diff --git a/docs/edit/README.md b/docs/edit/README.md index 98215573b2..f4f3a5d7a9 100644 --- a/docs/edit/README.md +++ b/docs/edit/README.md @@ -1,4 +1,4 @@ -## Run the test loccally +## Run the test locally Please have a look on the [weblog](../execute/) diff --git a/docs/edit/features.md b/docs/edit/features.md index 8dd696cc9e..afa2ce4cf6 100644 --- a/docs/edit/features.md +++ b/docs/edit/features.md @@ -1,4 +1,4 @@ -System tests are feature-oriented. It means that "features" drives how tests are organized. +System tests are feature-oriented. It means that "features" drive how tests are organized. Let's take an example with a new `Awesome feature`, part of meta feature `stuffs`, so we add a new file called `tests/test_stuffs.py` and add a test class with some boilerplate code, and a basic test: @@ -15,6 +15,21 @@ class Test_AwesomeFeature: Several key points: +* Each new feature should be defined in [_features.py](/utils/_features.py). In most cases this consists of copying one of the already added features, changing the name, bumping the number in the url, and bumping the feature number. In this case we'd add + +```python + + @staticmethod + def awesome_feature(test_object): + """ + Awesome Feature for Awesomeness + + https://feature-parity.us1.prod.dog/#/?feature=291 + """ + pytest.mark.features(feature_id=291)(test_object) + return test_object +``` + * One class test one feature * One class can have several tests * Feature link to the [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/) is declared with `@features` decorators diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index 3592e8402b..d31001c2fa 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -50,8 +50,8 @@ def test_datadog_spans(library_env, test_library, test_agent): ``` - This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`. -- The test case creates a new span and sets a tag on it using the shared GRPC/HTTP interface. -- The implementations of the GRPC/HTTP interface, by language, are in `utils/build/docker//parametric`. +- The test case creates a new span and sets a tag on it using the shared HTTP interface. +- The implementations of the HTTP interface, by language, are in `utils/build/docker//parametric`. See here for exact locations per langugage: [Http Server Implementations](#http-server-implementations) section for more details. - Data is flushed to the test agent after the with test_library block closes. - Data is retrieved using the `test_agent` fixture and asserted on. 
@@ -225,14 +225,23 @@ This architecture allows us to ensure that all tracers conform to the same inter The http server implementations for each tracer can be found at the following locations: [Python](/utils/build/docker/python/parametric/apm_test_client/server.py) + [Ruby](utils/build/docker/ruby/parametric/server.rb) + [Php](utils/build/docker/php/parametric/server.php) + [Nodejs](utils/build/docker/nodejs/parametric/server.js) + [Java Datadog](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java) + [Java Otel](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java) + [Dotnet Datadog](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs) + [Dotnet Otel](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs) + [Go Datadog](utils/build/docker/golang/parametric/main.go) + [Go Otel](utils/build/docker/golang/parametric/otel.go) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 87dbfdb86d..47733a8941 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -2,16 +2,58 @@ Note: a more in-depth overview of parametric system-tests can be found in [parametric.md](parametric.md). +**MUST:** Acquaint yourself with [this section](parametric.md#architecture-how-system-tests-work) for reference so you understand/can track what system-tests are actually doing. + ## Use cases -Let's figure out if your feature is a good candidate to be tested with parametric system-tests. Parametric system-tests are great for assuring uniform behavior between tracers e.g. environment variable configuration effects, sampling, propagation. +Let's figure out if your feature is a good candidate to be tested with parametric system-tests. Parametric system-tests are great for assuring uniform behavior between tracers e.g. [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). + +The parametric tests rely on the hitting of [http endpoints](/tests/parametric) that run tracer methods to produce and modify spans (manual instrumentation). If you'd like to test behavior across automatic instrumentations of tracers then you should assess if weblog system-tests may be a better fit. + +Parametric system-tests are horrible for testing internal or niche tracer behavior. Tests for those should exist on the tracer repos since they're only applicable for that specific tracer. + +## Getting setup + +Usually the one writing the system-tests is writing for a new feature, potentially one that hasn't been completed across all tracers yet. Therefore they'll want to focus on writing and getting the tests to pass for their tracer implementation first. + +To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). +Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. + +[Try running the tests for your tracer language](parametric.md#running-the-tests) and make sure some pass (no need to run the whole suite, you can stop the tests from running with `ctrl+c`). If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. + +## Writing the tests + +Now that we're all setup with a working test suite and a tracer with the implemented feature, we can begin writing the new tests. 
+ +First take a look at the [currently existing tests](/tests/parametric) and see if what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true). + +For all of the exact methods already implemented you can take a look at `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering exactly what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. + +The endpoints (where the actual tracer code runs) are defined in the Http Server implementations per tracer [listed here](parametric.md#http-server-implementations). Click on the one for your language to take a look at the endpoints. In some cases you may need to just slightly modify an endpoint rather than add a new one. + +### If you need to add additional endpoints to test your new feature + +Note please refer to the [architecture section](parametric.md#architecture-how-system-tests-work) if you're confused throughout this process. + +Then we need to do the following: + +* Determine what you want the endpoint to be called and what you need it to do, and add it to your tracer's http server. +* In [_library_client.py](/utils/parametric/_library_client.py) Add both the endpoint call in `class APMLibraryClient` and the method that invokes it in `class APMLibrary`. Use other implementations for reference. +* Ok we now have our new method! Use it in the tests you write using the [below section](#if-the-methods-you-need-to-run-your-tests-are-already-written) + +### If the methods you need to run your tests are already written + +Awesome, make a new test file in `tests/parametric`, copying in the testing code you want to use as a base/guideline (usually the class and and one of the test methods in it). -Parametric system-tests are horrible for testing internal tracer behavior or testing niche tracer behavior. Tests for those should exist on the tracer repos since they're only applicable for that specific tracer. +Then: -## How to write some parametric tests +* [Change the name of the feature annotation it'll fit under for the feature parity board](/docs/edit/features.md) (Not always needed e.g. `@features.datadog_headers_propagation` is used for all the propagation features) +* Change the class and method name to fit what you're testing. +* [Change your tracer's respective manifest.yml file](/docs/edit/manifest.md) or else the script won't know to run your new test. If you're confused at how to do this properly, search for the file you copied the test from in the manifest file and see how it's specified, you can probably copy that for your new file (make sure the path is the same). +For the version value, to make sure your test runs, specify the current release your tracer is on. This is the minimum value that the script will run your test with. If you make it too high, the script will skip your test. -Usually the system-tests writer is writing for a new feature, potentially one that hasn't been completed across all tracers yet. Therefore they'll want to focus on writing and getting the tests to pass for their tracer first. -To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular language to set this up. 
-Now that we can test against a tracer that has the feature let's \ No newline at end of file +**Finally:** +[Try running your test!](parametric.md#running-the-tests) +If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. From ba7101307d1cc8ceb3db40b22fe931b10e41045b Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Thu, 14 Nov 2024 18:00:36 +0000 Subject: [PATCH 04/44] Update docs/scenarios/parametric.md --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index d31001c2fa..506fe103e4 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -51,7 +51,7 @@ def test_datadog_spans(library_env, test_library, test_agent): - This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`. - The test case creates a new span and sets a tag on it using the shared HTTP interface. -- The implementations of the HTTP interface, by language, are in `utils/build/docker//parametric`. See here for exact locations per langugage: [Http Server Implementations](#http-server-implementations) section for more details. +- The implementations of the HTTP interface, by language, are in `utils/build/docker//parametric`. See here for exact locations per langugage: [Http Server Implementations](#http-server-implementations) for more details. - Data is flushed to the test agent after the with test_library block closes. - Data is retrieved using the `test_agent` fixture and asserted on. From a745414f7cd51106dd76b40c14a52d7a4d892d0f Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Thu, 14 Nov 2024 18:15:39 +0000 Subject: [PATCH 05/44] polish --- docs/scenarios/parametric.md | 3 +++ docs/scenarios/parametric_contributing.md | 9 +++++---- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index d31001c2fa..854d6f5535 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -199,11 +199,14 @@ See the steps below in the HTTP section to run the Python server and view the sp ### Shared Interface To view the available HTTP endpoints , follow these steps: +Note: These are based off of the Python tracer's http server which should be held as the standard example interface across implementations. + 1. `./utils/scripts/parametric/run_reference_http.sh` 2. Navigate to http://localhost:8000/docs in your web browser to access the documentation. 3. You can download the OpenAPI schema from http://localhost:8000/openapi.json. This schema can be imported into tools like [Postman](https://learning.postman.com/docs/integrations/available-integrations/working-with-openAPI/) or other API clients to facilitate development and testing. 
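For a feel of what sits behind those endpoints, below is a schematic, FastAPI-style sketch of a single handler (the reference Python server publishes OpenAPI docs, which is where the `/docs` and `/openapi.json` URLs above come from). This is not copied from the real `server.py` linked elsewhere in this document; the route, request and response fields, and the handler body are illustrative only, and a real implementation would drive the tracer under test instead of fabricating IDs.

```python
from itertools import count
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
_next_id = count(1)


class StartSpanArgs(BaseModel):
    name: str
    service: Optional[str] = None


class StartSpanReturn(BaseModel):
    span_id: int
    trace_id: int


@app.post("/trace/span/start")
def trace_span_start(args: StartSpanArgs) -> StartSpanReturn:
    # A real server would call the language's tracer here and keep the span
    # around so later endpoints (set_meta, finish, flush, ...) can find it.
    # Here we only fabricate IDs to keep the sketch self-contained.
    span_id = next(_next_id)
    return StartSpanReturn(span_id=span_id, trace_id=span_id)
```

Keeping each handler this thin is what makes the shared tests meaningful: the only thing that varies between languages is the tracer call inside the handler.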
+ ### Architecture: How System-tests work Below is an overview of how the testing architecture is structured: diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 47733a8941..5579deb2ff 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -33,17 +33,19 @@ The endpoints (where the actual tracer code runs) are defined in the Http Server ### If you need to add additional endpoints to test your new feature -Note please refer to the [architecture section](parametric.md#architecture-how-system-tests-work) if you're confused throughout this process. +*Note:* please refer to the [architecture section](parametric.md#architecture-how-system-tests-work) if you're confused throughout this process. Then we need to do the following: * Determine what you want the endpoint to be called and what you need it to do, and add it to your tracer's http server. + +*Note:* If adding a new endpoint please let a Python implementer know so they can add it as well [see](parametric.md#shared-interface) * In [_library_client.py](/utils/parametric/_library_client.py) Add both the endpoint call in `class APMLibraryClient` and the method that invokes it in `class APMLibrary`. Use other implementations for reference. * Ok we now have our new method! Use it in the tests you write using the [below section](#if-the-methods-you-need-to-run-your-tests-are-already-written) ### If the methods you need to run your tests are already written -Awesome, make a new test file in `tests/parametric`, copying in the testing code you want to use as a base/guideline (usually the class and and one of the test methods in it). +Make a new test file in `tests/parametric`, copying in the testing code you want to use as a base/guideline (usually the class and and one of the test methods in it). Then: @@ -51,8 +53,7 @@ Then: * Change the class and method name to fit what you're testing. * [Change your tracer's respective manifest.yml file](/docs/edit/manifest.md) or else the script won't know to run your new test. If you're confused at how to do this properly, search for the file you copied the test from in the manifest file and see how it's specified, you can probably copy that for your new file (make sure the path is the same). For the version value, to make sure your test runs, specify the current release your tracer is on. This is the minimum value that the script will run your test with. If you make it too high, the script will skip your test. - - +* Write the test pulling from examples of other tests written. Remember you're almost always follwing the pattern of making spans, getting them from the trace_agent, and then verifying values on them. **Finally:** [Try running your test!](parametric.md#running-the-tests) From 3eba0af37c054e5467723e0f4635d8028fbbd7f2 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:06:31 -0500 Subject: [PATCH 06/44] Update docs/edit/features.md Co-authored-by: Charles de Beauchesne --- docs/edit/features.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/features.md b/docs/edit/features.md index afa2ce4cf6..9005b78f5d 100644 --- a/docs/edit/features.md +++ b/docs/edit/features.md @@ -15,7 +15,7 @@ class Test_AwesomeFeature: Several key points: -* Each new feature should be defined in [_features.py](/utils/_features.py). 
In most cases this consists of copying one of the already added features, changing the name, bumping the number in the url, and bumping the feature number. In this case we'd add +* Each new feature should be defined in [_features.py](/utils/_features.py). This consists of adding a feature in [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/), get the feature id and copying one of the already added features, changing the name and the feature id in the url, and the feature number. In this case we'd add ```python From 26d6a746deab3490dc17486c699245df5e41d248 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:10:48 -0500 Subject: [PATCH 07/44] Update docs/scenarios/parametric.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index 3dc3ba3027..05ad8137ef 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -214,7 +214,7 @@ Below is an overview of how the testing architecture is structured: - Shared Tests in Python: We write shared test cases using Python's pytest framework. These tests are designed to be generic and interact with the tracers through an HTTP interface. - HTTP Servers in Docker: For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic. - [Test Agent](https://github.com/DataDog/dd-apm-test-agent/) in Docker: We start a test agent in a separate Docker container. This agent collects data (such as spans and traces) submitted by the HTTP servers. It serves as a centralized point for aggregating and accessing test data. -- Test Execution: The Python test cases use an [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (usually traces) and perform assertions to verify correct behavior. +- Test Execution: The Python test cases use a [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (often traces) and perform assertions to verify correct behavior. An example of how to get a span from the test agent: ```python From 605deaf003b2db1d2def060096696460f1a10220 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:11:11 -0500 Subject: [PATCH 08/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 5579deb2ff..f5a0b909ee 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -2,7 +2,7 @@ Note: a more in-depth overview of parametric system-tests can be found in [parametric.md](parametric.md). 
-**MUST:** Acquaint yourself with [this section](parametric.md#architecture-how-system-tests-work) for reference so you understand/can track what system-tests are actually doing. +**MUST:** Acquaint yourself with [how system tests work](parametric.md#architecture-how-system-tests-work) before proceeding. ## Use cases From 6175f7d2792120327eaf3f0db75597cdc4fe5cef Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:11:29 -0500 Subject: [PATCH 09/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index f5a0b909ee..684b678832 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -6,7 +6,9 @@ Note: a more in-depth overview of parametric system-tests can be found in [param ## Use cases -Let's figure out if your feature is a good candidate to be tested with parametric system-tests. Parametric system-tests are great for assuring uniform behavior between tracers e.g. [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). +Let's figure out if your feature is a good candidate to be tested with parametric system-tests. + +Parametric system-tests are great for assuring uniform behavior between tracers e.g. [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). The parametric tests rely on the hitting of [http endpoints](/tests/parametric) that run tracer methods to produce and modify spans (manual instrumentation). If you'd like to test behavior across automatic instrumentations of tracers then you should assess if weblog system-tests may be a better fit. From 2feab45097707b4195788ea37e213608ec165961 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:13:01 -0500 Subject: [PATCH 10/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 684b678832..a86b96d177 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -29,7 +29,7 @@ Now that we're all setup with a working test suite and a tracer with the impleme First take a look at the [currently existing tests](/tests/parametric) and see if what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true). -For all of the exact methods already implemented you can take a look at `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering exactly what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. +For a list of methods that already exist, refer to `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). 
If you're wondering what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. The endpoints (where the actual tracer code runs) are defined in the Http Server implementations per tracer [listed here](parametric.md#http-server-implementations). Click on the one for your language to take a look at the endpoints. In some cases you may need to just slightly modify an endpoint rather than add a new one. From 36822615242f581b7eeb8b17e8de60e288ed4201 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:17:55 -0500 Subject: [PATCH 11/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index a86b96d177..ab7cd35244 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -14,7 +14,7 @@ The parametric tests rely on the hitting of [http endpoints](/tests/parametric) Parametric system-tests are horrible for testing internal or niche tracer behavior. Tests for those should exist on the tracer repos since they're only applicable for that specific tracer. -## Getting setup +## Getting set up Usually the one writing the system-tests is writing for a new feature, potentially one that hasn't been completed across all tracers yet. Therefore they'll want to focus on writing and getting the tests to pass for their tracer implementation first. From 1d113cb1a4fa14bbf6f038c7feda824b89ea7824 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:18:28 -0500 Subject: [PATCH 12/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index ab7cd35244..f041f0bf71 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -12,7 +12,7 @@ Parametric system-tests are great for assuring uniform behavior between tracers The parametric tests rely on the hitting of [http endpoints](/tests/parametric) that run tracer methods to produce and modify spans (manual instrumentation). If you'd like to test behavior across automatic instrumentations of tracers then you should assess if weblog system-tests may be a better fit. -Parametric system-tests are horrible for testing internal or niche tracer behavior. Tests for those should exist on the tracer repos since they're only applicable for that specific tracer. +System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case. 
## Getting set up From ec5140ab9b462728453a79c9829435366989ec55 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:19:30 -0500 Subject: [PATCH 13/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index f041f0bf71..22c0aa8a4d 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -10,7 +10,7 @@ Let's figure out if your feature is a good candidate to be tested with parametri Parametric system-tests are great for assuring uniform behavior between tracers e.g. [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). -The parametric tests rely on the hitting of [http endpoints](/tests/parametric) that run tracer methods to produce and modify spans (manual instrumentation). If you'd like to test behavior across automatic instrumentations of tracers then you should assess if weblog system-tests may be a better fit. +Parametric tests make requests to [http endpoints](/tests/parametric) dedicated to various tracer methods for creating and modifying spans (manual instrumentation). If you want to test automatic instrumentation behavior, weblog system-tests may be a better fit. System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case. From edf81a1e2d62bc680bbdec1257ef70c722d1be04 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:21:11 -0500 Subject: [PATCH 14/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 22c0aa8a4d..a7c3fa2565 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -16,7 +16,7 @@ System-tests are **not** for testing internal or niche tracer behavior. Unit tes ## Getting set up -Usually the one writing the system-tests is writing for a new feature, potentially one that hasn't been completed across all tracers yet. Therefore they'll want to focus on writing and getting the tests to pass for their tracer implementation first. +We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. 
From fcd3d5ae7a25760ec9f4aee9fb7e299206ccf4b6 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:21:39 -0500 Subject: [PATCH 15/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index a7c3fa2565..828008733a 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -16,7 +16,7 @@ System-tests are **not** for testing internal or niche tracer behavior. Unit tes ## Getting set up -We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library +We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library. To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. From f59bf65cd23bd6037ee96feff4ec15c49b815d37 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:24:24 -0500 Subject: [PATCH 16/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 828008733a..fc41a1b5a5 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -47,7 +47,9 @@ Then we need to do the following: ### If the methods you need to run your tests are already written -Make a new test file in `tests/parametric`, copying in the testing code you want to use as a base/guideline (usually the class and and one of the test methods in it). +If it makes sense to add your tests to a file that already exists, great! Otherwise make a new test file in `tests/parametric`. + +Next copy the testing code you want to use as a base/guideline (usually the class (if using a new file) and one of the test methods in it). Then: From c250c5d8b6cf8b16c09a6dba94812b6374d66d75 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 16:57:27 -0500 Subject: [PATCH 17/44] Update docs/scenarios/parametric.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index 05ad8137ef..dbeef105fe 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -50,8 +50,8 @@ def test_datadog_spans(library_env, test_library, test_agent): ``` - This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`. -- The test case creates a new span and sets a tag on it using the shared HTTP interface. 
-- The implementations of the HTTP interface, by language, are in `utils/build/docker//parametric`. See here for exact locations per langugage: [Http Server Implementations](#http-server-implementations) for more details. +- In`test_library.start_span`, the test case creates a new span using the shared HTTP interface, then inspects the metadata on the resulting span. +- The request is sent to a HTTP server by language. Implementations can be found in `utils/build/docker//parametric`. More information in [Http Server Implementations](#http-server-implementations). - Data is flushed to the test agent after the with test_library block closes. - Data is retrieved using the `test_agent` fixture and asserted on. From 85b699694c8e3bf447977c66fbb1c36e94c2191b Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 17:02:16 -0500 Subject: [PATCH 18/44] Update docs/scenarios/parametric.md --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index dbeef105fe..c9cd5fd2e6 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -50,7 +50,7 @@ def test_datadog_spans(library_env, test_library, test_agent): ``` - This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`. -- In`test_library.start_span`, the test case creates a new span using the shared HTTP interface, then inspects the metadata on the resulting span. +- `test_library.start_span` creates a new span using the shared HTTP interface. - The request is sent to a HTTP server by language. Implementations can be found in `utils/build/docker//parametric`. More information in [Http Server Implementations](#http-server-implementations). - Data is flushed to the test agent after the with test_library block closes. - Data is retrieved using the `test_agent` fixture and asserted on. From a2ac80972ace5b798c0491ede45f18438d717228 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 17:03:27 -0500 Subject: [PATCH 19/44] Update docs/scenarios/parametric.md --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index c9cd5fd2e6..26c4dd2f1e 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -53,7 +53,7 @@ def test_datadog_spans(library_env, test_library, test_agent): - `test_library.start_span` creates a new span using the shared HTTP interface. - The request is sent to a HTTP server by language. Implementations can be found in `utils/build/docker//parametric`. More information in [Http Server Implementations](#http-server-implementations). - Data is flushed to the test agent after the with test_library block closes. -- Data is retrieved using the `test_agent` fixture and asserted on. +- Traces are retrieved using the `test_agent` fixture and we assert that they look the way we'd expect. 
## Usage From f207cbd7155d3b3be4af28661ec7d3f95c5a9fc0 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 17:13:45 -0500 Subject: [PATCH 20/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index fc41a1b5a5..e4045541c7 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -18,7 +18,7 @@ System-tests are **not** for testing internal or niche tracer behavior. Unit tes We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library. -To begin we need to point system-tests towards a tracer that has the feature implemented (published or on a branch). +To begin we need to make sure system-tests run with a tracer that has implemented the feature being tested (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. [Try running the tests for your tracer language](parametric.md#running-the-tests) and make sure some pass (no need to run the whole suite, you can stop the tests from running with `ctrl+c`). If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. From d1d1fc08b3a9f89a5c4b6186fa6f350a8f113fff Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 25 Nov 2024 17:19:47 -0500 Subject: [PATCH 21/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index e4045541c7..e49deced1f 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -21,7 +21,7 @@ We usually add new system tests when validating a new feature. This feature migh To begin we need to make sure system-tests run with a tracer that has implemented the feature being tested (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. -[Try running the tests for your tracer language](parametric.md#running-the-tests) and make sure some pass (no need to run the whole suite, you can stop the tests from running with `ctrl+c`). If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. +[Verify that you can run some parametric tests with your custom tracer](parametric.md#running-the-tests). Make sure some pass (no need to run the whole suite, you can stop the tests from running with `ctrl+c`). If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. 
## Writing the tests From bae326bd593b664f1f9493b1aff480d577b09bf0 Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Mon, 25 Nov 2024 19:01:26 -0500 Subject: [PATCH 22/44] more linking and clarifying --- docs/scenarios/parametric.md | 2 +- docs/scenarios/parametric_contributing.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index 26c4dd2f1e..ecbb7a65c2 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -212,7 +212,7 @@ Note: These are based off of the Python tracer's http server which should be hel Below is an overview of how the testing architecture is structured: - Shared Tests in Python: We write shared test cases using Python's pytest framework. These tests are designed to be generic and interact with the tracers through an HTTP interface. -- HTTP Servers in Docker: For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic. +- [HTTP Servers in Docker](#http-server-implementations): For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic. - [Test Agent](https://github.com/DataDog/dd-apm-test-agent/) in Docker: We start a test agent in a separate Docker container. This agent collects data (such as spans and traces) submitted by the HTTP servers. It serves as a centralized point for aggregating and accessing test data. - Test Execution: The Python test cases use a [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (often traces) and perform assertions to verify correct behavior. diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index e49deced1f..465a3fe00d 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -27,9 +27,9 @@ Follow [Binaries Documentation](../execute/binaries.md) for your particular trac Now that we're all setup with a working test suite and a tracer with the implemented feature, we can begin writing the new tests. -First take a look at the [currently existing tests](/tests/parametric) and see if what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true). +First take a look at the [currently existing tests](/tests/parametric), (available client calls)[], and corresponding [available http server endpoints](parametric.md#http-server-implementations) and see if what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true). -For a list of methods that already exist, refer to `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. +For a list of client methods that already exist, refer to `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. 
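To make the two layers concrete, here is a rough sketch of what a new call tends to look like in `_library_client.py`. Everything below is hypothetical: the method name, route, payload fields, and the `_session`, `_url`, and `_client` attributes are placeholders standing in for whatever the real classes use, so mirror an existing method pair in that file rather than copying this verbatim.

```python
class APMLibraryClient:
    # ...existing methods elided...

    def span_set_baggage(self, span_id: int, key: str, value: str) -> None:
        # Low-level layer: one HTTP request to the endpoint that each
        # tracer's HTTP server implements for this operation.
        self._session.post(
            self._url("/trace/span/set_baggage"),
            json={"span_id": span_id, "key": key, "value": value},
        )


class APMLibrary:
    # ...existing methods elided...

    def span_set_baggage(self, span_id: int, key: str, value: str) -> None:
        # High-level layer that the tests call; usually a thin wrapper that
        # delegates straight to the client method above.
        self._client.span_set_baggage(span_id, key, value)
```

Keeping the two method names identical across both classes makes it easy to trace a test call down to the HTTP endpoint it exercises.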
The endpoints (where the actual tracer code runs) are defined in the Http Server implementations per tracer [listed here](parametric.md#http-server-implementations). Click on the one for your language to take a look at the endpoints. In some cases you may need to just slightly modify an endpoint rather than add a new one. From 6096f42be8576f374cec5892f90a231160f97924 Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Mon, 25 Nov 2024 19:22:25 -0500 Subject: [PATCH 23/44] add to use cases section --- docs/scenarios/parametric_contributing.md | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 465a3fe00d..51c8fcfe78 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -6,11 +6,13 @@ Note: a more in-depth overview of parametric system-tests can be found in [param ## Use cases -Let's figure out if your feature is a good candidate to be tested with parametric system-tests. +Let's figure out if your feature is a good candidate to be tested with parametric system-tests. -Parametric system-tests are great for assuring uniform behavior between tracers e.g. [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). +System-tests in general are great for assuring uniform behavior between tracers. There are two types of system-tests, [end-to-end](/docs/README.md) and [parametric](/docs/scenarios/parametric.md). -Parametric tests make requests to [http endpoints](/tests/parametric) dedicated to various tracer methods for creating and modifying spans (manual instrumentation). If you want to test automatic instrumentation behavior, weblog system-tests may be a better fit. +The "parametric" in parametric system-tests stands for parameters. The original purpose of parametric scenarios is when a behavior must be tested across several different values for one or more parameters, usually different tracer configurations with some examples being [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). + +If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case. @@ -27,7 +29,9 @@ Follow [Binaries Documentation](../execute/binaries.md) for your particular trac Now that we're all setup with a working test suite and a tracer with the implemented feature, we can begin writing the new tests. -First take a look at the [currently existing tests](/tests/parametric), (available client calls)[], and corresponding [available http server endpoints](parametric.md#http-server-implementations) and see if what you're trying to test is similar and can use the same methods/endpoints (in many cases this is true). +**MUST:** If you haven't yet, please acquaint yourself with [how system tests work](parametric.md#architecture-how-system-tests-work) before proceeding and reference it throughout this section. 
+ +First take a look at the [currently existing tests](/tests/parametric) and see if what you're trying to test is similar and can use the same methods/endpoints, in many cases new endpoints do not need to be added. For a list of client methods that already exist, refer to `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. @@ -41,13 +45,15 @@ Then we need to do the following: * Determine what you want the endpoint to be called and what you need it to do, and add it to your tracer's http server. -*Note:* If adding a new endpoint please let a Python implementer know so they can add it as well [see](parametric.md#shared-interface) +*Note:* If adding a new endpoint please let a Python tracer implementer know so they can add it as well [see](parametric.md#shared-interface) + * In [_library_client.py](/utils/parametric/_library_client.py) Add both the endpoint call in `class APMLibraryClient` and the method that invokes it in `class APMLibrary`. Use other implementations for reference. + * Ok we now have our new method! Use it in the tests you write using the [below section](#if-the-methods-you-need-to-run-your-tests-are-already-written) ### If the methods you need to run your tests are already written -If it makes sense to add your tests to a file that already exists, great! Otherwise make a new test file in `tests/parametric`. +If it makes sense to add your tests to a file that already exists, great! Otherwise make a new test file in `tests/parametric`. Next copy the testing code you want to use as a base/guideline (usually the class (if using a new file) and one of the test methods in it). 
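As a rough skeleton, a copied-and-renamed test file often ends up looking like the sketch below before you fill in real assertions. The feature decorator, class and method names, environment variable, and assertion are placeholders, and the imports are assumptions to be swapped for whatever the file you used as a base actually imports. The checklist that follows walks through each piece to change.

```python
import pytest

# Assumed imports: copy the real ones from the test file you based this on.
from utils import features
from utils.parametric.spec.trace import find_only_span


@features.awesome_feature  # placeholder: the feature you registered in utils/_features.py
class Test_AwesomeFeature:
    @pytest.mark.parametrize("library_env", [{"DD_AWESOME_OPTION": "true"}])
    def test_awesome_option_is_applied(self, library_env, test_library, test_agent):
        with test_library.dd_start_span(name="operation"):
            pass

        span_data = find_only_span(test_agent.wait_for_num_traces(1))
        # Placeholder assertion: replace with whatever your feature should
        # change on the emitted span for this DD_AWESOME_OPTION value.
        assert span_data["name"] == "operation"
```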
From 52d952582e8cc1f60040dfeb8eff91dcf1658be4 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 10:25:52 -0500 Subject: [PATCH 24/44] Update docs/scenarios/parametric.md Co-authored-by: Charles de Beauchesne --- docs/scenarios/parametric.md | 29 ++++++++++------------------- 1 file changed, 10 insertions(+), 19 deletions(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index ecbb7a65c2..b50610d685 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -227,25 +227,16 @@ This architecture allows us to ensure that all tracers conform to the same inter The http server implementations for each tracer can be found at the following locations: -[Python](/utils/build/docker/python/parametric/apm_test_client/server.py) - -[Ruby](utils/build/docker/ruby/parametric/server.rb) - -[Php](utils/build/docker/php/parametric/server.php) - -[Nodejs](utils/build/docker/nodejs/parametric/server.js) - -[Java Datadog](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java) - -[Java Otel](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java) - -[Dotnet Datadog](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs) - -[Dotnet Otel](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs) - -[Go Datadog](utils/build/docker/golang/parametric/main.go) - -[Go Otel](utils/build/docker/golang/parametric/otel.go) +* [Python](/utils/build/docker/python/parametric/apm_test_client/server.py) +* [Ruby](utils/build/docker/ruby/parametric/server.rb) +* [Php](utils/build/docker/php/parametric/server.php) +* [Nodejs](utils/build/docker/nodejs/parametric/server.js) +* [Java Datadog](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java) +* [Java Otel](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java) +* [Dotnet Datadog](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs) +* [Dotnet Otel](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs) +* [Go Datadog](utils/build/docker/golang/parametric/main.go) +* [Go Otel](utils/build/docker/golang/parametric/otel.go) ![image](https://github.com/user-attachments/assets/fc144fc1-95aa-4d50-97c5-cda8fdbcefef) From 53aa1e99fcec121404f33725bdd9357c09d8345b Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Tue, 26 Nov 2024 10:41:44 -0500 Subject: [PATCH 25/44] add contributing doc link --- docs/edit/add-new-test.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/add-new-test.md b/docs/edit/add-new-test.md index dad6766868..673fb848ad 100644 --- a/docs/edit/add-new-test.md +++ b/docs/edit/add-new-test.md @@ -1,4 +1,4 @@ -Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests, refer to [placeholder] (TODO: LINK to parametric_contributing.md) for parametric-specific instructions. +Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests, refer to [the paramaetric contributing doc](/docs/scenarios/parametric_contributing.md)for parametric-specific instructions. Once the changes are complete, post them in a PR. 
From aa3036808efa90ca242cdc68d89254e68395eefd Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:00:59 -0500 Subject: [PATCH 26/44] Update docs/edit/README.md --- docs/edit/README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/edit/README.md b/docs/edit/README.md index 8030e86806..8495c096f1 100644 --- a/docs/edit/README.md +++ b/docs/edit/README.md @@ -1,4 +1,3 @@ -## Run the test locally System tests allow developers define scenarios and ensure datadog libraries produce consistent telemetry (that is, traces, metrics, profiles, etc...). This "edit" section addresses the following use-cases: 1. Adding a new test (maybe to support a new or existing feature) From 2500cc3c646b3cd0b30a934995229d962c8f7dc0 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:03:11 -0500 Subject: [PATCH 27/44] Update docs/scenarios/parametric.md Co-authored-by: Munir Abdinur --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md b/docs/scenarios/parametric.md index 521146fb15..fdac875686 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -50,7 +50,7 @@ def test_datadog_spans(library_env, test_library, test_agent): ``` - This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`. -- `test_library.start_span` creates a new span using the shared HTTP interface. +- `test_library.dd_start_span` creates a new span using the shared HTTP interface. - The request is sent to a HTTP server by language. Implementations can be found in `utils/build/docker//parametric`. More information in [Http Server Implementations](#http-server-implementations). - Data is flushed to the test agent after the with test_library block closes. - Traces are retrieved using the `test_agent` fixture and we assert that they look the way we'd expect. From 7a70f4a073c7ff3baf6674a75e19edfdd77c6b3c Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:39:04 -0500 Subject: [PATCH 28/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 51c8fcfe78..473d756723 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -8,7 +8,7 @@ Note: a more in-depth overview of parametric system-tests can be found in [param Let's figure out if your feature is a good candidate to be tested with parametric system-tests. -System-tests in general are great for assuring uniform behavior between tracers. There are two types of system-tests, [end-to-end](/docs/README.md) and [parametric](/docs/scenarios/parametric.md). +System-tests in general are great for assuring uniform behavior between different dd-trace repos (tracing, ASM, DI, profiling, etc.). There are two types of system-tests, [end-to-end](/docs/README.md) and [parametric](/docs/scenarios/parametric.md). The "parametric" in parametric system-tests stands for parameters. 
The original purpose of parametric scenarios is when a behavior must be tested across several different values for one or more parameters, usually different tracer configurations with some examples being [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). From 959f962f687c52ff1351bf88bb96ca398d506f21 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:41:09 -0500 Subject: [PATCH 29/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Munir Abdinur --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 473d756723..9fcadc2e9f 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -10,7 +10,7 @@ Let's figure out if your feature is a good candidate to be tested with parametri System-tests in general are great for assuring uniform behavior between different dd-trace repos (tracing, ASM, DI, profiling, etc.). There are two types of system-tests, [end-to-end](/docs/README.md) and [parametric](/docs/scenarios/parametric.md). -The "parametric" in parametric system-tests stands for parameters. The original purpose of parametric scenarios is when a behavior must be tested across several different values for one or more parameters, usually different tracer configurations with some examples being [environment variable configuration effects on api methods, sampling, propagation, configuration, telemetry](/tests/parametric). +Parametric tests in the Datadog system test repository validate the behavior of APM Client Libraries by interacting only with their public interfaces. These tests ensure the telemetry generated (spans, metrics, instrumentation telemetry) is consistent and accurate when libraries handle different input parameters (e.g., calling a Tracer's startSpan method with a specific type) and configurations (e.g., sampling rates, distributed tracing, remote settings). They run against web applications in languages like Java, Go, Python, PHP, Node.js, C++, and .NET, which expose endpoints simulating real-world library usage. The generated telemetry is sent to a Datadog agent, queried, and verified by system tests to confirm proper library functionality across scenarios. If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. From 50116bc51b0cda036f732e7ebddac27df53d5d4a Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:43:41 -0500 Subject: [PATCH 30/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 9fcadc2e9f..7921958613 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -12,7 +12,7 @@ System-tests in general are great for assuring uniform behavior between differen Parametric tests in the Datadog system test repository validate the behavior of APM Client Libraries by interacting only with their public interfaces. 
These tests ensure the telemetry generated (spans, metrics, instrumentation telemetry) is consistent and accurate when libraries handle different input parameters (e.g., calling a Tracer's startSpan method with a specific type) and configurations (e.g., sampling rates, distributed tracing, remote settings). They run against web applications in languages like Java, Go, Python, PHP, Node.js, C++, and .NET, which expose endpoints simulating real-world library usage. The generated telemetry is sent to a Datadog agent, queried, and verified by system tests to confirm proper library functionality across scenarios. -If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. +If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. End-to-end tests are also what should be used for verifying behavior between tracer integrations. System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case. From eb581911793f017b0e39dc44233a96edddd3be52 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:49:42 -0500 Subject: [PATCH 31/44] Update docs/scenarios/parametric_contributing.md --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md index 7921958613..7fcff6c886 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -46,7 +46,7 @@ Then we need to do the following: * Determine what you want the endpoint to be called and what you need it to do, and add it to your tracer's http server. *Note:* If adding a new endpoint please let a Python tracer implementer know so they can add it as well [see](parametric.md#shared-interface) - +*Note*: Only add new endpoints that operate on the public API and execute ONE operation. Endpoints that execute complex operations or validate tracer internals will not be accepted. * In [_library_client.py](/utils/parametric/_library_client.py) Add both the endpoint call in `class APMLibraryClient` and the method that invokes it in `class APMLibrary`. Use other implementations for reference. * Ok we now have our new method! Use it in the tests you write using the [below section](#if-the-methods-you-need-to-run-your-tests-are-already-written) From 7d1a30a2ef5518f92ad8d8a0b288b845380a8aec Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 11:59:30 -0500 Subject: [PATCH 32/44] Apply suggestions from code review --- docs/scenarios/parametric.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md index fdac875686..3a488f816d 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -53,7 +53,7 @@ def test_datadog_spans(library_env, test_library, test_agent): - `test_library.dd_start_span` creates a new span using the shared HTTP interface. - The request is sent to a HTTP server by language.
Implementations can be found in `utils/build/docker//parametric`. More information in [Http Server Implementations](#http-server-implementations). - Data is flushed to the test agent after the with test_library block closes. -- Traces are retrieved using the `test_agent` fixture and we assert that they look the way we'd expect. +- Data (usually traces) are retrieved using the `test_agent` fixture and we assert that they look the way we'd expect. ## Usage @@ -206,6 +206,7 @@ Note: These are based off of the Python tracer's http server which should be hel 2. Navigate to http://localhost:8000/docs in your web browser to access the documentation. 3. You can download the OpenAPI schema from http://localhost:8000/openapi.json. This schema can be imported into tools like [Postman](https://learning.postman.com/docs/integrations/available-integrations/working-with-openAPI/) or other API clients to facilitate development and testing. +Not all endpoint implementations per language are up to spec with regards to their parameters and return values. To see these please reference the [feature parity board](https://feature-parity.us1.prod.dog/#/?runDateFilter=7d&feature=339) ### Architecture: How System-tests work From 352718d5602776ebb82ca2399b9752771e5b19ac Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 13:38:58 -0500 Subject: [PATCH 33/44] Update docs/edit/add-new-test.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/edit/add-new-test.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/add-new-test.md b/docs/edit/add-new-test.md index 673fb848ad..0a1b8a60a7 100644 --- a/docs/edit/add-new-test.md +++ b/docs/edit/add-new-test.md @@ -1,4 +1,4 @@ -Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests, refer to [the paramaetric contributing doc](/docs/scenarios/parametric_contributing.md)for parametric-specific instructions. +Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests, refer to [the parametric contributing doc](/docs/scenarios/parametric_contributing.md)for parametric-specific instructions. Once the changes are complete, post them in a PR. From 5fac38124773c1424dcd4f8fee435d18cb8fa22e Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Tue, 26 Nov 2024 14:30:33 -0500 Subject: [PATCH 34/44] remove repeating info in features.md --- docs/edit/features.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/edit/features.md b/docs/edit/features.md index 86a0abb965..0ad3caf20b 100644 --- a/docs/edit/features.md +++ b/docs/edit/features.md @@ -1,4 +1,4 @@ -System tests are feature-oriented; put another way, tests certify which features are supported in each client library (and the supported library versions). Each test class must belong to a "feature", where "features" map to entries in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/). We use the @features decorators to achieve this. +System tests are feature-oriented; put another way, tests certify which features are supported in each client library (and the supported library versions). Each test class must belong to a "feature", where "features" map to entries in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/). 
We use the `@features` decorators to achieve this. For example, you have a new feature called `Awesome feature`, which is part of a meta feature called `stuffs`. We add a new file called `tests/test_stuffs.py` and add a test class with some boilerplate code, and a basic test: @@ -32,7 +32,6 @@ Several key points: * One class test one feature * One class can have several tests -* Feature link to the [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/) is declared with `@features` decorators * Files can be nested (`tests/test_product/test_stuffs.py::Test_AwesomeFeature`), and how files are organized does not make any difference. Use you common sense, or ask on [slack](https://dd.enterprise.slack.com/archives/C025TJ4RZ8X). ## Skip tests From 5bb0bc3fdda5e165506672dcd37c38fe9e234552 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 14:33:08 -0500 Subject: [PATCH 35/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 7fcff6c886..7d90c75e18 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -14,7 +14,7 @@ Parametric tests in the Datadog system test repository validate the behavior of If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. End-to-end tests are also what should be used for verify behavior between tracer integrations. -System-tests are **not** for testing internal or niche tracer behavior. Unit tests are a better fit for that case. +System-tests are **not** for testing internal or niche library behavior. Unit tests are a better fit for that case. ## Getting set up From e1525b2b20e31bdfd4d0d59cad3c3715309fcd46 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 14:56:02 -0500 Subject: [PATCH 36/44] Apply suggestions from code review Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric_contributing.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 7d90c75e18..743d3de5b4 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -13,17 +13,15 @@ System-tests in general are great for assuring uniform behavior between differen Parametric tests in the Datadog system test repository validate the behavior of APM Client Libraries by interacting only with their public interfaces. These tests ensure the telemetry generated (spans, metrics, instrumentation telemetry) is consistent and accurate when libraries handle different input parameters (e.g., calling a Tracer's startSpan method with a specific type) and configurations (e.g., sampling rates, distributed tracing, remote settings). They run against web applications in languages like Java, Go, Python, PHP, Node.js, C++, and .NET, which expose endpoints simulating real-world library usage. 
The generated telemetry is sent to a Datadog agent, queried, and verified by system tests to confirm proper library functionality across scenarios. If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. End-to-end tests are also what should be used for verifying behavior between tracer integrations. - +For more on the differences between end-to-end and parametric tests, see [here](/docs/scenarios/README.md#scenarios). System-tests are **not** for testing internal or niche library behavior. Unit tests are a better fit for that case. ## Getting set up -We usually add new system tests when validating a new feature. This feature might not yet be implemented across all dd-trace libraries. If at least one library already supports the feature, you can verify your test by running it against that library. - -To begin we need to make sure system-tests run with a tracer that has implemented the feature being tested (published or on a branch). +We usually add new system tests when validating a new feature. To begin, set up the system-tests repo to run with a version of the library that has already implemented the feature you'd like to test (published or on a branch). Follow [Binaries Documentation](../execute/binaries.md) for your particular tracer language to set this up. -[Verify that you can run some parametric tests with your custom tracer](parametric.md#running-the-tests). Make sure some pass (no need to run the whole suite, you can stop the tests from running with `ctrl+c`). If you have an issue, checkout the [debugging section](parametric.md#debugging) to troubleshoot. +[Verify that you can run some (any) parametric tests with your custom tracer](parametric.md#running-the-tests). Make sure some pass — no need to run the whole suite (you can stop the tests from running with `ctrl+c`). If you have any issues, check out the [debugging section](parametric.md#debugging) to troubleshoot. ## Writing the tests From 2a0f6cdd7f50061bc4d50ac1e6f517cfc8c760ac Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Tue, 26 Nov 2024 15:06:32 -0500 Subject: [PATCH 37/44] Apply suggestions from code review Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric.md | 1 + docs/scenarios/parametric_contributing.md | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md index 3a488f816d..bfc775320f 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -227,6 +227,7 @@ This architecture allows us to ensure that all tracers conform to the same inter #### Http Server Implementations The http server implementations for each tracer can be found at the following locations: +*Note:* For some languages there is both an Otel and a Datadog server. This is simply to separate the available Otel endpoints from the available Datadog endpoints that can be hit by the client. If a language only has a single server, then both endpoints for Otel and Datadog exist there.
* [Python](/utils/build/docker/python/parametric/apm_test_client/server.py) * [Ruby](utils/build/docker/ruby/parametric/server.rb) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 743d3de5b4..26436a156b 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -29,7 +29,7 @@ Now that we're all setup with a working test suite and a tracer with the impleme **MUST:** If you haven't yet, please acquaint yourself with [how system tests work](parametric.md#architecture-how-system-tests-work) before proceeding and reference it throughout this section. -First take a look at the [currently existing tests](/tests/parametric) and see if what you're trying to test is similar and can use the same methods/endpoints, in many cases new endpoints do not need to be added. +Before writing a new test, check the [existing tests](/tests/parametric) to see if you can use the same methods or endpoints for similar scenarios; in many cases, new endpoints do not need to be added. For a list of client methods that already exist, refer to `class APMLibrary` in the [_library_client.py](/utils/parametric/_library_client.py). If you're wondering what the methods do, you can take at look at the respective endpoints they're calling in that same file in `class APMLibraryClient`. From a5f9c6a0927c21bea0b1e9fa499f00fd0ff44554 Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Wed, 27 Nov 2024 11:22:02 -0500 Subject: [PATCH 38/44] add link to contributing doc in main readme.md --- docs/edit/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/edit/README.md b/docs/edit/README.md index 8495c096f1..ae6f2745d5 100644 --- a/docs/edit/README.md +++ b/docs/edit/README.md @@ -8,6 +8,10 @@ System tests allow developers define scenarios and ensure datadog libraries prod To make changes, you must be able to run tests locally. Instructions for running **end-to-end** tests can be found [here](https://github.com/DataDog/system-tests/blob/main/docs/execute/README.md#run-tests) and for **parametric**, [here](https://github.com/DataDog/system-tests/blob/main/docs/scenarios/parametric.md#running-the-tests). +**Note** + +For information on contributing to specifically **parametric** tests, see [here](/docs/scenarios/parametric_contributing.md). + **Callout** You'll commonly need to run unmerged changes to your library against system tests (e.g. to ensure the feature is up to spec). Instructions for testing against unmerged changes can be found in [enable-test.md](./enable-test.md). From f9cc7b584c1bdd102750bf36709e2aecf028117e Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Wed, 27 Nov 2024 12:29:52 -0500 Subject: [PATCH 39/44] Update docs/scenarios/parametric_contributing.md Co-authored-by: Munir Abdinur --- docs/scenarios/parametric_contributing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric_contributing.md b/docs/scenarios/parametric_contributing.md index 26436a156b..012b22c0e3 100644 --- a/docs/scenarios/parametric_contributing.md +++ b/docs/scenarios/parametric_contributing.md @@ -10,7 +10,7 @@ Let's figure out if your feature is a good candidate to be tested with parametri System-tests in general are great for assuring uniform behavior between different dd-trace repos (tracing, ASM, DI, profiling, etc.). There are two types of system-tests, [end-to-end](/docs/README.md) and [parametric](/docs/scenarios/parametric.md). 
-Parametric tests in the Datadog system test repository validate the behavior of APM Client Libraries by interacting only with their public interfaces. These tests ensure the telemetry generated (spans, metrics, instrumentation telemetry) is consistent and accurate when libraries handle different input parameters (e.g., calling a Tracer's startSpan method with a specific type) and configurations (e.g., sampling rates, distributed tracing, remote settings). They run against web applications in languages like Java, Go, Python, PHP, Node.js, C++, and .NET, which expose endpoints simulating real-world library usage. The generated telemetry is sent to a Datadog agent, queried, and verified by system tests to confirm proper library functionality across scenarios. +Parametric tests in the Datadog system test repository validate the behavior of APM Client Libraries by interacting only with their public interfaces. These tests ensure the telemetry generated (spans, metrics, instrumentation telemetry) is consistent and accurate when libraries handle different input parameters (e.g., calling a Tracer's startSpan method with a specific type) and configurations (e.g., sampling rates, distributed tracing header formats, remote settings). They run against web applications written in Ruby, Java, Go, Python, PHP, Node.js, C++, and .NET, which expose endpoints simulating real-world ddtrace usage. The generated telemetry is sent to a Datadog agent, queried, and verified by system tests to confirm proper library functionality across scenarios. If your usage does not require different parameter values, then [end-to-end system-tests](/docs/README.md) should be used as they will achieve the same level of behavior uniformity verification and test the feature on real world use cases, catching more issues. End-to-end tests are also what should be used for verify behavior between tracer integrations. For more on the differences between end-to-end and parametric tests, see [here](/docs/scenarios/README.md#scenarios) From 7fd96eda355cb279d91bee3e87e2a0cc72c8feb0 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 2 Dec 2024 12:29:54 -0500 Subject: [PATCH 40/44] Update docs/edit/features.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/edit/features.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/features.md b/docs/edit/features.md index 0ad3caf20b..c53f644b67 100644 --- a/docs/edit/features.md +++ b/docs/edit/features.md @@ -15,7 +15,7 @@ class Test_AwesomeFeature: Several key points: -* Each new feature should be defined in [_features.py](/utils/_features.py). This consists of adding a feature in [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/), get the feature id and copying one of the already added features, changing the name and the feature id in the url, and the feature number. In this case we'd add +* Each new feature should be defined in [_features.py](/utils/_features.py). This consists of adding a feature in [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/), get the feature id and copying one of the already added features, changing the name and the feature id in the url, and the feature number. 
In this case we'd add ```python From ed9a72ca3b6c6afb3c0351661611b8e480a23494 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 2 Dec 2024 12:30:05 -0500 Subject: [PATCH 41/44] Update docs/edit/features.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/edit/features.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/features.md index c53f644b67..a5f1053da7 100644 --- a/docs/edit/features.md +++ b/docs/edit/features.md @@ -30,7 +30,7 @@ Several key points: return test_object ``` -* One class test one feature +* One class tests one feature * One class can have several tests * Files can be nested (`tests/test_product/test_stuffs.py::Test_AwesomeFeature`), and how files are organized does not make any difference. Use you common sense, or ask on [slack](https://dd.enterprise.slack.com/archives/C025TJ4RZ8X). ## Skip tests From 9315e67375dc8fa04e645a6f46425bdc66c2e703 Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 2 Dec 2024 12:30:16 -0500 Subject: [PATCH 42/44] Update docs/scenarios/parametric.md Co-authored-by: Mikayla Toffler <46911781+mtoffl01@users.noreply.github.com> --- docs/scenarios/parametric.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scenarios/parametric.md index bfc775320f..94fa163b4a 100644 --- a/docs/scenarios/parametric.md +++ b/docs/scenarios/parametric.md @@ -206,7 +206,7 @@ Note: These are based off of the Python tracer's http server which should be hel 2. Navigate to http://localhost:8000/docs in your web browser to access the documentation. 3. You can download the OpenAPI schema from http://localhost:8000/openapi.json. This schema can be imported into tools like [Postman](https://learning.postman.com/docs/integrations/available-integrations/working-with-openAPI/) or other API clients to facilitate development and testing. -Not all endpoint implementations per language are up to spec with regards to their parameters and return values. To see these please reference the [feature parity board](https://feature-parity.us1.prod.dog/#/?runDateFilter=7d&feature=339) +Not all endpoint implementations per language are up to spec with regards to their parameters and return values. To view endpoints that are not up to spec, see the [feature parity board](https://feature-parity.us1.prod.dog/#/?runDateFilter=7d&feature=339). ### Architecture: How System-tests work From cc403757ba378143cd58fd0fe0a0dd55351ea81c Mon Sep 17 00:00:00 2001 From: Zachary Groves <32471391+ZStriker19@users.noreply.github.com> Date: Mon, 2 Dec 2024 12:30:27 -0500 Subject: [PATCH 43/44] Update docs/edit/README.md Co-authored-by: Charles de Beauchesne --- docs/edit/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/README.md index ae6f2745d5..d1976cf8f8 100644 --- a/docs/edit/README.md +++ b/docs/edit/README.md @@ -10,7 +10,7 @@ To make changes, you must be able to run tests locally. Instructions for running **Note** -For information on contributing to specifically **parametric** tests, see [here](/docs/scenarios/parametric_contributing.md). +For information on contributing specifically to the **parametric** scenario, see [here](/docs/scenarios/parametric_contributing.md).
**Callout** From 6c0335b83ad1d83be694ee6df92b8e283eeb946f Mon Sep 17 00:00:00 2001 From: ZStriker19 Date: Mon, 2 Dec 2024 12:31:14 -0500 Subject: [PATCH 44/44] remove todo for link --- docs/edit/enable-test.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/edit/enable-test.md b/docs/edit/enable-test.md index 42fb509f8b..406a231a7f 100644 --- a/docs/edit/enable-test.md +++ b/docs/edit/enable-test.md @@ -2,7 +2,7 @@ So, you have a branch that contains changes you'd like to test with system tests... -**Note**: the instructions below assume that the necessary test already exists in system-tests and your weblog or parametric app has the necessary endpoint for serving the test [TODO]: LINK TO CONTRIBUTING DOC +**Note**: the instructions below assume that the necessary test already exists in system-tests and your weblog or parametric app has the necessary endpoint for serving the test. 1. Post a PR to the dd-trace repo if you have not already.