Merge branch 'main' into robertomonteromiguel/onboarding_parallel_ci
robertomonteromiguel authored Dec 3, 2024
2 parents dab3e51 + 61c7a3c commit 5e67725
Showing 30 changed files with 473 additions and 46 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/run-end-to-end.yml
@@ -242,6 +242,9 @@ jobs:
- name: Run APPSEC_RASP scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"APPSEC_RASP"')
run: ./run.sh APPSEC_RASP
- name: Run APPSEC_META_STRUCT_DISABLED scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"APPSEC_META_STRUCT_DISABLED"')
run: ./run.sh APPSEC_META_STRUCT_DISABLED
- name: Run SAMPLING scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"SAMPLING"')
run: ./run.sh SAMPLING
20 changes: 18 additions & 2 deletions .github/workflows/run-lib-injection.yml
@@ -28,6 +28,7 @@ jobs:
matrix: ${{ steps.compute-matrix.outputs.matrix }}
matrix_supported_langs: ${{ steps.compute-matrix.outputs.matrix_supported_langs }}
matrix_profiling_supported: ${{ steps.compute-matrix.outputs.matrix_profiling_supported }}
matrix_skip_basic: ${{ steps.compute-matrix.outputs.matrix_skip_basic }}
init_image: ${{ steps.compute-matrix.outputs.init_image }}
steps:
- name: Compute matrix
Expand All @@ -41,7 +42,9 @@ jobs:
"cpp": [],
"dotnet": [{"name":"dd-lib-dotnet-init-test-app","supported":"true"}],
"golang": [],
"java": [{"name":"dd-lib-java-init-test-app","supported":"true"},{"name":"jdk7-app","supported":"false"}],
"java": [{"name":"dd-lib-java-init-test-app","supported":"true"},
{"name":"jdk7-app","supported":"false"},
{"name":"dd-djm-spark-test-app", "supported":"true", "skip-profiling":"true", "skip-basic":"true"}],
"nodejs": [{"name":"sample-app","supported":"true"},{"name":"sample-app-node13","supported":"false"}],
"php": [],
"python": [{"name":"dd-lib-python-init-test-django","supported":"true"},
@@ -80,11 +83,14 @@ jobs:
#Only supported weblog variants
results_supported_langs = []
results_profiling_supported = []
results_skip_basic = []
for weblog in weblogs["${{ inputs.library }}"]:
    if weblog["supported"] == "true":
        results_supported_langs.append(weblog["name"])
        if "skip-profiling" not in weblog or weblog["skip-profiling"] != "true":
            results_profiling_supported.append(weblog["name"])
        if "skip-basic" in weblog and weblog["skip-basic"] == "true":
            results_skip_basic.append(weblog["name"])
#Use the latest init image for prod version, latest_snapshot init image for dev version
if "${{ inputs.version }}" == 'prod':
@@ -97,11 +103,13 @@
print(f'init_image={json.dumps(result_init_image)}', file=fh)
print(f'matrix_supported_langs={json.dumps(results_supported_langs)}', file=fh)
print(f'matrix_profiling_supported={json.dumps(results_profiling_supported)}', file=fh)
print(f'matrix_skip_basic={json.dumps(results_skip_basic)}', file=fh)
print(json.dumps(result, indent=2))
print(json.dumps(result_init_image, indent=2))
print(json.dumps(results_supported_langs, indent=2))
print(json.dumps(results_profiling_supported, indent=2))
print(json.dumps(results_skip_basic, indent=2))
lib-injection-init-image-validator:
if: inputs.library == 'dotnet' || inputs.library == 'java' || inputs.library == 'python' || inputs.library == 'ruby' || inputs.library == 'nodejs'
@@ -116,6 +124,8 @@ jobs:
matrix:
weblog: ${{ fromJson(needs.compute-matrix.outputs.matrix) }}
lib_init_image: ${{ fromJson(needs.compute-matrix.outputs.init_image) }}
exclude:
- weblog: {"name":"dd-djm-spark-test-app", "supported":"true", "skip-profiling":"true", "skip-basic":"true"}
fail-fast: false
env:
TEST_LIBRARY: ${{ inputs.library }}
@@ -183,7 +193,7 @@ jobs:
matrix:
weblog: ${{ fromJson(needs.compute-matrix.outputs.matrix_supported_langs) }}
lib_init_image: ${{ fromJson(needs.compute-matrix.outputs.init_image) }}
cluster_agent_version: ['7.56.2', '7.57.0']
cluster_agent_version: ['7.56.2', '7.57.0', '7.59.0']
fail-fast: false
env:
TEST_LIBRARY: ${{ inputs.library }}
@@ -231,13 +241,19 @@ jobs:
- name: Kubernetes lib-injection tests
id: k8s-lib-injection-tests
if: ${{ !contains(fromJson(needs.compute-matrix.outputs.matrix_skip_basic), matrix.weblog) }}
run: ./run.sh K8S_LIBRARY_INJECTION_BASIC

- name: Kubernetes lib-injection profiling tests
id: k8s-lib-injection-tests-profiling
if: ${{ contains(fromJson(needs.compute-matrix.outputs.matrix_profiling_supported), matrix.weblog) }}
run: ./run.sh K8S_LIBRARY_INJECTION_PROFILING

- name: Kubernetes lib-injection DJM tests
id: k8s-lib-injection-tests-djm
if: ${{ matrix.weblog == 'dd-djm-spark-test-app' }}
run: ./run.sh K8S_LIBRARY_INJECTION_DJM

- name: Compress logs
id: compress_logs
if: always() && steps.build.outcome == 'success'
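
To make the new `skip-basic` / DJM wiring in this file easier to follow, here is a hypothetical, trimmed-down rerun in plain Python that mirrors the matrix filtering above. The weblog entries are copied from the matrix; everything else is illustrative and not part of the workflow itself.

```python
import json

# Miniature of the compute-matrix filtering shown in run-lib-injection.yml above
weblogs = {
    "java": [
        {"name": "dd-lib-java-init-test-app", "supported": "true"},
        {"name": "jdk7-app", "supported": "false"},
        {"name": "dd-djm-spark-test-app", "supported": "true",
         "skip-profiling": "true", "skip-basic": "true"},
    ]
}

supported, profiling_supported, skip_basic = [], [], []
for weblog in weblogs["java"]:
    if weblog["supported"] == "true":
        supported.append(weblog["name"])
        if weblog.get("skip-profiling") != "true":
            profiling_supported.append(weblog["name"])
        if weblog.get("skip-basic") == "true":
            skip_basic.append(weblog["name"])

print(json.dumps({"supported": supported,
                  "profiling_supported": profiling_supported,
                  "skip_basic": skip_basic}, indent=2))
# dd-djm-spark-test-app ends up supported but excluded from the profiling and basic
# scenarios, so only the K8S_LIBRARY_INJECTION_DJM step runs for it.
```
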
4 changes: 4 additions & 0 deletions docs/edit/README.md
@@ -8,6 +8,10 @@ System tests allow developers define scenarios and ensure datadog libraries prod

To make changes, you must be able to run tests locally. Instructions for running **end-to-end** tests can be found [here](https://github.com/DataDog/system-tests/blob/main/docs/execute/README.md#run-tests) and for **parametric**, [here](https://github.com/DataDog/system-tests/blob/main/docs/scenarios/parametric.md#running-the-tests).

**Note**

For information specifically on contributing to the **parametric** scenario, see [here](/docs/scenarios/parametric_contributing.md).

**Callout**

You'll commonly need to run unmerged changes to your library against system tests (e.g. to ensure the feature is up to spec). Instructions for testing against unmerged changes can be found in [enable-test.md](./enable-test.md).
2 changes: 1 addition & 1 deletion docs/edit/add-new-test.md
@@ -1,4 +1,4 @@
Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests, refer to [placeholder] (TODO: LINK to parametric_contributing.md) for parametric-specific instructions.
Whether it's adding a new test or modifying an existing test, a moderate amount of effort will be required. The instructions below cater to end-to-end tests; refer to [the parametric contributing doc](/docs/scenarios/parametric_contributing.md) for parametric-specific instructions.

Once the changes are complete, post them in a PR.

2 changes: 1 addition & 1 deletion docs/edit/enable-test.md
@@ -2,7 +2,7 @@

So, you have a branch that contains changes you'd like to test with system tests...

**Note**: the instructions below assume that the necessary test already exists in system-tests and your weblog or parametric app has the necessary endpoint for serving the test [TODO]: LINK TO CONTRIBUTING DOC
**Note**: the instructions below assume that the necessary test already exists in system-tests and your weblog or parametric app has the necessary endpoint for serving the test.

1. Post a PR to the dd-trace repo if you have not already.

27 changes: 26 additions & 1 deletion docs/edit/features.md
@@ -1,4 +1,4 @@
System tests are feature-oriented; put another way, tests certify which features are supported in each client library (and the supported library versions). Each test class must belong to a "feature", where "features" map to entries in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/). We use the @features decorators to achieve this.
System tests are feature-oriented; put another way, tests certify which features are supported in each client library (and the supported library versions). Each test class must belong to a "feature", where "features" map to entries in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/). We use the `@features` decorators to achieve this.

For example, you have a new feature called `Awesome feature`, which is part of a meta feature called `stuffs`. We add a new file called `tests/test_stuffs.py` and add a test class with some boilerplate code, and a basic test:

Expand All @@ -12,3 +12,28 @@ class Test_AwesomeFeature:
    def test_basic(self):
        assert P==NP
```

Several key points:

* Each new feature should be defined in [_features.py](/utils/_features.py). This consists of adding the feature in the [Feature Parity Dashboard](https://feature-parity.us1.prod.dog/), getting its feature id, then copying one of the already-defined features and changing the name, the feature id in the url, and the feature number. In this case we'd add

```python
@staticmethod
def awesome_feature(test_object):
    """
    Awesome Feature for Awesomeness
    https://feature-parity.us1.prod.dog/#/?feature=291
    """
    pytest.mark.features(feature_id=291)(test_object)
    return test_object
```

* One class tests one feature
* One class can have several tests
* Files can be nested (`tests/test_product/test_stuffs.py::Test_AwesomeFeature`), and how files are organized does not make any difference. Use your common sense, or ask on [slack](https://dd.enterprise.slack.com/archives/C025TJ4RZ8X). A short usage sketch follows this list.
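
As an illustration of how the decorator is applied, here is a minimal sketch reusing the hypothetical `awesome_feature` entry above; it assumes the usual `from utils import features` import available to test files in this repository.

```python
from utils import features  # decorators generated from utils/_features.py


@features.awesome_feature
class Test_AwesomeFeature:
    """Verify Awesome feature"""

    def test_basic(self):
        # one feature per class; the class may contain several tests
        assert 1 + 1 == 2
```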

## Skip tests

See [skip-tests.md](/docs/edit/skip-tests.md)
33 changes: 27 additions & 6 deletions docs/scenarios/parametric.md
@@ -50,10 +50,10 @@ def test_datadog_spans(library_env, test_library, test_agent):
```

- This test case runs against all the APM libraries and is parameterized with two different environments specifying two different values of the environment variable `DD_ENV`.
- The test case creates a new span and sets a tag on it using the shared GRPC/HTTP interface.
- The implementations of the GRPC/HTTP interface, by language, are in `utils/build/docker/<lang>/parametric`.
- `test_library.dd_start_span` creates a new span using the shared HTTP interface.
- The request is sent to an HTTP server for the language under test. Implementations can be found in `utils/build/docker/<lang>/parametric`. More information is in [Http Server Implementations](#http-server-implementations).
- Data is flushed to the test agent after the `with test_library` block closes.
- Data is retrieved using the `test_agent` fixture and asserted on.
- Data (usually traces) are retrieved using the `test_agent` fixture and we assert that they look the way we'd expect (a condensed sketch of this flow follows this list).
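
For instance, a condensed test following this pattern could look like the sketch below. This is hedged: the fixture and client method names mirror the example above and the shared interface, but the exact assertions depend on the library under test.

```python
import pytest


@pytest.mark.parametrize("library_env", [{"DD_ENV": "prod"}, {"DD_ENV": "dev"}])
def test_env_is_set(library_env, test_library, test_agent):
    # the test_library fixture proxies requests to the per-language HTTP server
    with test_library:
        with test_library.dd_start_span("operation") as span:
            span.set_meta("key", "value")
    # spans are flushed once the `with test_library` block closes,
    # then retrieved from the test agent for assertions
    traces = test_agent.wait_for_num_traces(1)
    assert len(traces) == 1
```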


## Usage
@@ -93,7 +93,7 @@ TEST_LIBRARY=dotnet ./run.sh PARAMETRIC -k test_metrics_
Tests can be aborted using CTRL-C, but note that containers may still be running and will have to be shut down.

### Running the tests for a custom tracer
To run tests against custom tracers, refer to the [Binaries Documentation](../execute/binaries.md)
To run tests against custom tracer builds, refer to the [Binaries Documentation](../execute/binaries.md)

#### After Testing with a Custom Tracer:
Note: Most of the ways to run system-tests with a custom tracer version involve modifying the binaries directory. Modifying the binaries will alter the tracer version used across your local computer. Once you're done testing with the custom tracer, ensure you **remove** it. For example for Python:
@@ -199,19 +199,23 @@ See the steps below in the HTTP section to run the Python server and view the sp
### Shared Interface

To view the available HTTP endpoints, follow these steps:
Note: These are based on the Python tracer's HTTP server, which should be treated as the standard example interface across implementations.


1. `./utils/scripts/parametric/run_reference_http.sh`
2. Navigate to http://localhost:8000/docs in your web browser to access the documentation.
3. You can download the OpenAPI schema from http://localhost:8000/openapi.json. This schema can be imported into tools like [Postman](https://learning.postman.com/docs/integrations/available-integrations/working-with-openAPI/) or other API clients to facilitate development and testing.

Not all per-language endpoint implementations are up to spec with regard to their parameters and return values. To view endpoints that are not up to spec, see the [feature parity board](https://feature-parity.us1.prod.dog/#/?runDateFilter=7d&feature=339).
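
As a quick example, once the reference server from the steps above is running on the default port, the schema can be pulled and inspected with a few lines of Python (a sketch; it only assumes the URLs listed above):

```python
import json
import urllib.request

# Download the OpenAPI schema exposed by the reference HTTP server (step 3 above)
with urllib.request.urlopen("http://localhost:8000/openapi.json") as resp:
    schema = json.load(resp)

# Print the endpoint paths the shared interface exposes
print("\n".join(sorted(schema["paths"])))
```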

### Architecture: How System-tests work

Below is an overview of how the testing architecture is structured:

- Shared Tests in Python: We write shared test cases using Python's pytest framework. These tests are designed to be generic and interact with the tracers through an HTTP interface.
- HTTP Servers in Docker: For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic.
- [HTTP Servers in Docker](#http-server-implementations): For each language tracer, we build and run an HTTP server within a Docker container. These servers expose the required endpoints defined in the OpenAPI schema and handle the tracer-specific logic.
- [Test Agent](https://github.com/DataDog/dd-apm-test-agent/) in Docker: We start a test agent in a separate Docker container. This agent collects data (such as spans and traces) submitted by the HTTP servers. It serves as a centralized point for aggregating and accessing test data.
- Test Execution: The Python test cases use an HTTP client to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (usually traces) and perform assertions to verify correct behavior.
- Test Execution: The Python test cases use an [HTTP client](/utils/parametric/_library_client.py) to communicate with the servers. The servers generate data based on the interactions, which is then sent to the test agent. The tests can query the test agent to retrieve data (often traces) and perform assertions to verify correct behavior.

An example of how to get a span from the test agent:
@@ -220,6 +224,23 @@
```python
span = find_only_span(test_agent.wait_for_num_traces(1))
```
This architecture allows us to ensure that all tracers conform to the same interface and behavior, making it easier to maintain consistency across different languages and implementations.
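
Building on that example, assertions are then made directly on the returned span dict. A sketch, assuming a span created with the name `operation` and a tag set via `set_meta`; field names follow the JSON format the test agent returns:

```python
span = find_only_span(test_agent.wait_for_num_traces(1))
assert span["name"] == "operation"       # the operation name given when the span was started
assert span["meta"]["key"] == "value"    # tags set with set_meta land under the span's "meta" map
```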

#### Http Server Implementations

The http server implementations for each tracer can be found at the following locations:
*Note:* For some languages there is both an Otel and a Datadog server. This is simply to separate the available Otel endpoints from the available Datadog endpoints that can be hit by the client. If a language only has a single server, then both endpoints for Otel and Datadog exist there.

* [Python](/utils/build/docker/python/parametric/apm_test_client/server.py)
* [Ruby](utils/build/docker/ruby/parametric/server.rb)
* [Php](utils/build/docker/php/parametric/server.php)
* [Nodejs](utils/build/docker/nodejs/parametric/server.js)
* [Java Datadog](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentracing/controller/OpenTracingController.java)
* [Java Otel](utils/build/docker/java/parametric/src/main/java/com/datadoghq/trace/opentelemetry/controller/OpenTelemetryController.java)
* [Dotnet Datadog](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApi.cs)
* [Dotnet Otel](utils/build/docker/dotnet/parametric/Endpoints/ApmTestApiOtel.cs)
* [Go Datadog](utils/build/docker/golang/parametric/main.go)
* [Go Otel](utils/build/docker/golang/parametric/otel.go)


![image](https://github.com/user-attachments/assets/fc144fc1-95aa-4d50-97c5-cda8fdbcefef)

<img width="869" alt="image" src="https://user-images.githubusercontent.com/6321485/182887064-e241d65c-5e29-451b-a8a8-e8d18328c083.png">