From 816108388432ff8af883130c3393ea73e769fab1 Mon Sep 17 00:00:00 2001 From: Patrice Chalin Date: Tue, 30 Jan 2024 08:58:41 -0500 Subject: [PATCH] [CI/textlint] Enforce "backend" rather than "back-end" (#3892) --- .textlintrc.yml | 2 +- content/en/blog/2022/apisix/index.md | 4 +-- .../en/blog/2022/frontend-overhaul/index.md | 8 +++--- .../en/blog/2023/end-user-discussions-01.md | 2 +- content/en/blog/2023/end-user-q-and-a-01.md | 26 +++++++++---------- content/en/blog/2023/end-user-q-and-a-03.md | 12 ++++----- content/en/blog/2023/humans-of-otel.md | 2 +- .../en/blog/2023/testing-otel-demo/index.md | 2 +- content/en/docs/collector/_index.md | 6 ++--- content/en/docs/concepts/distributions.md | 6 ++--- .../en/docs/concepts/observability-primer.md | 4 +-- content/en/docs/concepts/signals/baggage.md | 2 +- content/en/docs/what-is-opentelemetry.md | 2 +- .../collector-exporter-alertmanager.yml | 2 +- 14 files changed, 40 insertions(+), 40 deletions(-) diff --git a/.textlintrc.yml b/.textlintrc.yml index 4f59cefb2c94..66beda405f37 100644 --- a/.textlintrc.yml +++ b/.textlintrc.yml @@ -113,7 +113,7 @@ rules: # https://github.com/sapegin/textlint-rule-terminology/blob/ca36a645c56d21f27cb9d902b5fb9584030c59e3/index.js#L137-L142. # - ['3rd[- ]party', third-party] - - ['back end(s)?', 'backend$1'] + - ['back[- ]end(s)?', 'backend$1'] - ['bugfix', 'bug fix'] - [cpp, C++] - # dotnet|.net -> .NET, but NOT for strings like: diff --git a/content/en/blog/2022/apisix/index.md b/content/en/blog/2022/apisix/index.md index 4d66fd24047a..579bd2dd1f5e 100644 --- a/content/en/blog/2022/apisix/index.md +++ b/content/en/blog/2022/apisix/index.md @@ -29,8 +29,8 @@ and sends it to OpenTelemetry Collector through HTTP protocol. Apache APISIX starts to support this feature in v2.13.0. One of OpenTelemetry's special features is that the agent/SDK of OpenTelemetry -is not locked with back-end implementation, which gives users flexibilities on -choosing their own back-end services. In other words, users can choose the +is not locked with backend implementation, which gives users flexibilities on +choosing their own backend services. In other words, users can choose the backend services they want, such as Zipkin and Jaeger, without affecting the application side. diff --git a/content/en/blog/2022/frontend-overhaul/index.md b/content/en/blog/2022/frontend-overhaul/index.md index 308382874763..c7d145e0471e 100644 --- a/content/en/blog/2022/frontend-overhaul/index.md +++ b/content/en/blog/2022/frontend-overhaul/index.md @@ -114,13 +114,13 @@ This proposal was presented to the OpenTelemetry demo SIG during one of the weekly Monday meetings and we were given the green light to move ahead. As part of the changes, we decided to use [Next.js](https://nextjs.org/) to not only work as the primary front-end application but also to work as an aggregation -layer between the front-end and the gRPC back-end services. +layer between the front-end and the gRPC backend services. ![New Front-end Data Flow](data-flow.png) As you can see in the diagram, the application has two major connectivity points, one coming from the browser side (REST) to connect to the Next.js -aggregation layer and the other from the aggregation layer to the back-end +aggregation layer and the other from the aggregation layer to the backend services (gRPC). ## OpenTelemetry Instrumentation @@ -129,7 +129,7 @@ The next big thing we worked was a way to instrument both sides of the Next.js app. 
To do this we had to connect the app twice to the same collector used by all the microservices. -A simple back-end solution was designed using the +A simple backend solution was designed using the [official gRPC exporter](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-grpc) in combination with the [Node.js SDK](https://www.npmjs.com/package/@opentelemetry/sdk-node). @@ -160,7 +160,7 @@ CORS requests from the web app. Once the setup is complete, by loading the application from Docker and interacting with the different features, we can start looking at the full traces -that begin from the front-end user events all the way to the back-end gRPC +that begin from the front-end user events all the way to the backend gRPC services. ![Front-end Trace Jaeger Visualization](jaeger.png) diff --git a/content/en/blog/2023/end-user-discussions-01.md b/content/en/blog/2023/end-user-discussions-01.md index 4bd5cba9c19b..af24f78a1f51 100644 --- a/content/en/blog/2023/end-user-discussions-01.md +++ b/content/en/blog/2023/end-user-discussions-01.md @@ -82,7 +82,7 @@ you will have to send the spans to a centralized service. #### 3- Bifurcating data in a pipeline **Q:** If I want to use the Collector to send different sets of data to -different back-ends, what’s the best way to go about it? +different backends, what’s the best way to go about it? **A:** [Connectors](https://github.com/open-telemetry/opentelemetry-collector/pull/6140) diff --git a/content/en/blog/2023/end-user-q-and-a-01.md b/content/en/blog/2023/end-user-q-and-a-01.md index 2e066e8efbb5..b0fae1f5b62a 100644 --- a/content/en/blog/2023/end-user-q-and-a-01.md +++ b/content/en/blog/2023/end-user-q-and-a-01.md @@ -28,10 +28,10 @@ OpenTelemetry with [GraphQL](https://graphql.org/). J and his team embarked on their OpenTelemetry journey for two main reasons: -- J’s company uses a few different observability back-ends. His team had - switched to a vendor back-end that was different from the back-end used by - other teams that they interfaced with. OpenTelemetry allowed them to continue - to get end-to-end Traces in spite of using different vendors. +- J’s company uses a few different observability backends. His team had switched + to a vendor backend that was different from the backend used by other teams + that they interfaced with. OpenTelemetry allowed them to continue to get + end-to-end Traces in spite of using different vendors. - His team was using GraphQL, and needed to be able to better understand what was happening behind the scenes with their GraphQL calls. @@ -58,9 +58,9 @@ Across the organization, different teams have chosen to use different observability platforms to suit their needs, resulting in a mix of both open source and proprietary observability tools. -J’s team had recently migrated from one observability back-end to another. After +J’s team had recently migrated from one observability backend to another. After this migration, they started seeing gaps in trace data, because other teams that -they integrated with were still using a different observability back-end. As a +they integrated with were still using a different observability backend. As a result, they no longer had an end-to-end picture of their traces. The solution was to use a standard, vendor-neutral way to emit telemetry: OpenTelemetry. @@ -133,7 +133,7 @@ is currently discouraging teams from creating their own custom spans. 
Since they do a lot of asynchronous programming, it can be very difficult for developers to understand how the context is going to behave across asynchronous processes. -Traces are sent to their observability back-end using that vendor’s agent, which +Traces are sent to their observability backend using that vendor’s agent, which is installed on all of their nodes. ### Besides traces, do you use other signals? @@ -142,7 +142,7 @@ The team has implemented a custom Node.js plugin for getting certain [metrics](/docs/concepts/signals/metrics/) data about GraphQL, such as deprecated field usage and overall query usage, which is something that they can’t get from their traces. These metrics are being sent to the observability -back-end through the +backend through the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector)’s [OTLP metrics receiver](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md). @@ -158,7 +158,7 @@ The team uses [Amazon Elasticache](https://en.wikipedia.org/wiki/Amazon_ElastiCache) and the [ELK stack](https://www.techtarget.com/searchitoperations/definition/Elastic-Stack) for logging. They are currently doing a proof-of-concept (POC) of migrating .NET -logs to their observability back-end. The ultimate goal is to have +logs to their observability backend. The ultimate goal is to have [metrics](/docs/concepts/signals/metrics/), [logs](/docs/concepts/signals/logs/), and [traces](/docs/concepts/signals/traces/) under one roof. @@ -171,7 +171,7 @@ link traces and metrics. ### How is the organization sending telemetry data to various observability back-ends? -J’s team uses a combination of the proprietary back-end agent and the +J’s team uses a combination of the proprietary backend agent and the OpenTelemetry Collector (for metrics). They are one of the primary users of OpenTelemetry at J’s company, and he hopes to help get more teams to make the switch. @@ -245,7 +245,7 @@ which they intend to give back to the OpenTelemetry community. ### Are you planning on instrumenting mainframe code? -The observability back-end used by J’s team provided native instrumentation for +The observability backend used by J’s team provided native instrumentation for the mainframe. J and his team would have loved to instrument mainframe code using OpenTelemetry. Unfortunately, there is currently no OpenTelemetry SDK for PL/I (and other mainframe languages such as @@ -288,8 +288,8 @@ JavaScript environments are akin to the Wild West of Development due to: One of J’s suggestions is to treat OTel JavaScript as a hierarchy, which starts with a Core JavaScript team that splits into two subgroups: front-end web group, -and back-end group. Front-end and back-end would in turn split. For example, for -the back-end, have a separate Deno and Node.js group. +and backend group. Front-end and backend would in turn split. For example, for +the backend, have a separate Deno and Node.js group. Another suggestion is to have a contrib maintainers group, separate from core SDK and API maintainers group. diff --git a/content/en/blog/2023/end-user-q-and-a-03.md b/content/en/blog/2023/end-user-q-and-a-03.md index adf569970c55..734fa6094f33 100644 --- a/content/en/blog/2023/end-user-q-and-a-03.md +++ b/content/en/blog/2023/end-user-q-and-a-03.md @@ -44,7 +44,7 @@ alerting. 
The team is responsible for maintaining Observability tooling, managing deployments related to Observability tooling, and educating teams on instrumenting code using OpenTelemetry. -Iris first started her career as a software engineer, focusing on back-end +Iris first started her career as a software engineer, focusing on backend development. She eventually moved to a DevOps Engineering role, and it was in this role that she was introduced to cloud monitoring through products such as [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and @@ -91,9 +91,9 @@ created by her team. On the open source tooling front: - [Grafana](https://grafana.com) is used for dashboards - OpenTelemetry is used for emitting traces, and - [Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing back-end + [Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing backend - [Jaeger](https://jaegertracing.io) is still used in some cases for emitting - traces and as a tracing back-end, because some teams have not yet completely + traces and as a tracing backend, because some teams have not yet completely moved to OpenTelemetry for instrumenting traces ([via Jaeger’s implementation of the OpenTracing API](https://medium.com/velotio-perspectives/a-comprehensive-tutorial-to-implementing-opentracing-with-jaeger-a01752e1a8ce)). - [Prometheus Thanos](https://github.com/thanos-io/thanos) (highly-available @@ -141,7 +141,7 @@ They are not fully there yet: In spite of that, Iris and her team are leveraging the power of the [OpenTelemetry Collector](/docs/collector/) to gather and send metrics and -traces to various Observability back-ends. Since she and her team started using +traces to various Observability backends. Since she and her team started using OpenTelemetry, they started instrumenting more traces. In fact, with their current setup, Iris has happily reported that they went from processing 1,000 spans per second, to processing 40,000 spans per second! @@ -301,7 +301,7 @@ Are you currently using any processors on the OTel Collector? \ The team is currently experimenting with processors, namely for data masking ([transform processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor), or [redaction processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/redactionprocessor)), especially as they move to using OTel Logs, which will contain sensitive data that -they won’t want to transmit to their Observability back-end. They currently, however, +they won’t want to transmit to their Observability backend. They currently, however, are only using the [batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md). ### Are you aware of any teams using span events? @@ -344,7 +344,7 @@ instances of the Collector, using around 8GB memory. This is something that is currently being explored. The team is exploring [traces/metrics correlation (exemplars)](/docs/specs/otel/metrics/data-model/#exemplars) through OpenTelemetry; however, they found that this correlation is accomplished -more easily through their tracing back-end, Tempo. +more easily through their tracing backend, Tempo. ### Are you concerned about the amount of data that you end up producing, transporting, and collecting? How do you ensure data quality? 
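For reference, the Collector setup Iris's team describes in the interview above — only the batch processor enabled, traces flowing to Tempo, and metrics landing in a Prometheus/Thanos store — could look roughly like the following sketch. The component names (otlp receiver, batch processor, otlp and prometheusremotewrite exporters) are real Collector components, but the endpoints, the TLS settings, and the exact exporter choices are assumptions for illustration, not details taken from the interview; the prometheusremotewrite exporter also requires a Collector build that includes contrib components.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp/tempo:
    # Hypothetical Tempo endpoint; Tempo accepts OTLP over gRPC.
    endpoint: tempo.example.internal:4317
    tls:
      insecure: true # assumption: plaintext traffic inside the cluster
  prometheusremotewrite:
    # Hypothetical Thanos receive endpoint for metrics.
    endpoint: https://thanos.example.internal/api/v1/receive

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]
```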
diff --git a/content/en/blog/2023/humans-of-otel.md b/content/en/blog/2023/humans-of-otel.md index 3fe4505ca143..06f82745b812 100644 --- a/content/en/blog/2023/humans-of-otel.md +++ b/content/en/blog/2023/humans-of-otel.md @@ -181,7 +181,7 @@ together. And in order to use all of these tools together, you need to have the data coming in, the telemetry actually be integrated, so you can't have three -separate streams of telemetry. And then on the back-end, be like, I want to +separate streams of telemetry. And then on the backend, be like, I want to cross-reference. All of that telemetry has to be organized into an actual graph. You need a graphical data structure that all these individual signals are a part of. For me, that is what modern Observability is all about. diff --git a/content/en/blog/2023/testing-otel-demo/index.md b/content/en/blog/2023/testing-otel-demo/index.md index 89b4f856aacf..34937112a7d8 100644 --- a/content/en/blog/2023/testing-otel-demo/index.md +++ b/content/en/blog/2023/testing-otel-demo/index.md @@ -343,7 +343,7 @@ the demo. This will evaluate all services in the OpenTelemetry Demo. During the development of the tests, we noticed some differences in the test results. For example, some minor fixes were made to the Cypress tests, and some -behaviors were observed in the back-end APIs that can be tested and investigated +behaviors were observed in the backend APIs that can be tested and investigated at a later time. You can find the details in [this pull request](https://github.com/open-telemetry/opentelemetry-demo/pull/950) and diff --git a/content/en/docs/collector/_index.md b/content/en/docs/collector/_index.md index dc514f972379..7e7121a15ba5 100644 --- a/content/en/docs/collector/_index.md +++ b/content/en/docs/collector/_index.md @@ -15,9 +15,9 @@ The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors. This works with improved scalability and supports open source observability data formats (e.g. Jaeger, Prometheus, -Fluent Bit, etc.) sending to one or more open source or commercial back-ends. -The local Collector agent is the default location to which instrumentation -libraries export their telemetry data. +Fluent Bit, etc.) sending to one or more open source or commercial backends. The +local Collector agent is the default location to which instrumentation libraries +export their telemetry data. ## Objectives diff --git a/content/en/docs/concepts/distributions.md b/content/en/docs/concepts/distributions.md index 8e9fd9785a83..3ad677048e4c 100644 --- a/content/en/docs/concepts/distributions.md +++ b/content/en/docs/concepts/distributions.md @@ -22,8 +22,8 @@ OpenTelemetry component. A distribution is a wrapper around an upstream OpenTelemetry repository with some customizations. 
Customizations in a distribution may include: -- Scripts to ease use or customize use for a specific back-end or vendor -- Changes to default settings required for a back-end, vendor, or end-user +- Scripts to ease use or customize use for a specific backend or vendor +- Changes to default settings required for a backend, vendor, or end-user - Additional packaging options that may be vendor or end-user specific - Test, performance, and security coverage beyond what OpenTelemetry provides - Additional capabilities beyond what OpenTelemetry provides @@ -33,7 +33,7 @@ Distributions would broadly fall into the following categories: - **"Pure":** These distributions provide the same functionality as upstream and are 100% compatible. Customizations would typically be to ease of use or - packaging. These customizations may be back-end, vendor, or end-user specific. + packaging. These customizations may be backend, vendor, or end-user specific. - **"Plus":** These distributions provide the same functionality as upstream plus more. Customizations beyond those found in pure distributions would be the inclusion of additional components. Examples of this would include diff --git a/content/en/docs/concepts/observability-primer.md b/content/en/docs/concepts/observability-primer.md index 9d8cb944f40c..1359e15a209a 100644 --- a/content/en/docs/concepts/observability-primer.md +++ b/content/en/docs/concepts/observability-primer.md @@ -133,8 +133,8 @@ Each root span represents a request from start to finish. The spans underneath the parent provide a more in-depth context of what occurs during a request (or what steps make up a request). -Many Observability back-ends visualize traces as waterfall diagrams that may -look something like this: +Many Observability backends visualize traces as waterfall diagrams that may look +something like this: ![Sample Trace](/img/waterfall-trace.svg 'Trace waterfall diagram') diff --git a/content/en/docs/concepts/signals/baggage.md b/content/en/docs/concepts/signals/baggage.md index eb634ed39124..2d03c195c12e 100644 --- a/content/en/docs/concepts/signals/baggage.md +++ b/content/en/docs/concepts/signals/baggage.md @@ -37,7 +37,7 @@ Common use cases include information that’s only accessible further up a stack This can include things like Account Identification, User IDs, Product IDs, and origin IPs, for example. Passing these down your stack allows you to then add them to your Spans in downstream services to make it easier to filter when -you’re searching in your Observability back-end. +you’re searching in your Observability backend. ![OTel Baggage](/img/otel-baggage-2.svg) diff --git a/content/en/docs/what-is-opentelemetry.md b/content/en/docs/what-is-opentelemetry.md index a753f7433731..e5f168c8593f 100644 --- a/content/en/docs/what-is-opentelemetry.md +++ b/content/en/docs/what-is-opentelemetry.md @@ -109,7 +109,7 @@ migrate to OpenTelemetry [here](/docs/migration/). ## What OpenTelemetry is not -OpenTelemetry is not an observability back-end like Jaeger, Prometheus, or +OpenTelemetry is not an observability backend like Jaeger, Prometheus, or commercial vendors. OpenTelemetry is focused on the generation, collection, management, and export of telemetry data. The storage and visualization of that data is intentionally left to other tools. 
diff --git a/data/registry/collector-exporter-alertmanager.yml b/data/registry/collector-exporter-alertmanager.yml
index 0950e3135a4f..b484de78f7a6 100644
--- a/data/registry/collector-exporter-alertmanager.yml
+++ b/data/registry/collector-exporter-alertmanager.yml
@@ -11,7 +11,7 @@ license: Apache 2.0
 description:
   Exports OTel Events (SpanEvent in Tracing added by AddEvent API) as Alerts to
   [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/)
-  back-end to notify Errors or Change events.
+  backend to notify Errors or Change events.
 authors:
   - name: OpenTelemetry Authors
 package:
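The rule change at the top of this patch broadens the terminology pattern from `back end(s)?` to `back[- ]end(s)?`, so the hyphenated spellings are flagged as well as the spaced ones. A minimal, standalone `.textlintrc.yml` sketch of just that entry follows; the repository's real config carries many more terms and rule options than shown here, so treat this as an isolated illustration rather than the project's configuration.

```yaml
rules:
  terminology:
    # Keep only the one project-specific term for this illustration;
    # the real config also loads other terms and options.
    defaultTerms: false
    terms:
      # "back end", "back ends", "back-end", and "back-ends" all match;
      # "$1" carries the optional plural "s" through to the replacement,
      # so the suggested fixes are "backend" and "backends".
      - ['back[- ]end(s)?', 'backend$1']
```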