[CI/textlint] Enforce "backend" rather than "back-end" (#3892)
chalin authored Jan 30, 2024
1 parent d99f15e commit 8161083
Showing 14 changed files with 40 additions and 40 deletions.
2 changes: 1 addition & 1 deletion .textlintrc.yml
@@ -113,7 +113,7 @@ rules:
# https://github.com/sapegin/textlint-rule-terminology/blob/ca36a645c56d21f27cb9d902b5fb9584030c59e3/index.js#L137-L142.
#
- ['3rd[- ]party', third-party]
-- ['back end(s)?', 'backend$1']
+- ['back[- ]end(s)?', 'backend$1']
- ['bugfix', 'bug fix']
- [cpp, C++]
- # dotnet|.net -> .NET, but NOT for strings like:
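
As a rough sketch of what the broadened pattern does (approximating, not reproducing, how textlint-rule-terminology applies its entries), the added `[- ]` is what lets the rule flag the hyphenated spelling as well:

```ts
// Illustration only: approximates the terminology entry as a case-insensitive,
// word-bounded regex with the same replacement template. Not the rule's actual code.
const pattern = /\bback[- ]end(s)?\b/gi;
const fix = (text: string): string => text.replace(pattern, 'backend$1');

console.log(fix('our back-end services')); // -> "our backend services"
console.log(fix('several back ends'));     // -> "several backends"
console.log(fix('the Back-End team'));     // -> "the backend team"
```
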
4 changes: 2 additions & 2 deletions content/en/blog/2022/apisix/index.md
@@ -29,8 +29,8 @@ and sends it to OpenTelemetry Collector through HTTP protocol. Apache APISIX
starts to support this feature in v2.13.0.

One of OpenTelemetry's special features is that the agent/SDK of OpenTelemetry
-is not locked with back-end implementation, which gives users flexibilities on
-choosing their own back-end services. In other words, users can choose the
+is not locked with backend implementation, which gives users flexibilities on
+choosing their own backend services. In other words, users can choose the
backend services they want, such as Zipkin and Jaeger, without affecting the
application side.

8 changes: 4 additions & 4 deletions content/en/blog/2022/frontend-overhaul/index.md
@@ -114,13 +114,13 @@ This proposal was presented to the OpenTelemetry demo SIG during one of the
weekly Monday meetings and we were given the green light to move ahead. As part
of the changes, we decided to use [Next.js](https://nextjs.org/) to not only
work as the primary front-end application but also to work as an aggregation
-layer between the front-end and the gRPC back-end services.
+layer between the front-end and the gRPC backend services.

![New Front-end Data Flow](data-flow.png)

As you can see in the diagram, the application has two major connectivity
points, one coming from the browser side (REST) to connect to the Next.js
-aggregation layer and the other from the aggregation layer to the back-end
+aggregation layer and the other from the aggregation layer to the backend
services (gRPC).

## OpenTelemetry Instrumentation
@@ -129,7 +129,7 @@ The next big thing we worked was a way to instrument both sides of the Next.js
app. To do this we had to connect the app twice to the same collector used by
all the microservices.

-A simple back-end solution was designed using the
+A simple backend solution was designed using the
[official gRPC exporter](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-grpc)
in combination with the
[Node.js SDK](https://www.npmjs.com/package/@opentelemetry/sdk-node).
@@ -160,7 +160,7 @@ CORS requests from the web app.

Once the setup is complete, by loading the application from Docker and
interacting with the different features, we can start looking at the full traces
-that begin from the front-end user events all the way to the back-end gRPC
+that begin from the front-end user events all the way to the backend gRPC
services.

![Front-end Trace Jaeger Visualization](jaeger.png)
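
For context, a minimal sketch of the instrumentation approach this post describes — the official gRPC trace exporter wired into the Node.js SDK; the collector URL and service name below are placeholders, not values from the demo:

```ts
// tracing.ts — minimal sketch; endpoint and service name are placeholders.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';

const sdk = new NodeSDK({
  serviceName: 'frontend', // hypothetical service name
  traceExporter: new OTLPTraceExporter({
    // assumes a Collector listening on the default OTLP/gRPC port
    url: 'http://otel-collector:4317',
  }),
});

sdk.start();

// Flush and shut down cleanly when the process exits.
process.on('SIGTERM', () => {
  sdk.shutdown().catch(console.error).finally(() => process.exit(0));
});
```
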
2 changes: 1 addition & 1 deletion content/en/blog/2023/end-user-discussions-01.md
@@ -82,7 +82,7 @@ you will have to send the spans to a centralized service.
#### 3- Bifurcating data in a pipeline

**Q:** If I want to use the Collector to send different sets of data to
-different back-ends, what’s the best way to go about it?
+different backends, what’s the best way to go about it?

**A:**
[Connectors](https://github.com/open-telemetry/opentelemetry-collector/pull/6140)
26 changes: 13 additions & 13 deletions content/en/blog/2023/end-user-q-and-a-01.md
@@ -28,10 +28,10 @@ OpenTelemetry with [GraphQL](https://graphql.org/).

J and his team embarked on their OpenTelemetry journey for two main reasons:

-- J’s company uses a few different observability back-ends. His team had
-switched to a vendor back-end that was different from the back-end used by
-other teams that they interfaced with. OpenTelemetry allowed them to continue
-to get end-to-end Traces in spite of using different vendors.
+- J’s company uses a few different observability backends. His team had switched
+to a vendor backend that was different from the backend used by other teams
+that they interfaced with. OpenTelemetry allowed them to continue to get
+end-to-end Traces in spite of using different vendors.
- His team was using GraphQL, and needed to be able to better understand what
was happening behind the scenes with their GraphQL calls.

@@ -58,9 +58,9 @@ Across the organization, different teams have chosen to use different
observability platforms to suit their needs, resulting in a mix of both open
source and proprietary observability tools.

-J’s team had recently migrated from one observability back-end to another. After
+J’s team had recently migrated from one observability backend to another. After
this migration, they started seeing gaps in trace data, because other teams that
-they integrated with were still using a different observability back-end. As a
+they integrated with were still using a different observability backend. As a
result, they no longer had an end-to-end picture of their traces. The solution
was to use a standard, vendor-neutral way to emit telemetry: OpenTelemetry.

@@ -133,7 +133,7 @@ is currently discouraging teams from creating their own custom spans. Since they
do a lot of asynchronous programming, it can be very difficult for developers to
understand how the context is going to behave across asynchronous processes.

-Traces are sent to their observability back-end using that vendor’s agent, which
+Traces are sent to their observability backend using that vendor’s agent, which
is installed on all of their nodes.

### Besides traces, do you use other signals?
@@ -142,7 +142,7 @@ The team has implemented a custom Node.js plugin for getting certain
[metrics](/docs/concepts/signals/metrics/) data about GraphQL, such as
deprecated field usage and overall query usage, which is something that they
can’t get from their traces. These metrics are being sent to the observability
-back-end through the
+backend through the
[OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector)’s
[OTLP metrics receiver](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md).

@@ -158,7 +158,7 @@ The team uses
[Amazon Elasticache](https://en.wikipedia.org/wiki/Amazon_ElastiCache) and the
[ELK stack](https://www.techtarget.com/searchitoperations/definition/Elastic-Stack)
for logging. They are currently doing a proof-of-concept (POC) of migrating .NET
-logs to their observability back-end. The ultimate goal is to have
+logs to their observability backend. The ultimate goal is to have
[metrics](/docs/concepts/signals/metrics/),
[logs](/docs/concepts/signals/logs/), and
[traces](/docs/concepts/signals/traces/) under one roof.
@@ -171,7 +171,7 @@ link traces and metrics.

### How is the organization sending telemetry data to various observability back-ends?

-J’s team uses a combination of the proprietary back-end agent and the
+J’s team uses a combination of the proprietary backend agent and the
OpenTelemetry Collector (for metrics). They are one of the primary users of
OpenTelemetry at J’s company, and he hopes to help get more teams to make the
switch.
Expand Down Expand Up @@ -245,7 +245,7 @@ which they intend to give back to the OpenTelemetry community.

### Are you planning on instrumenting mainframe code?

-The observability back-end used by J’s team provided native instrumentation for
+The observability backend used by J’s team provided native instrumentation for
the mainframe. J and his team would have loved to instrument mainframe code
using OpenTelemetry. Unfortunately, there is currently no OpenTelemetry SDK for
PL/I (and other mainframe languages such as
@@ -288,8 +288,8 @@ JavaScript environments are akin to the Wild West of Development due to:

One of J’s suggestions is to treat OTel JavaScript as a hierarchy, which starts
with a Core JavaScript team that splits into two subgroups: front-end web group,
-and back-end group. Front-end and back-end would in turn split. For example, for
-the back-end, have a separate Deno and Node.js group.
+and backend group. Front-end and backend would in turn split. For example, for
+the backend, have a separate Deno and Node.js group.

Another suggestion is to have a contrib maintainers group, separate from core
SDK and API maintainers group.
12 changes: 6 additions & 6 deletions content/en/blog/2023/end-user-q-and-a-03.md
@@ -44,7 +44,7 @@ alerting. The team is responsible for maintaining Observability tooling,
managing deployments related to Observability tooling, and educating teams on
instrumenting code using OpenTelemetry.

-Iris first started her career as a software engineer, focusing on back-end
+Iris first started her career as a software engineer, focusing on backend
development. She eventually moved to a DevOps Engineering role, and it was in
this role that she was introduced to cloud monitoring through products such as
[Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and
@@ -91,9 +91,9 @@ created by her team. On the open source tooling front:

- [Grafana](https://grafana.com) is used for dashboards
- OpenTelemetry is used for emitting traces, and
-[Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing back-end
+[Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing backend
- [Jaeger](https://jaegertracing.io) is still used in some cases for emitting
-traces and as a tracing back-end, because some teams have not yet completely
+traces and as a tracing backend, because some teams have not yet completely
moved to OpenTelemetry for instrumenting traces
([via Jaeger’s implementation of the OpenTracing API](https://medium.com/velotio-perspectives/a-comprehensive-tutorial-to-implementing-opentracing-with-jaeger-a01752e1a8ce)).
- [Prometheus Thanos](https://github.com/thanos-io/thanos) (highly-available
@@ -141,7 +141,7 @@ They are not fully there yet:

In spite of that, Iris and her team are leveraging the power of the
[OpenTelemetry Collector](/docs/collector/) to gather and send metrics and
-traces to various Observability back-ends. Since she and her team started using
+traces to various Observability backends. Since she and her team started using
OpenTelemetry, they started instrumenting more traces. In fact, with their
current setup, Iris has happily reported that they went from processing 1,000
spans per second, to processing 40,000 spans per second!
@@ -301,7 +301,7 @@ Are you currently using any processors on the OTel Collector? \
The team is currently experimenting with processors, namely for data masking ([transform processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor),
or [redaction processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/redactionprocessor)),
especially as they move to using OTel Logs, which will contain sensitive data that
-they won’t want to transmit to their Observability back-end. They currently, however,
+they won’t want to transmit to their Observability backend. They currently, however,
are only using the [batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md).

### Are you aware of any teams using span events?
@@ -344,7 +344,7 @@ instances of the Collector, using around 8GB memory.
This is something that is currently being explored. The team is exploring
[traces/metrics correlation (exemplars)](/docs/specs/otel/metrics/data-model/#exemplars)
through OpenTelemetry; however, they found that this correlation is accomplished
-more easily through their tracing back-end, Tempo.
+more easily through their tracing backend, Tempo.

### Are you concerned about the amount of data that you end up producing, transporting, and collecting? How do you ensure data quality?

2 changes: 1 addition & 1 deletion content/en/blog/2023/humans-of-otel.md
@@ -181,7 +181,7 @@ together.

And in order to use all of these tools together, you need to have the data
coming in, the telemetry actually be integrated, so you can't have three
-separate streams of telemetry. And then on the back-end, be like, I want to
+separate streams of telemetry. And then on the backend, be like, I want to
cross-reference. All of that telemetry has to be organized into an actual graph.
You need a graphical data structure that all these individual signals are a part
of. For me, that is what modern Observability is all about.
2 changes: 1 addition & 1 deletion content/en/blog/2023/testing-otel-demo/index.md
@@ -343,7 +343,7 @@ the demo. This will evaluate all services in the OpenTelemetry Demo.

During the development of the tests, we noticed some differences in the test
results. For example, some minor fixes were made to the Cypress tests, and some
-behaviors were observed in the back-end APIs that can be tested and investigated
+behaviors were observed in the backend APIs that can be tested and investigated
at a later time. You can find the details in
[this pull request](https://github.com/open-telemetry/opentelemetry-demo/pull/950)
and
6 changes: 3 additions & 3 deletions content/en/docs/collector/_index.md
@@ -15,9 +15,9 @@ The OpenTelemetry Collector offers a vendor-agnostic implementation of how to
receive, process and export telemetry data. It removes the need to run, operate,
and maintain multiple agents/collectors. This works with improved scalability
and supports open source observability data formats (e.g. Jaeger, Prometheus,
-Fluent Bit, etc.) sending to one or more open source or commercial back-ends.
-The local Collector agent is the default location to which instrumentation
-libraries export their telemetry data.
+Fluent Bit, etc.) sending to one or more open source or commercial backends. The
+local Collector agent is the default location to which instrumentation libraries
+export their telemetry data.

## Objectives

6 changes: 3 additions & 3 deletions content/en/docs/concepts/distributions.md
@@ -22,8 +22,8 @@ OpenTelemetry component. A distribution is a wrapper around an upstream
OpenTelemetry repository with some customizations. Customizations in a
distribution may include:

-- Scripts to ease use or customize use for a specific back-end or vendor
-- Changes to default settings required for a back-end, vendor, or end-user
+- Scripts to ease use or customize use for a specific backend or vendor
+- Changes to default settings required for a backend, vendor, or end-user
- Additional packaging options that may be vendor or end-user specific
- Test, performance, and security coverage beyond what OpenTelemetry provides
- Additional capabilities beyond what OpenTelemetry provides
@@ -33,7 +33,7 @@ Distributions would broadly fall into the following categories:

- **"Pure":** These distributions provide the same functionality as upstream and
are 100% compatible. Customizations would typically be to ease of use or
-packaging. These customizations may be back-end, vendor, or end-user specific.
+packaging. These customizations may be backend, vendor, or end-user specific.
- **"Plus":** These distributions provide the same functionality as upstream
plus more. Customizations beyond those found in pure distributions would be
the inclusion of additional components. Examples of this would include
4 changes: 2 additions & 2 deletions content/en/docs/concepts/observability-primer.md
@@ -133,8 +133,8 @@ Each root span represents a request from start to finish. The spans underneath
the parent provide a more in-depth context of what occurs during a request (or
what steps make up a request).

-Many Observability back-ends visualize traces as waterfall diagrams that may
-look something like this:
+Many Observability backends visualize traces as waterfall diagrams that may look
+something like this:

![Sample Trace](/img/waterfall-trace.svg 'Trace waterfall diagram')

2 changes: 1 addition & 1 deletion content/en/docs/concepts/signals/baggage.md
@@ -37,7 +37,7 @@ Common use cases include information that’s only accessible further up a stack
This can include things like Account Identification, User IDs, Product IDs, and
origin IPs, for example. Passing these down your stack allows you to then add
them to your Spans in downstream services to make it easier to filter when
-you’re searching in your Observability back-end.
+you’re searching in your Observability backend.

![OTel Baggage](/img/otel-baggage-2.svg)

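A small illustrative sketch of that use case with the JavaScript Baggage API (the `user.id` key and value are made up for illustration):

```ts
import { context, propagation, trace } from '@opentelemetry/api';

// Upstream service: place a user ID into baggage so it travels with the context.
// The key and value here are illustrative, not an OpenTelemetry convention.
const baggage = propagation.createBaggage({
  'user.id': { value: 'user-12345' },
});
const ctx = propagation.setBaggage(context.active(), baggage);

context.with(ctx, () => {
  // Downstream code (in-process here, for brevity): read the baggage entry
  // and copy it onto the current span as an attribute for easier filtering.
  const entry = propagation.getBaggage(context.active())?.getEntry('user.id');
  if (entry) {
    trace.getActiveSpan()?.setAttribute('user.id', entry.value);
  }
});
```
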
2 changes: 1 addition & 1 deletion content/en/docs/what-is-opentelemetry.md
@@ -109,7 +109,7 @@ migrate to OpenTelemetry [here](/docs/migration/).

## What OpenTelemetry is not

-OpenTelemetry is not an observability back-end like Jaeger, Prometheus, or
+OpenTelemetry is not an observability backend like Jaeger, Prometheus, or
commercial vendors. OpenTelemetry is focused on the generation, collection,
management, and export of telemetry data. The storage and visualization of that
data is intentionally left to other tools.
2 changes: 1 addition & 1 deletion data/registry/collector-exporter-alertmanager.yml
@@ -11,7 +11,7 @@ license: Apache 2.0
description:
Exports OTel Events (SpanEvent in Tracing added by AddEvent API) as Alerts to
[Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/)
-back-end to notify Errors or Change events.
+backend to notify Errors or Change events.
authors:
- name: OpenTelemetry Authors
package:
