diff --git a/website/www/site/content/en/case-studies/accenture_baltics.md b/website/www/site/content/en/case-studies/accenture_baltics.md
index 5fdecc533382..85896aab3547 100644
--- a/website/www/site/content/en/case-studies/accenture_baltics.md
+++ b/website/www/site/content/en/case-studies/accenture_baltics.md
@@ -65,12 +65,10 @@ The team responsible for transforming, curating, and preparing data, including t
As an example, a streaming pipeline ingests transaction data from Pub/Sub, performs basic ETL and data cleaning, and outputs the results to BigQuery. A separate batch Dataflow pipeline evaluates a binary classification model, reading its input from and writing its results to Google Cloud Storage. The following diagram (Figure 1) shows a workflow that uses Pub/Sub to feed Dataflow pipelines across three Google Cloud projects. It also shows how Dataflow, Composer, Cloud Storage, BigQuery, and Grafana integrate into the architecture.
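The streaming pipeline described above follows a common Beam shape: read from Pub/Sub, parse and clean each record, and write to BigQuery. The following is a minimal sketch in the Beam Python SDK; the topic, table, schema, and field names are illustrative assumptions, not Accenture Baltics' actual code.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_and_clean(message: bytes):
    """Parse a Pub/Sub message and drop records missing required fields."""
    record = json.loads(message.decode("utf-8"))
    if record.get("transaction_id") and record.get("amount") is not None:
        yield {
            "transaction_id": record["transaction_id"],
            "amount": float(record["amount"]),
            "currency": record.get("currency", "EUR"),
        }


options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        # Hypothetical topic name; replace with the real one.
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/transactions")
        | "ParseAndClean" >> beam.FlatMap(parse_and_clean)
        # Hypothetical table and schema for the cleaned records.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table="my-project:analytics.transactions",
            schema="transaction_id:STRING,amount:FLOAT,currency:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

The same code becomes a managed streaming job like the ones in Figure 1 when launched with the Dataflow runner and streaming options set.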
Figure 1. This diagram illustrates how Pub/Sub feeds streaming Dataflow pipelines across three Google Cloud projects (Prod, Dev, and Test), and how Dataflow, Composer, Cloud Storage, BigQuery, and Grafana are integrated into the overall architecture.
Figure 2. Accenture Baltics uses a customized Grafana dashboard to track the data freshness metric of streaming Dataflow jobs.
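Grafana dashboards for Dataflow typically chart metrics pulled from Cloud Monitoring, where data freshness corresponds to Dataflow's watermark-age metrics. As a rough sketch of reading the same signal programmatically, the snippet below uses the Cloud Monitoring Python client; the project ID is a placeholder, and the metric name and its INT64-seconds value type are assumptions about Dataflow's per-stage watermark-age metric, not details from the case study.

```python
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
# Look at the last hour of data points.
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

series = client.list_time_series(
    request={
        "name": "projects/my-project",  # placeholder project ID
        "filter": (
            'metric.type = '
            '"dataflow.googleapis.com/job/per_stage_data_watermark_age"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    job = ts.resource.labels.get("job_name", "unknown")
    for point in ts.points:
        # Watermark age in seconds: smaller means fresher data.
        print(job, point.value.int64_value)
```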