From fd0efc263a7cd57d02f345fb9a932b468ddf9376 Mon Sep 17 00:00:00 2001
From: Ben Butler-Cole
Date: Fri, 3 May 2024 15:50:04 +0100
Subject: [PATCH] Fix dummy data links (#1508)

---
 docs/actions-pipelines.md | 2 +-
 docs/workflow.md          | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/actions-pipelines.md b/docs/actions-pipelines.md
index c80d85eb4..71ddc510b 100644
--- a/docs/actions-pipelines.md
+++ b/docs/actions-pipelines.md
@@ -5,7 +5,7 @@ This section covers how to develop, run, and test your code to ensure it will wo
 
 ## Project pipelines
 
-The [ehrQL](/ehrql/how-to/dummy-data.md) documentation describes how to make an action which generate dummy datasets based on the instructions defined in your `dataset_definition.py` script.
+The [ehrQL](/ehrql/how-to/dummy-data) documentation describes how to make an action which generate dummy datasets based on the instructions defined in your `dataset_definition.py` script.
 These dummy datasets are the basis for developing the analysis code that will eventually be passed to the server to run on real datasets.
 The code can be written and run on your local machine using whatever development set up you prefer (e.g., developing R in RStudio). However, it's important to ensure that this code will run successfully in OpenSAFELY's secure environment too, using the specific language and package versions that are installed there. To do this, you should use the project pipeline.
 
diff --git a/docs/workflow.md b/docs/workflow.md
index 1e87b030c..fd9e7d265 100644
--- a/docs/workflow.md
+++ b/docs/workflow.md
@@ -11,7 +11,7 @@ This repo will contain all the code relating to your project, and a history of i
     - specify the patient population (dataset rows) and variables (dataset columns)
     - specify the expected distributions of these variables for use in dummy data
     - specify (or create) the [codelists](codelist-intro.md) required by the study definition, hosted by [OpenCodelists](https://www.opencodelists.org), and import them to the repo.
-3. **Generate [dummy data](/ehrql/how-to/dummy-data.md)** based on the dataset definition, for writing and testing code.
+3. **Generate [dummy data](/ehrql/how-to/dummy-data)** based on the dataset definition, for writing and testing code.
 4. **Develop analysis scripts** using the dummy data in R, Stata, or Python. This will include:
     - importing and processing the dataset(s) created by the cohort extractor
     - importing any other external files needed for analysis