diff --git a/test_copy.qmd b/test_copy.qmd
index cf51a4e..d078371 100644
--- a/test_copy.qmd
+++ b/test_copy.qmd
@@ -15,10 +15,35 @@ library(tidyverse)
 This script creates a `count.csv` file that has a column "count", with the first value being the number of counties in the S3 bucket and the second being the number of places.
 ```{bash}
+echo "count" > count.csv
+aws s3 ls s3://mobility-metrics-data-pages-dev/999_county-pages/ | wc -l >> count.csv
+aws s3 ls s3://mobility-metrics-data-pages-dev/998_place-pages/ | wc -l >> count.csv
+```
+
+```{r}
+library(tidyverse)
+count <- read_csv("count.csv")
+counties_count <- count[1, 1] %>%
+  pull(count)
+
+places_count <- count[2, 1] %>%
+  pull(count)
+
+stopifnot(counties_count == 3143)
+stopifnot(places_count == 486)
+```
+
+## Test 2:
+The commands below list the most recently modified object under each prefix; because `aws s3 ls --recursive` prints the timestamp first, sorting the lines and taking the last one returns the newest file.
+```{bash}
+aws s3 ls s3://mobility-metrics-data-pages-dev/999_county-pages/ --recursive | sort | tail -n 1
+```
+```{bash}
+aws s3 ls s3://mobility-metrics-data-pages-dev/998_place-pages/ --recursive | sort | tail -n 1
 ```
 
 ## Test 3:
 To ensure that all of the files are created, we can count the number of files called `index.html` on the EC2 instance in each of the sub-directories.
 Specifically, you can run: `find factsheets/999_county-pages/ -type f -name 'index.html' | wc -l`
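
Outside the patch itself, the Test 3 check can be exercised locally against a mock directory tree. This is a sketch only: the temporary `factsheets/` layout and the two county folder names are made up for illustration, while the `find ... | wc -l` pipeline is the same one the document runs on the EC2 instance.

```shell
# Build a throwaway mock of the factsheets tree (folder names are hypothetical).
tmp=$(mktemp -d)
mkdir -p "$tmp/factsheets/999_county-pages/county_a" \
         "$tmp/factsheets/999_county-pages/county_b"
touch "$tmp/factsheets/999_county-pages/county_a/index.html" \
      "$tmp/factsheets/999_county-pages/county_b/index.html"

# Same pipeline as Test 3: count every index.html under the county pages.
n=$(find "$tmp/factsheets/999_county-pages/" -type f -name 'index.html' | wc -l)

# On the real instance this count should equal the number of counties (3143);
# the mock tree here contains 2.
test "$n" -eq 2 && echo "mock count OK"

rm -rf "$tmp"
```

The same pattern works for `998_place-pages/`, where the expected count is 486.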