# Testing modules {#sec-tests-mods}
```{r}
#| eval: true
#| echo: false
#| include: false
source("_common.R")
```
```{r}
#| label: co_box_tldr
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "b",
header = "TLDR: Testing Modules",
fold = TRUE,
look = "default", hsize = "1.10", size = "1.05",
contents = "
<br>
<h5>**Testing modules**</h5>
- Use `session$returned()` to check server function outputs, ensuring modules respond correctly to inputs.
- Apply `session$flushReact()` to test reactive elements immediately, verifying reactive behavior within modules.
- Parameterize with `args = list()` for flexible testing, simulating various user scenarios and inputs efficiently.
- Aim for efficient module test coverage, identifying and testing critical functionality paths to guarantee module reliability.
"
)
```
---
In the previous chapters we covered adding test fixtures and helpers to our test suite. In this chapter, we'll briefly discuss some tips for testing modules with `testServer()`--specifically, how to verify that modules are transferring values correctly.
:::: {.callout-tip collapse='true' appearance='default'}
## [Accessing the code examples]{style='font-weight: bold; font-size: 1.15em;'}
::: {style='font-size: 0.95em; color: #282b2d;'}
In an effort to make each section accessible and easy to follow, I've created the [`shinypak` R package](https://mjfrigaard.github.io/shinypak/):
Install `shinypak` using `pak` (or `remotes`):
```{r}
#| code-fold: false
#| message: false
#| warning: false
#| eval: false
# install.packages('pak')
pak::pak('mjfrigaard/shinypak')
```
Review the chapters in each section:
```{r}
#| code-fold: false
#| message: false
#| warning: false
#| collapse: true
library(shinypak)
list_apps(regex = 'modules')
```
Launch an app:
```{r}
#| code-fold: false
#| eval: false
launch(app = "13_tests-modules")
```
:::
::::
## Integration tests
Integration tests verify that functions and components work together, which often involves instantiating multiple objects and having them interact with each other in a single test.
```{r}
#| label: git_box_13_tests-modules
#| echo: false
#| results: asis
#| eval: true
git_margin_box(
contents = "launch",
fig_pw = '65%',
branch = "13_tests-modules",
repo = 'sap')
```
We can combine the BDD functions with `testServer()` to test *reactive interactions* between modules. For example, to confirm that the drop-down feature requirement is working (i.e., that user inputs are updating in the application), we need to test two changes (sketched in outline after the list below and developed throughout the chapter):
1. Values passed to the UI are returned from `mod_var_input_server()`
2. The reactive values returned from `mod_var_input_server()` are passed into `mod_scatter_display_server()` and available as the reactive object `inputs()`
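In outline, these two checks translate into two `testServer()` calls, roughly like the sketch below (the details--expectations, logging, and the BDD wrappers--are developed in the sections that follow):
```{r}
#| eval: false
#| code-fold: false
# 1. values set in the UI are returned by mod_var_input_server()
shiny::testServer(app = mod_var_input_server, expr = {
  session$setInputs(y = "imdb_rating", x = "audience_score", z = "mpaa_rating",
                    alpha = 0.5, size = 2, plot_title = "Enter plot title")
  session$returned() # the reactive list returned to movies_server()
})

# 2. the values returned from mod_var_input_server() are collected by
#    mod_scatter_display_server() and available as the reactive inputs()
shiny::testServer(app = mod_scatter_display_server,
  args = list(var_inputs = reactive(list(y = "imdb_rating",
                                         x = "audience_score",
                                         z = "mpaa_rating",
                                         alpha = 0.5,
                                         size = 2,
                                         plot_title = "Enter plot title"))),
  expr = {
    inputs() # the reactive values collected from var_inputs
  })
```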
::: {.callout-note collapse="true" appearance="default"}
### [BDD refresher: features & scenarios]{style="font-size: 1.10em;"}
::: {style="font-size: 1.10em;"}
In BDD, requirements are written in plain-language 'feature files' using a series of keywords:
```{bash}
#| eval: false
#| code-fold: false
Feature: # <1>
As a # <1>
I want # <1>
So that # <1>
Background: # <2>
Given # <3>
And # <3>
Scenario: # <4>
When # <5>
And # <6>
Then # <7>
# <4>
```
1. **High-level description** (title and description)
2. Steps or conditions that exist **before each scenario**
3. Used to **describe the initial context** or **preconditions for the scenario**
4. A series of steps outlining a **concrete example that illustrates a feature**
5. Used to **describe an event or an action**
6. Used to combine **`Given`**, **`When`**, or **`Then`**
7. Used to **verify expected outcomes that are observable by a user**
`Feature` and `Background` information can be included in nested `describe()` blocks, but every `Scenario` (i.e., `Then`) keyword should have a corresponding `it()` or `test_that()` call.
Read more about Gherkin on the [Cucumber website](https://cucumber.io/docs/gherkin/reference/).
:::
:::
The feature, background, and scenario for the changes in `mod_var_input_server()` are provided below:
```{r}
#| eval: false
#| include: true
#| code-fold: false
describe(
"Feature: Scatter Plot Configuration in Movie Review Application
As a user
I want the initial graph pre-configured with variables and aesthetics,
So that I can change the inputs and see a meaningful visualization.",
code = {
describe(
"Background: Initial scatter plot x, y, color values
Given the movie review application is loaded
And the scatter plot initial x-axis value is [IMDB Rating]
And the scatter plot initial y-axis value is [Audience Score]
And the scatter plot initial color value is [MPAA Rating]
And the initial opacity of the points is set to [0.5]
And the initial size of the points is set to [2]
And the initial plot title is set to [Enter plot title]", code = {
it("Scenario: Changing scatter plot x, y, color values
Given the movie review application is loaded
When I choose the [Critics Score] variable for the x-axis
And I choose the [Runtime] variable for the y-axis
And I choose the [Title type] variable for color
Then the scatter plot should show [Critics score] on the x-axis
And the scatter plot should show [Runtime] on the y-axis
And the points on the scatter plot should be colored by [Title type]
And the opacity of the points should be set to [0.5]
And the size of the points should be set to [2]
And the plot title should be [Enter plot title]", code = {
shiny::testServer(app = mod_var_input_server, expr = {
# test code -----
})
})
})
})
```
### Testing return values {#sec-tests-mods-returned}
Inside `testServer()`, we can create a list of initial graph inputs for `mod_var_input_server()`, then pass identical values to `session$setInputs()`, and confirm the returned object with `session$returned()`:[^tests-modules-returned]
[^tests-modules-returned]: Read more about returned values in the section titled, ***'Modules with return values'*** in the [Shiny documentation](https://shiny.posit.co/r/articles/improve/server-function-testing/index.html).
```{r}
#| eval: false
#| include: true
#| code-fold: false
shiny::testServer(app = mod_var_input_server, expr = { # <1>
test_logger(start = "var_inputs", msg = "initial returned()")
# create list of output vals
test_vals <- list(y = "imdb_rating", # <2>
x = "audience_score",
z = "mpaa_rating",
alpha = 0.75,
size = 3,
plot_title = "Example title") # <2>
# change inputs
session$setInputs(y = "imdb_rating", # <3>
x = "audience_score",
z = "mpaa_rating",
alpha = 0.75,
size = 3,
plot_title = "Example title") # <3>
expect_equal(
object = session$returned(), # <4>
expected = test_vals # <4>
)
test_logger(end = "var_inputs", msg = "initial returned()")
}) # <1>
```
1. Call to `testServer()`
2. Create output values for comparison
3. Set each input using `setInputs(input = )`
4. Confirm returned values against `test_vals`
The test above confirms the initial values can be passed and returned from `mod_var_input_server()`.
### Flushing the reactives
If we want to test changing the inputs, we should call `session$flushReact()` to flush the reactive values set by the previous call to `session$setInputs()`:[^tests-modules-flush-react]
```{r}
#| eval: false
#| code-fold: false
shiny::testServer(app = mod_var_input_server, expr = { # <1>
# flush reactives
session$flushReact() # <2>
test_logger(start = "var_inputs", msg = "updated returned()")
# set inputs
session$setInputs(y = "critics_score", # <3>
x = "runtime",
z = "title_type",
alpha = 0.5,
size = 2,
plot_title = "Enter plot title") # <3>
expect_equal(object = session$returned(), # <4>
expected = list(y = "critics_score",
x = "runtime",
z = "title_type",
alpha = 0.5,
size = 2,
plot_title = "Enter plot title")) # <4>
test_logger(end = "var_inputs", msg = "updated returned()")
}) # <1>
```
1. Call to `testServer()`
2. Flush reactives from previous `expect_equal()`
3. Set changed input values using `setInputs(input = )`
4. Confirm returned values against `session$returned()`
[^tests-modules-flush-react]: Read more about flushing reactive values in the section titled, ***'Flushing Reactives'*** in the [Shiny documentation](https://shiny.posit.co/r/articles/improve/server-function-testing/index.html).
The final result of running `test_active_file()` on `test-mod_var_input.R` is below:
```{verbatim}
#| eval: false
#| code-fold: false
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
INFO [2023-11-08 20:00:39] [ START var_inputs = initial returned()]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
INFO [2023-11-08 20:00:39] [ END var_inputs = initial returned()]
INFO [2023-11-08 20:00:39] [ START var_inputs = updated returned()]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
INFO [2023-11-08 20:00:39] [ END var_inputs = updated returned()]
```
### Testing module parameters {#sec-tests-mods-args-list}
Now that we've confirmed `mod_var_input_server()` is returning the initial updated values, we want to make sure reactive values are passed correctly into `mod_scatter_display_server()`.
In `movies_server()`, when we pass `selected_vars` to the `var_inputs` argument, we're not passing the returned values (which is why we don't need the parentheses)--we're passing the reactive expression (i.e., the *function*) created by the call to `reactive()` inside `mod_var_input_server()`.
I've included the `movies_server()` function below to refresh our memory of how this *should* work:[^tests-12]
[^tests-12]: `selected_vars` are the reactive plot values returned from `mod_var_input_server()`.
```{r}
#| eval: false
#| code-fold: false
movies_server <- function(input, output, session) {
selected_vars <- mod_var_input_server("vars") # <1>
mod_scatter_display_server("plot", var_inputs = selected_vars)
}
```
1. Calls `return(reactive(list(...)))`
When we pause execution with Posit Workbench's debugger, we can see the difference between calling `selected_vars` and `selected_vars()`:
::: {layout="[49, -2, 49]"}
``` r
Browse[1]> selected_vars
reactive({
list(
y = input$y,
x = input$x,
z = input$z,
alpha = input$alpha,
size = input$size,
plot_title = input$plot_title
)
})
```
``` r
Browse[1]> selected_vars()
$y
[1] "audience_score"
$x
[1] "imdb_rating"
$z
[1] "mpaa_rating"
$alpha
[1] 0.5
$size
[1] 2
$plot_title
[1] ""
```
:::
We'll cover using `browser()` and the IDE's debugger in more detail in the [debugging chapter](debugging.qmd).
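One way to reproduce a pause like the one above is to drop a temporary `browser()` call into `movies_server()`; note that evaluating a reactive at the `Browse[1]>` prompt may require `shiny::isolate()` if execution is paused outside a reactive context:
```{r}
#| eval: false
#| code-fold: false
movies_server <- function(input, output, session) {
  selected_vars <- mod_var_input_server("vars")
  browser() # temporary: pause here to inspect selected_vars
  # Browse[1]> selected_vars                   # the reactive expression
  # Browse[1]> shiny::isolate(selected_vars()) # its current value
  mod_scatter_display_server("plot", var_inputs = selected_vars)
}
```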
The feature and scenario for the functionality above is captured in `testthat`'s BDD functions below:
```{r}
#| eval: false
#| include: true
#| code-fold: false
describe(
"Feature: Scatter Plot Configuration in Movie Review Application
As a user
I want the initial graph pre-configured with variables and aesthetics,
So that I can immediately see a meaningful visualization.",
code = {
it(
"Scenario: Scatter plot initial x, y, color values
Given the movie review application is loaded
When I view the initial scatter plot
Then the scatter plot should show 'IMDB Rating' on the x-axis
And the scatter plot should show 'Audience Score' on the y-axis
And the points on the scatter plot should be colored by 'MPAA Rating'
And the size of the points should be set to '2'
And the opacity of the points should be set to '0.5'
And the plot title should be 'Enter plot title'",
code = {
})
})
```
Inside `testServer()`, if we're testing a module function that collects reactive values, we need to wrap those values in `reactive()` inside the `args` argument:[^tests-modules-args]
[^tests-modules-args]: Read more about adding parameters to `testServer()` in the section titled, ***'Modules with additional parameters'*** in the [Shiny documentation](https://shiny.posit.co/r/articles/improve/server-function-testing/index.html).
```{r}
#| eval: false
#| code-fold: false
shiny::testServer(
app = mod_scatter_display_server,
args = list(
var_inputs =
reactive( # <1>
list(
x = "critics_score",
y = "imdb_rating",
z = "mpaa_rating",
alpha = 0.5,
size = 2,
plot_title = "Enter Plot Title"
)
) # <1>
),
expr = {
test_logger(start = "display", msg = "selected_vars initial values")
expect_equal( # <2>
object = inputs(),
expected = list(
x = "critics_score",
y = "imdb_rating",
z = "mpaa_rating",
alpha = 0.5,
size = 2,
plot_title = "Enter Plot Title"
)
) # <2>
test_logger(end = "display", msg = "selected_vars initial values")
})
```
1. List of reactive variable inputs
2. Compare `inputs()` to initial values
I've included the example above because it isn't covered in the `testServer()` [documentation](https://shiny.posit.co/r/articles/improve/server-function-testing/), and I've found this method works well if you want to confirm two modules are communicating (i.e., returning and collecting outputs). System tests with `shinytest2` are a better option if we're trying to capture a more comprehensive execution path (i.e., a user scenario) in the application.
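For comparison, a system test of the drop-down scenario with `shinytest2` might look something like the sketch below. It assumes an app object can be built from `movies_ui()` and `movies_server()`, and that Shiny's default `NS()` separator produces the namespaced IDs `vars-*` and `plot-scatterplot`; these names are illustrative and would need to match the actual app:
```{r}
#| eval: false
#| code-fold: false
library(shinytest2)

test_that("Scenario: changing x, y, and color updates the scatter plot", {
  app <- AppDriver$new(
    app_dir = shiny::shinyApp(ui = movies_ui(), server = movies_server), # assumed app object
    name = "scatter-inputs"
  )
  # namespaced input IDs: module id ("vars") + "-" + input name
  app$set_inputs(`vars-x` = "runtime",
                 `vars-y` = "critics_score",
                 `vars-z` = "title_type")
  # snapshot the rendered plot output from the display module
  app$expect_values(output = "plot-scatterplot")
  app$stop()
})
```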
## Module test coverage {#sec-tests-mods-coverage}
When we check the code coverage for the test above, we can see it confirms `var_inputs` is communicating the reactive values to `inputs()` in `mod_scatter_display_server()`, but this test doesn't execute the plot-rendering code inside the `renderPlot()` call:
```{r}
#| label: hot_key_cf_02
#| echo: false
#| results: asis
#| eval: true
hot_key(fun = 'cf')
```
```{r}
#| eval: false
#| code-fold: false
devtools:::test_coverage_active_file()
```
![](images/tests_code_coverage_modules.png){width="100%" fig-align='center'}
### Testing module outputs {#sec-tests-mods-outputs}
To confirm the plot is being created properly in `mod_scatter_display_server()`, we can't use the `ggplot2::is.ggplot()` function on `output$scatterplot` because the plot has already been rendered by `renderPlot()` (the output is a list describing the rendered image, not a `ggplot` object). However, we can verify the structure of the `output$scatterplot` object using any of the following expectations:
```{r}
#| eval: false
#| code-fold: false
expect_true(
object = is.list(output$scatterplot))
expect_equal(
object = names(output$scatterplot),
expected = c("src", "width", "height", "alt", "coordmap"))
expect_equal(
object = output$scatterplot[["alt"]],
expected = "Plot object")
```
It's also possible to build the graph *inside* the test using the same code from the module server function, then confirm it with `ggplot2::is.ggplot()`:
```{r}
#| eval: false
#| code-fold: false
plot <- scatter_plot(movies, # <1>
x_var = inputs()$x,
y_var = inputs()$y,
col_var = inputs()$z,
alpha_var = inputs()$alpha,
size_var = inputs()$size) +
ggplot2::labs(
title = inputs()$plot_title,
x = stringr::str_replace_all(
tools::toTitleCase(inputs()$x), "_", " "),
y = stringr::str_replace_all(
tools::toTitleCase(inputs()$y), "_", " ")) +
ggplot2::theme_minimal() +
ggplot2::theme(legend.position = "bottom") # <1>
testthat::expect_true(ggplot2::is.ggplot(plot)) # <2>
```
1. Build graph (same code from module function)
2. Confirm `ggplot2` object is built
If we're still skeptical that this test confirms the plot is being built correctly, we can pass `plot` to `print()` inside the test and the graph will appear in the **Plots** pane.
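For example, a temporary `print()` call can be added right after `plot` is built in the test above (and removed once we're satisfied):
```{r}
#| eval: false
#| code-fold: false
# inside the testServer() expression, after building `plot` as above:
print(plot) # temporary: renders the graph in the Plots pane while the test runs
testthat::expect_true(ggplot2::is.ggplot(plot))
```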
![Passing `plot` to `print()` will send the graph to the **Plots** pane](images/tests_print_plot_module.png){width="100%" fig-align='center'}
## Recap {.unnumbered}
```{r}
#| label: co_box_app_recap
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "g",
header = "Recap: testing modules",
fold = FALSE,
look = "default", hsize = "1.10", size = "1.05",
contents = "
<br>
<h5>**Testing modules**</h5>
This chapter delves into the intricacies of testing Shiny modules. Let's briefly recap the key points covered:
- `session$returned()`: allows us to capture and examine the values returned by server-side functions, which is essential for validating the behavior of modules in response to user inputs and server-side processing.
- `session$flushReact()` is crucial for testing reactive expressions and observers.
- Using `session$flushReact()` forces the reactive system to execute, enabling us to test the outcomes of reactive expressions and observe their effects within the context of the module's functionality.
- `args = list()`: We discussed the importance of parameterizing module server functions using `args = list()` to facilitate more flexible and comprehensive testing.
- Parameterized modules make it easy to simulate a variety of scenarios and inputs, enhancing test coverage and the robustness of each module's functionality.
- Module test coverage: We outlined strategies for identifying critical paths through a module's functionality, testing a range of inputs and user interactions, and ensuring that tests are efficient and maintainable.
"
)
```
```{r}
#| label: git_contrib_box
#| echo: false
#| results: asis
#| eval: true
git_contrib_box()
```