Improve handling of numeric output type IDs. #60
Merged
Commits (20):
- aee5bac add model-metadata-schema to pass updated hubUtils validation checks (annakrystalli)
- e8da5b4 Improve handling of numeric output type IDs. Resolves #58 & #54 (annakrystalli)
- ea92169 Merge branch 'main' into num-type-ids (annakrystalli)
- 8e0da25 fix error caused by model task with no required task id/output type c… (annakrystalli)
- ae8c0fc improve exec error message capturing (annakrystalli)
- b9374a2 update news (annakrystalli)
- ae89852 add non req model task tests (annakrystalli)
- 7da8f3a bump hubUtils requirement (annakrystalli)
- 730040b Add deployment note in docs (annakrystalli)
- ec8e614 add ability to parse model metadata file names for metadata (annakrystalli)
- 177e8a5 Add fn opt_check_metadata_team_max_model_n. Resolves #34 (annakrystalli)
- 5b53f1b add test cases (currently failing) that validations reject floating p… (elray1)
- 7e42400 Update tests/testthat/testdata/hub-chr/hub-config/admin.json (annakrystalli)
- b116688 Update tests/testthat/testdata/hub-it/hub-config/admin.json (annakrystalli)
- d756bee Update tests/testthat/testdata/hub-num/hub-config/admin.json (annakrystalli)
- 0ae59f1 Merge pull request #61 from Infectious-Disease-Modeling-Hubs/test_che… (annakrystalli)
- e1842e1 skip tests offline (annakrystalli)
- 30910f6 ensure default read behaviour for file type (annakrystalli)
- 27f1e65 require all character tbls (annakrystalli)
- 24a6bf8 read in all character tbl for some checks (annakrystalli)
New file (@@ -0,0 +1,39 @@):

```r
#' Check that submitting team does not exceed maximum number of allowed models
#' per team
#'
#' @inherit check_metadata_file_exists params
#' @param n_max Integer. Number of maximum allowed models per team.
#' @inherit check_tbl_col_types return
#' @details
#' Should be deployed as part of `validate_model_metadata` optional checks.
#'
#' @export
opt_check_metadata_team_max_model_n <- function(file_path, hub_path, n_max = 2L) {
  # Identify the submitting team from the metadata file name
  team_abbr <- parse_file_name(
    file_path,
    file_type = "model_metadata"
  )$team_abbr
  all_model_meta <- hubUtils::load_model_metadata(hub_path)

  # Count models with valid metadata already registered to this team
  team_models <- all_model_meta[["model_abbr"]][all_model_meta[["team_abbr"]] == team_abbr]
  n_models <- length(team_models)
  check <- isFALSE(n_models > n_max)
  if (check) {
    details <- NULL
  } else {
    details <- cli::format_inline(
      "Team {.val {team_abbr}} has submitted valid metadata for
      {.val {n_models}} model{?s}:
      {.val {team_models}}."
    )
  }

  capture_check_cnd(
    check = check,
    file_path = file_path,
    msg_subject = cli::format_inline(
      "Maximum number of models per team ({.val {n_max}})"
    ),
    msg_attribute = "exceeded.",
    msg_verbs = c("not", ""),
    details = details
  )
}
```
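A hedged usage sketch (the metadata file name below is hypothetical; per the `@details`, the check is meant to run among `validate_model_metadata()` optional checks):

```r
library(hubValidations)

# Fails (captured as a check condition) if the team named in the file name
# already has more than n_max models with valid metadata in the hub
opt_check_metadata_team_max_model_n(
  file_path = "team1-modelA.yml",  # hypothetical <team_abbr>-<model_abbr> file name
  hub_path = ".",
  n_max = 2L
)
```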
---

**elray1** commented:
I think the use of `as.character` here would open up the problems I outlined in discussion #59. I added an example of a unit test that passes, but that I think should not pass, here. In that example, the `output_type_id` in the submission file is `0.09999999999999997779554`. On my system, this is not equal to the floating point representation of 0.1, but `as.character` casts it to `"0.1"`. Here's an illustration:
As a result of this, if we use values coerced via `as.character` to do validations, we may be generous in passing submissions that `as.character` rounds to the value specified in the tasks config file. This would then cause issues later on in places like ensembling code or filtering code, where grouping or filtering on the `output_type_id` may give unexpected results.

I am wondering if a way forward might be to do the data type coercion in a different order, or just deal with character representations from the start for purposes of validating `output_type_id`s and task id variables? Something like:

1. Read the `value` column as character strings. Additionally, read the contents of `tasks.json` as strings if that's possible (may not be?).
2. Compare `output_type_id`s and task id variables with the required/optional values specified in `tasks.json`, as strings. For example, if the `tasks.json` file says that a valid quantile level is 0.1, we expect to see the exact characters `"0.1"` in the submission file. This would change our current passing validation results for submissions with values like 0.1000000000000000055511 to failures.

Something like this would entirely sidestep the issues of what gets rounded to 0.1 or what the floating point representation of 0.1 is.

Another option might be to use arrow's `cast` function for type conversion instead of `as.character`, since it seems to be more careful in what it rounds. But my question about that is whether the outputs here would be stable across platforms. For example, maybe on one computer the floating point representation of 0.1 is 0.1000000000000000055511 and on another computer (e.g., one that's not using 64-bit representations of numbers) it's something slightly different. For that reason, it seems better to avoid relying on equality of floating point representations if possible.
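A hedged illustration of the arrow option (`Array$create()` and `$cast()` are the arrow R bindings; the printed string is what a shortest-round-trip formatter typically produces and, as the comment above worries, may vary by platform):

```r
library(arrow)

# Cast the problematic double to string via arrow's C++ cast kernel rather
# than R's as.character(), which formats to 15 significant digits
a <- Array$create(0.09999999999999997779554, type = float64())
a$cast(utf8())
# Typically "0.09999999999999998" (shortest round-trip), i.e. not rounded
# to "0.1" -- an assumption to verify on your platform, not a guarantee
```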
---

**annakrystalli** commented:
Thanks @elray1. Great points, and I agree that although some output IDs are numeric, given we want to be matching output ID values effectively as categories / using them for grouping (both in validation and downstream), it feels indeed more appropriate to work with data as character for this purpose and match exact values, not check for equality using numeric tolerances or rounding behaviours.

So I will look into the most efficient way of including all-character versions of data in our model output validation checks.

I will merge your currently failing test to include in the test suite. I also feel the original test you included (of a `0.1000000000000000055511` value included instead of a `0.1` value) should now also fail.

**Persistent issue with trailing zeros in `tasks.json`**
The one remaining problem, however, that `as.character()` was solving for us is removing trailing zeros. When reading the `tasks.json` into R from JSON and then passing values to `expand.grid` to create the grid of all possible valid values, any trailing zeros are removed from numeric output type IDs. Although I spent quite some time trying to figure out a way to read in values from JSON in R as character (retaining any trailing zeros), I could not find a way, so I don't think the suggestion to read the contents of `tasks.json` as strings is feasible.
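A minimal illustration of why, assuming a `jsonlite`-style parser (any JSON parser that maps numbers to doubles behaves the same way):

```r
library(jsonlite)

# "0.300" written as a JSON number becomes the double 0.3 on parsing;
# the trailing zeros are unrecoverable afterwards
fromJSON('{"output_type_id": [0.1, 0.300]}')$output_type_id
# [1] 0.1 0.3
as.character(0.300)
# [1] "0.3"
```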
Using `as.character()` on model output data as proposed in this PR was addressing this issue, so that including `0.300` as an output type ID in a `tasks.json` would match a `0.300` value in the data (which previously failed because the value from config would be converted to `0.3` while the value in model output data would be read in as `0.300`).
Having said that, the approach in this PR is still a bit of a hack, as, to match values, trailing zeros are removed from both, which is not quite right. I think we will have to tackle this with strong documentation and perhaps some sort of validation check (although I think a validation check will be difficult for the same reasons: detecting trailing zeros in numerics will likely be hampered by the requirement to retain trailing zeros... our original problem!).
---

**annakrystalli** commented:
OK, just to clarify what the new approach I'm putting together is doing (illustrated in the sketch after this list):

- The `read_model_out_file` fn now has argument `coerce_types` with values `"hub"`, `"chr"` and `"none"`, instead of logical `use_hub_schema`.
- `"hub"` reads in (csvs) or coerces (parquet & arrow, which have a schema defined in the file, using arrow `cast`) to the hub schema.
- `"chr"` reads in (csvs) or coerces (parquet & arrow) all columns as character.
- `"none"`: no coercion. Read using default `arrow::read_*` fun settings.
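A sketch of the calls as described (the argument name and values come from this PR; `file_path`/`hub_path` and the exact signature are assumptions):

```r
library(hubValidations)

# file_path / hub_path assumed to point at a model output file and its hub
tbl_hub <- read_model_out_file(file_path, hub_path, coerce_types = "hub")   # hub schema
tbl_chr <- read_model_out_file(file_path, hub_path, coerce_types = "chr")   # all character
tbl_raw <- read_model_out_file(file_path, hub_path, coerce_types = "none")  # default arrow::read_*
```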
This now causes both `0.1000000000000000055511` and `0.09999999999999997779554` to fail validation. It also means that values with trailing zeros (e.g. `0.300`) will always fail, regardless of whether `0.300` is specified in `tasks.json` (leading to the issue discussed in #54). While I do feel this is actually correct behaviour when validating model output data, it still feels unsatisfactory to me from the aspect that values with trailing zeros can currently be specified in config files but will not be respected when reading in and will cause validation errors. I think this will remain problematic until we find a way to detect and throw a warning about trailing zeros during config file validation.
---

**annakrystalli** commented:
**Confirm read and column type checking behaviour for different file types**

Also, can we just confirm that you are happy with the following column type checks (same as before after addressing issues discussed in #54, but needing some recoding due to changed default behaviour in `read_model_out_file`)?

For `csv` files (which do not have column type information embedded), whether a file can be parsed using the hub schema on read is checked via the `check_file_read` fn. If that check passes, the file is then read in using the hub schema. If a csv is read in, therefore, it will always pass the column type check.

Could you just give me a thumbs up if you agree?
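A hedged sketch of the csv flow described above (`check_file_read` and `read_model_out_file` are the package fns named in this thread; the surrounding scaffolding is assumed):

```r
library(hubValidations)

# 1. Check the csv can be parsed using the hub schema at all
chk <- check_file_read(file_path, hub_path)
chk  # a check condition object; validation stops here if parsing fails

# 2. If the check passes, the file is read in with the hub schema, so a csv
#    that gets this far will always pass the subsequent column type check
tbl <- read_model_out_file(file_path, hub_path, coerce_types = "hub")
```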