From 58d147feb5b3b3f08ff9ecb86edfa1e602b15aae Mon Sep 17 00:00:00 2001
From: Allen Lee
Date: Thu, 5 Dec 2024 17:33:28 -0700
Subject: [PATCH] content: update peer review rubric

---
 .../library/review/includes/review_criteria.jinja | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/django/library/jinja2/library/review/includes/review_criteria.jinja b/django/library/jinja2/library/review/includes/review_criteria.jinja
index 028dcb14a..455fc450a 100644
--- a/django/library/jinja2/library/review/includes/review_criteria.jinja
+++ b/django/library/jinja2/library/review/includes/review_criteria.jinja
@@ -1,11 +1,11 @@
-The CoMSES Net Computational Model Peer Review process is not intended to be time-intensive and consists of a simple checklist to verify that a computational model's source code and documentation meets baseline standards derived from [*good enough practices*]({{build_absolute_uri("/resources/guides-to-good-practice/")}}) in the software engineering and scientific communities we serve. Through this process we hope to foster higher quality models shared in the community for reuse, reproducibility, and advancement of the field in addition to supporting the [emerging practice of software citation](https://www.force11.org/software-citation-principles).
+The CoMSES Net Computational Model Peer Review process uses a straightforward checklist to verify that a computational model's source code and documentation meet baseline standards derived from [*good enough practices*]({{build_absolute_uri("/resources/guides-to-good-practice/")}}) in the software engineering and scientific communities we serve. The goal of this process is to encourage publication and sharing of higher quality models that align with the [FAIR Principles for Research Software (FAIR4RS)](https://doi.org/10.15497/RDA00068) and promote "frictionless reuse", enabling others to more easily understand, reuse, or extend a model.
 
-Reviewers should evaluate the computational model according to the following criteria:
+Reviewers should evaluate computational models based on the following criteria:
 
-1. Can the model be run with a reasonable amount of effort? This may involve compilation into an executable, resolving input data dependencies or software library dependencies - all of which should be clearly documented by the author(s).
-2. Is the model accompanied by detailed narrative documentation? This should be uploaded as a separate document, a narrative description in the NetLogo info tab is not sufficient. Narrative documentation should follow the [ODD protocol](http://www.ufz.de/index.php?de=40429) or an equivalent documentation protocol and present a cogent high level overview of how the model works as well as essential internal details and assumptions. The documentation should ideally be comprehensive enough for another computational modeler to replicate the model and its results without needing to refer to the source code. Including visual diagrams like flowcharts is highly recommended.
-3. Is the model source code well-structured, formatted and "clean" with semantically meaningful variable names and relevant comments that clearly explain methods, functions, and parameters? Unused or duplicated code, overuse of global variables, or other [code smells](https://en.wikipedia.org/wiki/Code_smell) are some example criteria to consider. Clean, well-documented and well-structured code makes it easier for others to review, reuse, or extend the code.
+1. **Ease of Execution**. Can the model be run with a reasonable amount of effort? This may involve compilation into an executable, resolving input data dependencies or software library dependencies - all of which should be clearly documented by the author(s).
+2. **Thorough Documentation**. Is the model accompanied by detailed narrative documentation? This should be provided as a standalone document, as comments or other in-code descriptions (e.g., in NetLogo's info tab) are **not sufficient**. Narrative documentation should adhere to the [ODD protocol](http://www.ufz.de/index.php?de=40429) or an equivalent documentation framework and present a cogent high-level overview of how the model works as well as essential internal details and assumptions. The documentation should ideally be comprehensive enough for another computational modeler to replicate the model and its results without needing to refer to the source code. Visual aids like flowcharts, equations, and diagrams are highly encouraged.
+3. **Code Quality**. Is the source code clean, well-structured, and easy to understand? Code should have semantically meaningful variable names and relevant comments that clearly explain methods, functions, and parameters. [Technical debt](https://doi.org/10.48550/arXiv.2403.06484) hinders comprehension and reuse. Examples of technical debt include unused or duplicated code, excessive use of global variables, or overly complex and difficult-to-follow logic. Clean, well-documented, and well-structured code makes it easier for others to review, reuse, or extend it.
 
-For previous examples of computational models that have passed peer review, please visit the [Computational Model Library]({{build_absolute_uri("/codebases/?peerReviewStatus=reviewed")}}).
+For examples of computational models that have passed peer review, please visit the [Computational Model Library]({{build_absolute_uri("/codebases/?peerReviewStatus=reviewed")}}).
 
-We do not ask that reviewers assess whether the model is theoretically sound, has scientific merit or is producing correct outputs. That said, reviewers are free to raise any concerns they may have in their private correspondence with the review editors if they detect "red flags" in the code.
+Reviewers are not required to assess the theoretical soundness, scientific merit, or validity of model outputs. However, they may privately raise any concerns about these or other aspects of the model with the review editors.
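To make the "Code Quality" criterion above more concrete, here is a minimal hypothetical sketch (in Python; not part of the patch or any CoMSES model) of the kinds of technical debt it names: dead code, global variables, and vague names, alongside a cleaner equivalent.

```python
# Hypothetical illustration of the "technical debt" named in criterion 3.
# Both functions compute the same result; the first is harder to review,
# reuse, or extend.

rate = 0.03  # global mutable state: callers cannot see this dependency


def f(p, n):
    unused = p * n   # dead code: computed but never used
    x = p            # vague names: what are p, n, x?
    for _ in range(n):
        x = x + x * rate  # silently depends on the global `rate`
    return x


def project_population(initial_population: float,
                       num_steps: int,
                       growth_rate: float = 0.03) -> float:
    """Grow a population by a fixed per-step rate; all inputs are explicit."""
    population = initial_population
    for _ in range(num_steps):
        population += population * growth_rate
    return population


assert f(1000, 10) == project_population(1000, 10)
print(project_population(1000, num_steps=10))  # 1343.916...
```

The cleaner version is easier for a reviewer to check against the narrative documentation: every parameter is named, and nothing is hidden in module-level state.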